Updates from: 10/20/2022 01:09:57
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-b2c Enable Authentication Spa App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/enable-authentication-spa-app.md
The resources referenced by the *index.html* file are detailed in the following table:
|[`ui.js`](https://github.com/Azure-Samples/ms-identity-b2c-javascript-spa/blob/main/App/ui.js) | Controls the UI elements. | | | |
-To render the SPA index file, in the *myApp* folder, create a file named *https://docsupdatetracker.net/index.html*, which contains the following HTML snippet.
+To render the SPA index file, in the *myApp* folder, create a file named *https://docsupdatetracker.net/index.html*, which contains the following HTML snippet:
```html
<!DOCTYPE html>
To specify your Azure AD B2C user flows, do the following:
In this step, implement the methods to initialize the sign-in flow, API access token acquisition, and the sign-out methods.
-For more information, see the [MSAL PublicClientApplication class reference](https://azuread.github.io/microsoft-authentication-library-for-js/ref/classes/_azure_msal_browser.publicclientapplication.html), and [Use the Microsoft Authentication Library (MSAL) to sign in the user](../active-directory/develop/tutorial-v2-javascript-spa.md#use-the-microsoft-authentication-library-msal-to-sign-in-the-user) articles.
+For more information, see the [MSAL PublicClientApplication class reference](https://azuread.github.io/microsoft-authentication-library-for-js/ref/classes/_azure_msal_browser.publicclientapplication.html), and [Use the Microsoft Authentication Library (MSAL) to sign in the user](../active-directory/develop/tutorial-v2-javascript-spa.md#use-the-msal-to-sign-in-the-user) articles.
To sign in the user, do the following:
To call your web API by using the token you acquired, do the following:
## Step 10: Add the UI elements reference
-The SPA app uses JavaScript to control the UI elements. For example, it displays the sign-in and sign-out buttons, and renders the users ID token claims to the screen.
+The SPA app uses JavaScript to control the UI elements. For example, it displays the sign-in and sign-out buttons, and renders the user's ID token claims to the screen.
To add the UI elements reference, do the following:
active-directory-b2c Threat Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/threat-management.md
The smart lockout feature uses many factors to determine when an account should
- Passwords such as 12456! and 1234567! (or newAccount1234 and newaccount1234) are so similar that the algorithm interprets them as human error and counts them as a single try.
- Larger variations in pattern, such as 12456! and ABCD2!, are counted as separate tries.
-When testing the smart lockout feature, use a distinctive pattern for each password you enter. Consider using password generation web apps, such as `https://passwordsgenerator.net/`.
+When testing the smart lockout feature, use a distinctive pattern for each password you enter. Consider using password generation web apps, such as `https://password-generator.net/`.
When the smart lockout threshold is reached, you'll see the following message while the account is locked: **Your account is temporarily locked to prevent unauthorized use. Try again later**. The error messages can be [localized](localization-string-ids.md#sign-up-or-sign-in-error-messages).
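When scripting lockout tests, a throwaway helper can generate passwords that differ enough to count as separate tries. The sketch below is illustrative only; the helper name and password format are invented for this example and are not part of Azure AD B2C:

```javascript
// Hypothetical helper for smart lockout testing: each generated password
// uses a distinctive pattern so the similarity algorithm counts every
// attempt as a separate try (unlike 12456! vs. 1234567!).
function distinctiveTestPassword(attempt) {
  // A random 6-character suffix makes each attempt structurally different.
  const suffix = Math.random().toString(36).slice(2, 8);
  return `Wrong-${attempt}-${suffix}!`;
}

console.log(distinctiveTestPassword(1));
console.log(distinctiveTestPassword(2));
```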
active-directory Use Scim To Provision Users And Groups https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/use-scim-to-provision-users-and-groups.md
Previously updated : 08/17/2022
Last updated : 10/17/2022
Applications that support the SCIM profile described in this article can be conn
**To connect an application that supports SCIM:**
-1. Sign in to the [Azure AD portal](https://aad.portal.azure.com). You can get access a free trial for Azure AD with P2 licenses by signing up for the [developer program](https://developer.microsoft.com/office/dev-program)
+1. Sign in to the [Azure AD portal](https://aad.portal.azure.com). You can get a free trial for Azure AD with P2 licenses by signing up for the [developer program](https://developer.microsoft.com/microsoft-365/dev-program).
1. Select **Enterprise applications** from the left pane. A list of all configured apps is shown, including apps that were added from the gallery.
1. Select **+ New application** > **+ Create your own application**.
1. Enter a name for your application, choose the option "*integrate any other application you don't find in the gallery*" and select **Add** to create an app object. The new app is added to the list of enterprise applications and opens to its app management screen.
active-directory Concept Registration Mfa Sspr Combined https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/concept-registration-mfa-sspr-combined.md
Before combined registration, users registered authentication methods for Azure AD Multi-Factor Authentication and self-service password reset (SSPR) separately. People were confused that similar methods were used for multifactor authentication and SSPR, but they had to register for both features. Now, with combined registration, users can register once and get the benefits of both multifactor authentication and SSPR. We recommend this video on [How to enable and configure SSPR in Azure AD](https://www.youtube.com/watch?v=rA8TvhNcCvQ).

> [!NOTE]
-> Starting on August 15th 2020, all new Azure AD tenants will be automatically enabled for combined registration.
->
-> After Sept. 30th, 2022, all users will register security information through the combined registration experience.
+> Effective Oct. 1st, 2022, we will begin to enable combined registration for all users in Azure AD tenants created before August 15th, 2020. Tenants created after this date are enabled with combined registration.
This article outlines what combined security registration is. To get started with combined security registration, see the following article:
active-directory Fido2 Compatibility https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/fido2-compatibility.md
Last updated 02/02/2021
active-directory Howto Registration Mfa Sspr Combined https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-registration-mfa-sspr-combined.md
Before combined registration, users registered authentication methods for Azure AD Multi-Factor Authentication and self-service password reset (SSPR) separately. People were confused that similar methods were used for Azure AD Multi-Factor Authentication and SSPR, but they had to register for both features. Now, with combined registration, users can register once and get the benefits of both Azure AD Multi-Factor Authentication and SSPR.

> [!NOTE]
-> Starting on August 15th 2020, all new Azure AD tenants will be automatically enabled for combined registration. Tenants created after this date will be unable to utilize the legacy registration workflows.
->
-> After Sept. 30th, 2022, all users will register security information through the combined registration experience.
+> Effective Oct. 1st, 2022, we will begin to enable combined registration for all users in Azure AD tenants created before August 15th, 2020. Tenants created after this date are enabled with combined registration.
To make sure you understand the functionality and effects before you enable the new experience, see the [Combined security information registration concepts](concept-registration-mfa-sspr-combined.md).
active-directory App Resilience Continuous Access Evaluation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/app-resilience-continuous-access-evaluation.md
Title: "How to use Continuous Access Evaluation enabled APIs in your applications"
description: How to increase app security and resilience by adding support for Continuous Access Evaluation, enabling long-lived access tokens that can be revoked based on critical events and policy evaluation.
Last updated 07/09/2021
# Customer intent: As an application developer, I want to learn how to use Continuous Access Evaluation for building resiliency through long-lived, refreshable tokens that can be revoked based on critical events and policy evaluation.
active-directory Mobile Sso Support Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/mobile-sso-support-overview.md
Title: Support single sign-on and app protection policies in mobile apps you develop
description: Explanation and overview of building mobile applications that support single sign-on and app protection policies using the Microsoft identity platform and integrating with Azure Active Directory.
Last updated 10/14/2020
#Customer intent: As an app developer, I want to know how to implement an app that supports single sign-on and app protection policies using the Microsoft identity platform and integrating with Azure Active Directory.
Finally, [add the Intune SDK](/mem/intune/developer/app-sdk-get-started) to your
- [Authorization agents and how to enable them](./msal-android-single-sign-on.md)
- [Get started with the Microsoft Intune App SDK](/mem/intune/developer/app-sdk-get-started)
- [Configure settings for the Intune App SDK](/mem/intune/developer/app-sdk-ios#configure-settings-for-the-intune-app-sdk)
- [Microsoft Intune protected apps](/mem/intune/apps/apps-supported-intune-apps)
active-directory Msal Client Application Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/msal-client-application-configuration.md
Currently, the only way to get an app to sign in users with only personal Micros
## Client ID
-The client ID is the unique **Application (client) ID** assigned to your app by Azure AD when the app was registered.
+The client ID is the unique **Application (client) ID** assigned to your app by Azure AD when the app was registered. You can find the **Application (client) ID** in the Azure portal by going to **Azure Active Directory** > **Enterprise applications** and checking the **Application ID** column.
## Redirect URI
active-directory Scenario Desktop Acquire Token Username Password https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/scenario-desktop-acquire-token-username-password.md
For more information on all the modifiers that can be applied to `AcquireTokenBy
# [Java](#tab/java)
-The following extract is from the [MSAL Java code samples](https://github.com/AzureAD/microsoft-authentication-library-for-java/blob/dev/src/samples/public-client/).
+The following extract is from the [MSAL Java code samples](https://github.com/AzureAD/microsoft-authentication-library-for-java/blob/dev/msal4j-sdk/src/samples/public-client/UsernamePasswordFlow.java).
```java
PublicClientApplication pca = PublicClientApplication.builder(clientId)
active-directory Tutorial Blazor Server https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/tutorial-blazor-server.md
Title: Tutorial - Create a Blazor Server app that uses the Microsoft identity platform for authentication
description: In this tutorial, you set up authentication using the Microsoft identity platform in a Blazor Server app.
active-directory Tutorial Blazor Webassembly https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/tutorial-blazor-webassembly.md
Title: Tutorial - Sign in users and call a protected API from a Blazor WebAssembly app
description: In this tutorial, sign in users and call a protected API using the Microsoft identity platform in a Blazor WebAssembly (WASM) app.
active-directory Tutorial V2 Javascript Spa https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/tutorial-v2-javascript-spa.md
Title: "Tutorial: Create a JavaScript single-page app that uses the Microsoft identity platform for authentication"
-description: In this tutorial, you build a JavaScript single-page app (SPA) that uses the Microsoft identity platform to sign in users and get an access token to call the Microsoft Graph API on their behalf.
+ Title: "Tutorial: Create a JavaScript single-page application that uses the Microsoft identity platform for authentication"
+description: In this tutorial, you build a JavaScript single-page application (SPA) that uses the Microsoft identity platform to sign in users and get an access token to call the Microsoft Graph API on their behalf.
-# Tutorial: Sign in users and call the Microsoft Graph API from a JavaScript single-page application (SPA)
+# Tutorial: Sign in users and call the Microsoft Graph API from a JavaScript single-page application
-In this tutorial, build a JavaScript single-page application (SPA) that signs in users and calls Microsoft Graph by using the implicit flow of OAuth 2.0. This SPA uses MSAL.js v1.x, which uses the implicit grant flow for SPAs. For all new applications, use [MSAL.js v2.x and the authorization code flow with PKCE and CORS](tutorial-v2-javascript-auth-code.md), which provides more security than the implicit flow
+In this tutorial, you build a JavaScript single-page application (SPA) that signs in users and calls Microsoft Graph by using the implicit flow of OAuth 2.0. This SPA uses MSAL.js v1.x, which uses the implicit grant flow for SPAs. For all new applications, use [MSAL.js v2.x and the authorization code flow with PKCE and CORS](tutorial-v2-javascript-auth-code.md). The authorization code flow provides more security than the implicit flow.
In this tutorial:
> * Create a JavaScript project with `npm`
> * Register the application in the Azure portal
> * Add code to support user sign-in and sign-out
-> * Add code to call Microsoft Graph API
+> * Add code to call the Microsoft Graph API
> * Test the app
-> * Gain understanding of how the process works behind the scenes
+> * Gain an understanding of how the process works behind the scenes
-At the end of this tutorial, you'll have created the folder structure below (listed in order of creation), along with the *.js* and *.html* files by copying the code blocks in the upcoming sections.
+At the end of this tutorial, you'll have the following folder and file structure (listed in order of creation):
```txt
sampleApp/
## Prerequisites

* [Node.js](https://nodejs.org/en/download/) for running a local web server.
-* [Visual Studio Code](https://code.visualstudio.com/download) or other editor for modifying project files.
-* A modern web browser. **Internet Explorer** is **not supported** by the app you build in this tutorial due to the app's use of [ES6](http://www.ecma-international.org/ecma-262/6.0/) conventions.
+* [Visual Studio Code](https://code.visualstudio.com/download) or another editor for modifying project files.
+* A modern web browser. The app that you build in this tutorial uses [ES6](http://www.ecma-international.org/ecma-262/6.0/) conventions and *does not support Internet Explorer*.
-## How the sample app generated by this guide works
+## How the sample app works
-![Shows how the sample app generated by this tutorial works](media/active-directory-develop-guidedsetup-javascriptspa-introduction/javascriptspa-intro.svg)
+![Diagram that shows how the sample app generated by this tutorial works.](media/active-directory-develop-guidedsetup-javascriptspa-introduction/javascriptspa-intro.svg)
-The sample application created by this guide enables a JavaScript SPA to query the Microsoft Graph API. This can also work for a web API that is set up to accept tokens from the Microsoft identity platform. After the user signs in, an access token is requested and added to the HTTP requests through the authorization header. This token will be used to acquire the user's profile and mails via **MS Graph API**.
+The application that you create in this tutorial enables a JavaScript SPA to query the Microsoft Graph API. This querying can also work for a web API that's set up to accept tokens from the Microsoft identity platform. After the user signs in, the SPA requests an access token and adds it to the HTTP requests through the authorization header. The SPA will use this token to acquire the user's profile and emails via the Microsoft Graph API.
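As a hedged illustration of that last step (this is a sketch, not the tutorial's actual *graph.js* code, and the function name is invented for this example), the acquired token is typically attached to the request options like this:

```javascript
// Sketch: build fetch() options that carry the access token in the
// Authorization header, as described above.
function buildGraphRequestOptions(accessToken) {
  return {
    method: "GET",
    headers: {
      Authorization: `Bearer ${accessToken}`
    }
  };
}

// Usage (endpoint shown for illustration):
// fetch("https://graph.microsoft.com/v1.0/me", buildGraphRequestOptions(token))
//   .then(response => response.json());
```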
-Token acquisition and renewal are handled by the [Microsoft Authentication Library (MSAL) for JavaScript](https://github.com/AzureAD/microsoft-authentication-library-for-js).
+The [Microsoft Authentication Library (MSAL) for JavaScript](https://github.com/AzureAD/microsoft-authentication-library-for-js) handles token acquisition and renewal.
## Set up the web server or project
-> Prefer to download this sample's project instead? [Download the project files](https://github.com/Azure-Samples/active-directory-javascript-graphapi-v2/archive/quickstart.zip).
->
-> To configure the code sample before you execute it, skip to the [registration step](#register-the-application).
+If you prefer, you can [download the project files](https://github.com/Azure-Samples/active-directory-javascript-graphapi-v2/archive/quickstart.zip).
+
+To configure the code sample before you run it, skip to the [registration step](#register-the-application).
## Create the project
-Make sure [*Node.js*](https://nodejs.org/en/download/) is installed, and then create a folder to host the application. Name the folder *sampleApp*. In this folder, an [*Express*](https://expressjs.com/) web server is created to serve the *https://docsupdatetracker.net/index.html* file.
+1. Make sure that [Node.js](https://nodejs.org/en/download/) is installed, and then create a folder to host the application. Name the folder *sampleApp*. In this folder, an [Express](https://expressjs.com/) web server is created to serve the *https://docsupdatetracker.net/index.html* file.
-1. Using a terminal (such as Visual Studio Code integrated terminal), locate the project folder, move into it, then type:
+1. By using a terminal (such as the Visual Studio Code integrated terminal), locate the project folder and move into it. Then enter:
-```console
-npm init
-```
+ ```console
+ npm init
+ ```
-2. A series of prompts will appear in order to create the application. Notice that the folder, *sampleApp* is now all lowercase. The items in brackets `()` are generated by default. Feel free to experiment, however for the purposes of this tutorial, you don't need to enter anything, and can press **Enter** to continue to the next prompt.
+2. A series of prompts appears for creation of the application. Notice that the folder *sampleApp* is now all lowercase. The items in parentheses `()` are generated by default.
```console
package name: (sampleapp)
npm init
license: (ISC)
```
+
+ Feel free to experiment. However, for the purposes of this tutorial, you don't need to enter anything. Select the Enter key to continue to the next prompt.
-3. The final consent prompt will contain the following output on the assumption no values were entered in the previous step. Press **Enter** and the JSON written to a file called *package.json*.
+3. The final consent prompt contains the following output if you didn't enter any values in the previous step.
```console
{
Is this OK? (yes)
```
-4. Next, install the required dependencies. Express.js is a Node.js module designed to simplify the creation of web servers and APIs. Morgan.js is used to log HTTP requests and errors. Upon installation, the *package-lock.json* file and *node_modules* folder are created.
+ Select the Enter key, and the JSON is written to a file called *package.json*.
+
+4. Install the required dependencies by entering the following code:
```console
npm install express --save
npm install morgan --save
```
-5. Now, create a *.js* file named *server.js* in your current folder, and add the following code:
+ Express.js is a Node.js module that simplifies the creation of web servers and APIs. Morgan.js is used to log HTTP requests and errors. Installing them creates the *package-lock.json* file and *node_modules* folder.
+
+5. Create a *.js* file named *server.js* in your current folder, and add the following code:
```JavaScript
const express = require('express');
const morgan = require('morgan');
const path = require('path');
- //initialize express.
+ //Initialize Express.
const app = express();

// Initialize variables.
const port = 3000; // process.env.PORT || 3000;
- // Configure morgan module to log all requests.
+ // Configure the morgan module to log all requests.
app.use(morgan('dev'));

// Set the front-end folder to serve public assets.
sampleApp/
└── server.js
```
-In the next steps you'll create a new folder for the JavaScript SPA, and set up the user interface (UI).
+In the next steps, you'll create a new folder for the JavaScript SPA and set up the user interface (UI).
> [!TIP]
-> When you set up an Azure Active Directory (Azure AD) account, you create a tenant. This is a digital representation of your organization, and is primarily associated with a domain, like Microsoft.com. If you wish to learn how applications can work with multiple tenants, refer to the [application model](/articles/active-directory/develop/application-model.md).
+> When you set up an Azure Active Directory (Azure AD) account, you create a tenant. This is a digital representation of your organization. It's primarily associated with a domain, like Microsoft.com. If you want to learn how applications can work with multiple tenants, refer to the [application model](/articles/active-directory/develop/application-model.md).
## Create the SPA UI
-1. Create a new folder, *JavaScriptSPA* and then move into that folder.
+1. Create a new folder, *JavaScriptSPA*, and then move into that folder.
-1. From there, create an *https://docsupdatetracker.net/index.html* file for the SPA. This file implements a UI built with [*Bootstrap 4 Framework*](https://www.javatpoint.com/bootstrap-4-layouts#:~:text=Bootstrap%204%20is%20the%20newest%20version%20of%20Bootstrap.,framework%20directed%20at%20responsive%2C%20mobile-first%20front-end%20web%20development.) and imports script files for configuration, authentication and API call.
+1. Create an *https://docsupdatetracker.net/index.html* file for the SPA. This file implements a UI that's built with the [Bootstrap 4 framework](https://www.javatpoint.com/bootstrap-4-layouts#:~:text=Bootstrap%204%20is%20the%20newest%20version%20of%20Bootstrap.,framework%20directed%20at%20responsive%2C%20mobile-first%20front-end%20web%20development.). The file also imports script files for configuration, authentication, and API calls.
In the *index.html* file, add the following code:
In the next steps you'll create a new folder for the JavaScript SPA, and set up
<br> <br>
- <!-- importing bootstrap.js and supporting js libraries -->
+ <!-- importing bootstrap.js and supporting .js libraries -->
<script src="https://code.jquery.com/jquery-3.4.1.slim.min.js" integrity="sha384-J6qa4849blE2+poT4WnyKhv5vZF5SrPo0iEjwBvKU7imGFAV0wwj1yYfoRSJoZ+n" crossorigin="anonymous"></script>
<script src="https://cdn.jsdelivr.net/npm/popper.js@1.16.0/dist/umd/popper.min.js" integrity="sha384-Q6E9RHvbIyZFJoft+2mJbHaEWldlvI9IOYy5n3zV9zzTtmI3UksdQRVvoxMfooAo" crossorigin="anonymous"></script>
<script src="https://stackpath.bootstrapcdn.com/bootstrap/4.4.1/js/bootstrap.min.js" integrity="sha384-wfSDF2E50Y2D1uUdj0O3uMBJnjuUD4Ih7YwaYd1iqfktj0Uod8GCExl3Og8ifwB6" crossorigin="anonymous"></script>
<script type="text/javascript" src="./graphConfig.js"></script>
<script type="text/javascript" src="./ui.js"></script>
- <!-- replace next line with authRedirect.js if you would like to use the redirect flow -->
+ <!-- replace the next line with authRedirect.js if you want to use the redirect flow -->
<!-- <script type="text/javascript" src="./authRedirect.js"></script> -->
<script type="text/javascript" src="./authPopup.js"></script>
<script type="text/javascript" src="./graph.js"></script>
</html>
```
-2. Now, create a *.js* file named *ui.js*, which accesses and updates the Document Object Model (DOM) elements, and add the following code:
+2. Create a file named *ui.js*, and add this code to access and update the Document Object Model (DOM) elements:
```JavaScript
// Select DOM elements to work with
## Register the application
-Before proceeding further with authentication, register the application on **Azure Active Directory**.
+Before you proceed with authentication, register the application on Azure AD:
+
+1. Sign in to the [Azure portal](https://portal.azure.com/).
+1. Go to **Azure Active Directory**.
+1. On the left panel, under **Manage**, select **App registrations**. Then, on the top menu bar, select **New registration**.
+1. For **Name**, enter a name for the application (for example, **sampleApp**). You can change the name later if necessary.
+1. Under **Supported account types**, select **Accounts in this organizational directory only**.
+1. In the **Redirect URI** section, select the **Web** platform from the dropdown list.
-1. Sign in to the [*Azure portal*](https://portal.azure.com/).
-1. Navigate to **Azure Active Directory**.
-1. Go to the left panel, and under **Manage**, select **App registrations**, then in the top menu bar, select **New registration**.
-1. Enter a **Name** for the application, for example **sampleApp**. The name can be changed later if necessary.
-1. Under **Supported account types**, select **Accounts in any organizational directory and personal Microsoft accounts**.
-1. In the **Redirect URI** section, select the **Web** platform from the drop-down list. To the right, enter the value of the local host to be used. Enter either of the following options:
- 1. `http://localhost:3000/`
- 1. If you wish to use a custom TCP port, use `http://localhost:<port>/` (where `<port>` is the custom TCP port number).
+ To the right, enter `http://localhost:3000/`.
1. Select **Register**.
-1. This opens the **Overview** page of the application. Note the **Application (client) ID** and **Directory (tenant) ID**. Both of them are needed when the *authConfig.js* file is created in the following steps.
-1. In the left panel, under **Manage**, select **Authentication**.
+
+ The **Overview** page of the application opens. Note the **Application (client) ID** and **Directory (tenant) ID** values. You'll need both of them when you create the *authConfig.js* file in later steps.
+1. Under **Manage**, select **Authentication**.
1. In the **Implicit grant and hybrid flows** section, select **ID tokens** and **Access tokens**. ID tokens and access tokens are required because this app must sign in users and call an API.
-1. Select **Save**. You can navigate back to the **Overview** panel by selecting it in the left panel.
+1. Select **Save**. You can go back to the **Overview** page by selecting it on the left panel.
-The redirect URI can be changed at anytime by going to the **Overview** page, and selecting **Add a Redirect URI**.
+You can change the redirect URI anytime by going to the **Overview** page and selecting **Add a Redirect URI**.
## Configure the JavaScript SPA
-1. In the *JavaScriptSPA* folder, create a new file, *authConfig.js*, and copy the following code. This code contains the configuration parameters for authentication (Client ID, Tenant ID, Redirect URI).
-
-```javascript
- const msalConfig = {
- auth: {
- clientId: "Enter_the_Application_Id_Here",
- authority: "Enter_the_Cloud_Instance_Id_Here/Enter_the_Tenant_Info_Here",
- redirectUri: "Enter_the_Redirect_URI_Here",
- },
- cache: {
- cacheLocation: "sessionStorage", // This configures where your cache will be stored
- storeAuthStateInCookie: false, // Set this to "true" if you are having issues on IE11 or Edge
- }
- };
+1. In the *JavaScriptSPA* folder, create a new file, *authConfig.js*. Then copy the following code. This code contains the configuration parameters for authentication (client ID, tenant ID, redirect URI).
- // Add here scopes for id token to be used at MS Identity Platform endpoints.
- const loginRequest = {
- scopes: ["openid", "profile", "User.Read"]
- };
+ ```javascript
+ const msalConfig = {
+ auth: {
+ clientId: "Enter_the_Application_Id_Here",
+ authority: "Enter_the_Cloud_Instance_Id_Here/Enter_the_Tenant_Info_Here",
+ redirectUri: "Enter_the_Redirect_URI_Here",
+ },
+ cache: {
+ cacheLocation: "sessionStorage", // This configures where your cache will be stored
+ storeAuthStateInCookie: false, // Set this to "true" if you're having issues on Internet Explorer 11 or Edge
+ }
+ };
- // Add here scopes for access token to be used at MS Graph API endpoints.
- const tokenRequest = {
- scopes: ["Mail.Read"]
- };
-```
+ // Add scopes for the ID token to be used at Microsoft identity platform endpoints.
+ const loginRequest = {
+ scopes: ["openid", "profile", "User.Read"]
+ };
-Modify the values in the `msalConfig` section. You can refer to your app's **Overview** page on Azure for some of these values:
- - `Enter_the_Application_Id_Here` is the **Application (client) ID** for the application you registered.
+ // Add scopes for the access token to be used at Microsoft Graph API endpoints.
+ const tokenRequest = {
+ scopes: ["Mail.Read"]
+ };
+ ```
-2. Modify the values in the `msalConfig` section as described below. Refer to the **Overview** page of the application for these values:
- - `Enter_the_Application_Id_Here` is the **Application (client) ID** for the application you registered. You can copy this directly from **Azure**.
+2. Modify the values in the `msalConfig` section. Refer to the **Overview** page of the application for these values:
+ - `Enter_the_Application_Id_Here` is the **Application (client) ID** value for the application that you registered.
- - `Enter_the_Cloud_Instance_Id_Here` is the instance of the Azure cloud. For the main or global Azure cloud, enter `https://login.microsoftonline.com`. For **national** clouds (for example, China), refer to [*National clouds*](./authentication-national-cloud.md).
- - Set `Enter_the_Tenant_info_here` to one of the following options:
- - If your application supports *accounts in this organizational directory*, replace this value with the **Directory (tenant) ID** or **Tenant name** (for example, *contoso.microsoft.com*).
- - `Enter_the_Redirect_URI_Here` is the default URL that you set in the previous section, `http://localhost:3000/`.
+ - `Enter_the_Cloud_Instance_Id_Here` is the instance of the Azure cloud. For the main or global Azure cloud, enter `https://login.microsoftonline.com`. For national clouds (for example, China), refer to [National clouds](./authentication-national-cloud.md).
+ - Replace `Enter_the_Tenant_info_here` with the **Directory (tenant) ID** (a GUID) or **Tenant name** value (for example, *contoso.onmicrosoft.com*).
+ - `Enter_the_Redirect_URI_Here` is the default URL that you set in the previous section: `http://localhost:3000/`.
> [!TIP]
-> There are other options for `Enter_the_Tenant_info_here` depending on what you want your application to support.
+> There are other options for `Enter_the_Tenant_info_here`, depending on what you want your application to support:
> - If your application supports *accounts in any organizational directory*, replace this value with **organizations**.
> - If your application supports *accounts in any organizational directory and personal Microsoft accounts*, replace this value with **common**. To restrict support to *personal Microsoft accounts only*, replace this value with **consumers**.
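Putting the cloud instance and tenant values together, the `authority` string is simply their concatenation. As a small sketch (this helper is hypothetical, not part of MSAL or the sample app):

```javascript
// Hypothetical helper: builds the MSAL authority URL from a cloud
// instance and a tenant value (a tenant GUID, a domain name, or one of
// "organizations", "common", or "consumers").
function buildAuthority(cloudInstance, tenantInfo) {
  // Trim any trailing slash so the result has exactly one separator.
  return `${cloudInstance.replace(/\/+$/, "")}/${tenantInfo}`;
}

console.log(buildAuthority("https://login.microsoftonline.com", "common"));
// "https://login.microsoftonline.com/common"
```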
-## Use the Microsoft Authentication Library (MSAL) to sign in the user
+## Use the MSAL to sign in the user
-In the *JavaScriptSPA* folder, create a new *.js* file named *authPopup.js*, which contains the authentication and token acquisition logic, and add the following code:
+In the *JavaScriptSPA* folder, create a new *.js* file named *authPopup.js*, which contains the authentication and token acquisition logic. Add the following code:
```JavaScript
const myMSALObj = new Msal.UserAgentApplication(msalConfig);
console.log(error);
console.log("silent token acquisition fails. acquiring token using popup");
- // fallback to interaction when silent call fails
+ // fallback to interaction when the silent call fails
return myMSALObj.acquireTokenPopup(request)
    .then(tokenResponse => {
        return tokenResponse;
}
```
-## More information
+## Use tokens for validation
+
+The first time a user selects the **Sign In** button, the `signIn` function that you added to the *authPopup.js* file calls MSAL's `loginPopup` function to start the sign-in process. This function opens a pop-up window that prompts the user to enter their credentials.
-The first time a user selects the **Sign In** button, the `signIn` function you added to the *authPopup.js* file calls MSAL's `loginPopup` function to start the sign-in process. This method opens a pop-up window with the *Microsoft identity platform endpoint* to prompt and validate the user's credentials. After a successful sign-in, the user is redirected back to the original *https://docsupdatetracker.net/index.html* page. A token is received, processed by *msal.js*, and the information contained in the token is cached. This token is known as the *ID token* and contains basic information about the user, such as the user display name. If you plan to use any data provided by this token for any purposes, make sure this token is validated by your backend server to guarantee that the token was issued to a valid user for your application.
+After a successful sign-in, the user is redirected back to the original *https://docsupdatetracker.net/index.html* page. The *msal.js* file receives and processes an *ID token*, and the information in the token is cached. The ID token contains basic information about the user, such as the user's display name. If you plan to use any data in the ID token for any purpose, make sure that your back-end server validates the token to guarantee that the token was issued to a valid user for your application.
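For illustration only, here's a minimal sketch of reading claims out of a JWT payload on the client. Decoding is not validation: the signature, issuer, and audience checks belong on your back-end server. The token built below is a hypothetical, unsigned example:

```javascript
// Decode a JWT payload to inspect claims. Decoding is NOT validation:
// the signature, issuer, and audience must be verified server-side.
function decodeJwtPayload(token) {
  const payloadPart = token.split(".")[1];
  // Convert base64url to base64, then decode (use atob() in a browser).
  const base64 = payloadPart.replace(/-/g, "+").replace(/_/g, "/");
  const json = Buffer.from(base64, "base64").toString("utf8");
  return JSON.parse(json);
}

// Hypothetical unsigned example token: header.payload.signature
// (Buffer's "base64url" encoding requires Node.js 16 or later.)
const header = Buffer.from(JSON.stringify({ alg: "none" })).toString("base64url");
const payload = Buffer.from(JSON.stringify({ name: "Mikah Ollenburg" })).toString("base64url");
const claims = decodeJwtPayload(`${header}.${payload}.`);
console.log(claims.name); // "Mikah Ollenburg"
```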
-The SPA generated by this tutorial calls `acquireTokenSilent` and/or `acquireTokenPopup` to acquire an *access token* used to query the Microsoft Graph API for the user's profile info. If you need a sample that validates the ID token, refer to the following [sample application](https://github.com/Azure-Samples/ms-identity-javascript-angular-tutorial/blob/main/3-Authorization-II/1-call-api/README.md "GitHub active-directory-javascript-singlepageapp-dotnet-webapi-v2 sample") in GitHub, which uses an ASP.NET web API for token validation.
+The app that you create in this tutorial calls `acquireTokenSilent` and/or `acquireTokenPopup` to acquire an *access token*. The app uses this token to query the Microsoft Graph API for the user's profile info. If you need a sample that validates the ID token, refer to the [sample application](https://github.com/Azure-Samples/ms-identity-javascript-angular-tutorial/blob/main/3-Authorization-II/1-call-api/README.md "GitHub active-directory-javascript-singlepageapp-dotnet-webapi-v2 sample") in GitHub, which uses an ASP.NET web API for token validation.
### Get a user token interactively
-After the initial sign-in, users shouldn't need to reauthenticate every time they need to request a token to access a resource. Therefore, `acquireTokenSilent` should be used most of the time to acquire tokens. There are situations, however, where you force users to interact with Microsoft identity platform. Examples include when:
+After the initial sign-in, users shouldn't need to reauthenticate every time they need to request a token to access a resource. Most of the time, the app will use `acquireTokenSilent` to acquire tokens. But you might force users to interact with the Microsoft identity platform in situations like these:
- Users need to reenter their credentials because the password has expired.
-- An application is requesting access to a resource, and the user's consent is needed.
+- An application is requesting access to a resource and needs the user's consent.
- Two-factor authentication is required.

Calling `acquireTokenPopup` opens a pop-up window (or `acquireTokenRedirect` redirects users to the Microsoft identity platform). In that window, users need to interact by confirming their credentials, giving consent to the required resource, or completing the two-factor authentication.

### Get a user token silently
-The `acquireTokenSilent` method handles token acquisition and renewal without any user interaction. After `loginPopup` (or `loginRedirect`) is executed for the first time, `acquireTokenSilent` is the method commonly used to obtain tokens used to access protected resources for subsequent calls. (Calls to request or renew tokens are made silently.) It's worth noting that `acquireTokenSilent` may fail in some cases, such as when a user's password expires. The application can then handle this exception in two ways:
+The `acquireTokenSilent` method handles token acquisition and renewal without any user interaction. After `loginPopup` (or `loginRedirect`) is executed for the first time, subsequent calls use `acquireTokenSilent` to get tokens for accessing protected resources. (Calls to request or renew tokens are made silently.)
-1. Making a call to `acquireTokenPopup` immediately, which triggers a user sign-in prompt. This pattern is commonly used in online applications where there's no unauthenticated content in the application available to the user. The sample generated by this guided setup uses this pattern.
+The `acquireTokenSilent` method might fail in some cases, such as when a user's password expires. The application can handle this exception in two ways:
-1. Making a visual indication to the user that an interactive sign-in is required. The user can then select the right time to sign in, or the application can retry `acquireTokenSilent` at a later time. This is commonly used when the user can use other functionality of the application without being disrupted. For example, there might be unauthenticated content available in the application. In this situation, the user can decide when they want to sign in to access the protected resource, or to refresh the outdated information.
+- Making a call to `acquireTokenPopup` immediately, which triggers a user sign-in prompt. This pattern is commonly used in online applications where no unauthenticated content is available to the user. The sample that you create in this tutorial uses this pattern.
+
+- Making a visual indication to the user that an interactive sign-in is required. The user can then select the right time to sign in, or the application can retry `acquireTokenSilent` at a later time.
+
+ This pattern is commonly used when the user can use other functionality of the application without being disrupted. For example, unauthenticated content might be available in the application. In this situation, the user can decide when they want to sign in to access the protected resource or refresh the outdated information.
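The silent-first pattern described above can be sketched with stand-in functions. The real MSAL calls are `acquireTokenSilent` and `acquireTokenPopup`, which take a request object and return promises; the mocks below are hypothetical:

```javascript
// Sketch of the silent-first token pattern, using mocked stand-ins for
// MSAL's acquireTokenSilent/acquireTokenPopup.
async function getTokenSilentFirst(acquireTokenSilent, acquireTokenPopup, request) {
  try {
    return await acquireTokenSilent(request);
  } catch (error) {
    // Fall back to interaction when the silent call fails.
    console.log("silent token acquisition failed; acquiring token using popup");
    return await acquireTokenPopup(request);
  }
}

// Mocks: the silent call fails (for example, an expired password),
// and the interactive popup succeeds.
const silentThatFails = async () => { throw new Error("interaction_required"); };
const popupThatSucceeds = async () => ({ accessToken: "fake-token" });

getTokenSilentFirst(silentThatFails, popupThatSucceeds, { scopes: ["User.Read"] })
  .then((response) => console.log(response.accessToken)); // "fake-token"
```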
> [!NOTE]
-> This tutorial uses the `loginPopup` and `acquireTokenPopup` methods by default. If you are using Internet Explorer as your browser, it is recommended to use `loginRedirect` and `acquireTokenRedirect` methods, due to a [known issue](https://github.com/AzureAD/microsoft-authentication-library-for-js/wiki/Known-issues-on-IE-and-Edge-Browser#issues) related to the way Internet Explorer handles pop-up windows. If you would like to see how to achieve the same result using *Redirect methods*, please see the [sample code](https://github.com/Azure-Samples/active-directory-javascript-graphapi-v2/blob/quickstart/JavaScriptSPA/authRedirect.js).
+> This tutorial uses the `loginPopup` and `acquireTokenPopup` methods by default. If you're using Internet Explorer as your browser, we recommend that you use the `loginRedirect` and `acquireTokenRedirect` methods because of a [known issue](https://github.com/AzureAD/microsoft-authentication-library-for-js/wiki/Known-issues-on-IE-and-Edge-Browser#issues) with the way Internet Explorer handles pop-up windows.
+>
+> If you want to see how to achieve the same result by using *redirect methods*, see the [sample code](https://github.com/Azure-Samples/active-directory-javascript-graphapi-v2/blob/quickstart/JavaScriptSPA/authRedirect.js).
-## Call the Microsoft Graph API using the acquired token
+## Call the Microsoft Graph API by using the acquired token
-1. In the *JavaScriptSPA* folder create a *.js* file named *graphConfig.js*, which stores the Representational State Transfer ([REST](/rest/api/azure/)) endpoints. Add the following code:
+1. In the *JavaScriptSPA* folder, create a *.js* file named *graphConfig.js*, which stores the [Representational State Transfer (REST)](/rest/api/azure/) endpoints. Add the following code:
```JavaScript
const graphConfig = {
};
```
- where:
- - `Enter_the_Graph_Endpoint_Here` is the instance of Microsoft Graph API. For the global Microsoft Graph API endpoint, this can be replaced with `https://graph.microsoft.com`. For national cloud deployments, refer to [Graph API Documentation](/graph/deployments).
+ `Enter_the_Graph_Endpoint_Here` is the instance of the Microsoft Graph API. For the global Microsoft Graph API endpoint, you can replace this with `https://graph.microsoft.com`. For national cloud deployments, refer to the [Microsoft Graph API documentation](/graph/deployments).
-1. Next, create a *.js* file named *graph.js*, which will make a REST call to the Microsoft Graph API. This is a way of accessing web services in a simple and flexible way without having any processing. Add the following code:
+1. Create a file named *graph.js*, which will make a REST call to the Microsoft Graph API. The SPA can then access web services in a simple and flexible way without any processing. Add the following code:
```javascript
function callMSGraph(endpoint, token, callback) {
```
### More information about REST calls against a protected API
-In the sample application created by this guide, the `callMSGraph()` method is used to make an HTTP `GET` request against a protected resource that requires a token. The request then returns the content to the caller. This method adds the acquired token in the *HTTP Authorization header*. For the sample application created by this guide, the resource is the Microsoft Graph API `me` endpoint, which displays the user's profile information.
+The sample application that you create in this tutorial uses the `callMSGraph()` method to make an HTTP `GET` request against a protected resource that requires a token. The request then returns the content to the caller.
+
+This method adds the acquired token in the *HTTP Authorization header*. For the sample application, the resource is the Microsoft Graph API `me` endpoint, which displays the user's profile information.
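As a minimal sketch of that pattern, the helper below is a simplified, hypothetical variant built on `fetch`; the endpoint and token are placeholders:

```javascript
// Sketch of attaching a bearer token to a request against a protected API,
// in the style of the tutorial's callMSGraph(). Endpoint and token are
// placeholders, not real values.
function buildAuthHeaders(accessToken) {
  return { Authorization: `Bearer ${accessToken}` };
}

function callProtectedApi(endpoint, accessToken) {
  // fetch is available in browsers and in Node.js 18 and later.
  return fetch(endpoint, { headers: buildAuthHeaders(accessToken) })
    .then((response) => response.json());
}

console.log(buildAuthHeaders("abc123").Authorization); // "Bearer abc123"
```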
## Test the code
-Now that the code is set up, it needs to be tested.
+Now that you've set up the code, you need to test it:
-1. The server needs to be configured to listen to a TCP port that's based on the location of the *https://docsupdatetracker.net/index.html* file. For Node.js, the web server can be started to listen to the port that is specified in the previous section by running the following commands at a command-line prompt from the *JavaScriptSPA* folder:
+1. Configure the server to listen to a TCP port that's based on the location of the *https://docsupdatetracker.net/index.html* file. For Node.js, you can start the web server to listen to the port that you specified earlier. Run the following commands at a command-line prompt from the *JavaScriptSPA* folder:
```bash
npm install
npm start
```
-1. In the browser, enter `http://localhost:3000` (or `http://localhost:<port>` if a custom port was chosen). You should see the contents of the *https://docsupdatetracker.net/index.html* file and a **Sign In** button on the top right of the screen.
-
+1. In the browser, enter `http://localhost:3000`. You should see the contents of the *https://docsupdatetracker.net/index.html* file and a **Sign In** button on the upper right of the screen.
> [!IMPORTANT]
-> Be sure to enable popups and redirects for your site in your browser settings.
+> Be sure to enable pop-ups and redirects for your site in your browser settings.
-After the browser loads your *https://docsupdatetracker.net/index.html* file, select **Sign In**. You'll now be prompted to sign in with the Microsoft identity platform:
+After the browser loads your *https://docsupdatetracker.net/index.html* file, select **Sign In**. You're prompted to sign in with the Microsoft identity platform.
### Provide consent for application access

The first time that you sign in to your application, you're prompted to grant it access to your profile and sign you in. Select **Accept** to continue.

### View application results
-After you sign in, you can select **Read More under your displayed name, and your user profile information is returned in the Microsoft Graph API response that's displayed:
+After you sign in, you can select **Read More** under your displayed name. Your user profile information is returned in the displayed Microsoft Graph API response.
### More information about scopes and delegated permissions
-The Microsoft Graph API requires the `User.Read` scope to read a user's profile. By default, this scope is automatically added in every application that's registered on the registration portal. Other APIs for Microsoft Graph, and custom APIs for your back-end server, might require more scopes. For example, the Microsoft Graph API requires the `Mail.Read` scope in order to list the user's mails.
+The Microsoft Graph API requires the `User.Read` scope to read a user's profile. By default, this scope is automatically added in every application that's registered on the registration portal. Other APIs for Microsoft Graph, and custom APIs for your back-end server, might require more scopes. For example, the Microsoft Graph API requires the `Mail.Read` scope to list the user's emails.
> [!NOTE]
> The user might be prompted for additional consents as you increase the number of scopes.
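As a sketch, the scope discussion above translates into the request objects that the app passes to the token methods; the variable names here are illustrative:

```javascript
// User.Read covers reading the signed-in user's profile and is added by
// default for registered apps. Add Mail.Read only if the app also needs
// to list the user's mail (this may trigger an additional consent prompt).
const loginRequest = {
  scopes: ["User.Read"],
};

const tokenRequestWithMail = {
  scopes: ["User.Read", "Mail.Read"],
};

console.log(tokenRequestWithMail.scopes.length); // 2
```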
## Next steps
-Delve deeper into single-page application (SPA) development on the Microsoft identity platform in our multi-part scenario series.
+Delve deeper into SPA development on the Microsoft identity platform in the first part of a scenario series:
> [!div class="nextstepaction"]
> [Scenario: Single-page application](scenario-spa-overview.md)
active-directory Userinfo https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/userinfo.md
# Microsoft identity platform UserInfo endpoint
-Part of the OpenID Connect (OIDC) standard, the [UserInfo endpoint](https://openid.net/specs/openid-connect-core-1_0.html#UserInfo) is returns information about an authenticated user. In the Microsoft identity platform, the UserInfo endpoint is hosted by Microsoft Graph at https://graph.microsoft.com/oidc/userinfo.
+As part of the OpenID Connect (OIDC) standard, the [UserInfo endpoint](https://openid.net/specs/openid-connect-core-1_0.html#UserInfo) returns information about an authenticated user. In the Microsoft identity platform, the UserInfo endpoint is hosted by Microsoft Graph at https://graph.microsoft.com/oidc/userinfo.
## Find the .well-known configuration endpoint

You can find the UserInfo endpoint programmatically by reading the `userinfo_endpoint` field of the OpenID configuration document at `https://login.microsoftonline.com/common/v2.0/.well-known/openid-configuration`. We don't recommend hard-coding the UserInfo endpoint in your applications. Instead, use the OIDC configuration document to find the endpoint at runtime.
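A minimal sketch of that discovery step follows; the helper name is illustrative, and the sample configuration document is trimmed to the single field this example reads:

```javascript
// Read the UserInfo endpoint from a parsed OIDC configuration document
// instead of hard-coding it.
function getUserInfoEndpoint(openIdConfiguration) {
  const endpoint = openIdConfiguration.userinfo_endpoint;
  if (!endpoint) {
    throw new Error("OIDC configuration document has no userinfo_endpoint field");
  }
  return endpoint;
}

// In a real app, fetch this document at runtime from
// https://login.microsoftonline.com/common/v2.0/.well-known/openid-configuration
const sampleConfig = { userinfo_endpoint: "https://graph.microsoft.com/oidc/userinfo" };
console.log(getUserInfoEndpoint(sampleConfig)); // "https://graph.microsoft.com/oidc/userinfo"
```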
-The UserInfo endpoint is typically called automatically by [OIDC-compliant libraries](https://openid.net/developers/certified/) to get information about the user.From the [list of claims identified in the OIDC standard](https://openid.net/specs/openid-connect-core-1_0.html#StandardClaims), the Microsoft identity platform produces the name claims, subject claim, and email when available and consented to.
+The UserInfo endpoint is typically called automatically by [OIDC-compliant libraries](https://openid.net/developers/certified/) to get information about the user. From the [list of claims identified in the OIDC standard](https://openid.net/specs/openid-connect-core-1_0.html#StandardClaims), the Microsoft identity platform produces the name claims, subject claim, and email when available and consented to.
## Consider using an ID token instead
If you require more details about the user like manager or job title, call the [
## Calling the UserInfo endpoint
-UserInfo is a standard OAuth bearer token API hosted by Microsoft Graph. Call the UserInfo endpoint as you would any Microsoft Graph API by using the access token your application received when it requested access to Microsoft Graph. The UserInfo endpoint returns a JSON response containing claims about the user.
+UserInfo is a standard OAuth bearer token API hosted by Microsoft Graph. Call the UserInfo endpoint as you would call any Microsoft Graph API by using the access token your application received when it requested access to Microsoft Graph. The UserInfo endpoint returns a JSON response containing claims about the user.
### Permissions
Authorization: Bearer eyJ0eXAiOiJKV1QiLCJub25jZSI6Il…
```jsonc
{
  "sub": "OLu859SGc2Sr9ZsqbkG-QbeLgJlb41KcdiPoLYNpSFA",
- "name": "Mikah Ollenburg", // names all require the "profile" scope.
+ "name": "Mikah Ollenburg", // all names require the "profile" scope.
  "family_name": " Ollenburg",
  "given_name": "Mikah",
  "picture": "https://graph.microsoft.com/v1.0/me/photo/$value",
- "email": "mikoll@contoso.com" //requires the "email" scope.
+ "email": "mikoll@contoso.com" // requires the "email" scope.
}
```
active-directory Five Steps To Full Application Integration With Azure Ad https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/five-steps-to-full-application-integration-with-azure-ad.md
Title: Five steps for integrating all your apps with Azure AD description: This guide explains how to integrate all your applications with Azure AD. In each step, we explain the value and provide links to resources that will explain the technical details. --++ Last updated 08/05/2020- # Five steps for integrating all your apps with Azure AD
active-directory Resilience App Development Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/resilience-app-development-overview.md
--++ Last updated 11/23/2020
active-directory Resilience Client App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/resilience-client-app.md
--++ Last updated 11/23/2020
active-directory Resilience Daemon App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/resilience-daemon-app.md
--++ Last updated 11/23/2020
When a request times out applications should not retry immediately. Implement an
- [Build resilience into applications that sign-in users](resilience-client-app.md)
- [Build resilience in your identity and access management infrastructure](resilience-in-infrastructure.md)
-- [Build resilience in your CIAM systems](resilience-b2c.md)
+- [Build resilience in your CIAM systems](resilience-b2c.md)
active-directory Whats New Archive https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/whats-new-archive.md
For more information about how to better secure your organization by using autom
In September 2021, we added the following 44 new applications in our App gallery with Federation support:
-[Studybugs](https://studybugs.com/signin), [Yello](https://yello.co/yello-for-microsoft-teams/), [LawVu](../saas-apps/lawvu-tutorial.md), [Formate eVo Mail](https://www.document-genetics.co.uk/formate-evo-erp-output-management), [Revenue Grid](https://app.revenuegrid.com/login), [Orbit for Office 365](https://azuremarketplace.microsoft.com/marketplace/apps/aad.orbitforoffice365?tab=overview), [Upmarket](https://app.upmarket.ai/), [Alinto Protect](https://protect.alinto.net/), [Cloud Concinnity](https://cloudconcinnity.com/), [Matlantis](https://matlantis.com/), [ModelGen for Visio (MG4V)](https://crecy.com.au/model-gen/), [NetRef: Classroom Management](https://oauth.net-ref.com/microsoft/sso), [VergeSense](../saas-apps/vergesense-tutorial.md), [iAuditor](../saas-apps/iauditor-tutorial.md), [Secutraq](https://secutraq.net/login), [Active and Thriving](../saas-apps/active-and-thriving-tutorial.md), [Inova](https://login.microsoftonline.com/organizations/oauth2/v2.0/authorize?client_id=1bacdba3-7a3b-410b-8753-5cc0b8125f81&response_type=code&redirect_uri=https:%2f%2fbroker.partneringplace.com%2fpartner-companion%2f&code_challenge_method=S256&code_challenge=YZabcdefghijklmanopqrstuvwxyz0123456789._-~&scope=1bacdba3-7a3b-410b-8753-5cc0b8125f81/.default), [TerraTrue](../saas-apps/terratrue-tutorial.md), [Beyond Identity Admin Console](../saas-apps/beyond-identity-admin-console-tutorial.md), [Visult](https://visult.app), [ENGAGE TAG](https://app.engagetag.com/), [Appaegis Isolation Access Cloud](../saas-apps/appaegis-isolation-access-cloud-tutorial.md), [CrowdStrike Falcon Platform](../saas-apps/crowdstrike-falcon-platform-tutorial.md), [MY Emergency Control](https://my-emergency.co.uk/app/auth/login), [AlexisHR](../saas-apps/alexishr-tutorial.md), [Teachme Biz](../saas-apps/teachme-biz-tutorial.md), [Zero Networks](../saas-apps/zero-networks-tutorial.md), [Mavim iMprove](https://improve.mavimcloud.com/), [Azumuta](https://app.azumuta.com/login?microsoft=true), 
[Frankli](https://beta.frankli.io/login), [Amazon Managed Grafana](../saas-apps/amazon-managed-grafana-tutorial.md), [Productive](../saas-apps/productive-tutorial.md), [Create!Webフロー](../saas-apps/createweb-tutorial.md), [Evercate](https://evercate.com/us/sign-up/), [Ezra Coaching](../saas-apps/ezra-coaching-tutorial.md), [Baldwin Safety and Compliance](../saas-apps/baldwin-safety-&-compliance-tutorial.md), [Nulab Pass (Backlog,Cacoo,Typetalk)](../saas-apps/nulab-pass-tutorial.md), [Metatask](../saas-apps/metatask-tutorial.md), [Contrast Security](../saas-apps/contrast-security-tutorial.md), [Animaker](../saas-apps/animaker-tutorial.md), [Traction Guest](../saas-apps/traction-guest-tutorial.md), [True Office Learning - LIO](../saas-apps/true-office-learning-lio-tutorial.md), [Qiita Team](../saas-apps/qiita-team-tutorial.md)
+[Studybugs](https://studybugs.com/signin), [Yello](https://yello.co/yello-for-microsoft-teams/), [LawVu](../saas-apps/lawvu-tutorial.md), [Formate eVo Mail](https://www.document-genetics.co.uk/formate-evo-erp-output-management), [Revenue Grid](https://app.revenuegrid.com/login), [Orbit for Office 365](https://azuremarketplace.microsoft.com/marketplace/apps/aad.orbitforoffice365?tab=overview), [Upmarket](https://app.upmarket.ai/), [Alinto Protect](https://protect.alinto.net/), [Cloud Concinnity](https://cloudconcinnity.com/), [Matlantis](https://matlantis.com/), [ModelGen for Visio (MG4V)](https://crecy.com.au/model-gen/), [NetRef: Classroom Management](https://oauth.net-ref.com/microsoft/sso), [VergeSense](../saas-apps/vergesense-tutorial.md), [iAuditor](../saas-apps/iauditor-tutorial.md), [Secutraq](https://secutraq.net/login), [Active and Thriving](../saas-apps/active-and-thriving-tutorial.md), [Inova](https://login.microsoftonline.com/organizations/oauth2/v2.0/authorize?client_id=1bacdba3-7a3b-410b-8753-5cc0b8125f81&response_type=code&redirect_uri=https:%2f%2fbroker.partneringplace.com%2fpartner-companion%2f&code_challenge_method=S256&code_challenge=YZabcdefghijklmanopqrstuvwxyz0123456789._-~&scope=1bacdba3-7a3b-410b-8753-5cc0b8125f81/.default), [TerraTrue](../saas-apps/terratrue-tutorial.md), [Beyond Identity Admin Console](../saas-apps/beyond-identity-admin-console-tutorial.md), [Visult](https://visult.app), [ENGAGE TAG](https://app.engagetag.com/), [Appaegis Isolation Access Cloud](../saas-apps/appaegis-isolation-access-cloud-tutorial.md), [CrowdStrike Falcon Platform](../saas-apps/crowdstrike-falcon-platform-tutorial.md), [MY Emergency Control](https://my-emergency.co.uk/app/auth/login), [AlexisHR](../saas-apps/alexishr-tutorial.md), [Teachme Biz](../saas-apps/teachme-biz-tutorial.md), [Zero Networks](../saas-apps/zero-networks-tutorial.md), [Mavim iMprove](https://improve.mavimcloud.com/), [Azumuta](https://app.azumuta.com/login?microsoft=true), 
[Frankli](https://beta.frankli.io/login), [Amazon Managed Grafana](../saas-apps/amazon-managed-grafana-tutorial.md), [Productive](../saas-apps/productive-tutorial.md), [Create!Webフロー](../saas-apps/createweb-tutorial.md), [Evercate](https://evercate.com/), [Ezra Coaching](../saas-apps/ezra-coaching-tutorial.md), [Baldwin Safety and Compliance](../saas-apps/baldwin-safety-&-compliance-tutorial.md), [Nulab Pass (Backlog,Cacoo,Typetalk)](../saas-apps/nulab-pass-tutorial.md), [Metatask](../saas-apps/metatask-tutorial.md), [Contrast Security](../saas-apps/contrast-security-tutorial.md), [Animaker](../saas-apps/animaker-tutorial.md), [Traction Guest](../saas-apps/traction-guest-tutorial.md), [True Office Learning - LIO](../saas-apps/true-office-learning-lio-tutorial.md), [Qiita Team](../saas-apps/qiita-team-tutorial.md)
You can also find the documentation of all the applications here: https://aka.ms/AppsTutorial
active-directory E2open Cm Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/e2open-cm-tutorial.md
+
+ Title: 'Tutorial: Azure AD SSO integration with e2open CM-Global'
+description: Learn how to configure single sign-on between Azure Active Directory and e2open CM-Global.
++++++++ Last updated : 10/12/2022++++
+# Tutorial: Azure AD SSO integration with e2open CM-Global
+
+In this tutorial, you'll learn how to integrate e2open CM-Global with Azure Active Directory (Azure AD). When you integrate e2open CM-Global with Azure AD, you can:
+
+* Control in Azure AD who has access to e2open CM-Global.
+* Enable your users to be automatically signed-in to e2open CM-Global with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
+
+## Prerequisites
+
+To get started, you need the following items:
+
+* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+* e2open CM-Global single sign-on (SSO) enabled subscription.
+* Along with Cloud Application Administrator, Application Administrator can also add or manage applications in Azure AD.
+For more information, see [Azure built-in roles](../roles/permissions-reference.md).
+
+## Scenario description
+
+In this tutorial, you configure and test Azure AD SSO in a test environment.
+
+* e2open CM-Global supports **SP** initiated SSO.
+
+## Add e2open CM-Global from the gallery
+
+To configure the integration of e2open CM-Global into Azure AD, you need to add e2open CM-Global from the gallery to your list of managed SaaS apps.
+
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
+1. On the left navigation pane, select the **Azure Active Directory** service.
+1. Navigate to **Enterprise Applications** and then select **All Applications**.
+1. To add a new application, select **New application**.
+1. In the **Add from the gallery** section, type **e2open CM-Global** in the search box.
+1. Select **e2open CM-Global** from the results panel and then add the app. Wait a few seconds while the app is added to your tenant.
+
+ Alternatively, you can also use the [Enterprise App Configuration Wizard](https://portal.office.com/AdminPortal/home?Q=Docs#/azureadappintegration). In this wizard, you can add an application to your tenant, add users/groups to the app, assign roles, and walk through the SSO configuration as well. [Learn more about Microsoft 365 wizards.](/microsoft-365/admin/misc/azure-ad-setup-guides)
+
+## Configure and test Azure AD SSO for e2open CM-Global
+
+Configure and test Azure AD SSO with e2open CM-Global using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in e2open CM-Global.
+
+To configure and test Azure AD SSO with e2open CM-Global, perform the following steps:
+
+1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
+ 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
+ 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
+1. **[Configure e2open CM-Global SSO](#configure-e2open-cm-global-sso)** - to configure the single sign-on settings on the application side.
+ 1. **[Create e2open CM-Global test user](#create-e2open-cm-global-test-user)** - to have a counterpart of B.Simon in e2open CM-Global that is linked to the Azure AD representation of the user.
+1. **[Test SSO](#test-sso)** - to verify whether the configuration works.
+
+## Configure Azure AD SSO
+
+Follow these steps to enable Azure AD SSO in the Azure portal.
+
+1. In the Azure portal, on the **e2open CM-Global** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
+
+ ![Screenshot shows how to edit Basic SAML Configuration.](common/edit-urls.png "Basic Configuration")
+
+1. On the **Basic SAML Configuration** section, perform the following steps:
+
+ a. In the **Identifier** textbox, type a URL using the following pattern:
+ `http://pingone.com/<cmglobalCustomGUID>`
+
+ b. In the **Reply URL** textbox, type the URL:
+ `https://sso.connect.pingidentity.com/sso/sp/ACS.saml2`
+
+ c. In the **Sign-on URL** text box, type a URL using the following pattern:
+ `https://sso.connect.pingidentity.com/sso/sp/initsso?saasid=<saasid>&idpid=<idpid>`
+
+ > [!Note]
+ > These values are not real. Update these values with the actual Identifier and Sign on URL. Contact [e2open CM-Global support team](mailto:customersupport@e2open.com) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
+
+1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Federation Metadata XML** and select **Download** to download the certificate and save it on your computer.
+
+ ![Screenshot shows the Certificate download link.](common/metadataxml.png "Certificate")
+
+1. On the **Set up e2open CM-Global** section, copy the appropriate URL(s) based on your requirement.
+
+ ![Screenshot shows how to copy configuration appropriate URL.](common/copy-configuration-urls.png "Attributes")
+
+### Create an Azure AD test user
+
+In this section, you'll create a test user in the Azure portal called B.Simon.
+
+1. From the left pane in the Azure portal, select **Azure Active Directory**, select **Users**, and then select **All users**.
+1. Select **New user** at the top of the screen.
+1. In the **User** properties, follow these steps:
+ 1. In the **Name** field, enter `B.Simon`.
+ 1. In the **User name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`.
+ 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box.
+ 1. Click **Create**.
+
+### Assign the Azure AD test user
+
+In this section, you'll enable B.Simon to use Azure single sign-on by granting access to e2open CM-Global.
+
+1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
+1. In the applications list, select **e2open CM-Global**.
+1. In the app's overview page, find the **Manage** section and select **Users and groups**.
+1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
+1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
+1. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you see "Default Access" role selected.
+1. In the **Add Assignment** dialog, click the **Assign** button.
+
+## Configure e2open CM-Global SSO
+
+To configure single sign-on on the **e2open CM-Global** side, you need to send the downloaded **Federation Metadata XML** and the appropriate copied URLs from the Azure portal to the [e2open CM-Global support team](mailto:customersupport@e2open.com). The support team uses these values to configure the SAML SSO connection properly on both sides.
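The **Federation Metadata XML** you send carries, among other things, the tenant's entity ID and the token-signing certificate. As an illustration only (not part of the official setup steps), this Python sketch extracts both from a trimmed, hypothetical metadata file using the standard SAML metadata namespaces:

```python
import xml.etree.ElementTree as ET

# Trimmed, hypothetical example of what a Federation Metadata XML file
# contains; a real file has full endpoint and certificate data.
metadata = """\
<EntityDescriptor xmlns="urn:oasis:names:tc:SAML:2.0:metadata"
                  entityID="https://sts.windows.net/contoso-tenant-id/">
  <IDPSSODescriptor protocolSupportEnumeration="urn:oasis:names:tc:SAML:2.0:protocol">
    <KeyDescriptor use="signing">
      <KeyInfo xmlns="http://www.w3.org/2000/09/xmldsig#">
        <X509Data><X509Certificate>MIIC...base64...</X509Certificate></X509Data>
      </KeyInfo>
    </KeyDescriptor>
  </IDPSSODescriptor>
</EntityDescriptor>
"""

# Standard SAML metadata and XML Signature namespaces.
ns = {
    "md": "urn:oasis:names:tc:SAML:2.0:metadata",
    "ds": "http://www.w3.org/2000/09/xmldsig#",
}

root = ET.fromstring(metadata)
entity_id = root.attrib["entityID"]

# Find the signing KeyDescriptor, then its base64-encoded certificate.
key_descriptor = root.find(".//md:KeyDescriptor[@use='signing']", ns)
cert = key_descriptor.find(".//ds:X509Certificate", ns).text

print(entity_id)
```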
+
+### Create e2open CM-Global test user
+
+In this section, you create a user called Britta Simon in e2open CM-Global. Work with [e2open CM-Global support team](mailto:customersupport@e2open.com) to add the users in the e2open CM-Global platform. Users must be created and activated before you use single sign-on.
+
+## Test SSO
+
+In this section, you test your Azure AD single sign-on configuration with the following options.
+
+* Click **Test this application** in the Azure portal. This will redirect you to the e2open CM-Global Sign-on URL, where you can initiate the login flow.
+
+* Go to e2open CM-Global Sign-on URL directly and initiate the login flow from there.
+
+* You can use Microsoft My Apps. When you click the e2open CM-Global tile in My Apps, you'll be redirected to the e2open CM-Global Sign-on URL. For more information about My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md).
+
+## Next steps
+
+Once you configure e2open CM-Global you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
active-directory Fuse Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/fuse-tutorial.md
Title: 'Tutorial: Azure Active Directory integration with Fuse | Microsoft Docs'
+ Title: Azure Active Directory integration with Fuse
description: Learn how to configure single sign-on between Azure Active Directory and Fuse.
- Previously updated : 06/03/2021+ Last updated : 10/19/2022
-# Tutorial: Azure Active Directory integration with Fuse
+# Azure Active Directory integration with Fuse
-In this tutorial, you'll learn how to integrate Fuse with Azure Active Directory (Azure AD). When you integrate Fuse with Azure AD, you can:
+In this article, you'll learn how to integrate Fuse with Azure Active Directory (Azure AD). Fuse is a learning platform that enables learners within an organization to access the necessary knowledge and expertise they need to improve their skills at work. When you integrate Fuse with Azure AD, you can:
-* Control in Azure AD who has access to Fuse.
-* Enable your users to be automatically signed-in to Fuse with their Azure AD accounts.
-* Manage your accounts in one central location - the Azure portal.
+- Control in Azure AD who has access to Fuse.
+- Enable your users to be automatically signed-in to Fuse with their Azure AD accounts.
+- Manage your accounts in one central location - the Azure portal.
-## Prerequisites
-
-To get started, you need the following items:
+You'll configure and test Azure AD single sign-on for Fuse in a test environment. Fuse supports **SP** initiated single sign-on.
-* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
-* Fuse single sign-on (SSO) enabled subscription.
-
-## Scenario description
+> [!NOTE]
+> The Identifier of this application is a fixed string value, so only one instance can be configured in one tenant.
-In this tutorial, you configure and test Azure AD single sign-on in a test environment.
-* Fuse supports **SP** initiated SSO.
+## Prerequisites
-> [!NOTE]
-> Identifier of this application is a fixed string value so only one instance can be configured in one tenant.
+To integrate Azure Active Directory with Fuse, you need:
-## Add Fuse from the gallery
+- An Azure AD user account with an active subscription. If you don't already have one, you can [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+- One of the following roles: Global Administrator, Cloud Application Administrator, Application Administrator, or owner of the service principal.
+- Fuse single sign-on (SSO) enabled subscription.
-To configure the integration of Fuse into Azure AD, you need to add Fuse from the gallery to your list of managed SaaS apps.
+## Add application and assign a test user
-1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
-1. On the left navigation pane, select the **Azure Active Directory** service.
-1. Navigate to **Enterprise Applications** and then select **All Applications**.
-1. To add new application, select **New application**.
-1. In the **Add from the gallery** section, type **Fuse** in the search box.
-1. Select **Fuse** from results panel and then add the app. Wait a few seconds while the app is added to your tenant.
+Before you begin the process of configuring single sign-on, you need to add the Fuse application from the Azure AD gallery. You need a test user account to assign to the application and test the single sign-on configuration.
- Alternatively, you can also use the [Enterprise App Configuration Wizard](https://portal.office.com/AdminPortal/home?Q=Docs#/azureadappintegration). In this wizard, you can add an application to your tenant, add users/groups to the app, assign roles, as well as walk through the SSO configuration as well. [Learn more about Microsoft 365 wizards.](/microsoft-365/admin/misc/azure-ad-setup-guides)
+### Add Fuse from the Azure AD gallery
-## Configure and test Azure AD SSO for Fuse
+Add Fuse from the Azure AD application gallery to configure single sign-on with Fuse. For more information on how to add an application from the gallery, see the [Quickstart: Add application from the gallery](../manage-apps/add-application-portal.md).
-Configure and test Azure AD SSO with Fuse using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in Fuse.
+### Create and assign Azure AD test user
-To configure and test Azure AD SSO with Fuse, perform the following steps:
+Follow the guidelines in the [create and assign a user account](../manage-apps/add-application-portal-assign-users.md) article to create a test user account in the Azure portal called B.Simon.
-1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
- 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
- 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
-1. **[Configure Fuse SSO](#configure-fuse-sso)** - to configure the single sign-on settings on application side.
- 1. **[Create Fuse test user](#create-fuse-test-user)** - to have a counterpart of B.Simon in Fuse that is linked to the Azure AD representation of user.
-1. **[Test SSO](#test-sso)** - to verify whether the configuration works.
+Alternatively, you can also use the [Enterprise App Configuration Wizard](https://portal.office.com/AdminPortal/home?Q=Docs#/azureadappintegration). In this wizard, you can add an application to your tenant, add users/groups to the app, and assign roles. The wizard also provides a link to the single sign-on configuration pane in the Azure portal. [Learn more about Microsoft 365 wizards](/microsoft-365/admin/misc/azure-ad-setup-guides).
-## Configure Azure AD SSO
+## Configure Azure AD single sign-on
-Follow these steps to enable Azure AD SSO in the Azure portal.
+Complete the following steps to enable Azure AD single sign-on in the Azure portal.
1. In the Azure portal, on the **Fuse** application integration page, find the **Manage** section and select **single sign-on**. 1. On the **Select a single sign-on method** page, select **SAML**.
-1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
+1. On the **Set up single sign-on with SAML** page, select the pencil icon for **Basic SAML Configuration** to edit the settings.
![Edit Basic SAML Configuration](common/edit-urls.png)
-4. On the **Basic SAML Configuration** section, perform the following step:
-
- In the **Sign-on URL** text box, type a URL using the following pattern:
+1. On the **Basic SAML Configuration** section, in the **Sign-on URL** text box, type the appropriate URL using the following pattern:
`https://{tenantname}.fuseuniversal.com/` > [!NOTE] > The value is not real. Update the value with the actual Sign-On URL. Contact [Fuse Client support team](mailto:support@fusion-universal.com) to get the value. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
-5. On the **Set up Single Sign-On with SAML** page, in the **SAML Signing Certificate** section, click **Download** to download the **Certificate (Base64)** from the given options as per your requirement and save it on your computer.
+1. On the **Set up Single Sign-On with SAML** page, in the **SAML Signing Certificate** section, select **Download** to download the **Certificate (Base64)** from the given options as per your requirement and save it on your computer.
![The Certificate download link](common/certificatebase64.png)
-6. On the **Set up Fuse** section, copy the appropriate URL(s) as per your requirement.
+1. On the **Set up Fuse** section, copy the appropriate URL(s) as per your requirement.
![Copy configuration URLs](common/copy-configuration-urls.png)
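As an illustration, the tenant-specific Sign-on URL can be checked against the documented pattern before you save it in the **Basic SAML Configuration** section. This is a minimal Python sketch, with `contoso` as a hypothetical tenant name:

```python
import re

# Hypothetical tenant name; replace with your organization's actual Fuse tenant.
tenant_name = "contoso"
sign_on_url = f"https://{tenant_name}.fuseuniversal.com/"

# Sanity-check the URL against the documented pattern
# https://{tenantname}.fuseuniversal.com/ before using it.
pattern = re.compile(r"^https://[a-z0-9-]+\.fuseuniversal\.com/$")
assert pattern.match(sign_on_url), "URL doesn't match the documented pattern"

print(sign_on_url)
```

The real Sign-on URL still comes from the Fuse support team; this check only catches obvious typos in the tenant name.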
-### Create an Azure AD test user
-
-In this section, you'll create a test user in the Azure portal called B.Simon.
-
-1. From the left pane in the Azure portal, select **Azure Active Directory**, select **Users**, and then select **All users**.
-1. Select **New user** at the top of the screen.
-1. In the **User** properties, follow these steps:
- 1. In the **Name** field, enter `B.Simon`.
- 1. In the **User name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`.
- 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box.
- 1. Click **Create**.
-
-### Assign the Azure AD test user
-
-In this section, you'll enable B.Simon to use Azure single sign-on by granting access to Fuse.
-
-1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
-1. In the applications list, select **Fuse**.
-1. In the app's overview page, find the **Manage** section and select **Users and groups**.
-1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
-1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
-1. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you see "Default Access" role selected.
-1. In the **Add Assignment** dialog, click the **Assign** button.
-
-## Configure Fuse SSO
+## Configure Fuse single sign-on
-To configure single sign-on on **Fuse** side, you need to send the downloaded **Certificate (Base64)** and appropriate copied URLs from Azure portal to [Fuse support team](mailto:support@fusion-universal.com). They set this setting to have the SAML SSO connection set properly on both sides.
+To configure single sign-on on the **Fuse** side, send the downloaded **Certificate (Base64)** and the copied URLs from the Azure portal to the [Fuse support team](mailto:support@fusion-universal.com). The support team will use these values to configure single sign-on on the application side.
### Create Fuse test user
-In this section, you create a user called Britta Simon in Fuse. Work with [Fuse support team](mailto:support@fusion-universal.com) to add the users in the Fuse platform. Users must be created and activated before you use single sign-on.
+To be able to test and use single sign-on, you have to create and activate users in the Fuse application.
-## Test SSO
+In this section, you create a user called Britta Simon in Fuse that corresponds with the Azure AD user you already created in the previous section. Work with [Fuse support team](mailto:support@fusion-universal.com) to add the user in the Fuse platform.
-In this section, you test your Azure AD single sign-on configuration with following options.
+## Test single sign-on
-* Click on **Test this application** in Azure portal. This will redirect to Fuse Sign-on URL where you can initiate the login flow.
+In this section, you test your Azure AD single sign-on configuration with the following options.
-* Go to Fuse Sign-on URL directly and initiate the login flow from there.
+- In the **Test single sign-on with Fuse** section on the **SAML-based Sign-on** pane, select **Test this application** in Azure portal. You'll be redirected to Fuse Sign-on URL where you can initiate the sign-in flow.
+- Go to Fuse Sign-on URL directly and initiate the sign-in flow from the application's side.
+- You can use Microsoft My Apps. When you select the Fuse tile in My Apps, you'll be redirected to the Fuse Sign-on URL. For more information about My Apps, see [Introduction to the My Apps](https://support.microsoft.com/account-billing/sign-in-and-start-apps-from-the-my-apps-portal-2f3b1bae-0e5a-4a86-a33e-876fbd2a4510).
-* You can use Microsoft My Apps. When you click the Fuse tile in the My Apps, this will redirect to Fuse Sign-on URL. For more information about the My Apps, see [Introduction to the My Apps](https://support.microsoft.com/account-billing/sign-in-and-start-apps-from-the-my-apps-portal-2f3b1bae-0e5a-4a86-a33e-876fbd2a4510).
+## Additional resources
+- [What is single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)
+- [Plan a single sign-on deployment](../manage-apps/plan-sso-deployment.md)
## Next steps Once you configure Fuse you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Defender for Cloud Apps](/cloud-app-security/proxy-deployment-aad).
active-directory Optimizely Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/optimizely-tutorial.md
Title: 'Tutorial: Azure Active Directory integration with Optimizely | Microsoft Docs'
+ Title: 'Tutorial: Azure AD SSO integration with Optimizely'
description: Learn how to configure single sign-on between Azure Active Directory and Optimizely.
Previously updated : 05/24/2021 Last updated : 10/19/2022
-# Tutorial: Azure Active Directory integration with Optimizely
+# Tutorial: Azure AD SSO integration with Optimizely
In this tutorial, you'll learn how to integrate Optimizely with Azure Active Directory (Azure AD). When you integrate Optimizely with Azure AD, you can:
Follow these steps to enable Azure AD SSO in the Azure portal.
`urn:auth0:optimizely:contoso` > [!NOTE]
- > These values are not the real. You will update the value with the actual Sign-on URL and Identifier, which is explained later in the tutorial. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
+	> These values are not real. You will update these values with the actual Sign-on URL and Identifier, which are explained later in the tutorial. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
5. Your Optimizely application expects the SAML assertions in a specific format, which requires you to add custom attribute mappings to your SAML token attributes configuration. The following screenshot shows the list of default attributes. Click the **Edit** icon to open the **User Attributes** dialog.
In this section, you'll enable B.Simon to use Azure single sign-on by granting a
## Configure Optimizely SSO
-1. To configure single sign-on on **Optimizely** side, contact your Optimizely Account Manager and provide the downloaded **Certificate (Base64)** and appropriate copied URLs.
-
-2. In response to your email, Optimizely provides you with the Sign On URL (SP-initiated SSO) and the Identifier (Service Provider Entity ID) values.
-
- a. Copy the **SP-initiated SSO URL** provided by Optimizely, and paste into the **Sign On URL** textbox in **Basic SAML Configuration** section on Azure portal.
-
- b. Copy the **Service Provider Entity ID** provided by Optimizely, and paste into the **Identifier** textbox in **Basic SAML Configuration** section on Azure portal.
-
-3. In a different browser window, sign-on to your Optimizely application.
-
-4. Click you account name in the top right corner and then **Account Settings**.
-
- ![Screenshot that shows the account name selected in the top-right corner, with "Account Settings" selected from the menu.](./media/optimizely-tutorial/settings.png)
-
-5. In the Account tab, check the box **Enable SSO** under Single Sign On in the **Overview** section.
-
- ![Azure AD Single Sign-On](./media/optimizely-tutorial/account.png)
-
-6. Click **Save**.
+To configure single sign-on on the Optimizely side, contact your Optimizely Customer Success Manager or [file an online ticket for Optimizely Experimentation Support](https://support.optimizely.com/hc/articles/4410284179469-File-online-tickets-for-support) directly.
### Create Optimizely test user
-In this section, you create a user called Britta Simon in Optimizely.
-
-1. On the home page, select **Collaborators** tab.
-
-2. To add new collaborator to the project, click **New Collaborator**.
-
- ![Screenshot that shows the Optimizely home page with the "Collaborators" tab and "New Collaborator" button selected.](./media/optimizely-tutorial/collaborator.png)
-
-3. Fill in the email address and assign them a role. Click **Invite**.
-
- ![Creating an Azure AD test user](./media/optimizely-tutorial/invite-collaborator.png)
-
-4. They receive an email invite. Using the email address, they have to log in to Optimizely.
+Contact your Optimizely Customer Success Manager or [file an online ticket for Optimizely Experimentation Support](https://support.optimizely.com/hc/articles/4410284179469-File-online-tickets-for-support) directly to add the users in the Optimizely platform. Users must be created and activated before you use single sign-on.
## Test SSO
active-directory Servicenow Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/servicenow-provisioning-tutorial.md
Title: 'Tutorial: Configure ServiceNow for automatic user provisioning with Azure Active Directory | Microsoft Docs'
+ Title: Configure ServiceNow for automatic user provisioning with Azure Active Directory
description: Learn how to automatically provision and deprovision user accounts from Azure AD to ServiceNow.
- Previously updated : 05/10/2021+ Last updated : 10/19/2022
-# Tutorial: Configure ServiceNow for automatic user provisioning
+# Configure ServiceNow for automatic user provisioning
-This tutorial describes the steps that you perform in both ServiceNow and Azure Active Directory (Azure AD) to configure automatic user provisioning. When Azure AD is configured, it automatically provisions and deprovisions users and groups to [ServiceNow](https://www.servicenow.com/) by using the Azure AD provisioning service.
-
-For important details on what this service does, how it works, and frequently asked questions, see [Automate user provisioning and deprovisioning to SaaS applications with Azure Active Directory](../app-provisioning/user-provisioning.md).
+This article describes the steps that you'll take in both ServiceNow and Azure Active Directory (Azure AD) to configure automatic user provisioning. When Azure AD is configured, it automatically provisions and deprovisions users and groups to [ServiceNow](https://www.servicenow.com/) by using the Azure AD provisioning service.
+For more information on the Azure AD automatic user provisioning service, see [Automate user provisioning and deprovisioning to SaaS applications with Azure Active Directory](../app-provisioning/user-provisioning.md).
## Capabilities supported+ > [!div class="checklist"]
-> * Create users in ServiceNow
-> * Remove users in ServiceNow when they don't need access anymore
-> * Keep user attributes synchronized between Azure AD and ServiceNow
-> * Provision groups and group memberships in ServiceNow
-> * Allow [single sign-on](servicenow-tutorial.md) to ServiceNow (recommended)
+> - Create users in ServiceNow
+> - Remove users in ServiceNow when they don't need access anymore
+> - Keep user attributes synchronized between Azure AD and ServiceNow
+> - Provision groups and group memberships in ServiceNow
+> - Allow [single sign-on](servicenow-tutorial.md) to ServiceNow (recommended)
## Prerequisites
-The scenario outlined in this tutorial assumes that you already have the following prerequisites:
-
-* [An Azure AD tenant](../develop/quickstart-create-new-tenant.md)
-* A user account in Azure AD with [permission](../roles/permissions-reference.md) to configure provisioning (Application Administrator, Cloud Application Administrator, Application Owner, or Global Administrator)
-* A [ServiceNow instance](https://www.servicenow.com/) of Calgary or higher
-* A [ServiceNow Express instance](https://www.servicenow.com/) of Helsinki or higher
-* A user account in ServiceNow with the admin role
+- An Azure AD user account with an active subscription. If you don't already have one, you can [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+- One of the following roles: Global Administrator, Cloud Application Administrator, Application Administrator, or owner of the service principal.
+- A [ServiceNow instance](https://www.servicenow.com/) of Calgary or higher
+- A [ServiceNow Express instance](https://www.servicenow.com/) of Helsinki or higher
+- A user account in ServiceNow with the admin role
## Step 1: Plan your provisioning deployment
-1. Learn about [how the provisioning service works](../app-provisioning/user-provisioning.md).
-2. Determine who will be in [scope for provisioning](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-3. Determine what data to [map between Azure AD and ServiceNow](../app-provisioning/customize-application-attributes.md).
+
+- Learn about [how the provisioning service works](../app-provisioning/user-provisioning.md).
+- Determine who will be in [scope for provisioning](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+- Determine what data to [map between Azure AD and ServiceNow](../app-provisioning/customize-application-attributes.md).
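To make the mapping step concrete, here is an illustrative Python sketch of a flat attribute mapping from Azure AD user attributes onto ServiceNow user fields. The field names below are examples for planning purposes only, not the authoritative defaults; the actual mappings are defined in the portal's **Attribute-Mapping** section:

```python
# Illustrative Azure AD -> ServiceNow attribute mapping (example names only;
# the real mappings come from the portal's Attribute-Mapping section).
attribute_mapping = {
    "userPrincipalName": "user_name",  # typically the matching attribute
    "mail": "email",
    "givenName": "first_name",
    "surname": "last_name",
    "department": "department",
}

def map_user(azure_ad_user: dict) -> dict:
    """Project an Azure AD user record onto ServiceNow field names."""
    return {
        snow_field: azure_ad_user[ad_attr]
        for ad_attr, snow_field in attribute_mapping.items()
        if ad_attr in azure_ad_user
    }

user = {
    "userPrincipalName": "b.simon@contoso.com",
    "givenName": "Britta",
    "surname": "Simon",
    "department": "Sales",
}
print(map_user(user))
```

Sketching the mapping this way during planning makes it easy to spot attributes with no source value before you configure the real mappings.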
## Step 2: Configure ServiceNow to support provisioning with Azure AD
The scenario outlined in this tutorial assumes that you already have the followi
![Screenshot that shows a ServiceNow instance.](media/servicenow-provisioning-tutorial/servicenow-instance.png)
-2. Obtain credentials for an admin in ServiceNow. Go to the user profile in ServiceNow and verify that the user has the admin role.
+1. Obtain credentials for an admin in ServiceNow. Go to the user profile in ServiceNow and verify that the user has the admin role.
![Screenshot that shows a ServiceNow admin role.](media/servicenow-provisioning-tutorial/servicenow-admin-role.png)
Add ServiceNow from the Azure AD application gallery to start managing provision
The Azure AD provisioning service allows you to scope who will be provisioned based on assignment to the application, or based on attributes of the user or group. If you choose to scope who will be provisioned to your app based on assignment, you can use the [steps to assign users and groups to the application](../manage-apps/assign-user-or-group-access-portal.md). If you choose to scope who will be provisioned based solely on attributes of the user or group, you can [use a scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-Keep these tips in mind:
+Keep the following tips in mind:
-* When you're assigning users and groups to ServiceNow, you must select a role other than Default Access. Users with the Default Access role are excluded from provisioning and will be marked as not effectively entitled in the provisioning logs. If the only role available on the application is the Default Access role, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add more roles.
+- When you're assigning users and groups to ServiceNow, you must select a role other than Default Access. Users with the Default Access role are excluded from provisioning and will be marked as not effectively entitled in the provisioning logs. If the only role available on the application is the Default Access role, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add more roles.
-* If you need additional roles, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add new roles.
+- If you need additional roles, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add new roles.
## Step 5: Configure automatic user provisioning to ServiceNow
To configure automatic user provisioning for ServiceNow in Azure AD:
1. Sign in to the [Azure portal](https://portal.azure.com). Select **Enterprise applications** > **All applications**.
- ![Screenshot that shows the Enterprise applications pane.](common/enterprise-applications.png)
-
-2. In the list of applications, select **ServiceNow**.
-
- ![Screenshot that shows a list of applications.](common/all-applications.png)
+ ![Screenshot that shows the Enterprise applications pane.](common/enterprise-applications.png)
-3. Select the **Provisioning** tab.
+1. In the list of applications, select **ServiceNow**.
- ![Screenshot of the Manage options with the Provisioning option called out.](common/provisioning.png)
+1. Select the **Provisioning** tab.
-4. Set **Provisioning Mode** to **Automatic**.
+1. Set **Provisioning Mode** to **Automatic**.
- ![Screenshot of the Provisioning Mode drop-down list with the Automatic option called out.](common/provisioning-automatic.png)
-
-5. In the **Admin Credentials** section, enter your ServiceNow admin credentials and username. Select **Test Connection** to ensure that Azure AD can connect to ServiceNow. If the connection fails, ensure that your ServiceNow account has admin permissions and try again.
+1. In the **Admin Credentials** section, enter your ServiceNow admin credentials and username. Select **Test Connection** to ensure that Azure AD can connect to ServiceNow. If the connection fails, ensure that your ServiceNow account has admin permissions and try again.
![Screenshot that shows the Service Provisioning page, where you can enter admin credentials.](./media/servicenow-provisioning-tutorial/servicenow-provisioning.png)
-6. In the **Notification Email** box, enter the email address of a person or group that should receive the provisioning error notifications. Then select the **Send an email notification when a failure occurs** check box.
-
- ![Screenshot that shows boxes for notification email.](common/provisioning-notification-email.png)
+1. In the **Notification Email** box, enter the email address of a person or group that should receive the provisioning error notifications. Then select the **Send an email notification when a failure occurs** check box.
-7. Select **Save**.
+1. Select **Save**.
-8. In the **Mappings** section, select **Synchronize Azure Active Directory Users to ServiceNow**.
+1. In the **Mappings** section, select **Synchronize Azure Active Directory Users to ServiceNow**.
-9. Review the user attributes that are synchronized from Azure AD to ServiceNow in the **Attribute-Mapping** section. The attributes selected as **Matching** properties are used to match the user accounts in ServiceNow for update operations.
+1. Review the user attributes that are synchronized from Azure AD to ServiceNow in the **Attribute-Mapping** section. The attributes selected as **Matching** properties are used to match the user accounts in ServiceNow for update operations.
If you choose to change the [matching target attribute](../app-provisioning/customize-application-attributes.md), you'll need to ensure that the ServiceNow API supports filtering users based on that attribute. Select the **Save** button to commit any changes.
-10. In the **Mappings** section, select **Synchronize Azure Active Directory Groups to ServiceNow**.
-
-11. Review the group attributes that are synchronized from Azure AD to ServiceNow in the **Attribute-Mapping** section. The attributes selected as **Matching** properties are used to match the groups in ServiceNow for update operations. Select the **Save** button to commit any changes.
+1. In the **Mappings** section, select **Synchronize Azure Active Directory Groups to ServiceNow**.
-12. To configure scoping filters, see the instructions in the [Scoping filter tutorial](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+1. Review the group attributes that are synchronized from Azure AD to ServiceNow in the **Attribute-Mapping** section. The attributes selected as **Matching** properties are used to match the groups in ServiceNow for update operations. Select the **Save** button to commit any changes.
-13. To enable the Azure AD provisioning service for ServiceNow, change **Provisioning Status** to **On** in the **Settings** section.
+1. To configure scoping filters, see the instructions in the [Scoping filter tutorial](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
- ![Screenshot that shows Provisioning Status switched on.](common/provisioning-toggle-on.png)
+1. To enable the Azure AD provisioning service for ServiceNow, change **Provisioning Status** to **On** in the **Settings** section.
-14. Define the users and groups that you want to provision to ServiceNow by choosing the desired values in **Scope** in the **Settings** section.
+1. Define the users and groups that you want to provision to ServiceNow by choosing the desired values in **Scope** in the **Settings** section.
![Screenshot that shows choices for provisioning scope.](common/provisioning-scope.png)
-15. When you're ready to provision, select **Save**.
-
- ![Screenshot of the button for saving a provisioning configuration.](common/provisioning-configuration-save.png)
+1. When you're ready to provision, select **Save**.
This operation starts the initial synchronization cycle of all users and groups defined in **Scope** in the **Settings** section. The initial cycle takes longer to perform than subsequent cycles. Subsequent cycles occur about every 40 minutes, as long as the Azure AD provisioning service is running.

## Step 6: Monitor your deployment

After you've configured provisioning, use the following resources to monitor your deployment:

- Use the [provisioning logs](../reports-monitoring/concept-provisioning-logs.md) to determine which users have been provisioned successfully or unsuccessfully.
- If the provisioning configuration seems to be in an unhealthy state, the application will go into quarantine. [Learn more about quarantine states](../app-provisioning/application-provisioning-quarantine-status.md).

## Troubleshooting tips
-* When you're provisioning certain attributes (such as **Department** and **Location**) in ServiceNow, the values must already exist in a reference table in ServiceNow. If they don't, you'll get an **InvalidLookupReference** error.
+
+- When you're provisioning certain attributes (such as **Department** and **Location**) in ServiceNow, the values must already exist in a reference table in ServiceNow. If they don't, you'll get an **InvalidLookupReference** error.
For example, you might have two locations (Seattle, Los Angeles) and three departments (Sales, Finance, Marketing) in a certain table in ServiceNow. If you try to provision a user whose department is "Sales" and whose location is "Seattle," that user will be provisioned successfully. If you try to provision a user whose department is "Sales" and whose location is "LA," the user won't be provisioned. The location "LA" must be added to the reference table in ServiceNow, or the user attribute in Azure AD must be updated to match the format in ServiceNow.
-* If you get an **EntryJoiningPropertyValueIsMissing** error, review your [attribute mappings](../app-provisioning/customize-application-attributes.md) to identify the matching attribute. This value must be present on the user or group you're trying to provision.
-* To understand any requirements or limitations (for example, the format to specify a country code for a user), review the [ServiceNow SOAP API](https://docs.servicenow.com/bundle/rome-application-development/page/integrate/web-services-apis/reference/r_DirectWebServiceAPIFunctions.html).
-* Provisioning requests are sent by default to https://{your-instance-name}.service-now.com/{table-name}. If you need a custom tenant URL, you can provide the entire URL as the instance name.
-* The **ServiceNowInstanceInvalid** error indicates a problem communicating with the ServiceNow instance. Here's the text of the error:
+- If you get an **EntryJoiningPropertyValueIsMissing** error, review your [attribute mappings](../app-provisioning/customize-application-attributes.md) to identify the matching attribute. This value must be present on the user or group you're trying to provision.
+- To understand any requirements or limitations (for example, the format to specify a country code for a user), review the [ServiceNow SOAP API](https://docs.servicenow.com/bundle/rome-application-development/page/integrate/web-services-apis/reference/r_DirectWebServiceAPIFunctions.html).
+- Provisioning requests are sent by default to `https://{your-instance-name}.service-now.com/{table-name}`. If you need a custom tenant URL, you can provide the entire URL as the instance name.
+- The **ServiceNowInstanceInvalid** error indicates a problem communicating with the ServiceNow instance. Here's the text of the error:
`Details: Your ServiceNow instance name appears to be invalid. Please provide a current ServiceNow administrative user name and password along with the name of a valid ServiceNow instance.`
![Screenshot that shows the option for authorizing SOAP requests.](media/servicenow-provisioning-tutorial/servicenow-webservice.png)
- If you still can't resolve your problem, contact ServiceNow support and ask them to turn on SOAP debugging to help troubleshoot.
+ If you still can't resolve your problem, contact ServiceNow support, and ask them to turn on SOAP debugging to help troubleshoot.
-* The Azure AD provisioning service currently operates under particular [IP ranges](../app-provisioning/use-scim-to-provision-users-and-groups.md#ip-ranges). If necessary, you can restrict other IP ranges and add these particular IP ranges to the allow list of your application. That technique will allow traffic flow from the Azure AD provisioning service to your application.
+- The Azure AD provisioning service currently operates under particular [IP ranges](../app-provisioning/use-scim-to-provision-users-and-groups.md#ip-ranges). If necessary, you can restrict other IP ranges and add these particular IP ranges to the allowlist of your application. That technique will allow traffic flow from the Azure AD provisioning service to your application.
-* Self-hosted ServiceNow instances are not supported.
+- Self-hosted ServiceNow instances aren't supported.
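The **InvalidLookupReference** behavior described in the first troubleshooting tip can be sketched as a pre-check. This is a minimal illustration only: the table contents, attribute names, and the `find_invalid_lookups` helper are hypothetical and not part of the ServiceNow API.

```python
# Hypothetical pre-check for the InvalidLookupReference scenario: values such as
# Department and Location must already exist in a ServiceNow reference table
# before the user can be provisioned.

REFERENCE_TABLES = {
    "department": {"Sales", "Finance", "Marketing"},
    "location": {"Seattle", "Los Angeles"},
}

def find_invalid_lookups(user):
    """Return the attributes whose values are missing from the reference tables."""
    return [
        attr
        for attr, valid in REFERENCE_TABLES.items()
        if attr in user and user[attr] not in valid
    ]

# "LA" isn't in the reference table, so this user would fail with
# InvalidLookupReference until the table or the Azure AD attribute is fixed.
print(find_invalid_lookups({"department": "Sales", "location": "LA"}))       # ['location']
print(find_invalid_lookups({"department": "Sales", "location": "Seattle"}))  # []
```

The same check explains the fix: either add "LA" to the ServiceNow reference table, or update the Azure AD attribute value to match the format in ServiceNow.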
## Additional resources
-* [Managing user account provisioning for enterprise apps](../app-provisioning/configure-automatic-user-provisioning-portal.md)
-* [What are application access and single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)
+- [Managing user account provisioning for enterprise apps](../app-provisioning/configure-automatic-user-provisioning-portal.md)
+- [What is single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)
## Next steps
-* [Learn how to review logs and get reports on provisioning activity](../app-provisioning/check-status-user-account-provisioning.md)
+- [Learn how to review logs and get reports on provisioning activity](../app-provisioning/check-status-user-account-provisioning.md)
active-directory Snowflake Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/snowflake-provisioning-tutorial.md
The scenario outlined in this tutorial assumes that you already have the followi
* [An Azure AD tenant](../develop/quickstart-create-new-tenant.md)
* A user account in Azure AD with [permission](../roles/permissions-reference.md) to configure provisioning (Application Administrator, Cloud Application Administrator, Application Owner, or Global Administrator)
* [A Snowflake tenant](https://www.Snowflake.com/pricing/)
-* A user account in Snowflake with admin permissions
+* At least one user in Snowflake with the **ACCOUNTADMIN** role.
## Step 1: Plan your provisioning deployment
Before you configure Snowflake for automatic user provisioning with Azure AD, yo
select system$generate_scim_access_token('AAD_PROVISIONING');
```
-2. Use the ACCOUNTADMIN role.
+1. Use the ACCOUNTADMIN role.
![Screenshot of a worksheet in the Snowflake UI with the SCIM access token called out.](media/Snowflake-provisioning-tutorial/step-2.png)
-3. Create the custom role AAD_PROVISIONER. All users and roles in Snowflake created by Azure AD will be owned by the scoped down AAD_PROVISIONER role.
+1. Create the custom role AAD_PROVISIONER. All users and roles in Snowflake created by Azure AD will be owned by the scoped-down AAD_PROVISIONER role.
![Screenshot showing the custom role.](media/Snowflake-provisioning-tutorial/step-3.png)
-4. Let the ACCOUNTADMIN role create the security integration using the AAD_PROVISIONER custom role.
+1. Let the ACCOUNTADMIN role create the security integration using the AAD_PROVISIONER custom role.
![Screenshot showing the security integrations.](media/Snowflake-provisioning-tutorial/step-4.png)
-5. Create and copy the authorization token to the clipboard and store securely for later use. Use this token for each SCIM REST API request and place it in the request header. The access token expires after six months and a new access token can be generated with this statement.
+1. Create and copy the authorization token to the clipboard, and store it securely for later use. Use this token for each SCIM REST API request, and place it in the request header. The access token expires after six months, and a new access token can be generated with this statement.
![Screenshot showing the token generation.](media/Snowflake-provisioning-tutorial/step-5.png)
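The token handling described in the last step can be sketched as follows. This is a minimal sketch, assuming the common Bearer scheme for the SCIM request header; `scim_request_headers` is a hypothetical helper, and you should confirm the exact header format against the Snowflake SCIM documentation.

```python
# Sketch: place the token from system$generate_scim_access_token in the
# request header of each SCIM REST API call (assumes a Bearer scheme).

def scim_request_headers(access_token):
    """Build the headers a SCIM REST API request would carry."""
    return {
        "Authorization": f"Bearer {access_token}",
        "Content-Type": "application/scim+json",
    }

# The placeholder stands in for the long token string Snowflake returns.
headers = scim_request_headers("<token-from-snowflake>")
print(headers["Authorization"])  # Bearer <token-from-snowflake>
```

Because the token expires after six months, any such helper should read the token from secure storage rather than hard-coding it.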
To configure automatic user provisioning for Snowflake in Azure AD:
1. Sign in to the [Azure portal](https://portal.azure.com). Select **Enterprise applications** > **All applications**.
- ![Screenshot that shows the Enterprise applications pane.](common/enterprise-applications.png)
+ ![Screenshot that shows the Enterprise applications pane.](common/enterprise-applications.png)
-2. In the list of applications, select **Snowflake**.
+1. In the list of applications, select **Snowflake**.
- ![Screenshot that shows a list of applications.](common/all-applications.png)
+ ![Screenshot that shows a list of applications.](common/all-applications.png)
-3. Select the **Provisioning** tab.
+1. Select the **Provisioning** tab.
- ![Screenshot of the Manage options with the Provisioning option called out.](common/provisioning.png)
+ ![Screenshot of the Manage options with the Provisioning option called out.](common/provisioning.png)
-4. Set **Provisioning Mode** to **Automatic**.
+1. Set **Provisioning Mode** to **Automatic**.
- ![Screenshot of the Provisioning Mode drop-down list with the Automatic option called out.](common/provisioning-automatic.png)
+ ![Screenshot of the Provisioning Mode drop-down list with the Automatic option called out.](common/provisioning-automatic.png)
-5. In the **Admin Credentials** section, enter the SCIM 2.0 base URL and authentication token that you retrieved earlier in the **Tenant URL** and **Secret Token** boxes, respectively.
+1. In the **Admin Credentials** section, enter the SCIM 2.0 base URL and authentication token that you retrieved earlier in the **Tenant URL** and **Secret Token** boxes, respectively.
+ >[!NOTE]
+ >The Snowflake SCIM endpoint consists of the Snowflake account URL appended with `/scim/v2/`. For example, if your Snowflake account name is `acme` and your Snowflake account is in the `east-us-2` Azure region, the **Tenant URL** value is `https://acme.east-us-2.azure.snowflakecomputing.com/scim/v2`.
Select **Test Connection** to ensure that Azure AD can connect to Snowflake. If the connection fails, ensure that your Snowflake account has admin permissions and try again.
- ![Screenshot that shows boxes for tenant URL and secret token, along with the Test Connection button.](common/provisioning-testconnection-tenanturltoken.png)
+ ![Screenshot that shows boxes for tenant URL and secret token, along with the Test Connection button.](common/provisioning-testconnection-tenanturltoken.png)
-6. In the **Notification Email** box, enter the email address of a person or group who should receive the provisioning error notifications. Then select the **Send an email notification when a failure occurs** check box.
+1. In the **Notification Email** box, enter the email address of a person or group who should receive the provisioning error notifications. Then select the **Send an email notification when a failure occurs** check box.
- ![Screenshot that shows boxes for notification email.](common/provisioning-notification-email.png)
+ ![Screenshot that shows boxes for notification email.](common/provisioning-notification-email.png)
-7. Select **Save**.
+1. Select **Save**.
-8. In the **Mappings** section, select **Synchronize Azure Active Directory Users to Snowflake**.
+1. In the **Mappings** section, select **Synchronize Azure Active Directory Users to Snowflake**.
-9. Review the user attributes that are synchronized from Azure AD to Snowflake in the **Attribute Mapping** section. The attributes selected as **Matching** properties are used to match the user accounts in Snowflake for update operations. Select the **Save** button to commit any changes.
+1. Review the user attributes that are synchronized from Azure AD to Snowflake in the **Attribute Mapping** section. The attributes selected as **Matching** properties are used to match the user accounts in Snowflake for update operations. Select the **Save** button to commit any changes.
|Attribute|Type|
|---|---|
|userName|String|
|name.givenName|String|
|name.familyName|String|
- |urn:ietf:params:scim:schemas:extension:enterprise:2.0:User:defaultRole|String|
- |urn:ietf:params:scim:schemas:extension:enterprise:2.0:User:defaultWarehouse|String|
+ |externalId|String|
-10. In the **Mappings** section, select **Synchronize Azure Active Directory Groups to Snowflake**.
+ >[!NOTE]
+ >Snowflake supported custom extension user attributes during SCIM provisioning:
+ >* DEFAULT_ROLE
+ >* DEFAULT_WAREHOUSE
+ >* DEFAULT_SECONDARY_ROLES
+>* Setting the Snowflake NAME and LOGIN_NAME fields to different values
-11. Review the group attributes that are synchronized from Azure AD to Snowflake in the **Attribute Mapping** section. The attributes selected as **Matching** properties are used to match the groups in Snowflake for update operations. Select the **Save** button to commit any changes.
+ > How to set up Snowflake custom extension attributes in Azure AD SCIM user provisioning is explained [here](https://community.snowflake.com/s/article/HowTo-How-to-Set-up-Snowflake-Custom-Attributes-in-Azure-AD-SCIM-for-Default-Roles-and-Default-Warehouses).
+
+1. In the **Mappings** section, select **Synchronize Azure Active Directory Groups to Snowflake**.
+
+1. Review the group attributes that are synchronized from Azure AD to Snowflake in the **Attribute Mapping** section. The attributes selected as **Matching** properties are used to match the groups in Snowflake for update operations. Select the **Save** button to commit any changes.
|Attribute|Type|
|---|---|
|displayName|String|
|members|Reference|
-12. To configure scoping filters, see the instructions in the [Scoping filter tutorial](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+1. To configure scoping filters, see the instructions in the [Scoping filter tutorial](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-13. To enable the Azure AD provisioning service for Snowflake, change **Provisioning Status** to **On** in the **Settings** section.
+1. To enable the Azure AD provisioning service for Snowflake, change **Provisioning Status** to **On** in the **Settings** section.
- ![Screenshot that shows Provisioning Status switched on.](common/provisioning-toggle-on.png)
+ ![Screenshot that shows Provisioning Status switched on.](common/provisioning-toggle-on.png)
-14. Define the users and groups that you want to provision to Snowflake by choosing the desired values in **Scope** in the **Settings** section.
+1. Define the users and groups that you want to provision to Snowflake by choosing the desired values in **Scope** in the **Settings** section.
If this option is not available, configure the required fields under **Admin Credentials**, select **Save**, and refresh the page.
- ![Screenshot that shows choices for provisioning scope.](common/provisioning-scope.png)
+ ![Screenshot that shows choices for provisioning scope.](common/provisioning-scope.png)
-15. When you're ready to provision, select **Save**.
+1. When you're ready to provision, select **Save**.
- ![Screenshot of the button for saving a provisioning configuration.](common/provisioning-configuration-save.png)
+ ![Screenshot of the button for saving a provisioning configuration.](common/provisioning-configuration-save.png)
This operation starts the initial synchronization of all users and groups defined in **Scope** in the **Settings** section. The initial sync takes longer to perform than subsequent syncs. Subsequent syncs occur about every 40 minutes, as long as the Azure AD provisioning service is running.
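The **Tenant URL** format from the note in the Admin Credentials step can be sketched as a small helper. The `snowflake_scim_tenant_url` function is hypothetical, and the pattern shown is the Azure-region form given in the note; accounts in other clouds may use a different host format.

```python
def snowflake_scim_tenant_url(account, region):
    """Build the SCIM 2.0 base (tenant) URL for a Snowflake account in an Azure region."""
    return f"https://{account}.{region}.azure.snowflakecomputing.com/scim/v2"

# Example values from the note: account "acme" in the "east-us-2" Azure region.
print(snowflake_scim_tenant_url("acme", "east-us-2"))
# https://acme.east-us-2.azure.snowflakecomputing.com/scim/v2
```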
active-directory Decentralized Identifier Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/decentralized-identifier-overview.md
editor:
Previously updated : 06/02/2022 Last updated : 08/20/2022
active-directory How To Dnsbind https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/how-to-dnsbind.md
[!INCLUDE [Verifiable Credentials announcement](../../../includes/verifiable-credentials-brand.md)]

## Prerequisites

To link your DID to your domain, you need to have completed the following.
active-directory How To Opt Out https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/how-to-opt-out.md
In this article:
- What happens to your data?
- Effect on existing verifiable credentials.

## Prerequisites

- Complete verifiable credentials onboarding.
active-directory How To Register Didwebsite https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/how-to-register-didwebsite.md
[!INCLUDE [Verifiable Credentials announcement](../../../includes/verifiable-credentials-brand.md)]

## Prerequisites

- Complete verifiable credentials onboarding with Web as the selected trust system.
active-directory How To Use Quickstart Idtoken https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/how-to-use-quickstart-idtoken.md
[!INCLUDE [Verifiable Credentials announcement](../../../includes/verifiable-credentials-brand.md)]

A [rules definition](rules-and-display-definitions-model.md#rulesmodel-type) that uses the [idTokens attestation](rules-and-display-definitions-model.md#idtokenattestation-type) produces an issuance flow where you're required to do an interactive sign-in to an OpenID Connect (OIDC) identity provider in Microsoft Authenticator. Claims in the ID token that the identity provider returns can be used to populate the issued verifiable credential. The claims mapping section in the rules definition specifies which claims are used.

## Create a custom credential with the idTokens attestation type
active-directory How To Use Quickstart Selfissued https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/how-to-use-quickstart-selfissued.md
[!INCLUDE [Verifiable Credentials announcement](../../../includes/verifiable-credentials-brand.md)]

A [rules definition](rules-and-display-definitions-model.md#rulesmodel-type) that uses the [selfIssued attestation](rules-and-display-definitions-model.md#selfissuedattestation-type) type produces an issuance flow where you're required to manually enter values for the claims in Microsoft Authenticator.

## Create a custom credential with the selfIssued attestation type
active-directory How Use Vcnetwork https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/how-use-vcnetwork.md
[!INCLUDE [Verifiable Credentials announcement](../../../includes/verifiable-credentials-brand.md)]

## Prerequisites

To use the Entra Verified ID Network, you need to have completed the following.
To use the Entra Verified ID Network, you need to have completed the following.
## What is the Entra Verified ID Network?
-In our scenario, Proseware is a verifier. Woodgrove is the issuer. The verifier needs to know Woodgrove's issuer DID and the verifiable credential (VC) type that represents Woodgrove employees before it can create a presentation request for a verified credential for Woodgrove employees. The necessary information may come from some kind of manual exchange between the companies, this approach would be both a manual and a complex. The Entra Verified ID Network makes this process much easier. Woodgrove, as an issuer, can publish credential types to the Entra Verified ID Network and Proseware, as the verifier, can search for published credential types and schemas in the Entra Verified ID Network. Using this information, Woodgrove can create a [presentation request](presentation-request-api.md#presentation-request-payload) and easily invoke the Request Service API.
+In our scenario, Proseware is a verifier. Woodgrove is the issuer. The verifier needs to know Woodgrove's issuer DID and the verifiable credential (VC) type that represents Woodgrove employees before it can create a presentation request for a verified credential for Woodgrove employees. The necessary information might come from some kind of manual exchange between the companies, but this approach would be both manual and complex. The Entra Verified ID Network makes this process much easier. Woodgrove, as an issuer, can publish credential types to the Entra Verified ID Network, and Proseware, as the verifier, can search for published credential types and schemas in the Entra Verified ID Network. Using this information, Proseware can create a [presentation request](presentation-request-api.md#presentation-request-payload) and easily invoke the Request Service API.
-![Diagram of Microsoft DID implementation overview](media/decentralized-identifier-overview/did-overview.png)
## How do I use the Entra Verified ID Network?

1. On the start page of Microsoft Entra Verified ID in the Azure portal, you have a Quickstart named **Verification request**. Selecting **Start** takes you to a page where you can browse the Verifiable Credentials Network.
- ![Screenshot of the Verified ID Network Quickstart](media/how-use-vcnetwork/vcnetwork-quickstart.png)
+ :::image type="content" source="media/how-use-vcnetwork/vcnetwork-quickstart.png" alt-text="Screenshot of the Verified ID Network Quickstart.":::
1. When you select **Select first issuer**, a panel opens on the right side of the screen where you can search for issuers by their linked domains. For example, if you're looking for something from Woodgrove, you just type `woodgrove` in the search box. When you select an issuer in the list, the available credential types appear in the lower part, labeled Step 2. Check the type you want to use, and select the **Add** button to get back to the first screen. If the expected linked domain isn't in the list, the linked domain isn't verified yet. If the list of credentials is empty, the issuer has verified the linked domain but hasn't published any credential types yet.
- ![Screenshot of Verified ID Network Search and select](media/how-use-vcnetwork/vcnetwork-search-select.png)
+ :::image type="content" source="media/how-use-vcnetwork/vcnetwork-search-select.png" alt-text="Screenshot of Verified ID Network Search and select.":::
1. The first screen now shows Woodgrove in the issuer list. The next step is to select the **Review** button.
- ![Verified ID Network list of isuers](media/how-use-vcnetwork/vcnetwork-issuer-list.png)
+ :::image type="content" source="media/how-use-vcnetwork/vcnetwork-issuer-list.png" alt-text="Screenshot of verified ID Network list of issuers.":::
1. The Review screen displays a skeleton presentation request JSON payload for the Request Service API. The important pieces of information are the DID inside the **acceptedIssuers** collection and the **type** value. This information is needed to create a presentation request. The request prompts the user for a credential of a certain type issued by a trusted organization.
- ![Verified ID Network issuers details](media/how-use-vcnetwork/vcnetwork-issuer-details.png)
+ :::image type="content" source="media/how-use-vcnetwork/vcnetwork-issuer-details.png" alt-text="Screenshot of Verified ID Network issuer details.":::
## How do I make my linked domain searchable?
active-directory Introduction To Verifiable Credentials Architecture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/introduction-to-verifiable-credentials-architecture.md
[!INCLUDE [Verifiable Credentials announcement](../../../includes/verifiable-credentials-brand.md)]

It's important to plan your verifiable credential solution so that, in addition to issuing and/or validating credentials, you have a complete view of the architectural and business impacts of your solution. If you haven't reviewed them already, we recommend you review [Introduction to Microsoft Entra Verified ID](decentralized-identifier-overview.md) and the [FAQs](verifiable-credentials-faq.md), and then complete the [Getting Started](get-started-verifiable-credentials.md) tutorial.

This architectural overview introduces the capabilities and components of the Microsoft Entra Verified ID service. For more detailed information on issuance and validation, see
active-directory Plan Issuance Solution https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/plan-issuance-solution.md
[!INCLUDE [Verifiable Credentials announcement](../../../includes/verifiable-credentials-brand.md)]

It's important to plan your issuance solution so that, in addition to issuing credentials, you have a complete view of the architectural and business impacts of your solution. If you haven't done so, we recommend you view the [Microsoft Entra Verified ID architecture overview](introduction-to-verifiable-credentials-architecture.md) for foundational information.

## Scope of guidance
active-directory Plan Verification Solution https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/plan-verification-solution.md
[!INCLUDE [Verifiable Credentials announcement](../../../includes/verifiable-credentials-brand.md)]

The Microsoft Entra Verified ID (Azure AD VC) service enables you to trust proofs of user identity without expanding your trust boundary. With Azure AD VC, you create accounts or federate with another identity provider. When a solution implements a verification exchange using verifiable credentials, it enables applications to request credentials that aren't bound to a specific domain. This approach makes it easier to request and verify credentials at scale.

If you haven't already, we suggest you review the [Microsoft Entra Verified ID architecture overview](introduction-to-verifiable-credentials-architecture.md). You may also want to review [Plan your Microsoft Entra Verified ID issuance solution](plan-issuance-solution.md).
active-directory Verifiable Credentials Configure Issuer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/verifiable-credentials-configure-issuer.md
In this article, you learn how to:
The following diagram illustrates the Microsoft Entra Verified ID architecture and the component you configure.
-![Diagram that illustrates the Azure A D Verifiable Credentials architecture.](media/verifiable-credentials-configure-issuer/verifiable-credentials-architecture.png)
## Prerequisites
The following diagram illustrates the Microsoft Entra Verified ID architecture a
- To clone the repository that hosts the sample app, install [GIT](https://git-scm.com/downloads).
- [Visual Studio Code](https://code.visualstudio.com/Download), or similar code editor.
- [.NET 5.0](https://dotnet.microsoft.com/download/dotnet/5.0).
-- Download [ngrok](https://ngrok.com/) and sign up for a free account. If you can't use `ngrok` in your organization,read this [FAQ](verifiable-credentials-faq.md#i-cannot-use-ngrok-what-do-i-do).
+- Download [ngrok](https://ngrok.com/) and sign up for a free account. If you can't use `ngrok` in your organization, read this [FAQ](verifiable-credentials-faq.md#i-cannot-use-ngrok-what-do-i-do).
- A mobile device with Microsoft Authenticator:
  - Android version 6.2206.3973 or later installed.
  - iOS version 6.6.2 or later installed.
In this step, you create the verified credential expert card by using Microsoft
The following screenshot demonstrates how to create a new credential:
- ![Screenshot that shows how to create a new credential.](media/verifiable-credentials-configure-issuer/how-create-new-credential.png)
+ :::image type="content" source="media/verifiable-credentials-configure-issuer/how-create-new-credential.png" alt-text="Screenshot that shows how to create a new credential.":::
## Gather credentials and environment details
Now that you have a new credential, you're going to gather some information abou
1. In Verifiable Credentials, select **Issue credential**.
- ![Screenshot that shows how to select the newly created verified credential.](media/verifiable-credentials-configure-issuer/issue-credential-custom-view.png)
+ :::image type="content" source="media/verifiable-credentials-configure-issuer/issue-credential-custom-view.png" alt-text="Screenshot that shows how to select the newly created verified credential.":::
1. Copy the **authority**, which is the Decentralized Identifier, and record it for later.
Create a client secret for the registered application that you created. The samp
1. Copy the **Application (client) ID**, and store it for later.
- ![Screenshot that shows how to copy the app registration ID.](media/verifiable-credentials-configure-issuer/copy-app-id.png)
+ :::image type="content" source="media/verifiable-credentials-configure-issuer/copy-app-id.png" alt-text="Screenshot that shows how to copy the app registration ID.":::
1. From the main menu, under **Manage**, select **Certificates & secrets**.
Now you're ready to issue your first verified credential expert card by running
1. Open the HTTPS URL generated by ngrok.
- ![Screenshot that shows how to get the ngrok public URL.](media/verifiable-credentials-configure-issuer/ngrok-url.png)
+ :::image type="content" source="media/verifiable-credentials-configure-issuer/ngrok-url.png" alt-text="Screenshot that shows how to get the ngrok public URL.":::
1. From a web browser, select **Get Credential**.
- ![Screenshot that shows how to choose get the credential from the sample app.](media/verifiable-credentials-configure-issuer/get-credentials.png)
+ :::image type="content" source="media/verifiable-credentials-configure-issuer/get-credentials.png" alt-text="Screenshot that shows how to choose to get the credential from the sample app.":::
1. Using your mobile device, scan the QR code with the Authenticator app. You can also scan the QR code directly from your camera, which will open the Authenticator app for you.
- ![Screenshot that shows how to scan the Q R code.](media/verifiable-credentials-configure-issuer/scan-issuer-qr-code.png)
+ :::image type="content" source="media/verifiable-credentials-configure-issuer/scan-issuer-qr-code.png" alt-text="Screenshot that shows how to scan the QR code.":::
1. At this time, you'll see a message warning that this app or website might be risky. Select **Advanced**.
- ![Screenshot that shows how to respond to the warning message.](media/verifiable-credentials-configure-issuer/at-risk.png)
+ :::image type="content" source="media/verifiable-credentials-configure-issuer/at-risk.png" alt-text="Screenshot that shows how to respond to the warning message.":::
1. At the risky website warning, select **Proceed anyways (unsafe)**. You're seeing this warning because your domain isn't linked to your decentralized identifier (DID). To verify your domain, follow [Link your domain to your decentralized identifier (DID)](how-to-dnsbind.md). For this tutorial, you can skip the domain registration, and select **Proceed anyways (unsafe).**
- ![Screenshot that shows how to proceed with the risky warning.](media/verifiable-credentials-configure-issuer/proceed-anyway.png)
+ :::image type="content" source="media/verifiable-credentials-configure-issuer/proceed-anyway.png" alt-text="Screenshot that shows how to proceed with the risky warning.":::
1. You'll be prompted to enter a PIN code that is displayed on the screen where you scanned the QR code. The PIN adds an extra layer of protection to the issuance. The PIN code is randomly generated every time an issuance QR code is displayed.
- ![Screenshot that shows how to type the pin code.](media/verifiable-credentials-configure-issuer/enter-verification-code.png)
+ :::image type="content" source="media/verifiable-credentials-configure-issuer/enter-verification-code.png" alt-text="Screenshot that shows how to type the pin code.":::
1. After you enter the PIN, the **Add a credential** screen appears. At the top of the screen, you see a **Not verified** message (in red). This warning is related to the domain validation warning mentioned earlier.

1. Select **Add** to accept your new verifiable credential.
- ![Screenshot that shows how to add your new credential.](media/verifiable-credentials-configure-issuer/new-verifiable-credential.png)
+ :::image type="content" source="media/verifiable-credentials-configure-issuer/new-verifiable-credential.png" alt-text="Screenshot that shows how to add your new credential.":::
Congratulations! You now have a verified credential expert verifiable credential.
- ![Screenshot that shows a newly added verifiable credential.](media/verifiable-credentials-configure-issuer/verifiable-credential-has-been-added.png)
+ :::image type="content" source="media/verifiable-credentials-configure-issuer/verifiable-credential-has-been-added.png" alt-text="Screenshot that shows a newly added verifiable credential.":::
Go back to the sample app. It shows you that a credential was successfully issued.
- ![Screenshot that shows a successfully issued verifiable credential.](media/verifiable-credentials-configure-issuer/credentials-issued.png)
-
-## Verify the verified credential expert card
-
-Now you are ready to verify your verified credential expert card by running the sample application again.
-
-1. Hit the back button in your browser to return to the sample app home page.
-
-1. Select **Verify credentials**.
-
- ![Screenshot that shows how to select the verify credential button.](media/verifiable-credentials-configure-issuer/verify-credential.png)
-
-1. Using the authenticator app, scan the QR code, or scan it directly from your mobile camera.
-
-1. When you see the warning message, select **Advanced**. Then select **Proceed anyways (unsafe)**.
-
-1. Approve the presentation request by selecting **Allow**.
-
- ![Screenshot that shows how to approve the verifiable credentials new presentation request.](media/verifiable-credentials-configure-issuer/approve-presentation-request.jpg)
-
-1. After you approve the presentation request, you can see that the request has been approved. You can also check the log. To see the log, select the verifiable credential.
-
- ![Screenshot that shows a verified credential expert card.](media/verifiable-credentials-configure-issuer/verifable-credential-info.png)
-
-1. Then select **Recent Activity**.
-
- ![Screenshot that shows the recent activity button that takes you to the credential history.](media/verifiable-credentials-configure-issuer/verifable-credential-history.jpg)
-
-1. You can now see the recent activities of your verifiable credential.
-
- ![Screenshot that shows the history of the verifiable credential.](media/verifiable-credentials-configure-issuer/verify-credential-history.jpg)
-
-1. Go back to the sample app. It shows you that the presentation of the verifiable credentials was received.
- ![Screenshot that shows that a presentation was received.](media/verifiable-credentials-configure-issuer/verifiable-credential-expert-verification.png)
+ :::image type="content" source="media/verifiable-credentials-configure-issuer/credentials-issued.png" alt-text="Screenshot that shows a successfully issued verifiable credential.":::
## Verifiable credential names
In real scenarios, your application pulls the user details from an identity prov
```csharp
public async Task<ActionResult> issuanceRequest()
{
    ...
    // Here you could change the payload manifest and change the first name and last name.
    payload["claims"]["given_name"] = "Megan";
    payload["claims"]["family_name"] = "Bowen";
```
active-directory Verifiable Credentials Configure Tenant https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/verifiable-credentials-configure-tenant.md
After you create your key vault, Verifiable Credentials generates a set of keys
1. To save the changes, select **Save**.
-### Set access policies for the Verifiable credentials service request service principal
-
-The Verifiable credentials service request is the Request Service API, and it needs access to Key Vault in order to sign issuance and presentation requests.
-
-1. Select **+ Add Access Policy** and select the service principal **Verifiable Credentials Service Request** with AppId **3db474b9-6a0c-4840-96ac-1fceb342124f**.
-
-1. For **Key permissions**, select permissions **Get** and **Sign**.
-
- :::image type="content" source="media/verifiable-credentials-configure-tenant/set-key-vault-sp-access-policy.png" alt-text="screenshot of key vault granting access to a security principal":::
-
-1. To save the changes, select **Add**.
## Set up Verified ID

To set up Verified ID, follow these steps:
>[!IMPORTANT]
> The only way to change the trust system is to opt out of the Verified ID service and redo the onboarding.

1. Select **Save and get started**.

   :::image type="content" source="media/verifiable-credentials-configure-tenant/verifiable-credentials-getting-started.png" alt-text="Screenshot that shows how to set up Verifiable Credentials.":::
+### Set access policies for the Verified ID service principals
+
+When you set up Verified ID in the previous step, the access policies in Azure Key Vault are automatically updated to give service principals for Verified ID the required permissions.
+If you ever need to reset the permissions manually, the access policy should look like the following table.
+
+| Service Principal | AppId | Key Permissions |
+| -- | -- | -- |
+| Verifiable Credentials Service | bb2a64ee-5d29-4b07-a491-25806dc854d3 | Get, Sign |
+| Verifiable Credentials Service Request | 3db474b9-6a0c-4840-96ac-1fceb342124f | Sign |
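
If you do re-apply a policy manually, the Azure CLI call can be sketched as follows. The command is only printed here (dry run): remove the `echo`, sign in with `az login`, and substitute your own vault name to apply it. `$VAULT_NAME` is a placeholder, not a value from this article, and this sketch assumes the vault uses access policies rather than Azure RBAC.

```shell
# Dry-run sketch: print the az CLI command that grants the Verifiable
# Credentials Service Request principal the Sign key permission.
# Remove `echo` and run `az login` first to actually apply it.
VAULT_NAME=myvault   # placeholder vault name
echo az keyvault set-policy \
  --name "$VAULT_NAME" \
  --spn 3db474b9-6a0c-4840-96ac-1fceb342124f \
  --key-permissions sign
```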
## Register an application in Azure AD

Your application needs to get access tokens when it wants to call into Microsoft Entra Verified ID so it can issue or verify credentials. To get access tokens, you have to register an application and grant API permission for the Verified ID Request Service. For example, use the following steps for a web application:
active-directory Verifiable Credentials Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/verifiable-credentials-faq.md
This page contains commonly asked questions about Verifiable Credentials and Dec
- [Conceptual questions about decentralized identity](#conceptual-questions)
- [Questions about using Verifiable Credentials preview](#using-the-preview)

## The basics

### What is a DID?
aks Aks Resource Health https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/aks-resource-health.md
- Title: Check for Resource Health events impacting your AKS cluster (Preview)
-description: Check the health status of your AKS cluster using Azure Resource Health.
--- Previously updated : 08/18/2020---
-# Check for Resource Health events impacting your AKS cluster (Preview)
--
-When running your container workloads on AKS, you want to ensure you can troubleshoot and fix problems as soon as they arise to minimize the impact on the availability of your workloads. [Azure Resource Health](../service-health/resource-health-overview.md) gives you visibility into various health events that may cause your AKS cluster to be unavailable.
--
-## Open Resource Health
-
-### To access Resource Health for your AKS cluster:
- Navigate to your AKS cluster in the [Azure portal](https://portal.azure.com).
- Select **Resource Health** in the left navigation.
-### To access Resource Health for all clusters on your subscription:
- Search for **Service Health** in the [Azure portal](https://portal.azure.com) and navigate to it.
- Select **Resource health** in the left navigation.
- Select your subscription and set the resource type to Azure Kubernetes Service (AKS).
-![Screenshot shows the Resource health for your A K S clusters.](./media/aks-resource-health/resource-health-check.png)
-
-## Check the health status
-
-Azure Resource Health helps you diagnose and get support for service problems that affect your Azure resources. Resource Health reports on the current and past health of your resources and helps you determine if the problem is caused by a user-initiated action or a platform event.
-
-Resource Health receives signals for your managed cluster to determine the cluster's health state. It examines the health state of your AKS cluster and reports actions required for each health signal. These signals range from auto-resolving issues, planned updates, and unplanned health events, to unavailability caused by user-initiated actions. These signals are classified using Azure Resource Health's health statuses: *Available*, *Unavailable*, *Unknown*, and *Degraded*.
- **Available**: When there are no known issues affecting your cluster's health, Resource Health reports your cluster as *Available*.
- **Unavailable**: When there is a platform or non-platform event affecting your cluster's health, Resource Health reports your cluster as *Unavailable*.
- **Unknown**: When there is a temporary connection loss to your cluster's health metrics, Resource Health reports your cluster as *Unknown*.
- **Degraded**: When there is a health issue requiring your action, Resource Health reports your cluster as *Degraded*.
-Note that the Resource Health for an AKS cluster is different from the Resource Health of its individual resources (*virtual machines, scale set instances, load balancers, and so on*).
-For additional details on what each health status indicates, visit [Resource Health overview](../service-health/resource-health-overview.md#health-status).
-
-### View historical data
-
-You can also view the past 30 days of historical Resource Health information in the **Health history** section.
-
-## Next steps
-
-Run checks on your cluster to further troubleshoot cluster issues by using [AKS Diagnostics](./concepts-diagnostics.md).
aks Concepts Diagnostics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/concepts-diagnostics.md
- Title: Azure Kubernetes Service (AKS) Diagnostics Overview
-description: Learn about self-diagnosing clusters in Azure Kubernetes Service.
--- Previously updated : 03/29/2021---
-# Azure Kubernetes Service Diagnostics (preview) overview
-
-Troubleshooting Azure Kubernetes Service (AKS) cluster issues plays an important role in maintaining your cluster, especially if your cluster is running mission-critical workloads. AKS Diagnostics is an intelligent, self-diagnostic experience that:
-* Helps you identify and resolve problems in your cluster.
-* Is cloud-native.
-* Requires no extra configuration or billing cost.
-
-This feature is now in public preview.
-
-## Open AKS Diagnostics
-
-To access AKS Diagnostics:
-
-1. Navigate to your Kubernetes cluster in the [Azure portal](https://portal.azure.com).
-1. Click on **Diagnose and solve problems** in the left navigation, which opens AKS Diagnostics.
-1. Choose a category that best describes the issue of your cluster, like _Cluster Node Issues_, by:
- * Using the keywords in the homepage tile.
- * Typing a keyword that best describes your issue in the search bar.
-
-![Homepage](./media/concepts-diagnostics/aks-diagnostics-homepage.png)
-
-## View a diagnostic report
-
-After you click on a category, you can view a diagnostic report specific to your cluster. Diagnostic reports intelligently call out any issues in your cluster with status icons. You can drill down on each topic by clicking **More Info** to see a detailed description of:
-* Issues
-* Recommended actions
-* Links to helpful docs
-* Related metrics
-* Logging data
-
-Diagnostic reports are generated based on the current state of your cluster after various checks run. They can be useful for pinpointing the problem in your cluster and understanding the next steps to resolve the issue.
-
-![Diagnostic Report](./media/concepts-diagnostics/diagnostic-report.png)
-
-![Expanded Diagnostic Report](./media/concepts-diagnostics/node-issues.png)
-
-## Cluster Insights
-
-The following diagnostic checks are available in **Cluster Insights**.
-
-### Cluster Node Issues
-
-Cluster Node Issues checks for node-related issues that cause your cluster to behave unexpectedly.
- Node readiness issues
- Node failures
- Insufficient resources
- Node missing IP configuration
- Node CNI failures
- Node not found
- Node power off
- Node authentication failure
- Node kube-proxy stale
-### Create, read, update & delete (CRUD) operations
-
-CRUD Operations checks for any CRUD operations that cause issues in your cluster.
- In-use subnet delete operation error
- Network security group delete operation error
- In-use route table delete operation error
- Referenced resource provisioning error
- Public IP address delete operation error
- Deployment failure due to deployment quota
- Operation error due to organization policy
- Missing subscription registration
- VM extension provisioning error
- Subnet capacity
- Quota exceeded error
-### Identity and security management
-
-Identity and Security Management detects authentication and authorization errors that prevent communication to your cluster.
- Node authorization failures
- 401 errors
- 403 errors
-## Next steps
-
-* Collect logs to help you further troubleshoot your cluster issues by using [AKS Periscope](https://aka.ms/aksperiscope).
-
-* Read the [triage practices section](/azure/architecture/operator-guides/aks/aks-triage-practices) of the AKS day-2 operations guide.
-
-* Post your questions or feedback at [UserVoice](https://feedback.azure.com/d365community/forum/aabe212a-f724-ec11-b6e6-000d3a4f0da0) by adding "[Diag]" in the title.
aks Integrations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/integrations.md
Azure Kubernetes Service (AKS) provides additional, supported functionality for
## Add-ons
-Add-ons are a fully supported way to provide extra capabilities for your AKS cluster. Add-ons' installation, configuration, and lifecycle is managed by AKS. Use `az aks addon` to install an add-on or manage the add-ons for your cluster.
+Add-ons are a fully supported way to provide extra capabilities for your AKS cluster. Add-ons' installation, configuration, and lifecycle is managed by AKS. Use `az aks enable-addons` to install an add-on or manage the add-ons for your cluster.
The following rules are used by AKS for applying updates to installed add-ons:
The below table shows a few examples of open-source and third-party integrations
[azure/k8s-artifact-substitute]: https://github.com/Azure/k8s-artifact-substitute [azure/aks-create-action]: https://github.com/Azure/aks-create-action [azure/aks-github-runner]: https://github.com/Azure/aks-github-runner
-[github-actions-aks]: kubernetes-action.md
+[github-actions-aks]: kubernetes-action.md
aks Monitor Aks Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/monitor-aks-reference.md
The following table lists the platform metrics collected for AKS. Follow each l
|Metric Type | Resource Provider / Type Namespace<br/> and link to individual metrics | |-|--| | Managed clusters | [Microsoft.ContainerService/managedClusters](../azure-monitor/essentials/metrics-supported.md#microsoftcontainerservicemanagedclusters)
-| Connected clusters | [microsoft.kubernetes/connectedClusters](../azure-monitor/essentials/metrics-supported.md#microsoftkubernetesconnectedclusters)
+| Connected clusters | [microsoft.kubernetes/connectedClusters](../azure-monitor/essentials/metrics-supported.md)
| Virtual machines| [Microsoft.Compute/virtualMachines](../azure-monitor/essentials/metrics-supported.md#microsoftcomputevirtualmachines) | | Virtual machine scale sets | [Microsoft.Compute/virtualMachineScaleSets](../azure-monitor/essentials/metrics-supported.md#microsoftcomputevirtualmachinescalesets)| | Virtual machine scale sets virtual machines | [Microsoft.Compute/virtualMachineScaleSets/virtualMachines](../azure-monitor/essentials/metrics-supported.md#microsoftcomputevirtualmachinescalesetsvirtualmachines)|
aks Node Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/node-access.md
- Title: Connect to Azure Kubernetes Service (AKS) cluster nodes
-description: Learn how to connect to Azure Kubernetes Service (AKS) cluster nodes for troubleshooting and maintenance tasks.
-- Previously updated : 02/25/2022---
-#Customer intent: As a cluster operator, I want to learn how to connect to virtual machines in an AKS cluster to perform maintenance or troubleshoot a problem.
--
-# Connect to Azure Kubernetes Service (AKS) cluster nodes for maintenance or troubleshooting
-
-Throughout the lifecycle of your Azure Kubernetes Service (AKS) cluster, you may need to access an AKS node. This access could be for maintenance, log collection, or other troubleshooting operations. You can access AKS nodes using SSH, including Windows Server nodes. You can also [connect to Windows Server nodes using remote desktop protocol (RDP) connections][aks-windows-rdp]. For security purposes, the AKS nodes aren't exposed to the internet. To connect to the AKS nodes, you use `kubectl debug` or the private IP address.
-
-This article shows you how to create a connection to an AKS node.
-
-## Before you begin
-
-This article assumes that you have an existing AKS cluster. If you need an AKS cluster, see the AKS quickstart [using the Azure CLI][aks-quickstart-cli], [using Azure PowerShell][aks-quickstart-powershell], or [using the Azure portal][aks-quickstart-portal].
-
-This article also assumes you have an SSH key. You can create an SSH key using [macOS or Linux][ssh-nix] or [Windows][ssh-windows]. If you use PuTTYgen to create the key pair, save the key pair in the OpenSSH format rather than the default PuTTY private key format (.ppk file).
-
-You also need the Azure CLI version 2.0.64 or later installed and configured. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI][install-azure-cli].
-
-## Create an interactive shell connection to a Linux node
-
-To create an interactive shell connection to a Linux node, use `kubectl debug` to run a privileged container on your node. To list your nodes, use `kubectl get nodes`:
-
-```output
-$ kubectl get nodes -o wide
-
-NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
-aks-nodepool1-12345678-vmss000000 Ready agent 13m v1.19.9 10.240.0.4 <none> Ubuntu 18.04.5 LTS 5.4.0-1046-azure containerd://1.4.4+azure
-aks-nodepool1-12345678-vmss000001 Ready agent 13m v1.19.9 10.240.0.35 <none> Ubuntu 18.04.5 LTS 5.4.0-1046-azure containerd://1.4.4+azure
-aksnpwin000000 Ready agent 87s v1.19.9 10.240.0.67 <none> Windows Server 2019 Datacenter 10.0.17763.1935 docker://19.3.1
-```
-
-Use `kubectl debug` to run a container image on the node to connect to it.
-
-```azurecli-interactive
-kubectl debug node/aks-nodepool1-12345678-vmss000000 -it --image=mcr.microsoft.com/dotnet/runtime-deps:6.0
-```
-
-This command starts a privileged container on your node and connects to it.
-
-```output
-$ kubectl debug node/aks-nodepool1-12345678-vmss000000 -it --image=mcr.microsoft.com/dotnet/runtime-deps:6.0
-Creating debugging pod node-debugger-aks-nodepool1-12345678-vmss000000-bkmmx with container debugger on node aks-nodepool1-12345678-vmss000000.
-If you don't see a command prompt, try pressing enter.
-root@aks-nodepool1-12345678-vmss000000:/#
-```
-
-This privileged container gives access to the node.
-
-> [!NOTE]
-> You can interact with the node session by running `chroot /host` from the privileged container.
-
-### Remove Linux node access
-
-When done, `exit` the interactive shell session. After the interactive container session closes, delete the pod used for access with `kubectl delete pod`.
-
-```output
-kubectl delete pod node-debugger-aks-nodepool1-12345678-vmss000000-bkmmx
-```
-
-## Create the SSH connection to a Windows node
-
-At this time, you can't connect to a Windows Server node directly by using `kubectl debug`. Instead, you need to first connect to another node in the cluster, then connect to the Windows Server node from that node using SSH. Alternatively, you can [connect to Windows Server nodes using remote desktop protocol (RDP) connections][aks-windows-rdp] instead of using SSH.
-
-To connect to another node in the cluster, use `kubectl debug`. For more information, see [Create an interactive shell connection to a Linux node][ssh-linux-kubectl-debug].
-
-To create the SSH connection to the Windows Server node from another node, use the SSH keys provided when you created the AKS cluster and the internal IP address of the Windows Server node.
-
-Open a new terminal window and use `kubectl get pods` to get the name of the pod started by `kubectl debug`.
-
-```output
-$ kubectl get pods
-
-NAME READY STATUS RESTARTS AGE
-node-debugger-aks-nodepool1-12345678-vmss000000-bkmmx 1/1 Running 0 21s
-```
-
-In the above example, *node-debugger-aks-nodepool1-12345678-vmss000000-bkmmx* is the name of the pod started by `kubectl debug`.
-
-Using `kubectl port-forward`, you can open a connection to the deployed pod:
-
-```
-$ kubectl port-forward node-debugger-aks-nodepool1-12345678-vmss000000-bkmmx 2022:22
-Forwarding from 127.0.0.1:2022 -> 22
-Forwarding from [::1]:2022 -> 22
-```
-
-The above example begins forwarding network traffic from port 2022 on your development computer to port 22 on the deployed pod. When using `kubectl port-forward` to open a connection and forward network traffic, the connection remains open until you stop the `kubectl port-forward` command.
-
-Open a new terminal and use `kubectl get nodes` to show the internal IP address of the Windows Server node:
-
-```output
-$ kubectl get nodes -o wide
-
-NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
-aks-nodepool1-12345678-vmss000000 Ready agent 13m v1.19.9 10.240.0.4 <none> Ubuntu 18.04.5 LTS 5.4.0-1046-azure containerd://1.4.4+azure
-aks-nodepool1-12345678-vmss000001 Ready agent 13m v1.19.9 10.240.0.35 <none> Ubuntu 18.04.5 LTS 5.4.0-1046-azure containerd://1.4.4+azure
-aksnpwin000000 Ready agent 87s v1.19.9 10.240.0.67 <none> Windows Server 2019 Datacenter 10.0.17763.1935 docker://19.3.1
-```
-
-In the above example, *10.240.0.67* is the internal IP address of the Windows Server node.
-
-Create an SSH connection to the Windows Server node using the internal IP address. The default username for AKS nodes is *azureuser*. Accept the prompt to continue with the connection. You are then provided with the command prompt of your Windows Server node:
-
-```output
-$ ssh -o 'ProxyCommand ssh -p 2022 -W %h:%p azureuser@127.0.0.1' azureuser@10.240.0.67
-
-The authenticity of host '10.240.0.67 (10.240.0.67)' can't be established.
-ECDSA key fingerprint is SHA256:1234567890abcdefghijklmnopqrstuvwxyzABCDEFG.
-Are you sure you want to continue connecting (yes/no)? yes
-
-[...]
-
-Microsoft Windows [Version 10.0.17763.1935]
-(c) 2018 Microsoft Corporation. All rights reserved.
-
-azureuser@aksnpwin000000 C:\Users\azureuser>
-```
-
-The above example connects to port 22 on the Windows Server node through port 2022 on your development computer.
-
-> [!NOTE]
-> If you prefer to use password authentication, use `-o PreferredAuthentications=password`. For example:
->
-> ```console
-> ssh -o 'ProxyCommand ssh -p 2022 -W %h:%p azureuser@127.0.0.1' -o PreferredAuthentications=password azureuser@10.240.0.67
-> ```
-
-### Remove SSH access
-
-When done, `exit` the SSH session, stop any port forwarding, and then `exit` the interactive container session. After the interactive container session closes, delete the pod used for SSH access with `kubectl delete pod`.
-
-```output
-kubectl delete pod node-debugger-aks-nodepool1-12345678-vmss000000-bkmmx
-```
-
-## Next steps
-
-If you need more troubleshooting data, you can [view the kubelet logs][view-kubelet-logs] or [view the Kubernetes master node logs][view-master-logs].
-
-<!-- INTERNAL LINKS -->
-[view-kubelet-logs]: kubelet-logs.md
-[view-master-logs]: monitor-aks-reference.md#resource-logs
-[aks-quickstart-cli]: ./learn/quick-kubernetes-deploy-cli.md
-[aks-quickstart-portal]: ./learn/quick-kubernetes-deploy-portal.md
-[aks-quickstart-powershell]: ./learn/quick-kubernetes-deploy-powershell.md
-[install-azure-cli]: /cli/azure/install-azure-cli
-[aks-windows-rdp]: rdp.md
-[ssh-nix]: ../virtual-machines/linux/mac-create-ssh-keys.md
-[ssh-windows]: ../virtual-machines/linux/ssh-from-windows.md
-[ssh-linux-kubectl-debug]: #create-an-interactive-shell-connection-to-a-linux-node
aks Operator Best Practices Run At Scale https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/operator-best-practices-run-at-scale.md
To increase the node limit beyond 1000, you must have the following pre-requisit
> [!NOTE]
> You can't use NPM with clusters greater than 500 nodes.

## Node pool scaling considerations and best practices
-* For system node pools, use the *Standard_D16ds_v5* SKU or equivalent core/memory VM SKUs to provide sufficient compute resources for *kube-system* pods.
+* For system node pools, use the *Standard_D16ds_v5* SKU or equivalent core/memory VM SKUs with ephemeral OS disks to provide sufficient compute resources for *kube-system* pods.
* Create at least five user node pools to scale up to 5,000 nodes, since there's a limit of 1,000 nodes per node pool.
* Use the cluster autoscaler wherever possible when running at-scale AKS clusters to ensure dynamic scaling of node pools based on the demand for compute resources.
* When scaling beyond 1,000 nodes without the cluster autoscaler, scale in batches of at most 500 to 700 nodes at a time. These scaling operations should also have a 2-minute to 5-minute sleep time between consecutive scale-ups to prevent Azure API throttling.
+> [!NOTE]
+> You can't use the [Stop and Start feature][Stop and Start feature] on clusters enabled for the greater-than-1,000-node limit.
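
The batching guidance above can be sketched as a small shell loop. The `az aks nodepool scale` command is only printed here (dry run), the resource group, cluster, and node pool names are placeholders, and the `sleep` between batches is commented out.

```shell
# Scale a node pool toward a target count in batches of at most 500 nodes,
# pausing between batches to avoid Azure API throttling.
# Dry run: the az command is echoed, not executed; names are placeholders.
current=1000; target=2400; batch=500
while [ "$current" -lt "$target" ]; do
  next=$((current + batch))
  [ "$next" -gt "$target" ] && next=$target
  echo "az aks nodepool scale --resource-group myResourceGroup --cluster-name myAKSCluster --name userpool1 --node-count $next"
  current=$next
  # sleep 180   # wait 2-5 minutes between consecutive scale-ups
done
```

With the sample numbers, the loop emits three scale operations: 1,000 to 1,500, to 2,000, and finally to the 2,400-node target.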
## Cluster upgrade best practices

* AKS clusters have a hard limit of 5,000 nodes. This limit prevents clusters that are running at the limit from upgrading, because there's no more capacity to do a rolling update with the max surge property. We recommend scaling the cluster down below 3,000 nodes before doing cluster upgrades, to provide extra capacity for node churn and minimize control plane load.
To increase the node limit beyond 1000, you must have the following pre-requisit
<!-- LINKS - Internal --> [quotas-skus-regions]: quotas-skus-regions.md [cluster upgrades]: upgrade-cluster.md
+[Stop and Start feature]: start-stop-cluster.md
aks Troubleshoot Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/troubleshoot-linux.md
- Title: Linux performance tools-
-description: Learn how to use Linux performance tools to troubleshoot and resolve common problems when using Azure Kubernetes Service (AKS).
----- Previously updated : 02/10/2020---
-# Linux Performance Troubleshooting
-
-Resource exhaustion on Linux machines is a common issue and can manifest through a wide variety of symptoms. This document provides a high-level overview of the tools available to help diagnose such issues.
-
-Many of these tools accept an interval on which to produce rolling output. This output format typically makes spotting patterns much easier. Where accepted, the example invocation will include `[interval]`.
-
-Many of these tools have an extensive history and wide set of configuration options. This page provides only a simple subset of invocations to highlight common problems. The canonical source of information is always the reference documentation for each particular tool. That documentation will be much more thorough than what is provided here.
-
-## Guidance
-
-Be systematic in your approach to investigating performance issues. Two common approaches are USE (utilization, saturation, errors) and RED (rate, errors, duration). RED is typically used in the context of services for request-based monitoring. USE is typically used for monitoring resources: for each resource in a machine, monitor utilization, saturation, and errors. The four main kinds of resources on any machine are CPU, memory, disk, and network. High utilization, saturation, or error rates for any of these resources indicate a possible problem with the system. When a problem exists, investigate the root cause: why is disk I/O latency high? Are the disks or the virtual machine SKU throttled? What processes are writing to the devices, and to what files?
-
-Some examples of common issues and indicators to diagnose them:
- IOPS throttling: use `iostat` to measure per-device IOPS. Ensure no individual disk is above its limit, and the sum for all disks is less than the limit for the virtual machine.
- Bandwidth throttling: use `iostat` as for IOPS, but measure read/write throughput. Ensure both per-device and aggregate throughput are below the bandwidth limits.
- SNAT exhaustion: this can manifest as high active (outbound) connections in `sar`.
- Packet loss: this can be measured by proxy via the TCP retransmit count relative to the sent/received count. Both `sar` and `netstat` can show this information.
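
As an illustration of the packet-loss proxy, the retransmit ratio can be computed from the kernel's TCP counters. The counter line below is a made-up sample shaped like the data row of `/proc/net/snmp`; on a live Linux node you would pipe in `cat /proc/net/snmp` instead.

```shell
# Compute the TCP retransmit ratio (RetransSegs / OutSegs) as a rough
# packet-loss indicator. The Tcp line is sample data, not live counters.
printf 'Tcp: 1 200 120000 -1 10 5 0 0 3 50000 48000 240 0 0 0\n' |
awk '/^Tcp: [0-9]/ { printf "retransmit ratio: %.3f%%\n", 100 * $13 / $12 }'
```

With the sample counters (240 retransmitted of 48,000 sent segments), this prints a ratio of 0.5%.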
-## General
-
-These tools are general purpose and cover basic system information. They are a good starting point for further investigation.
-
-### uptime
-
-```
-$ uptime
- 19:32:33 up 17 days, 12:36, 0 users, load average: 0.21, 0.77, 0.69
-```
-
-`uptime` provides system uptime and the 1-, 5-, and 15-minute load averages. These load averages roughly correspond to threads doing work or waiting for uninterruptible work to complete. In absolute terms, these numbers can be difficult to interpret, but measured over time they can tell us useful information:
- 1-minute average > 5-minute average means load is increasing.
- 1-minute average < 5-minute average means load is decreasing.
-`uptime` can also illuminate why information is not available: the issue may have resolved on its own, or by a restart, before the user could access the machine.
-
-Load averages higher than the number of CPU threads available may indicate a performance issue with a given workload.
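
That rule of thumb can be checked mechanically. The values below are hard-coded samples; on a live machine you would take the 1-minute average from `uptime` and the thread count from `nproc`, as sketched in the comments.

```shell
# Compare the 1-minute load average against the number of CPU threads.
# Sample values stand in for live `uptime` / `nproc` output.
load_1m=0.21   # e.g. uptime | awk -F'load average: ' '{split($2, a, ", "); print a[1]}'
cpus=8         # e.g. nproc
awk -v l="$load_1m" -v c="$cpus" \
  'BEGIN { print (l > c ? "possible CPU saturation" : "load within CPU capacity") }'
```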
-
-### dmesg
-
-```
-$ dmesg | tail
-$ dmesg --level=err | tail
-```
-
-`dmesg` dumps the kernel ring buffer. Events like an OOMKill add an entry to the kernel buffer. Finding an OOMKill or other resource exhaustion messages in the `dmesg` logs is a strong indicator of a problem.
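
A quick way to count such events is to filter the log for OOM-related messages. The log lines below are fabricated samples standing in for real `dmesg` output:

```shell
# Count OOM-related kernel log lines. The three lines are sample data;
# on a live node run: dmesg | grep -Eci 'out of memory|oom'
printf '%s\n' \
  '[12345.6] Out of memory: Killed process 4210 (java) total-vm:998 kB' \
  '[12346.0] oom_reaper: reaped process 4210 (java)' \
  '[12350.1] eth0: link up' |
grep -Eci 'out of memory|oom'
```

A nonzero count (here, 2 of the 3 sample lines match) is the signal to dig further.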
-
-### top
-
-```
-$ top
-Tasks: 249 total, 1 running, 158 sleeping, 0 stopped, 0 zombie
-%Cpu(s): 2.2 us, 1.3 sy, 0.0 ni, 95.4 id, 1.0 wa, 0.0 hi, 0.2 si, 0.0 st
-KiB Mem : 65949064 total, 43415136 free, 2349328 used, 20184600 buff/cache
-KiB Swap: 0 total, 0 free, 0 used. 62739060 avail Mem
-
- PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
-116004 root 20 0 144400 41124 27028 S 11.8 0.1 248:45.45 coredns
- 4503 root 20 0 1677980 167964 89464 S 5.9 0.3 1326:25 kubelet
- 1 root 20 0 120212 6404 4044 S 0.0 0.0 48:20.38 systemd
- ...
-```
-
-`top` provides a broad overview of current system state. The headers provide some useful aggregate information:
- state of tasks: running, sleeping, stopped.
- CPU utilization, in this case mostly showing idle time.
- total, free, and used system memory.
-`top` may miss short-lived processes; alternatives like `htop` and `atop` provide similar interfaces while fixing some of these shortcomings.
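For scripting, a one-shot process snapshot avoids `top`'s full-screen interface. A sketch assuming the procps `ps` is available:

```shell
# Show the five busiest processes by CPU, sorted descending.
# --sort is procps syntax; BSD ps spells this differently.
ps -eo pid,comm,%cpu,%mem --sort=-%cpu | head -n 6
```
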
-
-## CPU
-
-These tools provide CPU utilization information. This is especially useful with rolling output, where patterns become easy to spot.
-
-### mpstat
-
-```
-$ mpstat -P ALL [interval]
-Linux 4.15.0-1064-azure (aks-main-10212767-vmss000001) 02/10/20 _x86_64_ (8 CPU)
-
-19:49:03 CPU %usr %nice %sys %iowait %irq %soft %steal %guest %gnice %idle
-19:49:04 all 1.01 0.00 0.63 2.14 0.00 0.13 0.00 0.00 0.00 96.11
-19:49:04 0 1.01 0.00 1.01 17.17 0.00 0.00 0.00 0.00 0.00 80.81
-19:49:04 1 1.98 0.00 0.99 0.00 0.00 0.00 0.00 0.00 0.00 97.03
-19:49:04 2 1.01 0.00 0.00 0.00 0.00 1.01 0.00 0.00 0.00 97.98
-19:49:04 3 0.00 0.00 0.99 0.00 0.00 0.99 0.00 0.00 0.00 98.02
-19:49:04 4 1.98 0.00 1.98 0.00 0.00 0.00 0.00 0.00 0.00 96.04
-19:49:04 5 1.00 0.00 1.00 0.00 0.00 0.00 0.00 0.00 0.00 98.00
-19:49:04 6 1.00 0.00 1.00 0.00 0.00 0.00 0.00 0.00 0.00 98.00
-19:49:04 7 1.98 0.00 0.99 0.00 0.00 0.00 0.00 0.00 0.00 97.03
-```
-
-`mpstat` prints similar CPU information to `top`, but broken down by CPU thread. Seeing all cores at once is useful for detecting highly imbalanced CPU usage, for example when a single-threaded application uses one core at 100% utilization. This problem can be more difficult to spot when usage is aggregated over all CPUs in the system.
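The imbalance check can be automated by filtering per-CPU rows for low idle time. A sketch over sample mpstat-style rows (in practice, pipe `mpstat -P ALL 1 1` into the awk filter instead of the heredoc; the 20% threshold and the busy CPU 0 row are illustrative):

```shell
# Flag any CPU whose %idle (last column) falls below 20%.
# $2 is the CPU index; the regex skips the "all" aggregate row.
awk '$2 ~ /^[0-9]+$/ && $NF < 20 { print "CPU " $2 " is busy: " $NF "% idle" }' <<'EOF'
19:49:04 all 1.01 0.00 0.63 2.14 0.00 0.13 0.00 0.00 0.00 96.11
19:49:04 0 97.01 0.00 1.01 0.00 0.00 0.00 0.00 0.00 0.00 1.98
19:49:04 1 1.98 0.00 0.99 0.00 0.00 0.00 0.00 0.00 0.00 97.03
EOF
```
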
-
-### vmstat
-
-```
-$ vmstat [interval]
-procs -----------memory---------- ---swap-- -----io---- -system-- ------cpu-----
- r b swpd free buff cache si so bi bo in cs us sy id wa st
- 2 0 0 43300372 545716 19691456 0 0 3 50 3 3 2 1 95 1 0
-```
-
-`vmstat` provides similar information to `mpstat` and `top`, enumerating the number of processes waiting on CPU (the `r` column), memory statistics, and the percentage of CPU time spent in each work state.
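The `r` column lends itself to a quick saturation check: values persistently above the CPU thread count suggest CPU contention. A sketch using a sample vmstat data row (in practice, take the last line of `vmstat 1 2`; the run-queue value here is invented for illustration):

```shell
# Extract the run-queue length (first field of a vmstat data row).
sample=" 9 0 0 43300372 545716 19691456 0 0 3 50 3 3 2 1 95 1 0"
runq=$(echo "$sample" | awk '{print $1}')
cpus=$(nproc)
if [ "$runq" -gt "$cpus" ]; then
    echo "run queue ($runq) exceeds CPU threads ($cpus): possible CPU saturation"
else
    echo "run queue ($runq) within CPU threads ($cpus)"
fi
```
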
-
-## Memory
-
-Memory is a very important, and thankfully easy, resource to track. Some tools can report both CPU and memory, like `vmstat`. But tools like `free` may still be useful for quick debugging.
-
-### free
-
-```
-$ free -m
- total used free shared buff/cache available
-Mem: 64403 2338 42485 1 19579 61223
-Swap: 0 0 0
-```
-
-`free` presents basic information about total memory as well as used and free memory. `vmstat` may be more useful even for basic memory analysis due to its ability to provide rolling output.
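For automation, `MemAvailable` in `/proc/meminfo` is the same figure `free` reports in its *available* column. A sketch (the 512-MiB threshold is an arbitrary example):

```shell
# Warn when available memory drops below a threshold.
avail_kb=$(awk '/^MemAvailable:/ {print $2}' /proc/meminfo)
threshold_kb=$((512 * 1024))   # 512 MiB, illustrative only
if [ "$avail_kb" -lt "$threshold_kb" ]; then
    echo "low memory: $((avail_kb / 1024)) MiB available"
else
    echo "memory OK: $((avail_kb / 1024)) MiB available"
fi
```
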
-
-## Disk
-
-These tools measure disk IOPS, wait queues, and total throughput.
-
-### iostat
-
-```
-$ iostat -xy [interval] [count]
-$ iostat -xy 1 1
-Linux 4.15.0-1064-azure (aks-main-10212767-vmss000001) 02/10/20 _x86_64_ (8 CPU)
-
-avg-cpu: %user %nice %system %iowait %steal %idle
- 3.42 0.00 2.92 1.90 0.00 91.76
-
-Device: rrqm/s wrqm/s r/s w/s rkB/s wkB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
-loop0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
-sdb 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
-sda 0.00 56.00 0.00 65.00 0.00 504.00 15.51 0.01 3.02 0.00 3.02 0.12 0.80
-scd0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
-```
-
-`iostat` provides deep insights into disk utilization. This invocation passes `-x` for extended statistics, `-y` to skip the initial output printing system averages since boot, and `1 1` to specify we want 1-second interval, ending after one block of output.
-
-`iostat` exposes many useful statistics:
-
-- `r/s` and `w/s` are reads per second and writes per second. The sum of these values is IOPS.
-- `rkB/s` and `wkB/s` are kilobytes read/written per second. The sum of these values is throughput.
-- `await` is the average iowait time in milliseconds for queued requests.
-- `avgqu-sz` is the average queue size over the provided interval.
-
-On an Azure VM:
-
-- The sum of `r/s` and `w/s` for an individual block device may not exceed that disk's SKU limits.
-- The sum of `rkB/s` and `wkB/s` for an individual block device may not exceed that disk's SKU limits.
-- The sum of `r/s` and `w/s` for all block devices may not exceed the limits for the VM SKU.
-- The sum of `rkB/s` and `wkB/s` for all block devices may not exceed the limits for the VM SKU.
-
-Note that the OS disk counts as a managed disk of the smallest SKU corresponding to its capacity. For example, a 1024GB OS Disk corresponds to a P30 disk. Ephemeral OS disks and temporary disks do not have individual disk limits; they are only limited by the full VM limits.
-
-Non-zero values of `await` or `avgqu-sz` are also good indicators of IO contention.
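Summing these columns in a script gives per-device IOPS and throughput to compare against disk SKU limits. A sketch over the sample `sda` values from the output above:

```shell
# r/s w/s rkB/s wkB/s for sda, taken from the sample iostat output.
echo "0.00 65.00 0.00 504.00" | awk '{
    printf "IOPS: %.0f, throughput: %.0f kB/s\n", $1 + $2, $3 + $4
}'
```
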
-
-## Network
-
-These tools measure network statistics like throughput, transmission failures, and utilization. Deeper analysis can expose fine-grained TCP statistics about congestion and dropped packets.
-
-### sar
-
-```
-$ sar -n DEV [interval]
-22:36:57 IFACE rxpck/s txpck/s rxkB/s txkB/s rxcmp/s txcmp/s rxmcst/s %ifutil
-22:36:58 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
-22:36:58 azv604be75d832 1.00 9.00 0.06 1.04 0.00 0.00 0.00 0.00
-22:36:58 azure0 68.00 79.00 27.79 52.79 0.00 0.00 0.00 0.00
-22:36:58 azv4a8e7704a5b 202.00 207.00 37.51 21.86 0.00 0.00 0.00 0.00
-22:36:58 azve83c28f6d1c 21.00 30.00 24.12 4.11 0.00 0.00 0.00 0.00
-22:36:58 eth0 314.00 321.00 70.87 163.28 0.00 0.00 0.00 0.00
-22:36:58 azva3128390bff 12.00 20.00 1.14 2.29 0.00 0.00 0.00 0.00
-22:36:58 azvf46c95ddea3 10.00 18.00 31.47 1.36 0.00 0.00 0.00 0.00
-22:36:58 enP1s1 74.00 374.00 29.36 166.94 0.00 0.00 0.00 0.00
-22:36:58 lo 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
-22:36:58 azvdbf16b0b2fc 9.00 19.00 3.36 1.18 0.00 0.00 0.00 0.00
-```
-
-`sar` is a powerful tool for a wide range of analysis. While this example uses its ability to measure network stats, it's equally powerful for measuring CPU and memory consumption. This example invokes `sar` with the `-n` flag to specify the `DEV` (network device) keyword, displaying network throughput by device.
-
-- The sum of `rxkB/s` and `txkB/s` is the total throughput for a given device. When this value exceeds the limit for the provisioned Azure NIC, workloads on the machine will experience increased network latency.
-- `%ifutil` measures utilization for a given device. As this value approaches 100%, workloads will experience increased network latency.
-
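Summing the two throughput columns per interface gives the figure to compare against the NIC limit. A sketch over the sample `eth0` row above (fields 5 and 6 of `sar -n DEV` output are `rxkB/s` and `txkB/s`):

```shell
# Total per-interface throughput = rxkB/s + txkB/s.
echo "22:36:58 eth0 314.00 321.00 70.87 163.28" | awk '{
    printf "%s total throughput: %.2f kB/s\n", $2, $5 + $6
}'
```
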
-```
-$ sar -n TCP,ETCP [interval]
-Linux 4.15.0-1064-azure (aks-main-10212767-vmss000001) 02/10/20 _x86_64_ (8 CPU)
-
-22:50:08 active/s passive/s iseg/s oseg/s
-22:50:09 2.00 0.00 19.00 24.00
-
-22:50:08 atmptf/s estres/s retrans/s isegerr/s orsts/s
-22:50:09 0.00 0.00 0.00 0.00 0.00
-
-Average: active/s passive/s iseg/s oseg/s
-Average: 2.00 0.00 19.00 24.00
-
-Average: atmptf/s estres/s retrans/s isegerr/s orsts/s
-Average: 0.00 0.00 0.00 0.00 0.00
-```
-
-This invocation of `sar` uses the `TCP,ETCP` keywords to examine TCP connections. The third column of the last row, "retrans", is the number of TCP retransmits per second. High values for this field indicate an unreliable network connection. In the first and third rows, "active" means a connection originated from the local device, while "passive" indicates an incoming connection. A common issue on Azure is SNAT port exhaustion, which `sar` can help detect. SNAT port exhaustion would manifest as high "active" values, since the problem is due to a high rate of outbound, locally initiated TCP connections.
-
-As `sar` takes an interval, it prints rolling output and then prints final rows of output containing the average results from the invocation.
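A scripted check can watch the `retrans/s` column of the ETCP rows. A sketch over an illustrative row (the nonzero retransmit value is invented for the example; the sample output above shows zeros):

```shell
# ETCP row fields: time atmptf/s estres/s retrans/s isegerr/s orsts/s.
# retrans/s is field 4.
echo "22:50:09 0.00 0.00 4.00 0.00 0.00" | awk '{
    if ($4 > 0) print "retransmits detected: " $4 "/s"
    else print "no retransmits"
}'
```
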
-
-### netstat
-
-```
-$ netstat -s
-Ip:
- 71046295 total packets received
- 78 forwarded
- 0 incoming packets discarded
- 71046066 incoming packets delivered
- 83774622 requests sent out
- 40 outgoing packets dropped
-Icmp:
- 103 ICMP messages received
- 0 input ICMP message failed.
- ICMP input histogram:
- destination unreachable: 103
- 412802 ICMP messages sent
- 0 ICMP messages failed
- ICMP output histogram:
- destination unreachable: 412802
-IcmpMsg:
- InType3: 103
- OutType3: 412802
-Tcp:
- 11487089 active connections openings
- 592 passive connection openings
- 1137 failed connection attempts
- 404 connection resets received
- 17 connections established
- 70880911 segments received
- 95242567 segments send out
- 176658 segments retransmited
- 3 bad segments received.
- 163295 resets sent
-Udp:
- 164968 packets received
- 84 packets to unknown port received.
- 0 packet receive errors
- 165082 packets sent
-UdpLite:
-TcpExt:
- 5 resets received for embryonic SYN_RECV sockets
- 1670559 TCP sockets finished time wait in fast timer
- 95 packets rejects in established connections because of timestamp
- 756870 delayed acks sent
- 2236 delayed acks further delayed because of locked socket
- Quick ack mode was activated 479 times
- 11983969 packet headers predicted
- 25061447 acknowledgments not containing data payload received
- 5596263 predicted acknowledgments
- 19 times recovered from packet loss by selective acknowledgements
- Detected reordering 114 times using SACK
- Detected reordering 4 times using time stamp
- 5 congestion windows fully recovered without slow start
- 1 congestion windows partially recovered using Hoe heuristic
- 5 congestion windows recovered without slow start by DSACK
- 111 congestion windows recovered without slow start after partial ack
- 73 fast retransmits
- 26 retransmits in slow start
- 311 other TCP timeouts
- TCPLossProbes: 198845
- TCPLossProbeRecovery: 147
- 480 DSACKs sent for old packets
- 175310 DSACKs received
- 316 connections reset due to unexpected data
- 272 connections reset due to early user close
- 5 connections aborted due to timeout
- TCPDSACKIgnoredNoUndo: 8498
- TCPSpuriousRTOs: 1
- TCPSackShifted: 3
- TCPSackMerged: 9
- TCPSackShiftFallback: 177
- IPReversePathFilter: 4
- TCPRcvCoalesce: 1501457
- TCPOFOQueue: 9898
- TCPChallengeACK: 342
- TCPSYNChallenge: 3
- TCPSpuriousRtxHostQueues: 17
- TCPAutoCorking: 2315642
- TCPFromZeroWindowAdv: 483
- TCPToZeroWindowAdv: 483
- TCPWantZeroWindowAdv: 115
- TCPSynRetrans: 885
- TCPOrigDataSent: 51140171
- TCPHystartTrainDetect: 349
- TCPHystartTrainCwnd: 7045
- TCPHystartDelayDetect: 26
- TCPHystartDelayCwnd: 862
- TCPACKSkippedPAWS: 3
- TCPACKSkippedSeq: 4
- TCPKeepAlive: 62517
-IpExt:
- InOctets: 36416951539
- OutOctets: 41520580596
- InNoECTPkts: 86631440
- InECT0Pkts: 14
-```
-
-`netstat` can inspect a wide variety of network stats, here invoked with summary output. There are many useful fields here depending on the issue. One useful field in the TCP section is "failed connection attempts". This may be an indication of SNAT port exhaustion or other issues making outbound connections. A high rate of retransmitted segments (also under the TCP section) may indicate issues with packet delivery.
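The two counters called out above can be extracted directly. A sketch over sample lines ("retransmited" is the spelling `netstat` actually emits; in practice, pipe `netstat -s` in place of the heredoc):

```shell
# Pull the SNAT- and delivery-related counters from netstat -s style output.
grep -E 'failed connection attempts|segments retransmited' <<'EOF'
    1137 failed connection attempts
    176658 segments retransmited
EOF
```
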
aks Troubleshooting https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/troubleshooting.md
- Title: Troubleshoot common Azure Kubernetes Service problems
-description: Learn how to troubleshoot and resolve common problems when using Azure Kubernetes Service (AKS)
-Previously updated: 09/24/2021
-# AKS troubleshooting
-
-When you create or manage Azure Kubernetes Service (AKS) clusters, you might occasionally come across problems. This article details some common problems and troubleshooting steps.
-
-## In general, where do I find information about debugging Kubernetes problems?
-
-Try the [official guide to troubleshooting Kubernetes clusters](https://kubernetes.io/docs/tasks/debug-application-cluster/troubleshooting/).
-There's also a [troubleshooting guide](https://github.com/feiskyer/kubernetes-handbook/blob/master/en/troubleshooting/index.md), published by a Microsoft engineer for troubleshooting pods, nodes, clusters, and other features.
-
-## I'm getting a `quota exceeded` error during creation or upgrade. What should I do?
-
- [Request more cores](../azure-portal/supportability/regional-quota-requests.md).
-
-## I'm getting an `insufficientSubnetSize` error while deploying an AKS cluster with advanced networking. What should I do?
-
-This error indicates a subnet in use for a cluster no longer has available IPs within its CIDR for successful resource assignment. For Kubenet clusters, the requirement is sufficient IP space for each node in the cluster. For Azure CNI clusters, the requirement is sufficient IP space for each node and pod in the cluster.
-Read more about the [design of Azure CNI to assign IPs to pods](configure-azure-cni.md#plan-ip-addressing-for-your-cluster).
-
-These errors are also surfaced in [AKS Diagnostics](concepts-diagnostics.md), which proactively surfaces issues such as an insufficient subnet size.
-
-The following three (3) cases cause an insufficient subnet size error:
-
-1. AKS Scale or AKS Node pool scale
- 1. If using Kubenet, when the `number of free IPs in the subnet` is **less than** the `number of new nodes requested`.
- 1. If using Azure CNI, when the `number of free IPs in the subnet` is **less than** the `number of nodes requested times (*) the node pool's --max-pod value`.
-
-1. AKS Upgrade or AKS Node pool upgrade
- 1. If using Kubenet, when the `number of free IPs in the subnet` is **less than** the `number of buffer nodes needed to upgrade`.
- 1. If using Azure CNI, when the `number of free IPs in the subnet` is **less than** the `number of buffer nodes needed to upgrade times (*) the node pool's --max-pod value`.
-
- By default, AKS clusters set a max surge (upgrade buffer) value of one (1), but this upgrade behavior can be customized by setting the max surge value of a node pool, which will increase the number of available IPs needed to complete an upgrade.
-
-1. AKS create or AKS Node pool add
- 1. If using Kubenet, when the `number of free IPs in the subnet` is **less than** the `number of nodes requested for the node pool`.
- 1. If using Azure CNI, when the `number of free IPs in the subnet` is **less than** the `number of nodes requested times (*) the node pool's --max-pod value`.
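The Azure CNI arithmetic above can be sketched as a quick calculation before scaling or adding a node pool (the node count and `--max-pods` value below are illustrative):

```shell
# Estimate the free IPs needed to add nodes to an Azure CNI node pool,
# using the formula above: nodes requested * the node pool's --max-pods value.
new_nodes=3
max_pods=30
required_ips=$((new_nodes * max_pods))
echo "Adding $new_nodes nodes needs at least $required_ips free IPs in the subnet"
```
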
-
-To mitigate this issue, create a new subnet. The permission to create a new subnet is required for mitigation because an existing subnet's CIDR range can't be updated.
-
-1. Rebuild the subnet with a larger CIDR range sufficient for operation goals:
- 1. Create a new subnet with a new desired non-overlapping range.
- 1. Create a new node pool on the new subnet.
- 1. Drain pods from the old node pool residing in the old subnet to be replaced.
- 1. Delete the old subnet and old node pool.
-
-## My pod is stuck in CrashLoopBackOff mode. What should I do?
-
-There might be various reasons for the pod being stuck in that mode. You might look into:
-
-* The pod itself, by using `kubectl describe pod <pod-name>`.
-* The logs, by using `kubectl logs <pod-name>`.
-
-For more information about how to troubleshoot pod problems, see [Debugging Pods](https://kubernetes.io/docs/tasks/debug-application-cluster/debug-application/#debugging-pods) in the Kubernetes documentation.
-
-## I'm receiving `TCP timeouts` when using `kubectl` or other third-party tools connecting to the API server
-AKS has HA control planes that scale vertically and horizontally according to the number of cores to ensure its Service Level Objectives (SLOs) and Service Level Agreements (SLAs). If you're experiencing connection timeouts, check the following:
-
-- **Are all your API commands timing out consistently or only a few?** If it's only a few, your `konnectivity-agent` pod, `tunnelfront` pod, or `aks-link` pod, responsible for node -> control plane communication, might not be in a running state. Make sure the nodes hosting this pod aren't over-utilized or under stress. Consider moving them to their own [`system` node pool](use-system-pools.md).
-- **Have you opened all required ports, FQDNs, and IPs noted on the [AKS restrict egress traffic docs](limit-egress-traffic.md)?** Otherwise several command calls can fail. The AKS secure, tunneled communication between api-server and kubelet (through the *konnectivity-agent*) requires some of these to work.
-- **Have you blocked the Application-Layer Protocol Negotiation TLS extension?** *konnectivity-agent* requires this extension to establish a connection between the control plane and nodes.
-- **Is your current IP covered by [API IP Authorized Ranges](api-server-authorized-ip-ranges.md)?** If you're using this feature and your IP is not included in the ranges, your calls will be blocked.
-- **Do you have a client or application leaking calls to the API server?** Make sure to use watches instead of frequent get calls and that your third-party applications aren't leaking such calls. For example, a bug in the Istio mixer causes a new API Server watch connection to be created every time a secret is read internally. Because this behavior happens at a regular interval, watch connections quickly accumulate and eventually cause the API Server to become overloaded no matter the scaling pattern. https://github.com/istio/istio/issues/19481
-- **Do you have many releases in your helm deployments?** This scenario can cause both tiller to use too much memory on the nodes and a large amount of `configmaps`, which can cause unnecessary spikes on the API server. Consider configuring `--history-max` at `helm init` and leverage the new Helm 3. More details on the following issues:
-  - https://github.com/helm/helm/issues/4821
-  - https://github.com/helm/helm/issues/3500
-  - https://github.com/helm/helm/issues/4543
-- **[Is internal traffic between nodes being blocked?](#im-receiving-tcp-timeouts-such-as-dial-tcp-node_ip10250-io-timeout)**
-
-## I'm receiving `TCP timeouts`, such as `dial tcp <Node_IP>:10250: i/o timeout`
-
-These timeouts may be related to internal traffic between nodes being blocked. Verify that this traffic is not being blocked, such as by [network security groups](concepts-security.md#azure-network-security-groups) on the subnet for your cluster's nodes.
-
-## I'm trying to enable Kubernetes role-based access control (Kubernetes RBAC) on an existing cluster. How can I do that?
-
-Enabling Kubernetes role-based access control (Kubernetes RBAC) on existing clusters isn't supported at this time; it must be set when creating new clusters. Kubernetes RBAC is enabled by default when using the CLI, the portal, or an API version later than `2020-03-01`.
-
-## I can't get logs by using kubectl logs or I can't connect to the API server. I'm getting "Error from server: error dialing backend: dial tcp…". What should I do?
-
-Ensure ports 22, 9000, and 1194 are open to connect to the API server. Check whether the `tunnelfront` or `aks-link` pod is running in the *kube-system* namespace using the `kubectl get pods --namespace kube-system` command. If it isn't running, force delete the pod and it will restart.
-
-## I'm getting `"tls: client offered only unsupported versions"` from my client when connecting to AKS API. What should I do?
-
-The minimum supported TLS version in AKS is TLS 1.2.
-
-## I'm using an alias minor version, but I can't upgrade within the same minor version. Why?
-
-When upgrading by alias minor version, only a higher minor version is supported. For example, upgrading from 1.14.x to 1.14 will not trigger an upgrade to the latest GA 1.14 patch, but upgrading to 1.15 will trigger an upgrade to the latest GA 1.15 patch.
-
-## My application is failing with `argument list too long`
-
-You may receive an error message similar to:
-
-```
-standard_init_linux.go:228: exec user process caused: argument list too long
-```
-
-There are two potential causes:
-
-- The argument list provided to the executable is too long
-- The set of environment variables provided to the executable is too big
-
-If you have many services deployed in one namespace, it can cause the environment variable list to become too large, and will produce the above error message when Kubelet tries to run the executable. The error is caused by Kubelet injecting environment variables recording the host and port for each active service, so that services can use this information to locate one another (read more about this [in the Kubernetes documentation](https://kubernetes.io/docs/concepts/services-networking/connect-applications-service/#accessing-the-service)).
-
-As a workaround, you can disable this Kubelet behavior by setting `enableServiceLinks: false` inside your [Pod spec](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.21/#podspec-v1-core). **However**, if your service relies on these environment variables to locate other services, then this will cause it to fail. One fix is to use DNS for service resolution rather than environment variables (using [CoreDNS](https://kubernetes.io/docs/tasks/administer-cluster/coredns/)). Another option is to reduce the number of services that are active.
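A minimal sketch of the workaround in a Pod spec (the pod name and image are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: my-app                  # illustrative name
spec:
  enableServiceLinks: false     # stop Kubelet injecting per-service env vars
  containers:
    - name: my-app
      image: my-registry/my-app:latest   # illustrative image
```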
-
-## I'm trying to upgrade or scale and am getting a `"Changing property 'imageReference' is not allowed"` error. How do I fix this problem?
-
-You might be getting this error because you've modified the tags in the agent nodes inside the AKS cluster. Modifying or deleting tags and other properties of resources in the MC_* resource group can lead to unexpected results. Altering the resources under the MC_* group in the AKS cluster breaks the service-level objective (SLO).
-
-## I'm receiving errors that my cluster is in failed state and upgrading or scaling will not work until it is fixed
-
-*This troubleshooting assistance is directed from https://aka.ms/aks-cluster-failed*
-
-Clusters can enter a failed state for multiple reasons. Follow the steps below to resolve your cluster's failed state before retrying the previously failed operation:
-
-1. Until the cluster is out of `failed` state, `upgrade` and `scale` operations won't succeed. Common root issues and resolutions include:
- * Scaling with **insufficient compute (CRP) quota**. To resolve, first scale your cluster back to a stable goal state within quota. Then follow these [steps to request a compute quota increase](../azure-portal/supportability/regional-quota-requests.md) before trying to scale up again beyond initial quota limits.
- * Scaling a cluster with advanced networking and **insufficient subnet (networking) resources**. To resolve, first scale your cluster back to a stable goal state within quota. Then follow [these steps to request a resource quota increase](../azure-resource-manager/templates/error-resource-quota.md#solution) before trying to scale up again beyond initial quota limits.
-2. Once the underlying cause of upgrade failure is addressed, retry the original operation. This retry operation should bring your cluster to the succeeded state.
-
-## I'm receiving errors when trying to upgrade or scale that state my cluster is being upgraded or has failed upgrade
-
-*This troubleshooting assistance is directed from https://aka.ms/aks-pending-upgrade*
-
- You can't have a cluster or node pool simultaneously upgrade and scale. Instead, each operation type must complete on the target resource before the next request on that same resource. As a result, operations are limited when active upgrade or scale operations are occurring or attempted.
-
-To help diagnose the issue run `az aks show -g myResourceGroup -n myAKSCluster -o table` to retrieve detailed status on your cluster. Based on the result:
-
-* If cluster is actively upgrading, wait until the operation finishes. If it succeeded, retry the previously failed operation again.
-* If cluster has failed upgrade, follow steps outlined in previous section.
-
-## I'm receiving an error due to "PodDrainFailure"
-
-This error occurs when the requested operation is blocked by a PodDisruptionBudget (PDB) set on the deployments within the cluster. To learn more about how PodDisruptionBudgets work, see [the official Kubernetes example](https://kubernetes.io/docs/concepts/workloads/pods/disruptions/#pdb-example).
-
-You may use this command to find the PDBs applied on your cluster:
-
-```
-kubectl get poddisruptionbudgets --all-namespaces
-```
-or
-```
-kubectl get poddisruptionbudgets -n {namespace of failed pod}
-```
-Check the label selector to find the exact pods that are causing this failure.
-
-There are a few ways this error can occur:
-1. Your PDB may be too restrictive, such as a high `minAvailable` pod count or a low `maxUnavailable` pod count. You can update the PDB with less restrictive values.
-2. During an upgrade, the replacement pods may not be ready fast enough. You can investigate your Pod Readiness times to attempt to fix this situation.
-3. The deployed pods may not work with the new upgraded node version, causing Pods to fail and fall below the PDB.
-
-> [!NOTE]
-> If the pod is failing from the 'kube-system' namespace, please contact support. This namespace is managed by AKS.
-
-For more information about PodDisruptionBudgets, please check out the [official Kubernetes guide on configuring a PDB](https://kubernetes.io/docs/tasks/run-application/configure-pdb/).
-
-## Can I move my cluster to a different subscription or my subscription with my cluster to a new tenant?
-
-If you've moved your AKS cluster to a different subscription or the cluster's subscription to a new tenant, the cluster won't function because of missing cluster identity permissions. **AKS doesn't support moving clusters across subscriptions or tenants** because of this constraint.
-
-## I'm receiving errors trying to use features that require virtual machine scale sets
-
-*This troubleshooting assistance is directed from aka.ms/aks-vmss-enablement*
-
-You may receive errors that indicate your AKS cluster isn't on a virtual machine scale set, such as the following example:
-
-**AgentPool `<agentpoolname>` has set auto scaling as enabled but isn't on Virtual Machine Scale Sets**
-
-Features such as the cluster autoscaler or multiple node pools require virtual machine scale sets as the `vm-set-type`.
-
-Follow the *Before you begin* steps in the appropriate doc to correctly create an AKS cluster:
-
-* [Use the cluster autoscaler](cluster-autoscaler.md)
-* [Create and use multiple node pools](use-multiple-node-pools.md)
-
-## What naming restrictions are enforced for AKS resources and parameters?
-
-*This troubleshooting assistance is directed from aka.ms/aks-naming-rules*
-
-Naming restrictions are implemented by both the Azure platform and AKS. If a resource name or parameter breaks one of these restrictions, an error is returned that asks you to provide a different input. The following common naming guidelines apply:
-
-* Cluster names must be 1-63 characters. The only allowed characters are letters, numbers, dashes, and underscores. The first and last characters must be a letter or a number.
-* The AKS node/*MC_* resource group name combines the resource group name and resource name. The autogenerated syntax of `MC_resourceGroupName_resourceName_AzureRegion` must be no greater than 80 characters. If needed, reduce the length of your resource group name or AKS cluster name. You may also [customize your node resource group name](cluster-configuration.md#custom-resource-group-name).
-* The *dnsPrefix* must start and end with alphanumeric values and must be between 1-54 characters. Valid characters include alphanumeric values and hyphens (-). The *dnsPrefix* can't include special characters such as a period (.).
-* AKS Node Pool names must be all lowercase and be 1-11 characters for Linux node pools and 1-6 characters for Windows node pools. The name must start with a letter and the only allowed characters are letters and numbers.
-* The *admin-username*, which sets the administrator username for Linux nodes, must start with a letter, may only contain letters, numbers, hyphens, and underscores, and has a maximum length of 64 characters.
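The 80-character limit on the autogenerated node resource group name can be checked up front. A sketch with illustrative names:

```shell
# Compose the autogenerated MC_ name and measure it against the 80-char limit.
rg="myResourceGroup"; cluster="myAKSCluster"; region="eastus2"   # illustrative values
name="MC_${rg}_${cluster}_${region}"
echo "$name is ${#name} characters (limit: 80)"
```
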
-
-## I'm receiving errors when trying to create, update, scale, delete, or upgrade a cluster that say an operation is not allowed because another operation is in progress
-
-*This troubleshooting assistance is directed from aka.ms/aks-pending-operation*
-
-Cluster operations are limited when a previous operation is still in progress. To retrieve a detailed status of your cluster, use the `az aks show -g myResourceGroup -n myAKSCluster -o table` command. Use your own resource group and AKS cluster name as needed.
-
-Based on the output of the cluster status:
-
-* If the cluster is in any provisioning state other than *Succeeded* or *Failed*, wait until the operation (*Upgrading / Updating / Creating / Scaling / Deleting / Migrating*) finishes. When the previous operation has completed, retry your latest cluster operation.
-
-* If the cluster has a failed upgrade, follow the steps outlined in [I'm receiving errors that my cluster is in failed state and upgrading or scaling will not work until it is fixed](#im-receiving-errors-that-my-cluster-is-in-failed-state-and-upgrading-or-scaling-will-not-work-until-it-is-fixed).
-
-## I received an error saying my service principal wasn't found or is invalid when I tried to create a new cluster
-
-Creating an AKS cluster requires a service principal or managed identity to create resources on your behalf. AKS can automatically create a new service principal at cluster creation time or use an existing one. With an automatically created service principal, Azure Active Directory needs to propagate it to every region so the creation succeeds. If the propagation takes too long, cluster creation fails validation because it can't find an available service principal.
-
-Use the following workarounds for this issue:
-* Use an existing service principal, which has already propagated across regions and exists to pass into AKS at cluster create time.
-* If using automation scripts, add time delays between service principal creation and AKS cluster creation.
-* If using Azure portal, return to the cluster settings during create and retry the validation page after a few minutes.
-
-## I'm getting `"AADSTS7000215: Invalid client secret is provided."` when using AKS API. What should I do?
-
-This issue is due to the expiration of service principal credentials. [Update the credentials for an AKS cluster.](update-credentials.md)
-
-## I'm getting `"The credentials in ServicePrincipalProfile were invalid."` or `"error:invalid_client AADSTS7000215: Invalid client secret is provided."`
-This is caused by special characters in the value of the client secret that have not been escaped properly. Refer to [escape special characters when updating AKS Service Principal credentials.](update-credentials.md#update-aks-cluster-with-new-service-principal-credentials)
-
-## I can't access my cluster API from my automation/dev machine/tooling when using API server authorized IP ranges. How do I fix this problem?
-
-To resolve this issue, ensure `--api-server-authorized-ip-ranges` includes the IP(s) or IP range(s) of the automation/dev/tooling systems being used. Refer to the 'How to find my IP' section in [Secure access to the API server using authorized IP address ranges](api-server-authorized-ip-ranges.md).
-
-## I'm unable to view resources in Kubernetes resource viewer in Azure portal for my cluster configured with API server authorized IP ranges. How do I fix this problem?
-
-The [Kubernetes resource viewer](kubernetes-portal.md) requires `--api-server-authorized-ip-ranges` to include access for the local client computer or IP address range (from which the portal is being browsed). Refer to the 'How to find my IP' section in [Secure access to the API server using authorized IP address ranges](api-server-authorized-ip-ranges.md).
-
-## I'm receiving errors after restricting egress traffic
-
-When restricting egress traffic from an AKS cluster, there are [required and optional recommended](limit-egress-traffic.md) outbound ports / network rules and FQDN / application rules for AKS. If your settings are in conflict with any of these rules, certain `kubectl` commands won't work correctly. You may also see errors when creating an AKS cluster.
-
-Verify that your settings aren't conflicting with any of the required or optional recommended outbound ports / network rules and FQDN / application rules.
-
-## I'm receiving "429 - Too Many Requests" errors
-
-When a Kubernetes cluster on Azure (AKS or otherwise) scales up or down frequently or uses the cluster autoscaler (CA), those operations can result in a large number of HTTP calls that exceed the assigned subscription quota, leading to failure. The errors look like the following:
-
-```
-Service returned an error. Status=429 Code=\"OperationNotAllowed\" Message=\"The server rejected the request because too many requests have been received for this subscription.\" Details=[{\"code\":\"TooManyRequests\",\"message\":\"{\\\"operationGroup\\\":\\\"HighCostGetVMScaleSet30Min\\\",\\\"startTime\\\":\\\"2020-09-20T07:13:55.2177346+00:00\\\",\\\"endTime\\\":\\\"2020-09-20T07:28:55.2177346+00:00\\\",\\\"allowedRequestCount\\\":1800,\\\"measuredRequestCount\\\":2208}\",\"target\":\"HighCostGetVMScaleSet30Min\"}] InnerError={\"internalErrorCode\":\"TooManyRequestsReceived\"}"}
-```
-
-These throttling errors are described in detail [here](../azure-resource-manager/management/request-limits-and-throttling.md) and [here](/troubleshoot/azure/virtual-machines/troubleshooting-throttling-errors).
-
-The AKS engineering team recommends that you run at least version 1.18.x, which contains many improvements. More details on these improvements can be found [here](https://github.com/Azure/AKS/issues/1413) and [here](https://github.com/kubernetes-sigs/cloud-provider-azure/issues/247).
-
-Given these throttling errors are measured at the subscription level, they might still happen if:
-- There are 3rd party applications making GET requests (for example, monitoring applications, and so on). The recommendation is to reduce the frequency of these calls.
-- There are numerous AKS clusters / node pools using virtual machine scale sets. Try to split your clusters across different subscriptions, in particular if you expect them to be very active (for example, running an active cluster autoscaler) or to have multiple clients (for example, Rancher, Terraform, and so on).
-
-## My cluster's provisioning status changed from Ready to Failed with or without me performing an operation. What should I do?
-
-If your cluster's provisioning status changes from *Ready* to *Failed* with or without you performing any operations, but the applications on your cluster continue to run, the issue may be resolved automatically by the service and your applications shouldn't be affected.
-
-If your cluster's provisioning status remains as *Failed* or the applications on your cluster stop working, [submit a support request](https://azure.microsoft.com/support/options/#submit).
-
-## My watch is stale or Azure AD Pod Identity NMI is returning status 500
-
-If you're using Azure Firewall, as in this [example](limit-egress-traffic.md#restrict-egress-traffic-using-azure-firewall), you may encounter this issue because long-lived TCP connections via firewall Application Rules currently have a bug (to be resolved in Q1CY21) that causes the Go `keepalives` to be terminated on the firewall. Until this issue is resolved, you can mitigate it by adding a Network rule (instead of an Application rule) to the AKS API server IP.
-
-## When resuming my cluster after a stop operation, why is my node count not in the autoscaler min and max range?
-
-If you are using cluster autoscaler, when you start your cluster back up your current node count may not be between the min and max range values you set. This behavior is expected. The cluster starts with the number of nodes it needs to run its workloads, which isn't impacted by your autoscaler settings. When your cluster performs scaling operations, the min and max values will impact your current node count and your cluster will eventually enter and remain in that desired range until you stop your cluster.
-
-## Windows containers have connectivity issues after a cluster upgrade operation
-
-For older clusters with Calico network policies applied before Windows Calico support, Windows Calico will be enabled by default after a cluster upgrade. After Windows Calico is enabled on Windows, you may have connectivity issues if the Calico network policies denied ingress/egress. You can mitigate this issue by creating a new Calico policy on the cluster that allows all ingress/egress for Windows using either PodSelector or IPBlock.
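A sketch of such a policy using a standard Kubernetes NetworkPolicy with a PodSelector (the name, namespace, and labels below are illustrative; match the selector to your own Windows workloads, or use an empty `podSelector: {}` to select every pod in the namespace):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-all-windows
  namespace: default   # apply in each namespace that runs Windows pods
spec:
  podSelector:
    matchLabels:
      os: windows      # illustrative label on your Windows pods
  policyTypes:
  - Ingress
  - Egress
  ingress:
  - {}                 # empty rule allows all ingress
  egress:
  - {}                 # empty rule allows all egress
```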
-
-## Azure Storage and AKS Troubleshooting
-
-### Failure when setting uid and `GID` in mountOptions for Azure Disk
-
-Azure Disk uses the ext4 or xfs filesystem by default, and mountOptions such as uid=x,gid=x can't be set at mount time. For example, if you tried to set mountOptions uid=999,gid=999, you would see an error like:
-
-```console
-Warning FailedMount 63s kubelet, aks-nodepool1-29460110-0 MountVolume.MountDevice failed for volume "pvc-d783d0e4-85a1-11e9-8a90-369885447933" : azureDisk - mountDevice:FormatAndMount failed with mount failed: exit status 32
-Mounting command: systemd-run
-Mounting arguments: --description=Kubernetes transient mount for /var/lib/kubelet/plugins/kubernetes.io/azure-disk/mounts/m436970985 --scope -- mount -t xfs -o dir_mode=0777,file_mode=0777,uid=1000,gid=1000,defaults /dev/disk/azure/scsi1/lun2 /var/lib/kubelet/plugins/kubernetes.io/azure-disk/mounts/m436970985
-Output: Running scope as unit run-rb21966413ab449b3a242ae9b0fbc9398.scope.
-mount: wrong fs type, bad option, bad superblock on /dev/sde,
- missing codepage or helper program, or other error
-```
-
-You can mitigate the issue by using one of the following options:
-
-* [Configure the security context for a pod](https://kubernetes.io/docs/tasks/configure-pod-container/security-context/) by setting uid in runAsUser and gid in fsGroup. For example, the following setting runs the pod as root, making every file on the disk accessible to it:
-
-```yaml
-apiVersion: v1
-kind: Pod
-metadata:
- name: security-context-demo
-spec:
- securityContext:
- runAsUser: 0
- fsGroup: 0
-```
-
- >[!NOTE]
- > gid and uid are mounted as root (0) by default. If gid or uid is set to a non-root value, for example 1000, Kubernetes will use `chown` to change all directories and files under that disk. This operation can be time consuming and may make mounting the disk very slow.
-
-* Use `chown` in initContainers to set `GID` and `UID`. For example:
-
-```yaml
-initContainers:
-- name: volume-mount
- image: mcr.microsoft.com/dotnet/runtime-deps:6.0
- command: ["sh", "-c", "chown -R 100:100 /data"]
- volumeMounts:
- - name: <your data volume>
- mountPath: /data
-```
-
-### Large number of Azure Disks causes slow attach/detach
-
-When the number of Azure Disk attach/detach operations targeting a single node VM is larger than 10, or larger than 3 when targeting a single virtual machine scale set pool, the operations may be slower than expected because they're done sequentially. This issue is a known limitation of the in-tree Azure Disk driver. The [Azure Disk CSI driver](https://github.com/kubernetes-sigs/azuredisk-csi-driver) solves this issue by performing attach/detach operations in batches.
-
-### Azure Disk detach failure leading to potential node VM in failed state
-
-In some edge cases, an Azure Disk detach may partially fail and leave the node VM in a failed state.
-
-If your node is in a failed state, you can mitigate the issue by manually updating the VM status using one of the following commands:
-
-* For an availability set-based cluster:
- ```azurecli
- az vm update -n <VM_NAME> -g <RESOURCE_GROUP_NAME>
- ```
-
-* For a VMSS-based cluster:
- ```azurecli
- az vmss update-instances -g <RESOURCE_GROUP_NAME> --name <VMSS_NAME> --instance-id <ID>
- ```
-
-## Azure Files and AKS Troubleshooting
-
-### Azure Files CSI storage driver fails to mount a volume with a secret not in default namespace
-
-If you have configured an Azure Files CSI driver persistent volume or storage class with a storage
-access secret in a namespace other than *default*, the pod does not search in its own namespace
-and returns an error when trying to mount the volume.
-
-This issue has been fixed in the 2022041 release. To mitigate this issue, you have two options:
-
-1. Upgrade the agent node image to the latest release.
-1. Specify the *secretNamespace* setting when configuring the persistent volume configuration.
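One way to express the *secretNamespace* mitigation is on a CSI persistent volume, pointing the driver at the namespace that actually holds the secret. A minimal sketch (field names follow the standard Kubernetes CSI persistent volume spec; resource names, share name, and volume handle are illustrative):

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: azurefile-pv
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteMany
  csi:
    driver: file.csi.azure.com
    volumeHandle: unique-volume-id
    volumeAttributes:
      shareName: myshare
    # Point the driver at the namespace that actually holds the secret
    nodeStageSecretRef:
      name: azure-storage-secret
      namespace: my-namespace
```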
-
-### What are the default mountOptions when using Azure Files?
-
-Recommended settings:
-
-| Kubernetes version | fileMode and dirMode value|
-|--|:--:|
-| 1.12.2 and later | 0777 |
-
-Mount options can be specified on the storage class object. The following example sets *0777*:
-
-```yaml
-kind: StorageClass
-apiVersion: storage.k8s.io/v1
-metadata:
- name: azurefile
-provisioner: kubernetes.io/azure-file
-mountOptions:
- - dir_mode=0777
- - file_mode=0777
- - uid=1000
- - gid=1000
- - mfsymlinks
- - nobrl
- - cache=none
-parameters:
- skuName: Standard_LRS
-```
-
-Some additional useful *mountOptions* settings:
-
-* `mfsymlinks` will make Azure Files mount (cifs) support symbolic links
-* `nobrl` will prevent sending byte range lock requests to the server. This setting is necessary for certain applications that break with cifs style mandatory byte range locks. Most cifs servers don't yet support requesting advisory byte range locks. If not using *nobrl*, applications that break with cifs style mandatory byte range locks may cause error messages similar to:
- ```console
- Error: SQLITE_BUSY: database is locked
- ```
-
-### Error "could not change permissions" when using Azure Files
-
-When running PostgreSQL on the Azure Files plugin, you may see an error similar to:
-
-```console
-initdb: could not change permissions of directory "/var/lib/postgresql/data": Operation not permitted
-fixing permissions on existing directory /var/lib/postgresql/data
-```
-
-This error is caused by the Azure Files plugin using the cifs/SMB protocol. When using the cifs/SMB protocol, the file and directory permissions can't be changed after mounting.
-
-To resolve this issue, use `subPath` together with the Azure Disk plugin.
-
-> [!NOTE]
-> For ext3/4 disk type, there is a lost+found directory after the disk is formatted.
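A sketch of what the `subPath` mitigation can look like in a pod spec (volume, claim, and path names are illustrative): the container mounts a subdirectory of the disk rather than its root, which avoids the permission problem on the mount point and sidesteps the lost+found directory on ext3/4:

```yaml
containers:
- name: postgres
  image: postgres:14
  volumeMounts:
  - name: pgdata
    mountPath: /var/lib/postgresql/data
    subPath: pgdata      # mount a subdirectory of the disk, not its root
volumes:
- name: pgdata
  persistentVolumeClaim:
    claimName: azure-disk-pvc
```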
-
-### Azure Files has high latency compared to Azure Disk when handling many small files
-
-In some cases, such as handling many small files, you may experience higher latency with Azure Files than with Azure Disk.
-
-### Error when enabling the "Allow access from selected network" setting on a storage account
-
-If you enable *allow access from selected network* on a storage account that's used for dynamic provisioning in AKS, you'll get an error when AKS creates a file share:
-
-```console
-persistentvolume-controller (combined from similar events): Failed to provision volume with StorageClass "azurefile": failed to create share kubernetes-dynamic-pvc-xxx in account xxx: failed to create file share, err: storage: service returned error: StatusCode=403, ErrorCode=AuthorizationFailure, ErrorMessage=This request is not authorized to perform this operation.
-```
-
-This error occurs because the Kubernetes *persistentvolume-controller* isn't on the network chosen when setting *allow access from selected network*.
-
-You can mitigate the issue by using [static provisioning with Azure Files](azure-files-volume.md).
-
-### Azure Files mount fails because the storage account key changed
-
-If your storage account key has changed, you may see Azure Files mount failures.
-
-You can mitigate the issue by manually updating the `azurestorageaccountkey` field in the Azure file secret with your base64-encoded storage account key.
-
-To encode your storage account key in base64, you can use `base64`. For example:
-
-```console
-echo X+ALAAUgMhWHL7QmQ87E1kSfIqLKfgC03Guy7/xk9MyIg2w4Jzqeu60CVw2r/dm6v6E0DWHTnJUEJGVQAoPaBc== | base64
-```
-
-To update your Azure secret file, use `kubectl edit secret`. For example:
-
-```console
-kubectl edit secret azure-storage-account-{storage-account-name}-secret
-```
-
-After a few minutes, the agent node will retry the Azure File mount with the updated storage key.
--
-### Cluster autoscaler fails to scale with error failed to fix node group sizes
-
-If your cluster autoscaler isn't scaling up or down and you see an error like the following in the [cluster autoscaler logs][view-master-logs]:
-
-```console
-E1114 09:58:55.367731 1 static_autoscaler.go:239] Failed to fix node group sizes: failed to decrease aks-default-35246781-vmss: attempt to delete existing nodes
-```
-
-This error is caused by an upstream cluster autoscaler race condition, where the cluster autoscaler ends up with a value different from the one that's actually in the cluster. To get out of this state, disable and re-enable the [cluster autoscaler][cluster-autoscaler].
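Assuming illustrative resource names, the disable/re-enable cycle can be done with the Azure CLI:

```azurecli
# Disable the cluster autoscaler
az aks update \
  --resource-group myResourceGroup \
  --name myAKSCluster \
  --disable-cluster-autoscaler

# Re-enable it with your desired node count range
az aks update \
  --resource-group myResourceGroup \
  --name myAKSCluster \
  --enable-cluster-autoscaler \
  --min-count 1 \
  --max-count 5
```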
--
-### Why do upgrades to Kubernetes 1.16 fail when using node labels with a kubernetes.io prefix
-
-As of Kubernetes 1.16 [only a defined subset of labels with the kubernetes.io prefix](https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/) can be applied by the kubelet to nodes. AKS cannot remove active labels on your behalf without consent, as it may cause downtime to impacted workloads.
-
-As a result, to mitigate this issue you can:
-
-1. Upgrade your cluster control plane to 1.16 or higher
-2. Add a new node pool on 1.16 or higher without the unsupported kubernetes.io labels
-3. Delete the older node pool
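With illustrative resource and pool names, steps 2 and 3 might look like this in the Azure CLI:

```azurecli
# Add a replacement node pool on the upgraded cluster, without the
# unsupported kubernetes.io labels
az aks nodepool add \
  --resource-group myResourceGroup \
  --cluster-name myAKSCluster \
  --name newpool \
  --node-count 3

# Once workloads have moved over, delete the old node pool
az aks nodepool delete \
  --resource-group myResourceGroup \
  --cluster-name myAKSCluster \
  --name oldpool
```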
-
-AKS is investigating the capability to mutate active labels on a node pool to improve this mitigation.
--
-<!-- LINKS - internal -->
-[view-master-logs]: monitor-aks-reference.md#resource-logs
-[cluster-autoscaler]: cluster-autoscaler.md
app-service Migration Alternatives https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/migration-alternatives.md
Title: Migrate to App Service Environment v3
description: How to migrate your applications to App Service Environment v3 Previously updated : 9/15/2022 Last updated : 10/19/2022 # Migrate to App Service Environment v3
Scenario: An existing app running on an App Service Environment v1 or App Servic
For any migration method that doesn't use the [migration feature](migrate.md), you'll need to [create the App Service Environment v3](creation.md) and a new subnet using the method of your choice. There are [feature differences](overview.md#feature-differences) between App Service Environment v1/v2 and App Service Environment v3 as well as [networking changes](networking.md) that will involve new (and for internet-facing environments, additional) IP addresses. You'll need to update any infrastructure that relies on these IPs.
-Note that multiple App Service Environments can't exist in a single subnet. If you need to use your existing subnet for your new App Service Environment v3, you'll need to delete the existing App Service Environment before you create a new one. For this scenario, the recommended migration method is to [back up your apps and then restore them](#back-up-and-restore) in the new environment after it gets created and configured. There will be application downtime during this process because of the time it takes to delete the old environment, create the new App Service Environment v3, configure any infrastructure and connected resources to work with the new environment, and deploy your apps onto the new environment.
+Note that multiple App Service Environments can't exist in a single subnet. If you need to use your existing subnet for your new App Service Environment v3, you'll need to delete the existing App Service Environment before you create a new one. There will be application downtime during this process because of the time it takes to delete the old environment, create the new App Service Environment v3, configure any infrastructure and connected resources to work with the new environment, and deploy your apps onto the new environment.
### Checklist before migrating apps
Note that multiple App Service Environments can't exist in a single subnet. If y
App Service Environment v3 uses Isolated v2 App Service plans that are priced and sized differently than those from Isolated plans. Review the [SKU details](https://azure.microsoft.com/pricing/details/app-service/windows/) to understand how your new environment will need to be sized and scaled to ensure appropriate capacity. There's no difference in how you create App Service plans for App Service Environment v3 compared to previous versions.
-## Back up and restore
-
-The [back up and restore](../manage-backup.md) feature allows you to keep your app configuration, file content, and database connected to your app when migrating to your new environment. Make sure you review the [details](../manage-backup.md#automatic-vs-custom-backups) of this feature.
-
-The step-by-step instructions in the current documentation for [backup and restore](../manage-backup.md) should be sufficient to allow you to use this feature. You can select a backup and use that to restore the app to an App Service in your App Service Environment v3.
--
-|Benefits |Limitations |
-|||
-|Quick - should only take 5-10 minutes per app |Support is limited to [certain database types](../manage-backup.md#automatic-vs-custom-backups) |
-|Multiple apps can be restored at the same time (restoration needs to be configured for each app individually) |Old and new environments as well as supporting resources (for example apps, databases, storage accounts, and containers) must all be in the same subscription |
-|In-app MySQL databases are automatically backed up without any configuration |Backups can be up to 10 GB of app and database content, up to 4 GB of which can be the database backup. If the backup size exceeds this limit, you get an error. |
-|Can restore the app to a snapshot of a previous state |Using a [firewall enabled storage account](../../storage/common/storage-network-security.md) as the destination for your backups isn't supported |
-|Can integrate with [Azure Traffic Manager](../../traffic-manager/traffic-manager-overview.md) and [Azure Application Gateway](../../application-gateway/overview.md) to distribute traffic across old and new environments |Using a [private endpoint enabled storage account](../../storage/common/storage-private-endpoints.md) for backup and restore isn't supported |
-|Can create empty web apps to restore to in your new environment before you start restoring to speed up the process | |
- ## Clone your app to an App Service Environment v3 [Cloning your apps](../app-service-web-app-cloning.md) is another feature that can be used to get your **Windows** apps onto your App Service Environment v3. There are limitations with cloning apps. These limitations are the same as those for the App Service Backup feature, see [Back up an app in Azure App Service](../manage-backup.md#whats-included-in-an-automatic-backup).
To clone an app using the [Azure portal](https://www.portal.azure.com), navigate
## Manually create your apps on an App Service Environment v3
-If the above features don't support your apps or you're looking to take a more manual route, you have the option of deploying your apps following the same process you used for your existing App Service Environment. You don't need to make updates when you deploy your apps to your new environment.
+If the above feature doesn't support your apps or you're looking to take a more manual route, you have the option of deploying your apps following the same process you used for your existing App Service Environment. You don't need to make updates when you deploy your apps to your new environment.
You can export [Azure Resource Manager (ARM) templates](../../azure-resource-manager/templates/overview.md) of your existing apps, App Service plans, and any other supported resources and deploy them in or with your new environment. To export a template for just your app, navigate to your App Service and go to **Export template** under **Automation**.
Once your migration and any testing with your new environment is complete, delet
Zone pinning isn't a supported feature on App Service Environment v3. - **What properties of my App Service Environment will change?** You'll now be on App Service Environment v3 so be sure to review the [features and feature differences](overview.md#feature-differences) compared to previous versions. For ILB App Service Environment, you'll keep the same ILB IP address. For internet facing App Service Environment, the public IP address and the outbound IP address will change. Note for internet facing App Service Environment, previously there was a single IP for both inbound and outbound. For App Service Environment v3, they're separate. For more information, see [App Service Environment v3 networking](networking.md#addresses).
+- **Is backup and restore supported for moving apps from App Service Environment v2 to v3?**
+ The [back up and restore](../manage-backup.md) feature doesn't support restoring apps between App Service Environment versions (an app running on App Service Environment v2 can't be restored on an App Service Environment v3).
- **What will happen to my App Service Environment v1/v2 resources after 31 August 2024?** After 31 August 2024, if you haven't migrated to App Service Environment v3, your App Service Environment v1/v2s and the apps deployed in them will no longer be available. App Service Environment v1/v2 is hosted on App Service scale units running on [Cloud Services (classic)](../../cloud-services/cloud-services-choose-me.md) architecture that will be [retired on 31 August 2024](https://azure.microsoft.com/updates/cloud-services-retirement-announcement/). Because of this, [App Service Environment v1/v2 will no longer be available after that date](https://azure.microsoft.com/updates/app-service-environment-v1-and-v2-retirement-announcement/). Migrate to App Service Environment v3 to keep your apps running or save or back up any resources or data that you need to maintain.
applied-ai-services Concept Business Card https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/concept-business-card.md
The following tools are supported by Form Recognizer v2.1:
| Feature | Resources | |-|-|
-|**Business card model**| <ul><li>[**Form Recognizer labeling tool**](https://fott-2-1.azurewebsites.net/prebuilts-analyze)</li><li>[**REST API**](quickstarts/try-sdk-rest-api.md?pivots=programming-language-rest-api#analyze-business-cards)</li><li>[**Client-library SDK**](quickstarts/try-sdk-rest-api.md)</li><li>[**Form Recognizer Docker container**](containers/form-recognizer-container-install-run.md?tabs=business-card#run-the-container-with-the-docker-compose-up-command)</li></ul>|
+|**Business card model**| <ul><li>[**Form Recognizer labeling tool**](https://fott-2-1.azurewebsites.net/prebuilts-analyze)</li><li>[**REST API**](/azure/applied-ai-services/form-recognizer/how-to-guides/use-sdk-rest-api?view=form-recog-2.1.0&preserve-view=true&tabs=windows&pivots=programming-language-rest-api#analyze-business-cards)</li><li>[**Client-library SDK**](/azure/applied-ai-services/form-recognizer/how-to-guides/v2-1-sdk-rest-api)</li><li>[**Form Recognizer Docker container**](containers/form-recognizer-container-install-run.md?tabs=business-card#run-the-container-with-the-docker-compose-up-command)</li></ul>|
### Try Form Recognizer
See how data, including name, job title, address, email, and company name, is ex
* Complete a Form Recognizer quickstart: > [!div class="nextstepaction"]
- > [Form Recognizer quickstart](quickstarts/try-sdk-rest-api.md)
+ > [Form Recognizer quickstart](/azure/applied-ai-services/form-recognizer/how-to-guides/v2-1-sdk-rest-api)
* Explore our REST API:
applied-ai-services Concept Composed Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/concept-composed-models.md
The following resources are supported by Form Recognizer v2.1:
| Feature | Resources | |-|-|
-|_**Custom model**_| <ul><li>[Form Recognizer labeling tool](https://fott-2-1.azurewebsites.net)</li><li>[REST API](quickstarts/try-sdk-rest-api.md?pivots=programming-language-rest-api#analyze-forms-with-a-custom-model)</li><li>[Client library SDK](quickstarts/try-sdk-rest-api.md)</li><li>[Form Recognizer Docker container](containers/form-recognizer-container-install-run.md?tabs=custom#run-the-container-with-the-docker-compose-up-command)</li></ul>|
+|_**Custom model**_| <ul><li>[Form Recognizer labeling tool](https://fott-2-1.azurewebsites.net)</li><li>[REST API](/azure/applied-ai-services/form-recognizer/how-to-guides/use-sdk-rest-api?view=form-recog-2.1.0&preserve-view=true&tabs=windows&pivots=programming-language-rest-api#analyze-forms-with-a-custom-model)</li><li>[Client library SDK](/azure/applied-ai-services/form-recognizer/how-to-guides/v2-1-sdk-rest-api)</li><li>[Form Recognizer Docker container](containers/form-recognizer-container-install-run.md?tabs=custom#run-the-container-with-the-docker-compose-up-command)</li></ul>|
| _**Composed model**_ |<ul><li>[Form Recognizer labeling tool](https://fott-2-1.azurewebsites.net/)</li><li>[REST API](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-1/operations/Compose)</li><li>[C# SDK](/dotnet/api/azure.ai.formrecognizer.training.createcomposedmodeloperation?view=azure-dotnet&preserve-view=true)</li><li>[Java SDK](/java/api/com.azure.ai.formrecognizer.models.createcomposedmodeloptions?view=azure-java-stable&preserve-view=true)</li><li>JavaScript SDK</li><li>[Python SDK](/python/api/azure-ai-formrecognizer/azure.ai.formrecognizer.formtrainingclient?view=azure-python#azure-ai-formrecognizer-formtrainingclient-begin-create-composed-model&preserve-view=true)</li></ul>| ::: moniker-end
applied-ai-services Concept Custom https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/concept-custom.md
The following tools are supported by Form Recognizer v2.1:
| Feature | Resources | Model ID| |||:|
-|Custom model| <ul><li>[Form Recognizer labeling tool](https://fott-2-1.azurewebsites.net)</li><li>[REST API](quickstarts/try-sdk-rest-api.md?pivots=programming-language-rest-api#analyze-forms-with-a-custom-model)</li><li>[Client library SDK](quickstarts/try-sdk-rest-api.md)</li><li>[Form Recognizer Docker container](containers/form-recognizer-container-install-run.md?tabs=custom#run-the-container-with-the-docker-compose-up-command)</li></ul>|***custom-model-id***|
+|Custom model| <ul><li>[Form Recognizer labeling tool](https://fott-2-1.azurewebsites.net)</li><li>[REST API](/azure/applied-ai-services/form-recognizer/how-to-guides/use-sdk-rest-api?view=form-recog-2.1.0&preserve-view=true&tabs=windows&pivots=programming-language-rest-api#analyze-forms-with-a-custom-model)</li><li>[Client library SDK](/azure/applied-ai-services/form-recognizer/how-to-guides/v2-1-sdk-rest-api)</li><li>[Form Recognizer Docker container](containers/form-recognizer-container-install-run.md?tabs=custom#run-the-container-with-the-docker-compose-up-command)</li></ul>|***custom-model-id***|
### Try Form Recognizer
applied-ai-services Concept Id Document https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/concept-id-document.md
The following tools are supported by Form Recognizer v2.1:
| Feature | Resources | |-|-|
-|**ID document model**| <ul><li>[**Form Recognizer labeling tool**](https://fott-2-1.azurewebsites.net/prebuilts-analyze)</li><li>[**REST API**](quickstarts/try-sdk-rest-api.md?pivots=programming-language-rest-api#analyze-identity-id-documents)</li><li>[**Client-library SDK**](quickstarts/try-sdk-rest-api.md)</li><li>[**Form Recognizer Docker container**](containers/form-recognizer-container-install-run.md?tabs=id-document#run-the-container-with-the-docker-compose-up-command)</li></ul>|
+|**ID document model**| <ul><li>[**Form Recognizer labeling tool**](https://fott-2-1.azurewebsites.net/prebuilts-analyze)</li><li>[**REST API**](/azure/applied-ai-services/form-recognizer/how-to-guides/use-sdk-rest-api?view=form-recog-2.1.0&preserve-view=true&tabs=windows&pivots=programming-language-rest-api#analyze-identity-id-documents)</li><li>[**Client-library SDK**](/azure/applied-ai-services/form-recognizer/how-to-guides/v2-1-sdk-rest-api)</li><li>[**Form Recognizer Docker container**](containers/form-recognizer-container-install-run.md?tabs=id-document#run-the-container-with-the-docker-compose-up-command)</li></ul>|
### Try Form Recognizer
Extract data, including name, birth date, machine-readable zone, and expiration
* Complete a Form Recognizer quickstart: > [!div class="nextstepaction"]
- > [Form Recognizer quickstart](quickstarts/try-sdk-rest-api.md)
+ > [Form Recognizer quickstart](/azure/applied-ai-services/form-recognizer/how-to-guides/v2-1-sdk-rest-api)
* Explore our REST API:
applied-ai-services Concept Invoice https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/concept-invoice.md
The following tools are supported by Form Recognizer v2.1:
| Feature | Resources | |-|-|
-|**Invoice model**| <ul><li>[**Form Recognizer labeling tool**](https://fott-2-1.azurewebsites.net/prebuilts-analyze)</li><li>[**REST API**](quickstarts/try-sdk-rest-api.md?pivots=programming-language-rest-api#analyze-invoices)</li><li>[**Client-library SDK**](quickstarts/try-sdk-rest-api.md)</li><li>[**Form Recognizer Docker container**](containers/form-recognizer-container-install-run.md?tabs=invoice#run-the-container-with-the-docker-compose-up-command)</li></ul>|
+|**Invoice model**| <ul><li>[**Form Recognizer labeling tool**](https://fott-2-1.azurewebsites.net/prebuilts-analyze)</li><li>[**REST API**](/azure/applied-ai-services/form-recognizer/how-to-guides/use-sdk-rest-api?view=form-recog-2.1.0&preserve-view=true&tabs=windows&pivots=programming-language-rest-api#analyze-invoices)</li><li>[**Client-library SDK**](/azure/applied-ai-services/form-recognizer/how-to-guides/v2-1-sdk-rest-api)</li><li>[**Form Recognizer Docker container**](containers/form-recognizer-container-install-run.md?tabs=invoice#run-the-container-with-the-docker-compose-up-command)</li></ul>|
### Try Form Recognizer
Keys can also exist in isolation when the model detects that a key exists, with
* Complete a Form Recognizer quickstart: > [!div class="nextstepaction"]
- > [Form Recognizer quickstart](quickstarts/try-sdk-rest-api.md)
+ > [Form Recognizer quickstart](/azure/applied-ai-services/form-recognizer/how-to-guides/v2-1-sdk-rest-api)
* Explore our REST API: > [!div class="nextstepaction"]
applied-ai-services Concept Layout https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/concept-layout.md
The following tools are supported by Form Recognizer v2.1:
| Feature | Resources | |-|-|
-|**Layout API**| <ul><li>[**Form Recognizer labeling tool**](https://fott-2-1.azurewebsites.net/layout-analyze)</li><li>[**REST API**](quickstarts/try-sdk-rest-api.md?pivots=programming-language-rest-api#analyze-layout)</li><li>[**Client-library SDK**](quickstarts/try-sdk-rest-api.md)</li><li>[**Form Recognizer Docker container**](containers/form-recognizer-container-install-run.md?branch=main&tabs=layout#run-the-container-with-the-docker-compose-up-command)</li></ul>|
+|**Layout API**| <ul><li>[**Form Recognizer labeling tool**](https://fott-2-1.azurewebsites.net/layout-analyze)</li><li>[**REST API**](/azure/applied-ai-services/form-recognizer/how-to-guides/use-sdk-rest-api?view=form-recog-2.1.0&preserve-view=true&tabs=windows&pivots=programming-language-rest-api#analyze-layout)</li><li>[**Client-library SDK**](/azure/applied-ai-services/form-recognizer/how-to-guides/v2-1-sdk-rest-api)</li><li>[**Form Recognizer Docker container**](containers/form-recognizer-container-install-run.md?branch=main&tabs=layout#run-the-container-with-the-docker-compose-up-command)</li></ul>|
## Try Form Recognizer
For large multi-page documents, use the `pages` query parameter to indicate spec
* Complete a Form Recognizer quickstart: > [!div class="nextstepaction"]
- > [Form Recognizer quickstart](quickstarts/try-sdk-rest-api.md)
+ > [Form Recognizer quickstart](/azure/applied-ai-services/form-recognizer/how-to-guides/v2-1-sdk-rest-api)
* Explore our REST API:
applied-ai-services Concept Model Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/concept-model-overview.md
Learn how to use Form Recognizer v3.0 in your applications by following our [**F
* [Learn how to process your own forms and documents](quickstarts/try-sample-label-tool.md) with our [Form Recognizer sample tool](https://fott-2-1.azurewebsites.net/)
-* Complete a [Form Recognizer quickstart](quickstarts/try-sdk-rest-api.md) and get started creating a document processing app in the development language of your choice.
+* Complete a [Form Recognizer quickstart](/azure/applied-ai-services/form-recognizer/how-to-guides/v2-1-sdk-rest-api) and get started creating a document processing app in the development language of your choice.
applied-ai-services Concept Receipt https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/concept-receipt.md
The following tools are supported by Form Recognizer v2.1:
| Feature | Resources |
|-|-|
-|**Receipt model**| <ul><li>[**Form Recognizer labeling tool**](https://fott-2-1.azurewebsites.net/prebuilts-analyze)</li><li>[**REST API**](quickstarts/try-sdk-rest-api.md?pivots=programming-language-rest-api#analyze-receipts)</li><li>[**Client-library SDK**](quickstarts/try-sdk-rest-api.md)</li><li>[**Form Recognizer Docker container**](containers/form-recognizer-container-install-run.md?tabs=receipt#run-the-container-with-the-docker-compose-up-command)</li></ul>|
+|**Receipt model**| <ul><li>[**Form Recognizer labeling tool**](https://fott-2-1.azurewebsites.net/prebuilts-analyze)</li><li>[**REST API**](/azure/applied-ai-services/form-recognizer/how-to-guides/use-sdk-rest-api?view=form-recog-2.1.0&preserve-view=true&tabs=windows&pivots=programming-language-rest-api#analyze-receipts)</li><li>[**Client-library SDK**](/azure/applied-ai-services/form-recognizer/how-to-guides/v2-1-sdk-rest-api)</li><li>[**Form Recognizer Docker container**](containers/form-recognizer-container-install-run.md?tabs=receipt#run-the-container-with-the-docker-compose-up-command)</li></ul>|
### Try Form Recognizer
See how data, including time and date of transactions, merchant information, and
* Complete a Form Recognizer quickstart:
  > [!div class="nextstepaction"]
- > [Form Recognizer quickstart](quickstarts/try-sdk-rest-api.md)
+ > [Form Recognizer quickstart](/azure/applied-ai-services/form-recognizer/how-to-guides/v2-1-sdk-rest-api)
* Explore our REST API:
applied-ai-services Label Tool https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/label-tool.md
Once you've defined your table tag, tag the cell values.
Choose the Train icon on the left pane to open the Training page. Then select the **Train** button to begin training the model. Once the training process completes, you'll see the following information:
-* **Model ID** - The ID of the model that was created and trained. Each training call creates a new model with its own ID. Copy this string to a secure location; you'll need it if you want to do prediction calls through the [REST API](./quickstarts/try-sdk-rest-api.md?pivots=programming-language-rest-api&tabs=preview%2cv2-1) or [client library guide](./quickstarts/try-sdk-rest-api.md).
+* **Model ID** - The ID of the model that was created and trained. Each training call creates a new model with its own ID. Copy this string to a secure location; you'll need it if you want to do prediction calls through the [REST API](/azure/applied-ai-services/form-recognizer/how-to-guides/v2-1-sdk-rest-api?pivots=programming-language-rest-api&tabs=preview%2cv2-1) or [client library guide](/azure/applied-ai-services/form-recognizer/how-to-guides/v2-1-sdk-rest-api).
* **Average Accuracy** - The model's average accuracy. You can improve model accuracy by adding and labeling more forms, then retraining to create a new model. We recommend starting by labeling five forms and adding more forms as needed.
* The list of tags, and the estimated accuracy per tag.
In this quickstart, you've learned how to use the Form Recognizer Sample Labelin
> [Train with labels using Python](https://github.com/Azure-Samples/cognitive-services-quickstart-code/blob/master/python/FormRecognizer/rest/python-labeled-data.md)
* [What is Form Recognizer?](overview.md)
-* [Form Recognizer quickstart](./quickstarts/try-sdk-rest-api.md)
+* [Form Recognizer quickstart](/azure/applied-ai-services/form-recognizer/how-to-guides/v2-1-sdk-rest-api)
applied-ai-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/overview.md
Use the links in the table to learn more about each model and browse the API ref
| Model | Description | Development options |
|-|--|-|
-|[**Layout API**](concept-layout.md) | Extraction and analysis of text, selection marks, tables, and bounding box coordinates, from forms and documents. | <ul><li>[**Form Recognizer labeling tool**](quickstarts/try-sample-label-tool.md#analyze-layout)</li><li>[**REST API**](quickstarts/get-started-v2-1-sdk-rest-api.md#try-it-layout-model)</li><li>[**Client-library SDK**](quickstarts/try-sdk-rest-api.md)</li><li>[**Form Recognizer Docker container**](containers/form-recognizer-container-install-run.md?branch=main&tabs=layout#run-the-container-with-the-docker-compose-up-command)</li></ul>|
-|[**Custom model**](concept-custom.md) | Extraction and analysis of data from forms and documents specific to distinct business data and use cases.| <ul><li>[**Form Recognizer labeling tool**](quickstarts/try-sample-label-tool.md#train-a-custom-form-model)</li><li>[**REST API**](quickstarts/try-sdk-rest-api.md)</li><li>[**Client-library SDK**](how-to-guides/try-sdk-rest-api.md)</li><li>[**Form Recognizer Docker container**](containers/form-recognizer-container-install-run.md?tabs=custom#run-the-container-with-the-docker-compose-up-command)</li></ul>|
-|[**Invoice model**](concept-invoice.md) | Automated data processing and extraction of key information from sales invoices. | <ul><li>[**Form Recognizer labeling tool**](quickstarts/try-sample-label-tool.md#analyze-using-a-prebuilt-model)</li><li>[**REST API**](quickstarts/get-started-v2-1-sdk-rest-api.md#try-it-prebuilt-model)</li><li>[**Client-library SDK**](quickstarts/try-sdk-rest-api.md)</li><li>[**Form Recognizer Docker container**](containers/form-recognizer-container-install-run.md?tabs=invoice#run-the-container-with-the-docker-compose-up-command)</li></ul>|
-|[**Receipt model**](concept-receipt.md) | Automated data processing and extraction of key information from sales receipts.| <ul><li>[**Form Recognizer labeling tool**](quickstarts/try-sample-label-tool.md#analyze-using-a-prebuilt-model)</li><li>[**REST API**](quickstarts/get-started-v2-1-sdk-rest-api.md#try-it-prebuilt-model)</li><li>[**Client-library SDK**](how-to-guides/try-sdk-rest-api.md)</li><li>[**Form Recognizer Docker container**](containers/form-recognizer-container-install-run.md?tabs=receipt#run-the-container-with-the-docker-compose-up-command)</li></ul>|
-|[**ID document model**](concept-id-document.md) | Automated data processing and extraction of key information from US driver's licenses and international passports.| <ul><li>[**Form Recognizer labeling tool**](quickstarts/try-sample-label-tool.md#analyze-using-a-prebuilt-model)</li><li>[**REST API**](quickstarts/get-started-v2-1-sdk-rest-api.md#try-it-prebuilt-model)</li><li>[**Client-library SDK**](how-to-guides/try-sdk-rest-api.md)</li><li>[**Form Recognizer Docker container**](containers/form-recognizer-container-install-run.md?tabs=id-document#run-the-container-with-the-docker-compose-up-command)</li></ul>|
-|[**Business card model**](concept-business-card.md) | Automated data processing and extraction of key information from business cards.| <ul><li>[**Form Recognizer labeling tool**](quickstarts/try-sample-label-tool.md#analyze-using-a-prebuilt-model)</li><li>[**REST API**](quickstarts/get-started-v2-1-sdk-rest-api.md#try-it-prebuilt-model)</li><li>[**Client-library SDK**](how-to-guides/try-sdk-rest-api.md)</li><li>[**Form Recognizer Docker container**](containers/form-recognizer-container-install-run.md?tabs=business-card#run-the-container-with-the-docker-compose-up-command)</li></ul>|
+|[**Layout API**](concept-layout.md) | Extraction and analysis of text, selection marks, tables, and bounding box coordinates, from forms and documents. | <ul><li>[**Form Recognizer labeling tool**](quickstarts/try-sample-label-tool.md#analyze-layout)</li><li>[**REST API**](quickstarts/get-started-v2-1-sdk-rest-api.md#try-it-layout-model)</li><li>[**Client-library SDK**](/azure/applied-ai-services/form-recognizer/how-to-guides/v2-1-sdk-rest-api)</li><li>[**Form Recognizer Docker container**](containers/form-recognizer-container-install-run.md?branch=main&tabs=layout#run-the-container-with-the-docker-compose-up-command)</li></ul>|
+|[**Custom model**](concept-custom.md) | Extraction and analysis of data from forms and documents specific to distinct business data and use cases.| <ul><li>[**Form Recognizer labeling tool**](quickstarts/try-sample-label-tool.md#train-a-custom-form-model)</li><li>[**REST API**](/azure/applied-ai-services/form-recognizer/how-to-guides/v2-1-sdk-rest-api)</li><li>[**Client-library SDK**](/azure/applied-ai-services/form-recognizer/how-to-guides/v2-1-sdk-rest-api)</li><li>[**Form Recognizer Docker container**](containers/form-recognizer-container-install-run.md?tabs=custom#run-the-container-with-the-docker-compose-up-command)</li></ul>|
+|[**Invoice model**](concept-invoice.md) | Automated data processing and extraction of key information from sales invoices. | <ul><li>[**Form Recognizer labeling tool**](quickstarts/try-sample-label-tool.md#analyze-using-a-prebuilt-model)</li><li>[**REST API**](quickstarts/get-started-v2-1-sdk-rest-api.md#try-it-prebuilt-model)</li><li>[**Client-library SDK**](/azure/applied-ai-services/form-recognizer/how-to-guides/v2-1-sdk-rest-api)</li><li>[**Form Recognizer Docker container**](containers/form-recognizer-container-install-run.md?tabs=invoice#run-the-container-with-the-docker-compose-up-command)</li></ul>|
+|[**Receipt model**](concept-receipt.md) | Automated data processing and extraction of key information from sales receipts.| <ul><li>[**Form Recognizer labeling tool**](quickstarts/try-sample-label-tool.md#analyze-using-a-prebuilt-model)</li><li>[**REST API**](quickstarts/get-started-v2-1-sdk-rest-api.md#try-it-prebuilt-model)</li><li>[**Client-library SDK**](/azure/applied-ai-services/form-recognizer/how-to-guides/v2-1-sdk-rest-api)</li><li>[**Form Recognizer Docker container**](containers/form-recognizer-container-install-run.md?tabs=receipt#run-the-container-with-the-docker-compose-up-command)</li></ul>|
+|[**ID document model**](concept-id-document.md) | Automated data processing and extraction of key information from US driver's licenses and international passports.| <ul><li>[**Form Recognizer labeling tool**](quickstarts/try-sample-label-tool.md#analyze-using-a-prebuilt-model)</li><li>[**REST API**](quickstarts/get-started-v2-1-sdk-rest-api.md#try-it-prebuilt-model)</li><li>[**Client-library SDK**](/azure/applied-ai-services/form-recognizer/how-to-guides/v2-1-sdk-rest-api)</li><li>[**Form Recognizer Docker container**](containers/form-recognizer-container-install-run.md?tabs=id-document#run-the-container-with-the-docker-compose-up-command)</li></ul>|
+|[**Business card model**](concept-business-card.md) | Automated data processing and extraction of key information from business cards.| <ul><li>[**Form Recognizer labeling tool**](quickstarts/try-sample-label-tool.md#analyze-using-a-prebuilt-model)</li><li>[**REST API**](quickstarts/get-started-v2-1-sdk-rest-api.md#try-it-prebuilt-model)</li><li>[**Client-library SDK**](/azure/applied-ai-services/form-recognizer/how-to-guides/v2-1-sdk-rest-api)</li><li>[**Form Recognizer Docker container**](containers/form-recognizer-container-install-run.md?tabs=business-card#run-the-container-with-the-docker-compose-up-command)</li></ul>|
::: moniker-end
applied-ai-services V3 Migration Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/v3-migration-guide.md
In this migration guide, you've learned how to upgrade your existing Form Recogn
* [Review the new REST API](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/AnalyzeDocument)
* [What is Form Recognizer?](overview.md)
-* [Form Recognizer quickstart](./quickstarts/try-sdk-rest-api.md)
+* [Form Recognizer quickstart](/azure/applied-ai-services/form-recognizer/how-to-guides/v2-1-sdk-rest-api)
applied-ai-services Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/whats-new.md
pip package version 3.1.0b4
> [Learn more about Layout extraction](concept-layout.md)
-* **Client library update** - The latest versions of the [client libraries](./quickstarts/try-sdk-rest-api.md) for .NET, Python, Java, and JavaScript support the Form Recognizer 2.1 API.
+* **Client library update** - The latest versions of the [client libraries](/azure/applied-ai-services/form-recognizer/how-to-guides/v2-1-sdk-rest-api) for .NET, Python, Java, and JavaScript support the Form Recognizer 2.1 API.
* **New language supported: Japanese** - The following new languages are now supported: for `AnalyzeLayout` and `AnalyzeCustomForm`: Japanese (`ja`). [Language support](language-support.md)
* **Text line style indication (handwritten/other) (Latin languages only)** - Form Recognizer now outputs an `appearance` object classifying whether each text line is handwritten style or not, along with a confidence score. This feature is supported only for Latin languages.
* **Quality improvements** - Extraction improvements including single digit extraction improvements.
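The text line style output described above can be consumed by walking the analyze result. A hedged sketch, assuming the v2.1 `readResults` shape in which each line may carry an optional `appearance.style` object with a `name` and `confidence` (`handwritten_lines` is a hypothetical helper, not part of any SDK):

```python
def handwritten_lines(read_results):
    """Collect (text, confidence) for lines classified as handwritten.

    Assumption: each page dict holds a "lines" list, and a line may
    carry an "appearance" dict with a "style" dict inside it.
    """
    found = []
    for page in read_results:
        for line in page.get("lines", []):
            style = line.get("appearance", {}).get("style", {})
            if style.get("name") == "handwriting":
                found.append((line.get("text"), style.get("confidence")))
    return found

# Illustrative response fragment (not real service output).
sample = [{"lines": [
    {"text": "Total: 42.00",
     "appearance": {"style": {"name": "other", "confidence": 0.99}}},
    {"text": "signed J. Doe",
     "appearance": {"style": {"name": "handwriting", "confidence": 0.87}}},
    {"text": "no style info"},
]}]
```

Lines without an `appearance` key are simply skipped, so the helper tolerates older responses.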
pip package version 3.1.0b4
**v2.0** includes the following update:
-* The [client libraries](./quickstarts/try-sdk-rest-api.md) for NET, Python, Java, and JavaScript have entered General Availability.
+* The [client libraries](/azure/applied-ai-services/form-recognizer/how-to-guides/v2-1-sdk-rest-api) for .NET, Python, Java, and JavaScript have entered General Availability.
**New samples** are available on GitHub.
The JSON responses for all API calls have new formats. Some keys and values have
## Next steps
-Complete a [quickstart](./quickstarts/try-sdk-rest-api.md) to get started writing a forms processing app with Form Recognizer in the development language of your choice.
+Complete a [quickstart](/azure/applied-ai-services/form-recognizer/how-to-guides/v2-1-sdk-rest-api) to get started writing a forms processing app with Form Recognizer in the development language of your choice.
## See also
automation Manage Runbooks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/manage-runbooks.md
When you test a runbook, the [Draft version](#publish-a-runbook) is executed and
Even though the Draft version is being run, the runbook still executes normally and performs any actions against resources in the environment. For this reason, you should only test runbooks on non-production resources.
+> [!NOTE]
+> All runbook execution actions are logged in the **Activity Log** of the automation account with the operation name **Create an Azure Automation job**. However, runbook execution in a test pane where the draft version of the runbook is executed would be logged in the activity logs with the operation name **Write an Azure Automation runbook draft**. Select **Operation** and **JSON** tab to see the scope ending with *../runbooks/(runbook name)/draft/testjob*.
+
The procedure to test each [type of runbook](automation-runbook-types.md) is the same. There's no difference in testing between the textual editor and the graphical editor in the Azure portal.
1. Open the Draft version of the runbook in either the [textual editor](automation-edit-textual-runbook.md) or the [graphical editor](automation-graphical-authoring-intro.md).
automation Migrate Run As Accounts Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/migrate-run-as-accounts-managed-identity.md
The following examples of runbook scripts fetch the Resource Manager resources b
# [Run As account](#tab/run-as-account)
-```powershell
+```powershell-interactive
$connectionName = "AzureRunAsConnection"
try {
The following examples of runbook scripts fetch the Resource Manager resources b
>[!NOTE]
> Enable appropriate RBAC permissions for the system identity of this Automation account. Otherwise, the runbook might fail.
- ```powershell
+ ```powershell-interactive
try {
    "Logging in to Azure..."
The following examples of runbook scripts fetch the Resource Manager resources b
```
# [User-assigned managed identity](#tab/ua-managed-identity)
-```powershell
+```powershell-interactive
try {
availability-zones Migrate App Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/availability-zones/migrate-app-service.md
description: Learn how to migrate Azure App Service to availability zone support
Previously updated : 08/03/2022 Last updated : 10/19/2022
Azure App Service can be deployed into [Availability Zones (AZ)](../availability
An App Service lives in an App Service plan (ASP), and the App Service plan exists in a single scale unit. App Services are zonal services, which means that App Services can be deployed using one of the following methods:
- For App Services that aren't configured to be zone redundant, the VM instances are placed in a single zone that is selected by the platform in the selected region.
-- For App Services that are configured to be zone redundant, the platform automatically spreads the VM instances in the App Service plan across all three zones in the selected region. If a VM instance capacity larger than three is specified and the number of instances is divisible by three, the instances will be spread evenly. Otherwise, instance counts beyond 3*N will get spread across the remaining one or two zones.
+
+- For App Services that are configured to be zone redundant, the platform automatically spreads the VM instances in the App Service plan across all three zones in the selected region. If a VM instance capacity larger than three is specified and the number of instances is a multiple of three (3 * N), the instances will be spread evenly. However, if the number of instances is not a multiple of three, the remainder of the instances will get spread across the remaining one or two zones.
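The spreading rule above is plain integer arithmetic: an even share per zone, plus one extra instance per zone for the remainder. A minimal sketch (illustrative only; `spread_instances` is a hypothetical helper, and which specific zones receive the remainder is chosen by the platform, not configurable):

```python
def spread_instances(count: int, zones: int = 3) -> list[int]:
    """Return per-zone instance counts for `count` VM instances.

    Multiples of `zones` spread evenly; otherwise the first
    `count % zones` zones each get one extra instance.
    """
    base, remainder = divmod(count, zones)
    return [base + (1 if z < remainder else 0) for z in range(zones)]
```

For example, 6 instances give 2 per zone, while 7 instances leave one zone with an extra instance.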
## Prerequisites
azure-arc Validation Program https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/validation-program.md
To see how all Azure Arc-enabled components are validated, see [Validation progr
## Partners
-### Cisco
-
-|Solution and version | Kubernetes version | Azure Arc-enabled data services version | SQL engine version | PostgreSQL server version
-|--|--|--|--|--|
-|Cisco Hyperflex on VMware <br/> Cisco IKS ESXi 6.7 U3 |1.21.13|v1.9.0_2022-07-12|16.0.312.4243| Not validated |
-
### Dell
|Solution and version | Kubernetes version | Azure Arc-enabled data services version | SQL engine version | PostgreSQL server version
azure-arc Manage Automatic Vm Extension Upgrade https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/manage-automatic-vm-extension-upgrade.md
If you continue to have trouble upgrading an extension, you can [disable automat
## Supported extensions
-Automatic extension upgrade supports the following extensions (and more are added periodically):
+Automatic extension upgrade supports the following extensions:
-- Azure Monitor Agent - Linux and Windows
-- Azure Security agent - Linux and Windows
+- Azure Monitor agent - Linux and Windows
+- Log Analytics agent (OMS agent) - Linux only
- Dependency agent - Linux and Windows
+- Azure Security agent - Linux and Windows
- Key Vault Extension - Linux only
-- Log Analytics agent (OMS agent) - Linux only
+- Azure Update Management Center - Linux and Windows
+- Azure Automation Hybrid Runbook Worker - Linux and Windows
+- Azure Arc-enabled SQL Server agent - Windows only
+
+More extensions will be added over time. Extensions that do not support automatic extension upgrade today are still configured to enable automatic upgrades by default. This setting will have no effect until the extension publisher chooses to support automatic upgrades.
## Manage automatic extension upgrade
azure-arc Onboard Group Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/onboard-group-policy.md
Title: Connect machines at scale using group policy description: In this article, you learn how to connect machines to Azure using Azure Arc-enabled servers using group policy. Previously updated : 05/25/2022 Last updated : 10/18/2022
try
"Installation Complete" >> $logpath }
- & "$env:ProgramW6432\AzureConnectedMachineAgent\azcmagent.exe" connect --config "$InstallationFolder\$ConfigFilename" >> $logpath
+ & "$env:ProgramW6432\AzureConnectedMachineAgent\azcmagent.exe" connect --config "$InstallationFolder\$ConfigFilename" --correlation-id "478b97c2-9310-465a-87df-f21e66c2b248" >> $logpath
if ($LASTEXITCODE -ne 0) { throw "Failed during azcmagent connect: $LASTEXITCODE" }
azure-arc Quickstart Connect System Center Virtual Machine Manager To Arc https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/system-center-virtual-machine-manager/quickstart-connect-system-center-virtual-machine-manager-to-arc.md
description: In this QuickStart, you will learn how to use the helper script to
Previously updated : 09/14/2022 Last updated : 10/19/2022
azure-functions Durable Functions Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/durable/durable-functions-overview.md
The primary use case for Durable Functions is simplifying complex, stateful coor
### <a name="chaining"></a>Pattern #1: Function chaining
-In the function chaining pattern, a sequence of functions executes in a specific order. In this pattern, the output of one function is applied to the input of another function.
+In the function chaining pattern, a sequence of functions executes in a specific order. In this pattern, the output of one function is applied to the input of another function. The use of queues between each function ensures that the system stays durable and scalable, even though there is a flow of control from one function to the next.
+
![A diagram of the function chaining pattern](./media/durable-functions-concepts/function-chaining.png)
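The chaining pattern itself reduces to feeding each function's output into the next input. A framework-free sketch (plain Python, not the Durable Functions SDK; `chain` is a hypothetical helper standing in for the orchestrator):

```python
def chain(*functions):
    """Run functions in order, passing each output as the next input,
    mimicking the F1 -> F2 -> F3 chaining pattern."""
    def run(value):
        for fn in functions:
            value = fn(value)
        return value
    return run

# F1 adds one, F2 doubles, F3 formats the result.
pipeline = chain(lambda x: x + 1, lambda x: x * 2, lambda x: f"result={x}")
```

In Durable Functions, the orchestrator plays the role of `run`, with queues and checkpoints between the steps providing the durability described above.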
azure-functions Functions Bindings Event Grid Trigger https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-event-grid-trigger.md
For explanations of the common and event-specific properties, see [Event propert
## Next steps
+* If you have questions, submit an issue in the [azure-functions-eventgrid-extension repository](https://github.com/Azure/azure-functions-eventgrid-extension/issues)
* [Dispatch an Event Grid event](./functions-bindings-event-grid-output.md)

[EventGridEvent]: /dotnet/api/microsoft.azure.eventgrid.models.eventgridevent
azure-functions Functions Bindings Event Grid https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-event-grid.md
The Event Grid output binding is only available for Functions 2.x and higher. Ev
## Next steps
+* If you have questions, submit an issue in the [azure-functions-eventgrid-extension repository](https://github.com/Azure/azure-functions-eventgrid-extension/issues)
* [Event Grid trigger][trigger]
* [Event Grid output binding][binding]
* [Run a function when an Event Grid event is dispatched](./functions-bindings-event-grid-trigger.md)
azure-functions Functions Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-get-started.md
zone_pivot_groups: programming-languages-set-functions-lang-workers
## Introduction
-[Azure Functions](./functions-overview.md) allows you to implement your system's logic into readily-available blocks of code. These code blocks are called "functions".
+[Azure Functions](./functions-overview.md) allows you to implement your system's logic as event-driven, readily-available blocks of code. These code blocks are called "functions".
Use the following resources to get started.
azure-functions Functions Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-overview.md
Azure Functions is a serverless solution that allows you to write less code, maintain less infrastructure, and save on costs. Instead of worrying about deploying and maintaining servers, the cloud infrastructure provides all the up-to-date resources needed to keep your applications running.
-You focus on the pieces of code that matter most to you, and Azure Functions handles the rest.<br /><br />
+You focus on the code that matters most to you, in the most productive language for you, and Azure Functions handles the rest.<br /><br />
> [!VIDEO https://www.youtube.com/embed/8-jz5f_JyEQ]
The following are a common, _but by no means exhaustive_, set of scenarios for A
| | |
| **Build a web API** | Implement an endpoint for your web applications using the [HTTP trigger](./functions-bindings-http-webhook.md) |
| **Process file uploads** | Run code when a file is uploaded or changed in [blob storage](./functions-bindings-storage-blob.md) |
-| **Build a serverless workflow** | Chain a series of functions together using [durable functions](./durable/durable-functions-overview.md) |
+| **Build a serverless workflow** | Create an event-driven workflow from a series of functions using [durable functions](./durable/durable-functions-overview.md) |
| **Respond to database changes** | Run custom logic when a document is created or updated in [Azure Cosmos DB](./functions-bindings-cosmosdb-v2.md) |
| **Run scheduled tasks** | Execute code on [pre-defined timed intervals](./functions-bindings-timer.md) |
| **Create reliable message queue systems** | Process message queues using [Queue Storage](./functions-bindings-storage-queue.md), [Service Bus](./functions-bindings-service-bus.md), or [Event Hubs](./functions-bindings-event-hubs.md) |
| **Analyze IoT data streams** | Collect and process [data from IoT devices](./functions-bindings-event-iot.md) |
| **Process data in real time** | Use [Functions and SignalR](./functions-bindings-signalr-service.md) to respond to data in the moment |
+These scenarios allow you to build event-driven systems using modern architectural patterns.
+
As you build your functions, you have the following options and resources available:
- **Use your preferred language**: Write functions in [C#, Java, JavaScript, PowerShell, or Python](./supported-languages.md), or use a [custom handler](./functions-custom-handlers.md) to use virtually any other language.
azure-monitor Alerts Metric Near Real Time https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-metric-near-real-time.md
Here's the full list of Azure Monitor metric sources supported by the newer aler
|Microsoft.AppConfiguration/configurationStores |Yes | No | [Azure App Configuration](../essentials/metrics-supported.md#microsoftappconfigurationconfigurationstores) |
|Microsoft.AppPlatform/spring | Yes | No | [Azure Spring Cloud](../essentials/metrics-supported.md#microsoftappplatformspring) |
|Microsoft.Automation/automationAccounts | Yes| No | [Azure Automation accounts](../essentials/metrics-supported.md#microsoftautomationautomationaccounts) |
-|Microsoft.AVS/privateClouds | No | No | [Azure VMware Solution](../essentials/metrics-supported.md#microsoftavsprivateclouds) |
+|Microsoft.AVS/privateClouds | No | No | [Azure VMware Solution](../essentials/metrics-supported.md) |
|Microsoft.Batch/batchAccounts | Yes | No | [Azure Batch accounts](../essentials/metrics-supported.md#microsoftbatchbatchaccounts) |
-|Microsoft.Bing/accounts | Yes | No | [Bing accounts](../essentials/metrics-supported.md#microsoftbingaccounts) |
+|Microsoft.Bing/accounts | Yes | No | [Bing accounts](../essentials/metrics-supported.md#microsoftmapsaccounts) |
|Microsoft.BotService/botServices | Yes | No | [Azure Bot Service](../essentials/metrics-supported.md#microsoftbotservicebotservices) |
|Microsoft.Cache/redis | Yes | Yes | [Azure Cache for Redis](../essentials/metrics-supported.md#microsoftcacheredis) |
|Microsoft.Cache/redisEnterprise | Yes | No | [Azure Cache for Redis Enterprise](../essentials/metrics-supported.md#microsoftcacheredisenterprise) |
Here's the full list of Azure Monitor metric sources supported by the newer aler
|Microsoft.Compute/cloudServices/roles | Yes | No | [Azure Cloud Services roles](../essentials/metrics-supported.md#microsoftcomputecloudservicesroles) |
|Microsoft.Compute/virtualMachines | Yes | Yes<sup>1</sup> | [Azure Virtual Machines](../essentials/metrics-supported.md#microsoftcomputevirtualmachines) |
|Microsoft.Compute/virtualMachineScaleSets | Yes | No |[Azure Virtual Machine Scale Sets](../essentials/metrics-supported.md#microsoftcomputevirtualmachinescalesets) |
-|Microsoft.ConnectedVehicle/platformAccounts | Yes | No |[Connected Vehicle Platform Accounts](../essentials/metrics-supported.md#microsoftconnectedvehicleplatformaccounts) |
+|Microsoft.ConnectedVehicle/platformAccounts | Yes | No |[Connected Vehicle Platform Accounts](../essentials/metrics-supported.md) |
|Microsoft.ContainerInstance/containerGroups | Yes| No | [Container groups](../essentials/metrics-supported.md#microsoftcontainerinstancecontainergroups) |
|Microsoft.ContainerRegistry/registries | No | No | [Azure Container Registry](../essentials/metrics-supported.md#microsoftcontainerregistryregistries) |
|Microsoft.ContainerService/managedClusters | Yes | No | [Managed clusters](../essentials/metrics-supported.md#microsoftcontainerservicemanagedclusters) |
Here's the full list of Azure Monitor metric sources supported by the newer aler
|Microsoft.Peering/peeringServices | Yes | No | [Azure Peering Service](../essentials/metrics-supported.md#microsoftpeeringpeeringservices) |
|Microsoft.PowerBIDedicated/capacities | No | No | [Power BI dedicated capacities](../essentials/metrics-supported.md#microsoftpowerbidedicatedcapacities) |
|Microsoft.Purview/accounts | Yes | No | [Azure Purview accounts](../essentials/metrics-supported.md#microsoftpurviewaccounts) |
-|Microsoft.RecoveryServices/vaults | Yes | Yes | [Recovery Services vaults](../essentials/metrics-supported.md#microsoftrecoveryservicesvaults) |
+|Microsoft.RecoveryServices/vaults | Yes | Yes | [Recovery Services vaults](../essentials/metrics-supported.md) |
|Microsoft.Relay/namespaces | Yes | No | [Relays](../essentials/metrics-supported.md#microsoftrelaynamespaces) |
|Microsoft.Search/searchServices | No | No | [Search services](../essentials/metrics-supported.md#microsoftsearchsearchservices) |
|Microsoft.ServiceBus/namespaces | Yes | No | [Azure Service Bus](../essentials/metrics-supported.md#microsoftservicebusnamespaces) |
Here's the full list of Azure Monitor metric sources supported by the newer aler
|Microsoft.Synapse/workspaces | Yes | No | [Azure Synapse Analytics](../essentials/metrics-supported.md#microsoftsynapseworkspaces) |
|Microsoft.Synapse/workspaces/bigDataPools | Yes | No | [Azure Synapse Analytics Apache Spark pools](../essentials/metrics-supported.md#microsoftsynapseworkspacesbigdatapools) |
|Microsoft.Synapse/workspaces/sqlPools | Yes | No | [Azure Synapse Analytics SQL pools](../essentials/metrics-supported.md#microsoftsynapseworkspacessqlpools) |
-|Microsoft.VMWareCloudSimple/virtualMachines | Yes | No | [CloudSimple virtual machines](../essentials/metrics-supported.md#microsoftvmwarecloudsimplevirtualmachines) |
+|Microsoft.VMWareCloudSimple/virtualMachines | Yes | No | [CloudSimple virtual machines](../essentials/metrics-supported.md) |
|Microsoft.Web/containerApps | Yes | No | Azure Container Apps |
|Microsoft.Web/hostingEnvironments/multiRolePools | Yes | No | [Azure App Service environment multi-role pools](../essentials/metrics-supported.md#microsoftwebhostingenvironmentsmultirolepools)|
|Microsoft.Web/hostingEnvironments/workerPools | Yes | No | [Azure App Service environment worker pools](../essentials/metrics-supported.md#microsoftwebhostingenvironmentsworkerpools)|
azure-monitor Java Standalone Arguments https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/java-standalone-arguments.md
# Tips for updating your JVM args - Azure Monitor Application Insights for Java
-## Azure environments
+## Azure App Services
-Configure [App Services](../../app-service/configure-language-java.md#set-java-runtime-options).
+See [Application Monitoring for Azure App Service and Java](./azure-web-apps-java.md).
+
+## Azure Functions
+
+See [Monitoring Azure Functions with Azure Monitor Application Insights](./monitor-functions.md#distributed-tracing-for-java-applications-public-preview).
## Spring Boot
azure-monitor Java Standalone Config https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/java-standalone-config.md
Starting from version 3.2.0, the following preview instrumentations can be enabled:
{
  "preview": {
    "instrumentation": {
+      "akka": {
+        "enabled": true
+      },
      "apacheCamel": {
        "enabled": true
      },
      "grizzly": {
        "enabled": true
      },
-      "springIntegration": {
+      "play": {
        "enabled": true
      },
-      "akka": {
+      "springIntegration": {
        "enabled": true
      },
      "vertx": {
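For readability, here is a sketch of how the resolved `preview` instrumentation block would look after this change, with the new `play` entry added and the list alphabetized. All entries are shown enabled purely for illustration; the surrounding configuration file may contain other settings:

```json
{
  "preview": {
    "instrumentation": {
      "akka": { "enabled": true },
      "apacheCamel": { "enabled": true },
      "grizzly": { "enabled": true },
      "play": { "enabled": true },
      "springIntegration": { "enabled": true },
      "vertx": { "enabled": true }
    }
  }
}
```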
azure-monitor Autoscale Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/autoscale/autoscale-overview.md
When the conditions in the rules are met, one or more autoscale actions are triggered.
## Scaling out and scaling up
-Autoscale scales in and out, which is an increase, or decrease of the number of resource instances. Scaling in and out is also called horizontal scaling. For example, for a virtual machine scale set, scaling out means adding more virtual machines. Scaling in means removing virtual machines. Horizontal scaling is flexible in a cloud situation as it allows you to run a large number of VMs to handle load.
+Autoscale scales in and out, which is an increase or decrease of the number of resource instances. Scaling in and out is also called horizontal scaling. For example, for a Virtual Machine Scale Set, scaling out means adding more virtual machines. Scaling in means removing virtual machines. Horizontal scaling is flexible in a cloud situation as it allows you to run a large number of VMs to handle load.
In contrast, scaling up and down, or vertical scaling, keeps the number of resources constant, but gives those resources more capacity in terms of memory, CPU speed, disk space and network. Vertical scaling is limited by the availability of larger hardware, which eventually reaches an upper limit. Hardware size availability varies in Azure by region. Vertical scaling may also require a restart of the virtual machine during the scaling process.
### Predictive autoscale (preview)
-[Predictive autoscale](./autoscale-predictive.md) uses machine learning to help manage and scale Azure virtual machine scale sets with cyclical workload patterns. It forecasts the overall CPU load on your virtual machine scale set, based on historical CPU usage patterns. The scale set can then be scaled out in time to meet the predicted demand.
+[Predictive autoscale](./autoscale-predictive.md) uses machine learning to help manage and scale Azure Virtual Machine Scale Sets with cyclical workload patterns. It forecasts the overall CPU load on your Virtual Machine Scale Set, based on historical CPU usage patterns. The scale set can then be scaled out in time to meet the predicted demand.
## Autoscale setup
The following diagram shows the autoscale architecture.
### Resource metrics
-Resources generate metrics that are used in autoscale rules to trigger scale events. Virtual machine scale sets use telemetry data from Azure diagnostics agents to generate metrics. Telemetry for Web apps and Cloud services comes directly from the Azure Infrastructure. Some commonly used metrics include CPU usage, memory usage, thread counts, queue length, and disk usage. See [Autoscale Common Metrics](autoscale-common-metrics.md) for a list of available metrics.
+Resources generate metrics that are used in autoscale rules to trigger scale events. Virtual Machine Scale Sets use telemetry data from Azure diagnostics agents to generate metrics. Telemetry for Web apps and Cloud services comes directly from the Azure Infrastructure. Some commonly used metrics include CPU usage, memory usage, thread counts, queue length, and disk usage. See [Autoscale Common Metrics](autoscale-common-metrics.md) for a list of available metrics.
### Custom metrics
The full list of configurable fields and descriptions is available in the [Autos
For code examples, see
-* [Tutorial: Automatically scale a virtual machine scale set with an Azure template](https://learn.microsoft.com/azure/virtual-machine-scale-sets/tutorial-autoscale-template)
-* [Tutorial: Automatically scale a virtual machine scale set with the Azure CLI](https://learn.microsoft.com/azure/virtual-machine-scale-sets/tutorial-autoscale-cli)
-* [Tutorial: Automatically scale a virtual machine scale set with an Azure template](https://learn.microsoft.com/azure/virtual-machine-scale-sets/tutorial-autoscale-powershell)
+* [Tutorial: Automatically scale a Virtual Machine Scale Set with an Azure template](https://learn.microsoft.com/azure/virtual-machine-scale-sets/tutorial-autoscale-template)
+* [Tutorial: Automatically scale a Virtual Machine Scale Set with the Azure CLI](https://learn.microsoft.com/azure/virtual-machine-scale-sets/tutorial-autoscale-cli)
+* [Tutorial: Automatically scale a Virtual Machine Scale Set with an Azure template](https://learn.microsoft.com/azure/virtual-machine-scale-sets/tutorial-autoscale-powershell)
## Horizontal vs vertical scaling
-Autoscale scales horizontally, which is an increase, or decrease of the number of resource instances. For example, in a virtual machine scale set, scaling out means adding more virtual machines Scaling in means removing virtual machines. Horizontal scaling is flexible in a cloud situation as it allows you to run a large number of VMs to handle load.
+Autoscale scales horizontally, which is an increase or decrease of the number of resource instances. For example, in a Virtual Machine Scale Set, scaling out means adding more virtual machines. Scaling in means removing virtual machines. Horizontal scaling is flexible in a cloud situation as it allows you to run a large number of VMs to handle load.
In contrast, vertical scaling keeps the number of resources constant but gives them more capacity in terms of memory, CPU speed, disk space, and network. Adding or removing capacity in vertical scaling is known as scaling up or down. Vertical scaling is limited by the availability of larger hardware, which eventually reaches an upper limit. Hardware size availability varies in Azure by region. Vertical scaling may also require a restart of the virtual machine during the scaling process.
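The horizontal scale-out/scale-in decision described above can be sketched as a toy rule evaluation. This is illustrative only, not the Azure autoscale engine; the function name, thresholds, and instance limits are made-up defaults:

```python
def evaluate_autoscale(cpu_samples, instance_count,
                       scale_out_threshold=80.0, scale_in_threshold=25.0,
                       minimum=1, maximum=10):
    """Toy horizontal-scaling rule: add an instance when average CPU over the
    look-back window is high, remove one when it is low. Illustrative only;
    thresholds and limits here are hypothetical, not Azure's defaults."""
    average = sum(cpu_samples) / len(cpu_samples)
    if average > scale_out_threshold and instance_count < maximum:
        return instance_count + 1   # scale out: add a VM
    if average < scale_in_threshold and instance_count > minimum:
        return instance_count - 1   # scale in: remove a VM
    return instance_count           # within band: no action

print(evaluate_autoscale([90, 95, 85], instance_count=3))  # scales out to 4
print(evaluate_autoscale([10, 5, 20], instance_count=3))   # scales in to 2
```

Note the explicit minimum and maximum bounds: real autoscale settings likewise require instance limits so a runaway metric can't add instances indefinitely.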
The following services are supported by autoscale:
| Service | Schema & Documentation |
|---|---|
-| Azure Virtual machines scale sets |[Overview of autoscale with Azure virtual machine scale sets](../../virtual-machine-scale-sets/virtual-machine-scale-sets-autoscale-overview.md) |
+| Azure Virtual machines scale sets |[Overview of autoscale with Azure Virtual Machine Scale Sets](../../virtual-machine-scale-sets/virtual-machine-scale-sets-autoscale-overview.md) |
| Web apps |[Scaling Web Apps](autoscale-get-started.md) |
| Azure API Management service|[Automatically scale an Azure API Management instance](../../api-management/api-management-howto-autoscale.md) |
| Azure Data Explorer Clusters|[Manage Azure Data Explorer clusters scaling to accommodate changing demand](/azure/data-explorer/manage-cluster-horizontal-scaling)|
| Azure Stream Analytics | [Autoscale streaming units (Preview)](../../stream-analytics/stream-analytics-autoscale.md) |
| Azure Machine Learning Workspace | [Autoscale an online endpoint](../../machine-learning/how-to-autoscale-endpoints.md) |
+| Spring Cloud |[Set up autoscale for microservice applications](../../spring-apps/how-to-setup-autoscale.md)|
+| Media Services | [Autoscaling in Media Services](/azure/media-services/latest/release-notes#autoscaling) |
+| Service Bus |[Automatically update messaging units of an Azure Service Bus namespace](../../service-bus-messaging/automate-update-messaging-units.md)|
+| Logic Apps - Integration Service Environment (ISE) | [Add ISE capacity](../../logic-apps/ise-manage-integration-service-environment.md#add-ise-capacity) |
## Next steps
To learn more about autoscale, see the following resources:
* [Azure Monitor autoscale common metrics](autoscale-common-metrics.md)
* [Use autoscale actions to send email and webhook alert notifications](autoscale-webhook-email.md)
-* [Tutorial: Automatically scale a virtual machine scale set with an Azure template](https://learn.microsoft.com/azure/virtual-machine-scale-sets/tutorial-autoscale-template)
-* [Tutorial: Automatically scale a virtual machine scale set with the Azure CLI](https://learn.microsoft.com/azure/virtual-machine-scale-sets/tutorial-autoscale-cli)
-* [Tutorial: Automatically scale a virtual machine scale set with an Azure template](https://learn.microsoft.com/azure/virtual-machine-scale-sets/tutorial-autoscale-powershell)
+* [Tutorial: Automatically scale a Virtual Machine Scale Set with an Azure template](https://learn.microsoft.com/azure/virtual-machine-scale-sets/tutorial-autoscale-template)
+* [Tutorial: Automatically scale a Virtual Machine Scale Set with the Azure CLI](https://learn.microsoft.com/azure/virtual-machine-scale-sets/tutorial-autoscale-cli)
+* [Tutorial: Automatically scale a Virtual Machine Scale Set with Azure PowerShell](https://learn.microsoft.com/azure/virtual-machine-scale-sets/tutorial-autoscale-powershell)
* [Autoscale CLI reference](https://learn.microsoft.com/cli/azure/monitor/autoscale?view=azure-cli-latest)
* [ARM template resource definition](https://learn.microsoft.com/azure/templates/microsoft.insights/autoscalesettings)
* [PowerShell Az.Monitor Reference](https://learn.microsoft.com/powershell/module/az.monitor/#monitor)
azure-monitor Container Insights Metric Alerts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-metric-alerts.md
- Title: Metric alerts from Container insights
-description: This article reviews the recommended metric alerts available from Container insights in public preview.
+ Title: Create metric alert rules in Container insights (preview)
+description: Describes how to create recommended metric alerts rules for a Kubernetes cluster in Container insights.
Previously updated : 05/24/2022 Last updated : 09/28/2022
-# Recommended metric alerts (preview) from Container insights
+# Metric alert rules in Container insights (preview)
-To alert on system resource issues when they are experiencing peak demand and running near capacity, with Container insights you would create a log alert based on performance data stored in Azure Monitor Logs. Container insights now includes pre-configured metric alert rules for your AKS and Azure Arc-enabled Kubernetes cluster, which is in public preview.
+Metric alerts in Azure Monitor proactively identify issues related to system resources of your Azure resources, including monitored Kubernetes clusters. Container insights provides pre-configured alert rules so that you don't have to create your own. This article describes the different types of alert rules you can create and how to enable and configure them.
-This article reviews the experience and provides guidance on configuring and managing these alert rules.
+> [!IMPORTANT]
+> Container insights in Azure Monitor now supports alerts based on Prometheus metrics. If you already use alerts based on custom metrics, you should migrate to Prometheus alerts and disable the equivalent custom metric alerts.
+## Types of metric alert rules
+There are two types of metric rules used by Container insights based on either Prometheus metrics or custom metrics. See a list of the specific alert rules for each at [Alert rule details](#alert-rule-details).
-If you're not familiar with Azure Monitor alerts, see [Overview of alerts in Microsoft Azure](../alerts/alerts-overview.md) before you start. To learn more about metric alerts, see [Metric alerts in Azure Monitor](../alerts/alerts-metric-overview.md).
+| Alert rule type | Description |
+|:|:|
+| [Prometheus rules](#prometheus-alert-rules) | Alert rules that use metrics stored in [Azure Monitor managed service for Prometheus (preview)](../essentials/prometheus-metrics-overview.md). There are two sets of Prometheus alert rules that you can choose to enable.<br><br>- *Community alerts* are hand-picked alert rules from the Prometheus community. Use this set of alert rules if you don't have any other alert rules enabled.<br>- *Recommended alerts* are the equivalent of the custom metric alert rules. Use this set if you're migrating from custom metrics to Prometheus metrics and want to retain identical functionality. |
+| [Metric rules](#metrics-alert-rules) | Alert rules that use [custom metrics collected for your Kubernetes cluster](container-insights-custom-metrics.md). Use these alert rules if you're not ready to move to Prometheus metrics yet or if you want to manage your alert rules in the Azure portal. |
-> [!NOTE]
-> Beginning October 8, 2021, three alerts have been updated to correctly calculate the alert condition: **Container CPU %**, **Container working set memory %**, and **Persistent Volume Usage %**. These new alerts have the same names as their corresponding previously available alerts, but they use new, updated metrics. We recommend that you disable the alerts that use the "Old" metrics, described in this article, and enable the "New" metrics. The "Old" metrics will no longer be available in recommended alerts after they are disabled, but you can manually re-enable them.
-## Prerequisites
+## Prometheus alert rules
+[Prometheus alert rules](../alerts/alerts-types.md#prometheus-alerts-preview) use metric data from your Kubernetes cluster sent to [Azure Monitor managed service for Prometheus](../essentials/prometheus-metrics-overview.md).
-Before you start, confirm the following:
+### Prerequisites
+- Your cluster must be configured to send metrics to [Azure Monitor managed service for Prometheus](../essentials/prometheus-metrics-overview.md). See [Collect Prometheus metrics from Kubernetes cluster with Container insights](container-insights-prometheus-metrics-addon.md).
-* Custom metrics are only available in a subset of Azure regions. A list of supported regions is documented in [Supported regions](../essentials/metrics-custom-overview.md#supported-regions).
+### Enable alert rules
-* To support metric alerts and the introduction of additional metrics, the minimum agent version required is **mcr.microsoft.com/azuremonitor/containerinsights/ciprod:ciprod05262020** for AKS and **mcr.microsoft.com/azuremonitor/containerinsights/ciprod:ciprod09252020** for Azure Arc-enabled Kubernetes cluster.
+The only method currently available for creating Prometheus alert rules is a Resource Manager template.
- To verify your cluster is running the newer version of the agent, you can either:
+1. Download the template that includes the set of alert rules that you want to enable. See [Alert rule details](#alert-rule-details) for a listing of the rules for each.
- * Run the command: `kubectl describe pod <azure-monitor-agent-pod-name> --namespace=kube-system`. In the status returned, note the value under **Image** for Azure Monitor agent in the *Containers* section of the output.
- * On the **Nodes** tab, select the cluster node and on the **Properties** pane to the right, note the value under **Agent Image Tag**.
+ - [Community alerts](https://aka.ms/azureprometheus-communityalerts)
+ - [Recommended alerts](https://aka.ms/azureprometheus-recommendedalerts)
- The value shown for AKS should be version **ciprod05262020** or later. The value shown for Azure Arc-enabled Kubernetes cluster should be version **ciprod09252020** or later. If your cluster has an older version, see [How to upgrade the Container insights agent](container-insights-manage-agent.md#upgrade-agent-on-aks-cluster) for steps to get the latest version.
+2. Deploy the template using any standard methods for installing Resource Manager templates. See [Resource Manager template samples for Azure Monitor](../resource-manager-samples.md#deploy-the-sample-templates) for guidance.
- For more information related to the agent release, see [agent release history](https://github.com/microsoft/docker-provider/tree/ci_feature_prod). To verify metrics are being collected, you can use Azure Monitor metrics explorer and verify from the **Metric namespace** that **insights** is listed. If it is, you can go ahead and start setting up the alerts. If you don't see any metrics collected, the cluster Service Principal or MSI is missing the necessary permissions. To verify the SPN or MSI is a member of the **Monitoring Metrics Publisher** role, follow the steps described in the section [Upgrade per cluster using Azure CLI](container-insights-update-metrics.md#update-one-cluster-by-using-the-azure-cli) to confirm and set role assignment.
-
-> [!TIP]
-> Download the new ConfigMap from [here](https://raw.githubusercontent.com/microsoft/Docker-Provider/ci_prod/kubernetes/container-azm-ms-agentconfig.yaml).
+> [!NOTE]
+> While the Prometheus alert rule can be created in a different resource group from the target resource, you should use the same resource group as your target resource.
-## Alert rules overview
+### Edit alert rules
-To alert on what matters, Container insights includes the following metric alerts for your AKS and Azure Arc-enabled Kubernetes clusters:
+ To edit the query and threshold or configure an action group for your alert rules, edit the appropriate values in the ARM template and redeploy it using any deployment method.
-|Name| Description |Default threshold |
-|-|-||
-|**(New)Average container CPU %** |Calculates average CPU used per container.|When average CPU usage per container is greater than 95%.|
-|**(New)Average container working set memory %** |Calculates average working set memory used per container.|When average working set memory usage per container is greater than 95%. |
-|Average CPU % |Calculates average CPU used per node. |When average node CPU utilization is greater than 80% |
-| Daily Data Cap Breach | When data cap is breached| When the total data ingestion to your Log Analytics workspace exceeds the [designated quota](../logs/daily-cap.md) |
-|Average Disk Usage % |Calculates average disk usage for a node.|When disk usage for a node is greater than 80%. |
-|**(New)Average Persistent Volume Usage %** |Calculates average PV usage per pod. |When average PV usage per pod is greater than 80%.|
-|Average Working set memory % |Calculates average Working set memory for a node. |When average Working set memory for a node is greater than 80%. |
-|Restarting container count |Calculates number of restarting containers. | When container restarts are greater than 0. |
-|Failed Pod Counts |Calculates if any pod in failed state.|When a number of pods in failed state are greater than 0. |
-|Node NotReady status |Calculates if any node is in NotReady state.|When a number of nodes in NotReady state are greater than 0. |
-|OOM Killed Containers |Calculates number of OOM killed containers. |When a number of OOM killed containers is greater than 0. |
-|Pods ready % |Calculates the average ready state of pods. |When ready state of pods is less than 80%.|
-|Completed job count |Calculates number of jobs completed more than six hours ago. |When number of stale jobs older than six hours is greater than 0.|
+### Configure alertable metrics in ConfigMaps
-There are common properties across all of these alert rules:
+Perform the following steps to configure your ConfigMap configuration file to override the default utilization thresholds. These steps are applicable only for the following alertable metrics:
-* All alert rules are metric based.
+- *cpuExceededPercentage*
+- *cpuThresholdViolated*
+- *memoryRssExceededPercentage*
+- *memoryRssThresholdViolated*
+- *memoryWorkingSetExceededPercentage*
+- *memoryWorkingSetThresholdViolated*
+- *pvUsageExceededPercentage*
+- *pvUsageThresholdViolated*
-* All alert rules are disabled by default.
+> [!TIP]
+> Download the new ConfigMap from [here](https://raw.githubusercontent.com/microsoft/Docker-Provider/ci_prod/kubernetes/container-azm-ms-agentconfig.yaml).
-* All alert rules are evaluated once per minute and they look back at last 5 minutes of data.
-* Alerts rules do not have an action group assigned to them by default. You can add an [action group](../alerts/action-groups.md) to the alert either by selecting an existing action group or creating a new action group while editing the alert rule.
+1. Edit the ConfigMap YAML file under the section `[alertable_metrics_configuration_settings.container_resource_utilization_thresholds]` or `[alertable_metrics_configuration_settings.pv_utilization_thresholds]`.
-* You can modify the threshold for alert rules by directly editing them. However, refer to the guidance provided in each alert rule before modifying its threshold.
+ - **Example**. Use the following ConfigMap configuration to modify the *cpuExceededPercentage* threshold to 90%:
-The following alert-based metrics have unique behavior characteristics compared to the other metrics:
+ ```
+ [alertable_metrics_configuration_settings.container_resource_utilization_thresholds]
+ # Threshold for container cpu, metric will be sent only when cpu utilization exceeds or becomes equal to the following percentage
+ container_cpu_threshold_percentage = 90.0
+ # Threshold for container memoryRss, metric will be sent only when memory rss exceeds or becomes equal to the following percentage
+ container_memory_rss_threshold_percentage = 95.0
+ # Threshold for container memoryWorkingSet, metric will be sent only when memory working set exceeds or becomes equal to the following percentage
+ container_memory_working_set_threshold_percentage = 95.0
+ ```
-* *completedJobsCount* metric is only sent when there are jobs that are completed greater than six hours ago.
+ - **Example**. Use the following ConfigMap configuration to modify the *pvUsageExceededPercentage* threshold to 80%:
-* *containerRestartCount* metric is only sent when there are containers restarting.
+ ```
+ [alertable_metrics_configuration_settings.pv_utilization_thresholds]
+ # Threshold for persistent volume usage bytes, metric will be sent only when persistent volume utilization exceeds or becomes equal to the following percentage
+ pv_usage_threshold_percentage = 80.0
+ ```
-* *oomKilledContainerCount* metric is only sent when there are OOM killed containers.
+2. Run the following kubectl command: `kubectl apply -f <configmap_yaml_file.yaml>`.
-* *cpuExceededPercentage*, *memoryRssExceededPercentage*, and *memoryWorkingSetExceededPercentage* metrics are sent when the CPU, memory Rss, and Memory Working set values exceed the configured threshold (the default threshold is 95%). *cpuThresholdViolated*, *memoryRssThresholdViolated*, and *memoryWorkingSetThresholdViolated* metrics are equal to 0 is the usage percentage is below the threshold and are equal to 1 if the usage percentage is above the threshold. These thresholds are exclusive of the alert condition threshold specified for the corresponding alert rule. Meaning, if you want to collect these metrics and analyze them from [Metrics explorer](../essentials/metrics-getting-started.md), we recommend you configure the threshold to a value lower than your alerting threshold. The configuration related to the collection settings for their container resource utilization thresholds can be overridden in the ConfigMaps file under the section `[alertable_metrics_configuration_settings.container_resource_utilization_thresholds]`. See the section [Configure alertable metrics ConfigMaps](#configure-alertable-metrics-in-configmaps) for details related to configuring your ConfigMap configuration file.
+ Example: `kubectl apply -f container-azm-ms-agentconfig.yaml`.
-* *pvUsageExceededPercentage* metric is sent when the persistent volume usage percentage exceeds the configured threshold (the default threshold is 60%). *pvUsageThresholdViolated* metric is equal to 0 when the PV usage percentage is below the threshold and is equal 1 if the usage is above the threshold. This threshold is exclusive of the alert condition threshold specified for the corresponding alert rule. Meaning, if you want to collect these metrics and analyze them from [Metrics explorer](../essentials/metrics-getting-started.md), we recommend you configure the threshold to a value lower than your alerting threshold. The configuration related to the collection settings for persistent volume utilization thresholds can be overridden in the ConfigMaps file under the section `[alertable_metrics_configuration_settings.pv_utilization_thresholds]`. See the section [Configure alertable metrics ConfigMaps](#configure-alertable-metrics-in-configmaps) for details related to configuring your ConfigMap configuration file. Collection of persistent volume metrics with claims in the *kube-system* namespace are excluded by default. To enable collection in this namespace, use the section `[metric_collection_settings.collect_kube_system_pv_metrics]` in the ConfigMap file. See [Metric collection settings](./container-insights-agent-config.md#metric-collection-settings) for details.
+The configuration change can take a few minutes to finish before taking effect, and all omsagent pods in the cluster will restart. The restart is a rolling restart for all omsagent pods; they don't all restart at the same time. When the restarts are finished, a message is displayed that's similar to the following example and includes the result: `configmap "container-azm-ms-agentconfig" created`.
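The collection behavior that these threshold settings control can be modeled roughly as follows. This is an illustrative sketch, not the agent's actual implementation, and the function name is hypothetical: the *ExceededPercentage* metric is sent only when utilization meets or exceeds the configured threshold, while the corresponding *ThresholdViolated* metric is reported as 0 or 1.

```python
def gate_utilization_metrics(name, utilization_pct, threshold_pct):
    """Illustrative model of threshold gating (names are hypothetical):
    the exceeded-percentage metric is sent only at or above the threshold,
    while the violated flag is reported as 0 or 1 either way."""
    violated = 1 if utilization_pct >= threshold_pct else 0
    metrics = {f"{name}ThresholdViolated": violated}
    if violated:
        metrics[f"{name}ExceededPercentage"] = utilization_pct
    return metrics

print(gate_utilization_metrics("cpu", 97.0, 90.0))
# {'cpuThresholdViolated': 1, 'cpuExceededPercentage': 97.0}
print(gate_utilization_metrics("cpu", 50.0, 90.0))
# {'cpuThresholdViolated': 0}
```

Because the gating threshold is separate from the alert rule's own condition, setting it lower than your alerting threshold ensures the metric is collected before the alert would fire.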
-## Metrics collected
+## Metrics alert rules
+[Metric alert rules](../alerts/alerts-types.md#metric-alerts) use [custom metric data from your Kubernetes cluster](container-insights-custom-metrics.md).
-The following metrics are enabled and collected, unless otherwise specified, as part of this feature. The metrics in **bold** with label "Old" are the ones replaced by "New" metrics collected for correct alert evaluation.
-|Metric namespace |Metric |Description |
-||-||
-|Insights.container/nodes |cpuUsageMillicores |CPU utilization in millicores by host.|
-|Insights.container/nodes |cpuUsagePercentage, cpuUsageAllocatablePercentage (preview) |CPU usage percentage by node and allocatable respectively.|
-|Insights.container/nodes |memoryRssBytes |Memory RSS utilization in bytes by host.|
-|Insights.container/nodes |memoryRssPercentage, memoryRssAllocatablePercentage (preview) |Memory RSS usage percentage by host and allocatable respectively.|
-|Insights.container/nodes |memoryWorkingSetBytes |Memory Working Set utilization in bytes by host.|
-|Insights.container/nodes |memoryWorkingSetPercentage, memoryRssAllocatablePercentage (preview) |Memory Working Set usage percentage by host and allocatable respectively.|
-|Insights.container/nodes |nodesCount |Count of nodes by status.|
-|Insights.container/nodes |diskUsedPercentage |Percentage of disk used on the node by device.|
-|Insights.container/pods |podCount |Count of pods by controller, namespace, node, and phase.|
-|Insights.container/pods |completedJobsCount |Completed jobs count older user configurable threshold (default is six hours) by controller, Kubernetes namespace. |
-|Insights.container/pods |restartingContainerCount |Count of container restarts by controller, Kubernetes namespace.|
-|Insights.container/pods |oomKilledContainerCount |Count of OOMkilled containers by controller, Kubernetes namespace.|
-|Insights.container/pods |podReadyPercentage |Percentage of pods in ready state by controller, Kubernetes namespace.|
-|Insights.container/containers |**(Old)cpuExceededPercentage** |CPU utilization percentage for containers exceeding user configurable threshold (default is 95.0) by container name, controller name, Kubernetes namespace, pod name.<br> Collected |
-|Insights.container/containers |**(New)cpuThresholdViolated** |Metric triggered when CPU utilization percentage for containers exceeding user configurable threshold (default is 95.0) by container name, controller name, Kubernetes namespace, pod name.<br> Collected |
-|Insights.container/containers |**(Old)memoryRssExceededPercentage** |Memory RSS percentage for containers exceeding user configurable threshold (default is 95.0) by container name, controller name, Kubernetes namespace, pod name.|
-|Insights.container/containers |**(New)memoryRssThresholdViolated** |Metric triggered when Memory RSS percentage for containers exceeding user configurable threshold (default is 95.0) by container name, controller name, Kubernetes namespace, pod name.|
-|Insights.container/containers |**(Old)memoryWorkingSetExceededPercentage** |Memory Working Set percentage for containers exceeding user configurable threshold (default is 95.0) by container name, controller name, Kubernetes namespace, pod name.|
-|Insights.container/containers |**(New)memoryWorkingSetThresholdViolated** |Metric triggered when Memory Working Set percentage for containers exceeding user configurable threshold (default is 95.0) by container name, controller name, Kubernetes namespace, pod name.|
-|Insights.container/persistentvolumes |**(Old)pvUsageExceededPercentage** |PV utilization percentage for persistent volumes exceeding user configurable threshold (default is 60.0) by claim name, Kubernetes namespace, volume name, pod name, and node name.|
-|Insights.container/persistentvolumes |**(New)pvUsageThresholdViolated** |Metric triggered when PV utilization percentage for persistent volumes exceeding user configurable threshold (default is 60.0) by claim name, Kubernetes namespace, volume name, pod name, and node name.
+### Prerequisites
+ - You may need to enable collection of custom metrics for your cluster. See [Metrics collected by Container insights](container-insights-custom-metrics.md).
+ - See the supported regions for custom metrics at [Supported regions](../essentials/metrics-custom-overview.md#supported-regions).
-## Enable alert rules
-Follow these steps to enable the metric alerts in Azure Monitor from the Azure portal. To enable using a Resource Manager template, see [Enable with a Resource Manager template](#enable-with-a-resource-manager-template).
+### Enable and configure alert rules
-### From the Azure portal
+#### [Azure portal](#tab/azure-portal)
-This section walks through enabling Container insights metric alert (preview) from the Azure portal.
+#### Enable alert rules
-1. Sign in to the [Azure portal](https://portal.azure.com/).
+1. From the **Insights** menu for your cluster, select **Recommended alerts**.
-2. Access to the Container insights metrics alert (preview) feature is available directly from an AKS cluster by selecting **Insights** from the left pane in the Azure portal.
+ :::image type="content" source="media/container-insights-metric-alerts/command-bar-recommended-alerts.png" lightbox="media/container-insights-metric-alerts/command-bar-recommended-alerts.png" alt-text="Screenshot showing recommended alerts option in Container insights.":::
-3. From the command bar, select **Recommended alerts**.
- ![Screenshot showing the Recommended alerts option in Container insights.](./media/container-insights-metric-alerts/command-bar-recommended-alerts.png)
+2. Toggle the **Status** for each alert rule to enable it. The alert rule is created, and the rule name updates to include a link to the new alert resource.
-4. The **Recommended alerts** property pane automatically displays on the right side of the page. By default, all alert rules in the list are disabled. After selecting **Enable**, the alert rule is created and the rule name updates to include a link to the alert resource.
+ :::image type="content" source="media/container-insights-metric-alerts/recommended-alerts-pane-enable.png" lightbox="media/container-insights-metric-alerts/recommended-alerts-pane-enable.png" alt-text="Screenshot showing list of recommended alerts and option for enabling each.":::
- ![Screenshot showing the Recommended alerts properties pane.](./media/container-insights-metric-alerts/recommended-alerts-pane.png)
+3. Alert rules aren't associated with an [action group](../alerts/action-groups.md) to notify users that an alert has been triggered. Select **No action group assigned** to open the **Action Groups** page, then select an existing action group or create one by selecting **Create action group**.
- After selecting the **Enable/Disable** toggle to enable the alert, an alert rule is created and the rule name updates to include a link to the actual alert resource.
+ :::image type="content" source="media/container-insights-metric-alerts/select-action-group.png" lightbox="media/container-insights-metric-alerts/select-action-group.png" alt-text="Screenshot showing selection of an action group.":::
- ![Screenshot showing the option to enable an alert rule.](./media/container-insights-metric-alerts/recommended-alerts-pane-enable.png)
+#### Edit alert rules
-5. Alert rules are not associated with an [action group](../alerts/action-groups.md) to notify users that an alert has been triggered. Select **No action group assigned** and on the **Action Groups** page, specify an existing or create an action group by selecting **Add** or **Create**.
+To edit the threshold for a rule or configure an [action group](../alerts/action-groups.md) for your AKS cluster:
- ![Screenshot showing the option to select an action group.](./media/container-insights-metric-alerts/select-action-group.png)
+1. From Container insights for your cluster, select **Recommended alerts**.
+2. Select the **Rule Name** to open the alert rule.
+3. See [Create an alert rule](../alerts/alerts-create-new-alert-rule.md?tabs=metric) for details on the alert rule settings.
-### Enable with a Resource Manager template
+#### Disable alert rules
+1. From Container insights for your cluster, select **Recommended alerts**.
+2. Change the status for the alert rule to **Disabled**.
-You can use an Azure Resource Manager template and parameters file to create the included metric alerts in Azure Monitor.
+### [Resource Manager](#tab/resource-manager)
+For custom metrics, a separate Resource Manager template is provided for each alert rule.
-The basic steps are as follows:
+#### Enable alert rules
1. Download one or all of the available templates that describe how to create the alert from [GitHub](https://github.com/microsoft/Docker-Provider/tree/ci_dev/alerts/recommended_alerts_ARM).
2. Create and use a [parameters file](../../azure-resource-manager/templates/parameter-files.md) as JSON to set the values required to create the alert rule.
+3. Deploy the template using any standard methods for installing Resource Manager templates. See [Resource Manager template samples for Azure Monitor](../resource-manager-samples.md) for guidance.
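As one illustration of step 3, the downloaded template and parameters file can be deployed with the Azure CLI. The deployment, resource group, and file names here are placeholders; substitute your own:

```azurecli
az login

az deployment group create \
  --name ContainerInsightsAlertDeployment \
  --resource-group MyResourceGroup \
  --template-file templateFileName.json \
  --parameters @templateFileName.parameters.json
```

While the metric alert could be created in a different resource group than the target resource, we recommend using the same resource group as your target resource.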
-3. Deploy the template from the Azure portal, PowerShell, or Azure CLI.
-
-#### Deploy through Azure portal
-
-1. Download and save to a local folder, the Azure Resource Manager template and parameter file, to create the alert rule using the following commands:
-
-2. To deploy a customized template through the portal, select **Create a resource** from the [Azure portal](https://portal.azure.com).
-
-3. Search for **template**, and then select **Template deployment**.
-
-4. Select **Create**.
-
-5. You see several options for creating a template, select **Build your own template in editor**.
-
-6. On the **Edit template page**, select **Load file** and then select the template file.
-
-7. On the **Edit template** page, select **Save**.
-
-8. On the **Custom deployment** page, specify the following and then when complete select **Purchase** to deploy the template and create the alert rule.
-
- * Resource group
- * Location
- * Alert Name
- * Cluster Resource ID
+#### Disable alert rules
+To disable custom alert rules, use the same Resource Manager template to create the rule, but change the `isEnabled` value in the parameters file to `false`.
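For illustration, a parameters file for such a redeployment might look like the following. `isEnabled` is the documented switch; the other parameter names and values are placeholders that depend on the specific template you downloaded:

```json
{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentParameters.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "alertName": { "value": "Average container CPU %" },
    "clusterResourceId": { "value": "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.ContainerService/managedClusters/<cluster-name>" },
    "isEnabled": { "value": false }
  }
}
```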
-#### Deploy with Azure PowerShell or CLI
-
-1. Download and save to a local folder, the Azure Resource Manager template and parameter file, to create the alert rule using the following commands:
-
-2. You can create the metric alert using the template and parameters file using PowerShell or Azure CLI.
-
- Using Azure PowerShell
-
- ```powershell
- Connect-AzAccount
-
- Select-AzSubscription -SubscriptionName <yourSubscriptionName>
- New-AzResourceGroupDeployment -Name CIMetricAlertDeployment -ResourceGroupName ResourceGroupofTargetResource `
- -TemplateFile templateFilename.json -TemplateParameterFile templateParameterFilename.parameters.json
- ```
-
- Using Azure CLI
-
- ```azurecli
- az login
-
- az deployment group create \
- --name AlertDeployment \
- --resource-group ResourceGroupofTargetResource \
- --template-file templateFileName.json \
- --parameters @templateParameterFilename.parameters.json
- ```
-
- >[!NOTE]
- >While the metric alert could be created in a different resource group to the target resource, we recommend using the same resource group as your target resource.
-
-## Edit alert rules
-
-You can view and manage Container insights alert rules, to edit its threshold or configure an [action group](../alerts/action-groups.md) for your AKS cluster. While you can perform these actions from the Azure portal and Azure CLI, it can also be done directly from your AKS cluster in Container insights.
-
-1. From the command bar, select **Recommended alerts**.
+
-2. To modify the threshold, on the **Recommended alerts** pane, select the enabled alert. In the **Edit rule**, select the **Alert criteria** you want to edit.
- * To modify the alert rule threshold, select the **Condition**.
- * To specify an existing or create an action group, select **Add** or **Create** under **Action group**
+## Alert rule details
+The following sections provide details on the alert rules provided by Container insights.
+
+### Community alert rules
+These are hand-picked alerts from the Prometheus community. Source code for these mixin alerts can be found in [GitHub](https://aka.ms/azureprometheus-mixins):
+
+- KubeJobNotCompleted
+- KubeJobFailed
+- KubePodCrashLooping
+- KubePodNotReady
+- KubeDeploymentReplicasMismatch
+- KubeStatefulSetReplicasMismatch
+- KubeHpaReplicasMismatch
+- KubeHpaMaxedOut
+- KubeQuotaAlmostFull
+- KubeMemoryQuotaOvercommit
+- KubeCPUQuotaOvercommit
+- KubeVersionMismatch
+- KubeNodeNotReady
+- KubeNodeReadinessFlapping
+- KubeletTooManyPods
+- KubeNodeUnreachable
+### Recommended alert rules
+The following table lists the recommended alert rules that you can enable for either Prometheus metrics or custom metrics.
+
+| Prometheus alert name | Custom metric alert name | Description | Default threshold |
+|:|:|:|:|
+| Average container CPU % | Average container CPU % | Calculates average CPU used per container. | 95% |
+| Average container working set memory % | Average container working set memory % | Calculates average working set memory used per container. | 95% |
+| Average CPU % | Average CPU % | Calculates average CPU used per node. | 80% |
+| Average Disk Usage % | Average Disk Usage % | Calculates average disk usage for a node. | 80% |
+| Average Persistent Volume Usage % | Average Persistent Volume Usage % | Calculates average PV usage per pod. | 80% |
+| Average Working set memory % | Average Working set memory % | Calculates average Working set memory for a node. | 80% |
+| Restarting container count | Restarting container count | Calculates number of restarting containers. | 0 |
+| Failed Pod Counts | Failed Pod Counts | Calculates number of pods in failed state. | 0 |
+| Node NotReady status | Node NotReady status | Calculates if any node is in NotReady state. | 0 |
+| OOM Killed Containers | OOM Killed Containers | Calculates number of OOM killed containers. | 0 |
+| Pods ready % | Pods ready % | Calculates the average ready state of pods. | 80% |
+| Completed job count | Completed job count | Calculates number of jobs completed more than six hours ago. | 0 |
-To view alerts created for the enabled rules, in the **Recommended alerts** pane select **View in alerts**. You are redirected to the alert menu for the AKS cluster, where you can see all the alerts currently created for your cluster.
+> [!NOTE]
+> The recommended alert rules in the Azure portal also include a log alert rule called *Daily Data Cap Breach*. This rule alerts when the total data ingestion to your Log Analytics workspace exceeds the [designated quota](../logs/daily-cap.md). This alert rule is not included with the Prometheus alert rules.
+>
+> You can create this rule on your own by creating a [log alert rule](../alerts/alerts-types.md#log-alerts) using the query `_LogOperation | where Operation == "Data collection Status" | where Detail contains "OverQuota"`.
-## Configure alertable metrics in ConfigMaps
-Perform the following steps to configure your ConfigMap configuration file to override the default utilization thresholds. These steps are applicable only for the following alertable metrics:
+Common properties across all of these alert rules include:
-* *cpuExceededPercentage*
-* *cpuThresholdViolated*
-* *memoryRssExceededPercentage*
-* *memoryRssThresholdViolated*
-* *memoryWorkingSetExceededPercentage*
-* *memoryWorkingSetThresholdViolated*
-* *pvUsageExceededPercentage*
-* *pvUsageThresholdViolated*
+- All alert rules are evaluated once per minute, and they look back at the last 5 minutes of data.
+- All alert rules are disabled by default.
+- Alert rules don't have an action group assigned to them by default. You can add an [action group](../alerts/action-groups.md) to the alert by selecting an existing action group or creating a new action group while editing the alert rule.
+- You can modify the threshold for alert rules by directly editing the template and redeploying it. Refer to the guidance provided in each alert rule before modifying its threshold.
-1. Edit the ConfigMap YAML file under the section `[alertable_metrics_configuration_settings.container_resource_utilization_thresholds]` or `[alertable_metrics_configuration_settings.pv_utilization_thresholds]`.
+The following metrics have unique behavior characteristics:
- - To modify the *cpuExceededPercentage* threshold to 90% and begin collection of this metric when that threshold is met and exceeded, configure the ConfigMap file using the following example:
+**Prometheus and custom metrics**
+- `completedJobsCount` metric is only sent when there are jobs that completed more than six hours ago.
+- `containerRestartCount` metric is only sent when there are containers restarting.
+- `oomKilledContainerCount` metric is only sent when there are OOM killed containers.
+- `cpuExceededPercentage`, `memoryRssExceededPercentage`, and `memoryWorkingSetExceededPercentage` metrics are sent when the CPU, memory RSS, and memory working set values exceed the configured threshold (the default threshold is 95%). `cpuThresholdViolated`, `memoryRssThresholdViolated`, and `memoryWorkingSetThresholdViolated` metrics are equal to 0 if the usage percentage is below the threshold and equal to 1 if the usage percentage is above the threshold. These thresholds are exclusive of the alert condition threshold specified for the corresponding alert rule.
+- `pvUsageExceededPercentage` metric is sent when the persistent volume usage percentage exceeds the configured threshold (the default threshold is 60%). `pvUsageThresholdViolated` metric is equal to 0 when the PV usage percentage is below the threshold and equal to 1 if the usage is above the threshold. This threshold is exclusive of the alert condition threshold specified for the corresponding alert rule.
- ```
- [alertable_metrics_configuration_settings.container_resource_utilization_thresholds]
- # Threshold for container cpu, metric will be sent only when cpu utilization exceeds or becomes equal to the following percentage
- container_cpu_threshold_percentage = 90.0
- # Threshold for container memoryRss, metric will be sent only when memory rss exceeds or becomes equal to the following percentage
- container_memory_rss_threshold_percentage = 95.0
- # Threshold for container memoryWorkingSet, metric will be sent only when memory working set exceeds or becomes equal to the following percentage
- container_memory_working_set_threshold_percentage = 95.0
- ```
+
+**Prometheus only**
+- If you want to collect `pvUsageExceededPercentage` and analyze it from [metrics explorer](../essentials/metrics-getting-started.md), configure the threshold to a value lower than your alerting threshold. The configuration related to the collection settings for persistent volume utilization thresholds can be overridden in the ConfigMaps file under the section `alertable_metrics_configuration_settings.pv_utilization_thresholds`. See [Configure alertable metrics in ConfigMaps](#configure-alertable-metrics-in-configmaps) for details related to configuring your ConfigMap configuration file. Collection of persistent volume metrics with claims in the *kube-system* namespace is excluded by default. To enable collection in this namespace, use the section `[metric_collection_settings.collect_kube_system_pv_metrics]` in the ConfigMap file. See [Metric collection settings](./container-insights-agent-config.md#metric-collection-settings) for details.
+- `cpuExceededPercentage`, `memoryRssExceededPercentage`, and `memoryWorkingSetExceededPercentage` metrics are sent when the CPU, memory RSS, and memory working set values exceed the configured threshold (the default threshold is 95%). `cpuThresholdViolated`, `memoryRssThresholdViolated`, and `memoryWorkingSetThresholdViolated` metrics are equal to 0 if the usage percentage is below the threshold and equal to 1 if the usage percentage is above the threshold. These thresholds are exclusive of the alert condition threshold specified for the corresponding alert rule. If you want to collect these metrics and analyze them from [metrics explorer](../essentials/metrics-getting-started.md), we recommend that you configure the threshold to a value lower than your alerting threshold. The configuration related to the collection settings for container resource utilization thresholds can be overridden in the ConfigMaps file under the section `[alertable_metrics_configuration_settings.container_resource_utilization_thresholds]`. See [Configure alertable metrics in ConfigMaps](#configure-alertable-metrics-in-configmaps) for details related to configuring your ConfigMap configuration file.
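For example, to collect these metrics at a lower threshold than the alert rule, the ConfigMap sections mentioned above can be overridden as follows (the percentage values are illustrative; the keys come from the ConfigMap schema):

```
[alertable_metrics_configuration_settings.container_resource_utilization_thresholds]
# Metric will be sent only when cpu utilization exceeds or becomes equal to the following percentage
container_cpu_threshold_percentage = 90.0

[alertable_metrics_configuration_settings.pv_utilization_thresholds]
# Metric will be sent only when persistent volume utilization exceeds or becomes equal to the following percentage
pv_usage_threshold_percentage = 50.0
```

Apply the updated ConfigMap with `kubectl apply -f <configmap_yaml_file.yaml>`.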
- - To modify the *pvUsageExceededPercentage* threshold to 80% and begin collection of this metric when that threshold is met and exceeded, configure the ConfigMap file using the following example:
- ```
- [alertable_metrics_configuration_settings.pv_utilization_thresholds]
- # Threshold for persistent volume usage bytes, metric will be sent only when persistent volume utilization exceeds or becomes equal to the following percentage
- pv_usage_threshold_percentage = 80.0
- ```
-2. Run the following kubectl command: `kubectl apply -f <configmap_yaml_file.yaml>`.
+## View alerts
+View fired alerts for your cluster from **Alerts** in the **Monitor** menu in the Azure portal, together with other fired alerts in your subscription. You can also select **View in alerts** from the **Recommended alerts** pane to view alerts from custom metrics.
- Example: `kubectl apply -f container-azm-ms-agentconfig.yaml`.
+> [!NOTE]
+> Prometheus alerts aren't currently displayed when you select **Alerts** from your AKS cluster, because the alert rule doesn't use the cluster as its target.
-The configuration change can take a few minutes to finish before taking effect, and all Azure Monitor agent pods in the cluster will restart. The restart is a rolling restart for all Azure Monitor agent pods; they don't all restart at the same time. When the restarts are finished, a message is displayed that's similar to the following example and includes the result: `configmap "container-azm-ms-agentconfig" created`.
## Next steps

- View [log query examples](container-insights-log-query.md) to see pre-defined queries and examples to evaluate or customize for alerting, visualizing, or analyzing your clusters.
- To learn more about Azure Monitor and how to monitor other aspects of your Kubernetes cluster, see [View Kubernetes cluster performance](container-insights-analyze.md).
+- [Read about the different alert rule types in Azure Monitor](../alerts/alerts-types.md).
+- [Read about alerting rule groups in Azure Monitor managed service for Prometheus](../essentials/prometheus-rule-groups.md).
azure-monitor Diagnostic Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/diagnostic-settings.md
The activity log uses a diagnostic setting but has its own user interface becaus
This section discusses requirements and limitations.
+### Time before telemetry gets to destination
+
+Once you've set up a diagnostic setting, data should start flowing to your selected destination(s) within 90 minutes. If you get no information within 24 hours, then either:
+- no logs are being generated or
+- something is wrong in the underlying routing mechanism. Try disabling the configuration and then reenabling it. Contact Azure support through the Azure portal if you continue to have issues.
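Disabling and re-creating a diagnostic setting can be scripted with the Azure CLI. The setting name, target resource ID, and workspace ID below are placeholders, and a Key Vault is used only as an example resource:

```azurecli
# Remove the existing diagnostic setting (hypothetical names).
az monitor diagnostic-settings delete \
  --name mySetting \
  --resource "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.KeyVault/vaults/<vault-name>"

# Re-create it, sending resource logs to a Log Analytics workspace.
az monitor diagnostic-settings create \
  --name mySetting \
  --resource "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.KeyVault/vaults/<vault-name>" \
  --workspace "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.OperationalInsights/workspaces/<workspace-name>" \
  --logs '[{"categoryGroup": "allLogs", "enabled": true}]'
```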
### Metrics as a source

There are certain limitations with exporting metrics:
azure-monitor Metrics Supported https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/metrics-supported.md
Previously updated : 09/12/2022 Last updated : 09/13/2022
This latest update adds a new column and reorders the metrics to be alphabetical
|Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions|
||||||||
-|CACompliantDeviceSuccessCount|Yes|CACompliantDeviceSuccessCount|Count|Count|CA compliant device success count for Azure AD|No Dimensions|
-|CAManagedDeviceSuccessCount|No|CAManagedDeviceSuccessCount|Count|Count|CA domain join device success count for Azure AD|No Dimensions|
-|MFAAttemptCount|No|MFAAttemptCount|Count|Count|MFA attempt count for Azure AD|No Dimensions|
-|MFAFailureCount|No|MFAFailureCount|Count|Count|MFA failure count for Azure AD|No Dimensions|
-|MFASuccessCount|No|MFASuccessCount|Count|Count|MFA success count for Azure AD|No Dimensions|
-|SamlFailureCount|Yes|SamlFailureCount|Count|Count|Saml token failure count for relying party scenario|No Dimensions|
-|SamlSuccessCount|Yes|SamlSuccessCount|Count|Count|Saml token success count for relying party scenario|No Dimensions|
+|ThrottledRequests|No|ThrottledRequests|Count|Average|azureADMetrics type metric|No Dimensions|
## Microsoft.AnalysisServices/servers
This latest update adds a new column and reorders the metrics to be alphabetical
|qpu_metric|Yes|QPU|Count|Average|QPU. Range 0-100 for S1, 0-200 for S2 and 0-400 for S4|ServerResourceType|
|QueryPoolBusyThreads|Yes|Query Pool Busy Threads|Count|Average|Number of busy threads in the query thread pool.|ServerResourceType|
|QueryPoolIdleThreads|Yes|Threads: Query pool idle threads|Count|Average|Number of idle threads for I/O jobs in the processing thread pool.|ServerResourceType|
-|QueryPoolJobQueueLength|Yes|Threads: Query pool job queue length|Count|Average|Number of jobs in the queue of the query thread pool.|ServerResourceType|
+|QueryPoolJobQueueLength|Yes|Threads: Query pool job queue length|Count|Average|Number of jobs in the queue of the query thread pool.|ServerResourceType|
|Quota|Yes|Memory: Quota|Bytes|Average|Current memory quota, in bytes. Memory quota is also known as a memory grant or memory reservation.|ServerResourceType|
|QuotaBlocked|Yes|Memory: Quota Blocked|Count|Average|Current number of quota requests that are blocked until other memory quotas are freed.|ServerResourceType|
|RowsConvertedPerSec|Yes|Processing: Rows converted per sec|CountPerSecond|Average|Rate of rows converted during processing.|ServerResourceType|
This latest update adds a new column and reorders the metrics to be alphabetical
|total-requests|Yes|total-requests|Count|Average|Total number of requests in the lifetime of the process|Deployment, AppName, Pod|
|working-set|Yes|working-set|Count|Average|Amount of working set used by the process (MB)|Deployment, AppName, Pod|

## Microsoft.Automation/automationAccounts

|Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions|
||||||||
-|TotalJob|Yes|Total Jobs|Count|Total|The total number of jobs|Runbook Name, Status|
+|TotalJob|Yes|Total Jobs|Count|Total|The total number of jobs|Runbook, Status|
|TotalUpdateDeploymentMachineRuns|Yes|Total Update Deployment Machine Runs|Count|Total|Total software update deployment machine runs in a software update deployment run|SoftwareUpdateConfigurationName, Status, TargetComputer, SoftwareUpdateConfigurationRunId|
|TotalUpdateDeploymentRuns|Yes|Total Update Deployment Runs|Count|Total|Total software update deployment runs|SoftwareUpdateConfigurationName, Status|
This latest update adds a new column and reorders the metrics to be alphabetical
|WaitingForStartTaskNodeCount|No|Waiting For Start Task Node Count|Count|Total|Number of nodes waiting for the Start Task to complete|No Dimensions|
-## Microsoft.BatchAI/workspaces
+## Microsoft.BatchAI/workspaces
+|Category|Category Display Name|Costs To Export|
+||||
+|BaiClusterEvent|BaiClusterEvent|No|
+|BaiClusterNodeEvent|BaiClusterNodeEvent|No|
+|BaiJobEvent|BaiJobEvent|No|
-|Category|Category Display Name|Costs To Export|
-||||
-|BaiClusterEvent|BaiClusterEvent|No|
-|BaiClusterNodeEvent|BaiClusterNodeEvent|No|
-|BaiJobEvent|BaiJobEvent|No|
## microsoft.bing/accounts
This latest update adds a new column and reorders the metrics to be alphabetical
|Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions| ||||||||
-|allcachehits|Yes|Cache Hits (Instance Based)|Count|Total|The number of successful key lookups. For more details, see https://aka.ms/redis/metrics.|ShardId, Port, Primary|
-|allcachemisses|Yes|Cache Misses (Instance Based)|Count|Total|The number of failed key lookups. For more details, see https://aka.ms/redis/metrics.|ShardId, Port, Primary|
-|allcacheRead|Yes|Cache Read (Instance Based)|BytesPerSecond|Maximum|The amount of data read from the cache in Megabytes per second (MB/s). For more details, see https://aka.ms/redis/metrics.|ShardId, Port, Primary|
-|allcacheWrite|Yes|Cache Write (Instance Based)|BytesPerSecond|Maximum|The amount of data written to the cache in Megabytes per second (MB/s). For more details, see https://aka.ms/redis/metrics.|ShardId, Port, Primary|
-|allconnectedclients|Yes|Connected Clients (Instance Based)|Count|Maximum|The number of client connections to the cache. For more details, see https://aka.ms/redis/metrics.|ShardId, Port, Primary|
-|allConnectionsClosedPerSecond|Yes|Connections Closed Per Second (Instance Based)|CountPerSecond|Maximum|The number of instantaneous connections closed per second on the cache via port 6379 or 6380 (SSL). For more details, see https://aka.ms/redis/metrics.|ShardId, Primary, Ssl|
-|allConnectionsCreatedPerSecond|Yes|Connections Created Per Second (Instance Based)|CountPerSecond|Maximum|The number of instantaneous connections created per second on the cache via port 6379 or 6380 (SSL). For more details, see https://aka.ms/redis/metrics.|ShardId, Primary, Ssl|
-|allevictedkeys|Yes|Evicted Keys (Instance Based)|Count|Total|The number of items evicted from the cache. For more details, see https://aka.ms/redis/metrics.|ShardId, Port, Primary|
-|allexpiredkeys|Yes|Expired Keys (Instance Based)|Count|Total|The number of items expired from the cache. For more details, see https://aka.ms/redis/metrics.|ShardId, Port, Primary|
-|allgetcommands|Yes|Gets (Instance Based)|Count|Total|The number of get operations from the cache. For more details, see https://aka.ms/redis/metrics.|ShardId, Port, Primary|
-|alloperationsPerSecond|Yes|Operations Per Second (Instance Based)|Count|Maximum|The number of instantaneous operations per second executed on the cache. For more details, see https://aka.ms/redis/metrics.|ShardId, Port, Primary|
-|allpercentprocessortime|Yes|CPU (Instance Based)|Percent|Maximum|The CPU utilization of the Azure Redis Cache server as a percentage. For more details, see https://aka.ms/redis/metrics.|ShardId, Port, Primary|
-|allserverLoad|Yes|Server Load (Instance Based)|Percent|Maximum|The percentage of cycles in which the Redis server is busy processing and not waiting idle for messages. For more details, see https://aka.ms/redis/metrics.|ShardId, Port, Primary|
-|allsetcommands|Yes|Sets (Instance Based)|Count|Total|The number of set operations to the cache. For more details, see https://aka.ms/redis/metrics.|ShardId, Port, Primary|
-|alltotalcommandsprocessed|Yes|Total Operations (Instance Based)|Count|Total|The total number of commands processed by the cache server. For more details, see https://aka.ms/redis/metrics.|ShardId, Port, Primary|
-|alltotalkeys|Yes|Total Keys (Instance Based)|Count|Maximum|The total number of items in the cache. For more details, see https://aka.ms/redis/metrics.|ShardId, Port, Primary|
-|allusedmemory|Yes|Used Memory (Instance Based)|Bytes|Maximum|The amount of cache memory used for key/value pairs in the cache in MB. For more details, see https://aka.ms/redis/metrics.|ShardId, Port, Primary|
-|allusedmemorypercentage|Yes|Used Memory Percentage (Instance Based)|Percent|Maximum|The percentage of cache memory used for key/value pairs. For more details, see https://aka.ms/redis/metrics.|ShardId, Port, Primary|
-|allusedmemoryRss|Yes|Used Memory RSS (Instance Based)|Bytes|Maximum|The amount of cache memory used in MB, including fragmentation and metadata. For more details, see https://aka.ms/redis/metrics.|ShardId, Port, Primary|
-|cachehits|Yes|Cache Hits|Count|Total|The number of successful key lookups. For more details, see https://aka.ms/redis/metrics.|ShardId|
-|cachehits0|Yes|Cache Hits (Shard 0)|Count|Total|The number of successful key lookups. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
-|cachehits1|Yes|Cache Hits (Shard 1)|Count|Total|The number of successful key lookups. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
-|cachehits2|Yes|Cache Hits (Shard 2)|Count|Total|The number of successful key lookups. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
-|cachehits3|Yes|Cache Hits (Shard 3)|Count|Total|The number of successful key lookups. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
-|cachehits4|Yes|Cache Hits (Shard 4)|Count|Total|The number of successful key lookups. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
-|cachehits5|Yes|Cache Hits (Shard 5)|Count|Total|The number of successful key lookups. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
-|cachehits6|Yes|Cache Hits (Shard 6)|Count|Total|The number of successful key lookups. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
-|cachehits7|Yes|Cache Hits (Shard 7)|Count|Total|The number of successful key lookups. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
-|cachehits8|Yes|Cache Hits (Shard 8)|Count|Total|The number of successful key lookups. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
-|cachehits9|Yes|Cache Hits (Shard 9)|Count|Total|The number of successful key lookups. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
-|cacheLatency|Yes|Cache Latency Microseconds (Preview)|Count|Average|The latency to the cache in microseconds. For more details, see https://aka.ms/redis/metrics.|ShardId|
-|cachemisses|Yes|Cache Misses|Count|Total|The number of failed key lookups. For more details, see https://aka.ms/redis/metrics.|ShardId|
-|cachemisses0|Yes|Cache Misses (Shard 0)|Count|Total|The number of failed key lookups. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
-|cachemisses1|Yes|Cache Misses (Shard 1)|Count|Total|The number of failed key lookups. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
-|cachemisses2|Yes|Cache Misses (Shard 2)|Count|Total|The number of failed key lookups. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
-|cachemisses3|Yes|Cache Misses (Shard 3)|Count|Total|The number of failed key lookups. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
-|cachemisses4|Yes|Cache Misses (Shard 4)|Count|Total|The number of failed key lookups. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
-|cachemisses5|Yes|Cache Misses (Shard 5)|Count|Total|The number of failed key lookups. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
-|cachemisses6|Yes|Cache Misses (Shard 6)|Count|Total|The number of failed key lookups. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
-|cachemisses7|Yes|Cache Misses (Shard 7)|Count|Total|The number of failed key lookups. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
-|cachemisses8|Yes|Cache Misses (Shard 8)|Count|Total|The number of failed key lookups. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
-|cachemisses9|Yes|Cache Misses (Shard 9)|Count|Total|The number of failed key lookups. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
-|cachemissrate|Yes|Cache Miss Rate|Percent|Total|The % of get requests that miss. For more details, see https://aka.ms/redis/metrics.|ShardId|
-|cacheRead|Yes|Cache Read|BytesPerSecond|Maximum|The amount of data read from the cache in Megabytes per second (MB/s). For more details, see https://aka.ms/redis/metrics.|ShardId|
-|cacheRead0|Yes|Cache Read (Shard 0)|BytesPerSecond|Maximum|The amount of data read from the cache in Megabytes per second (MB/s). For more details, see https://aka.ms/redis/metrics.|No Dimensions|
-|cacheRead1|Yes|Cache Read (Shard 1)|BytesPerSecond|Maximum|The amount of data read from the cache in Megabytes per second (MB/s). For more details, see https://aka.ms/redis/metrics.|No Dimensions|
-|cacheRead2|Yes|Cache Read (Shard 2)|BytesPerSecond|Maximum|The amount of data read from the cache in Megabytes per second (MB/s). For more details, see https://aka.ms/redis/metrics.|No Dimensions|
-|cacheRead3|Yes|Cache Read (Shard 3)|BytesPerSecond|Maximum|The amount of data read from the cache in Megabytes per second (MB/s). For more details, see https://aka.ms/redis/metrics.|No Dimensions|
-|cacheRead4|Yes|Cache Read (Shard 4)|BytesPerSecond|Maximum|The amount of data read from the cache in Megabytes per second (MB/s). For more details, see https://aka.ms/redis/metrics.|No Dimensions|
-|cacheRead5|Yes|Cache Read (Shard 5)|BytesPerSecond|Maximum|The amount of data read from the cache in Megabytes per second (MB/s). For more details, see https://aka.ms/redis/metrics.|No Dimensions|
-|cacheRead6|Yes|Cache Read (Shard 6)|BytesPerSecond|Maximum|The amount of data read from the cache in Megabytes per second (MB/s). For more details, see https://aka.ms/redis/metrics.|No Dimensions|
-|cacheRead7|Yes|Cache Read (Shard 7)|BytesPerSecond|Maximum|The amount of data read from the cache in Megabytes per second (MB/s). For more details, see https://aka.ms/redis/metrics.|No Dimensions|
-|cacheRead8|Yes|Cache Read (Shard 8)|BytesPerSecond|Maximum|The amount of data read from the cache in Megabytes per second (MB/s). For more details, see https://aka.ms/redis/metrics.|No Dimensions|
-|cacheRead9|Yes|Cache Read (Shard 9)|BytesPerSecond|Maximum|The amount of data read from the cache in Megabytes per second (MB/s). For more details, see https://aka.ms/redis/metrics.|No Dimensions|
-|cacheWrite|Yes|Cache Write|BytesPerSecond|Maximum|The amount of data written to the cache in Megabytes per second (MB/s). For more details, see https://aka.ms/redis/metrics.|ShardId|
-|cacheWrite0|Yes|Cache Write (Shard 0)|BytesPerSecond|Maximum|The amount of data written to the cache in Megabytes per second (MB/s). For more details, see https://aka.ms/redis/metrics.|No Dimensions|
-|cacheWrite1|Yes|Cache Write (Shard 1)|BytesPerSecond|Maximum|The amount of data written to the cache in Megabytes per second (MB/s). For more details, see https://aka.ms/redis/metrics.|No Dimensions|
-|cacheWrite2|Yes|Cache Write (Shard 2)|BytesPerSecond|Maximum|The amount of data written to the cache in Megabytes per second (MB/s). For more details, see https://aka.ms/redis/metrics.|No Dimensions|
-|cacheWrite3|Yes|Cache Write (Shard 3)|BytesPerSecond|Maximum|The amount of data written to the cache in Megabytes per second (MB/s). For more details, see https://aka.ms/redis/metrics.|No Dimensions|
-|cacheWrite4|Yes|Cache Write (Shard 4)|BytesPerSecond|Maximum|The amount of data written to the cache in Megabytes per second (MB/s). For more details, see https://aka.ms/redis/metrics.|No Dimensions|
-|cacheWrite5|Yes|Cache Write (Shard 5)|BytesPerSecond|Maximum|The amount of data written to the cache in Megabytes per second (MB/s). For more details, see https://aka.ms/redis/metrics.|No Dimensions|
-|cacheWrite6|Yes|Cache Write (Shard 6)|BytesPerSecond|Maximum|The amount of data written to the cache in Megabytes per second (MB/s). For more details, see https://aka.ms/redis/metrics.|No Dimensions|
-|cacheWrite7|Yes|Cache Write (Shard 7)|BytesPerSecond|Maximum|The amount of data written to the cache in Megabytes per second (MB/s). For more details, see https://aka.ms/redis/metrics.|No Dimensions|
-|cacheWrite8|Yes|Cache Write (Shard 8)|BytesPerSecond|Maximum|The amount of data written to the cache in Megabytes per second (MB/s). For more details, see https://aka.ms/redis/metrics.|No Dimensions|
-|cacheWrite9|Yes|Cache Write (Shard 9)|BytesPerSecond|Maximum|The amount of data written to the cache in Megabytes per second (MB/s). For more details, see https://aka.ms/redis/metrics.|No Dimensions|
-|connectedclients|Yes|Connected Clients|Count|Maximum|The number of client connections to the cache. For more details, see https://aka.ms/redis/metrics.|ShardId|
-|connectedclients0|Yes|Connected Clients (Shard 0)|Count|Maximum|The number of client connections to the cache. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
-|connectedclients1|Yes|Connected Clients (Shard 1)|Count|Maximum|The number of client connections to the cache. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
-|connectedclients2|Yes|Connected Clients (Shard 2)|Count|Maximum|The number of client connections to the cache. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
-|connectedclients3|Yes|Connected Clients (Shard 3)|Count|Maximum|The number of client connections to the cache. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
-|connectedclients4|Yes|Connected Clients (Shard 4)|Count|Maximum|The number of client connections to the cache. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
-|connectedclients5|Yes|Connected Clients (Shard 5)|Count|Maximum|The number of client connections to the cache. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
-|connectedclients6|Yes|Connected Clients (Shard 6)|Count|Maximum|The number of client connections to the cache. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
-|connectedclients7|Yes|Connected Clients (Shard 7)|Count|Maximum|The number of client connections to the cache. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
-|connectedclients8|Yes|Connected Clients (Shard 8)|Count|Maximum|The number of client connections to the cache. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
-|connectedclients9|Yes|Connected Clients (Shard 9)|Count|Maximum|The number of client connections to the cache. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
-|errors|Yes|Errors|Count|Maximum|The number of errors that occurred on the cache. For more details, see https://aka.ms/redis/metrics.|ShardId, ErrorType|
-|evictedkeys|Yes|Evicted Keys|Count|Total|The number of items evicted from the cache. For more details, see https://aka.ms/redis/metrics.|ShardId|
-|evictedkeys0|Yes|Evicted Keys (Shard 0)|Count|Total|The number of items evicted from the cache. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
-|evictedkeys1|Yes|Evicted Keys (Shard 1)|Count|Total|The number of items evicted from the cache. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
-|evictedkeys2|Yes|Evicted Keys (Shard 2)|Count|Total|The number of items evicted from the cache. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
-|evictedkeys3|Yes|Evicted Keys (Shard 3)|Count|Total|The number of items evicted from the cache. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
-|evictedkeys4|Yes|Evicted Keys (Shard 4)|Count|Total|The number of items evicted from the cache. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
-|evictedkeys5|Yes|Evicted Keys (Shard 5)|Count|Total|The number of items evicted from the cache. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
-|evictedkeys6|Yes|Evicted Keys (Shard 6)|Count|Total|The number of items evicted from the cache. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
-|evictedkeys7|Yes|Evicted Keys (Shard 7)|Count|Total|The number of items evicted from the cache. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
-|evictedkeys8|Yes|Evicted Keys (Shard 8)|Count|Total|The number of items evicted from the cache. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
-|evictedkeys9|Yes|Evicted Keys (Shard 9)|Count|Total|The number of items evicted from the cache. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
-|expiredkeys|Yes|Expired Keys|Count|Total|The number of items expired from the cache. For more details, see https://aka.ms/redis/metrics.|ShardId|
-|expiredkeys0|Yes|Expired Keys (Shard 0)|Count|Total|The number of items expired from the cache. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
-|expiredkeys1|Yes|Expired Keys (Shard 1)|Count|Total|The number of items expired from the cache. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
-|expiredkeys2|Yes|Expired Keys (Shard 2)|Count|Total|The number of items expired from the cache. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
-|expiredkeys3|Yes|Expired Keys (Shard 3)|Count|Total|The number of items expired from the cache. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
-|expiredkeys4|Yes|Expired Keys (Shard 4)|Count|Total|The number of items expired from the cache. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
-|expiredkeys5|Yes|Expired Keys (Shard 5)|Count|Total|The number of items expired from the cache. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
-|expiredkeys6|Yes|Expired Keys (Shard 6)|Count|Total|The number of items expired from the cache. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
-|expiredkeys7|Yes|Expired Keys (Shard 7)|Count|Total|The number of items expired from the cache. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
-|expiredkeys8|Yes|Expired Keys (Shard 8)|Count|Total|The number of items expired from the cache. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
-|expiredkeys9|Yes|Expired Keys (Shard 9)|Count|Total|The number of items expired from the cache. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
-|getcommands|Yes|Gets|Count|Total|The number of get operations from the cache. For more details, see https://aka.ms/redis/metrics.|ShardId|
-|getcommands0|Yes|Gets (Shard 0)|Count|Total|The number of get operations from the cache. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
-|getcommands1|Yes|Gets (Shard 1)|Count|Total|The number of get operations from the cache. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
-|getcommands2|Yes|Gets (Shard 2)|Count|Total|The number of get operations from the cache. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
-|getcommands3|Yes|Gets (Shard 3)|Count|Total|The number of get operations from the cache. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
-|getcommands4|Yes|Gets (Shard 4)|Count|Total|The number of get operations from the cache. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
-|getcommands5|Yes|Gets (Shard 5)|Count|Total|The number of get operations from the cache. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
-|getcommands6|Yes|Gets (Shard 6)|Count|Total|The number of get operations from the cache. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
-|getcommands7|Yes|Gets (Shard 7)|Count|Total|The number of get operations from the cache. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
-|getcommands8|Yes|Gets (Shard 8)|Count|Total|The number of get operations from the cache. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
-|getcommands9|Yes|Gets (Shard 9)|Count|Total|The number of get operations from the cache. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
-|operationsPerSecond|Yes|Operations Per Second|Count|Maximum|The number of instantaneous operations per second executed on the cache. For more details, see https://aka.ms/redis/metrics.|ShardId|
-|operationsPerSecond0|Yes|Operations Per Second (Shard 0)|Count|Maximum|The number of instantaneous operations per second executed on the cache. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
-|operationsPerSecond1|Yes|Operations Per Second (Shard 1)|Count|Maximum|The number of instantaneous operations per second executed on the cache. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
-|operationsPerSecond2|Yes|Operations Per Second (Shard 2)|Count|Maximum|The number of instantaneous operations per second executed on the cache. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
-|operationsPerSecond3|Yes|Operations Per Second (Shard 3)|Count|Maximum|The number of instantaneous operations per second executed on the cache. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
-|operationsPerSecond4|Yes|Operations Per Second (Shard 4)|Count|Maximum|The number of instantaneous operations per second executed on the cache. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
-|operationsPerSecond5|Yes|Operations Per Second (Shard 5)|Count|Maximum|The number of instantaneous operations per second executed on the cache. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
-|operationsPerSecond6|Yes|Operations Per Second (Shard 6)|Count|Maximum|The number of instantaneous operations per second executed on the cache. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
-|operationsPerSecond7|Yes|Operations Per Second (Shard 7)|Count|Maximum|The number of instantaneous operations per second executed on the cache. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
-|operationsPerSecond8|Yes|Operations Per Second (Shard 8)|Count|Maximum|The number of instantaneous operations per second executed on the cache. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
-|operationsPerSecond9|Yes|Operations Per Second (Shard 9)|Count|Maximum|The number of instantaneous operations per second executed on the cache. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
-|percentProcessorTime|Yes|CPU|Percent|Maximum|The CPU utilization of the Azure Redis Cache server as a percentage. For more details, see https://aka.ms/redis/metrics.|ShardId|
-|percentProcessorTime0|Yes|CPU (Shard 0)|Percent|Maximum|The CPU utilization of the Azure Redis Cache server as a percentage. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
-|percentProcessorTime1|Yes|CPU (Shard 1)|Percent|Maximum|The CPU utilization of the Azure Redis Cache server as a percentage. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
-|percentProcessorTime2|Yes|CPU (Shard 2)|Percent|Maximum|The CPU utilization of the Azure Redis Cache server as a percentage. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
-|percentProcessorTime3|Yes|CPU (Shard 3)|Percent|Maximum|The CPU utilization of the Azure Redis Cache server as a percentage. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
-|percentProcessorTime4|Yes|CPU (Shard 4)|Percent|Maximum|The CPU utilization of the Azure Redis Cache server as a percentage. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
-|percentProcessorTime5|Yes|CPU (Shard 5)|Percent|Maximum|The CPU utilization of the Azure Redis Cache server as a percentage. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
-|percentProcessorTime6|Yes|CPU (Shard 6)|Percent|Maximum|The CPU utilization of the Azure Redis Cache server as a percentage. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
-|percentProcessorTime7|Yes|CPU (Shard 7)|Percent|Maximum|The CPU utilization of the Azure Redis Cache server as a percentage. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
-|percentProcessorTime8|Yes|CPU (Shard 8)|Percent|Maximum|The CPU utilization of the Azure Redis Cache server as a percentage. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
-|percentProcessorTime9|Yes|CPU (Shard 9)|Percent|Maximum|The CPU utilization of the Azure Redis Cache server as a percentage. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
-|serverLoad|Yes|Server Load|Percent|Maximum|The percentage of cycles in which the Redis server is busy processing and not waiting idle for messages. For more details, see https://aka.ms/redis/metrics.|ShardId|
-|serverLoad0|Yes|Server Load (Shard 0)|Percent|Maximum|The percentage of cycles in which the Redis server is busy processing and not waiting idle for messages. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
-|serverLoad1|Yes|Server Load (Shard 1)|Percent|Maximum|The percentage of cycles in which the Redis server is busy processing and not waiting idle for messages. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
-|serverLoad2|Yes|Server Load (Shard 2)|Percent|Maximum|The percentage of cycles in which the Redis server is busy processing and not waiting idle for messages. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
-|serverLoad3|Yes|Server Load (Shard 3)|Percent|Maximum|The percentage of cycles in which the Redis server is busy processing and not waiting idle for messages. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
-|serverLoad4|Yes|Server Load (Shard 4)|Percent|Maximum|The percentage of cycles in which the Redis server is busy processing and not waiting idle for messages. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
-|serverLoad5|Yes|Server Load (Shard 5)|Percent|Maximum|The percentage of cycles in which the Redis server is busy processing and not waiting idle for messages. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
-|serverLoad6|Yes|Server Load (Shard 6)|Percent|Maximum|The percentage of cycles in which the Redis server is busy processing and not waiting idle for messages. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
-|serverLoad7|Yes|Server Load (Shard 7)|Percent|Maximum|The percentage of cycles in which the Redis server is busy processing and not waiting idle for messages. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
-|serverLoad8|Yes|Server Load (Shard 8)|Percent|Maximum|The percentage of cycles in which the Redis server is busy processing and not waiting idle for messages. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
-|serverLoad9|Yes|Server Load (Shard 9)|Percent|Maximum|The percentage of cycles in which the Redis server is busy processing and not waiting idle for messages. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
-|setcommands|Yes|Sets|Count|Total|The number of set operations to the cache. For more details, see https://aka.ms/redis/metrics.|ShardId|
-|setcommands0|Yes|Sets (Shard 0)|Count|Total|The number of set operations to the cache. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
-|setcommands1|Yes|Sets (Shard 1)|Count|Total|The number of set operations to the cache. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
-|setcommands2|Yes|Sets (Shard 2)|Count|Total|The number of set operations to the cache. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
-|setcommands3|Yes|Sets (Shard 3)|Count|Total|The number of set operations to the cache. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
-|setcommands4|Yes|Sets (Shard 4)|Count|Total|The number of set operations to the cache. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
-|setcommands5|Yes|Sets (Shard 5)|Count|Total|The number of set operations to the cache. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
-|setcommands6|Yes|Sets (Shard 6)|Count|Total|The number of set operations to the cache. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
-|setcommands7|Yes|Sets (Shard 7)|Count|Total|The number of set operations to the cache. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
-|setcommands8|Yes|Sets (Shard 8)|Count|Total|The number of set operations to the cache. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
-|setcommands9|Yes|Sets (Shard 9)|Count|Total|The number of set operations to the cache. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
-|totalcommandsprocessed|Yes|Total Operations|Count|Total|The total number of commands processed by the cache server. For more details, see https://aka.ms/redis/metrics.|ShardId|
-|totalcommandsprocessed0|Yes|Total Operations (Shard 0)|Count|Total|The total number of commands processed by the cache server. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
-|totalcommandsprocessed1|Yes|Total Operations (Shard 1)|Count|Total|The total number of commands processed by the cache server. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
-|totalcommandsprocessed2|Yes|Total Operations (Shard 2)|Count|Total|The total number of commands processed by the cache server. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
-|totalcommandsprocessed3|Yes|Total Operations (Shard 3)|Count|Total|The total number of commands processed by the cache server. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
-|totalcommandsprocessed4|Yes|Total Operations (Shard 4)|Count|Total|The total number of commands processed by the cache server. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
-|totalcommandsprocessed5|Yes|Total Operations (Shard 5)|Count|Total|The total number of commands processed by the cache server. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
-|totalcommandsprocessed6|Yes|Total Operations (Shard 6)|Count|Total|The total number of commands processed by the cache server. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
-|totalcommandsprocessed7|Yes|Total Operations (Shard 7)|Count|Total|The total number of commands processed by the cache server. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
-|totalcommandsprocessed8|Yes|Total Operations (Shard 8)|Count|Total|The total number of commands processed by the cache server. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
-|totalcommandsprocessed9|Yes|Total Operations (Shard 9)|Count|Total|The total number of commands processed by the cache server. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
-|totalkeys|Yes|Total Keys|Count|Maximum|The total number of items in the cache. For more details, see https://aka.ms/redis/metrics.|ShardId|
-|totalkeys0|Yes|Total Keys (Shard 0)|Count|Maximum|The total number of items in the cache. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
-|totalkeys1|Yes|Total Keys (Shard 1)|Count|Maximum|The total number of items in the cache. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
-|totalkeys2|Yes|Total Keys (Shard 2)|Count|Maximum|The total number of items in the cache. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
-|totalkeys3|Yes|Total Keys (Shard 3)|Count|Maximum|The total number of items in the cache. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
-|totalkeys4|Yes|Total Keys (Shard 4)|Count|Maximum|The total number of items in the cache. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
-|totalkeys5|Yes|Total Keys (Shard 5)|Count|Maximum|The total number of items in the cache. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
-|totalkeys6|Yes|Total Keys (Shard 6)|Count|Maximum|The total number of items in the cache. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
-|totalkeys7|Yes|Total Keys (Shard 7)|Count|Maximum|The total number of items in the cache. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
-|totalkeys8|Yes|Total Keys (Shard 8)|Count|Maximum|The total number of items in the cache. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
-|totalkeys9|Yes|Total Keys (Shard 9)|Count|Maximum|The total number of items in the cache. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
-|usedmemory|Yes|Used Memory|Bytes|Maximum|The amount of cache memory used for key/value pairs in the cache in MB. For more details, see https://aka.ms/redis/metrics.|ShardId|
-|usedmemory0|Yes|Used Memory (Shard 0)|Bytes|Maximum|The amount of cache memory used for key/value pairs in the cache in MB. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
-|usedmemory1|Yes|Used Memory (Shard 1)|Bytes|Maximum|The amount of cache memory used for key/value pairs in the cache in MB. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
-|usedmemory2|Yes|Used Memory (Shard 2)|Bytes|Maximum|The amount of cache memory used for key/value pairs in the cache in MB. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
-|usedmemory3|Yes|Used Memory (Shard 3)|Bytes|Maximum|The amount of cache memory used for key/value pairs in the cache in MB. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
-|usedmemory4|Yes|Used Memory (Shard 4)|Bytes|Maximum|The amount of cache memory used for key/value pairs in the cache in MB. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
-|usedmemory5|Yes|Used Memory (Shard 5)|Bytes|Maximum|The amount of cache memory used for key/value pairs in the cache in MB. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
-|usedmemory6|Yes|Used Memory (Shard 6)|Bytes|Maximum|The amount of cache memory used for key/value pairs in the cache in MB. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
-|usedmemory7|Yes|Used Memory (Shard 7)|Bytes|Maximum|The amount of cache memory used for key/value pairs in the cache in MB. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
-|usedmemory8|Yes|Used Memory (Shard 8)|Bytes|Maximum|The amount of cache memory used for key/value pairs in the cache in MB. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
-|usedmemory9|Yes|Used Memory (Shard 9)|Bytes|Maximum|The amount of cache memory used for key/value pairs in the cache in MB. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
-|usedmemorypercentage|Yes|Used Memory Percentage|Percent|Maximum|The percentage of cache memory used for key/value pairs. For more details, see https://aka.ms/redis/metrics.|ShardId|
-|usedmemoryRss|Yes|Used Memory RSS|Bytes|Maximum|The amount of cache memory used in MB, including fragmentation and metadata. For more details, see https://aka.ms/redis/metrics.|ShardId|
-|usedmemoryRss0|Yes|Used Memory RSS (Shard 0)|Bytes|Maximum|The amount of cache memory used in MB, including fragmentation and metadata. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
-|usedmemoryRss1|Yes|Used Memory RSS (Shard 1)|Bytes|Maximum|The amount of cache memory used in MB, including fragmentation and metadata. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
-|usedmemoryRss2|Yes|Used Memory RSS (Shard 2)|Bytes|Maximum|The amount of cache memory used in MB, including fragmentation and metadata. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
-|usedmemoryRss3|Yes|Used Memory RSS (Shard 3)|Bytes|Maximum|The amount of cache memory used in MB, including fragmentation and metadata. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
-|usedmemoryRss4|Yes|Used Memory RSS (Shard 4)|Bytes|Maximum|The amount of cache memory used in MB, including fragmentation and metadata. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
-|usedmemoryRss5|Yes|Used Memory RSS (Shard 5)|Bytes|Maximum|The amount of cache memory used in MB, including fragmentation and metadata. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
-|usedmemoryRss6|Yes|Used Memory RSS (Shard 6)|Bytes|Maximum|The amount of cache memory used in MB, including fragmentation and metadata. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
-|usedmemoryRss7|Yes|Used Memory RSS (Shard 7)|Bytes|Maximum|The amount of cache memory used in MB, including fragmentation and metadata. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
-|usedmemoryRss8|Yes|Used Memory RSS (Shard 8)|Bytes|Maximum|The amount of cache memory used in MB, including fragmentation and metadata. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
-|usedmemoryRss9|Yes|Used Memory RSS (Shard 9)|Bytes|Maximum|The amount of cache memory used in MB, including fragmentation and metadata. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
+|allcachehits|Yes|Cache Hits (Instance Based)|Count|Total||ShardId, Port, Primary|
+|allcachemisses|Yes|Cache Misses (Instance Based)|Count|Total||ShardId, Port, Primary|
+|allcacheRead|Yes|Cache Read (Instance Based)|BytesPerSecond|Maximum||ShardId, Port, Primary|
+|allcacheWrite|Yes|Cache Write (Instance Based)|BytesPerSecond|Maximum||ShardId, Port, Primary|
+|allconnectedclients|Yes|Connected Clients (Instance Based)|Count|Maximum||ShardId, Port, Primary|
+|allevictedkeys|Yes|Evicted Keys (Instance Based)|Count|Total||ShardId, Port, Primary|
+|allexpiredkeys|Yes|Expired Keys (Instance Based)|Count|Total||ShardId, Port, Primary|
+|allgetcommands|Yes|Gets (Instance Based)|Count|Total||ShardId, Port, Primary|
+|alloperationsPerSecond|Yes|Operations Per Second (Instance Based)|Count|Maximum||ShardId, Port, Primary|
+|allpercentprocessortime|Yes|CPU (Instance Based)|Percent|Maximum||ShardId, Port, Primary|
+|allserverLoad|Yes|Server Load (Instance Based)|Percent|Maximum||ShardId, Port, Primary|
+|allsetcommands|Yes|Sets (Instance Based)|Count|Total||ShardId, Port, Primary|
+|alltotalcommandsprocessed|Yes|Total Operations (Instance Based)|Count|Total||ShardId, Port, Primary|
+|alltotalkeys|Yes|Total Keys (Instance Based)|Count|Maximum||ShardId, Port, Primary|
+|allusedmemory|Yes|Used Memory (Instance Based)|Bytes|Maximum||ShardId, Port, Primary|
+|allusedmemorypercentage|Yes|Used Memory Percentage (Instance Based)|Percent|Maximum||ShardId, Port, Primary|
+|allusedmemoryRss|Yes|Used Memory RSS (Instance Based)|Bytes|Maximum||ShardId, Port, Primary|
+|cachehits|Yes|Cache Hits|Count|Total||ShardId|
+|cachehits0|Yes|Cache Hits (Shard 0)|Count|Total||No Dimensions|
+|cachehits1|Yes|Cache Hits (Shard 1)|Count|Total||No Dimensions|
+|cachehits2|Yes|Cache Hits (Shard 2)|Count|Total||No Dimensions|
+|cachehits3|Yes|Cache Hits (Shard 3)|Count|Total||No Dimensions|
+|cachehits4|Yes|Cache Hits (Shard 4)|Count|Total||No Dimensions|
+|cachehits5|Yes|Cache Hits (Shard 5)|Count|Total||No Dimensions|
+|cachehits6|Yes|Cache Hits (Shard 6)|Count|Total||No Dimensions|
+|cachehits7|Yes|Cache Hits (Shard 7)|Count|Total||No Dimensions|
+|cachehits8|Yes|Cache Hits (Shard 8)|Count|Total||No Dimensions|
+|cachehits9|Yes|Cache Hits (Shard 9)|Count|Total||No Dimensions|
+|cacheLatency|Yes|Cache Latency Microseconds (Preview)|Count|Average||ShardId|
+|cachemisses|Yes|Cache Misses|Count|Total||ShardId|
+|cachemisses0|Yes|Cache Misses (Shard 0)|Count|Total||No Dimensions|
+|cachemisses1|Yes|Cache Misses (Shard 1)|Count|Total||No Dimensions|
+|cachemisses2|Yes|Cache Misses (Shard 2)|Count|Total||No Dimensions|
+|cachemisses3|Yes|Cache Misses (Shard 3)|Count|Total||No Dimensions|
+|cachemisses4|Yes|Cache Misses (Shard 4)|Count|Total||No Dimensions|
+|cachemisses5|Yes|Cache Misses (Shard 5)|Count|Total||No Dimensions|
+|cachemisses6|Yes|Cache Misses (Shard 6)|Count|Total||No Dimensions|
+|cachemisses7|Yes|Cache Misses (Shard 7)|Count|Total||No Dimensions|
+|cachemisses8|Yes|Cache Misses (Shard 8)|Count|Total||No Dimensions|
+|cachemisses9|Yes|Cache Misses (Shard 9)|Count|Total||No Dimensions|
+|cachemissrate|Yes|Cache Miss Rate|Percent|Total||ShardId|
+|cacheRead|Yes|Cache Read|BytesPerSecond|Maximum||ShardId|
+|cacheRead0|Yes|Cache Read (Shard 0)|BytesPerSecond|Maximum||No Dimensions|
+|cacheRead1|Yes|Cache Read (Shard 1)|BytesPerSecond|Maximum||No Dimensions|
+|cacheRead2|Yes|Cache Read (Shard 2)|BytesPerSecond|Maximum||No Dimensions|
+|cacheRead3|Yes|Cache Read (Shard 3)|BytesPerSecond|Maximum||No Dimensions|
+|cacheRead4|Yes|Cache Read (Shard 4)|BytesPerSecond|Maximum||No Dimensions|
+|cacheRead5|Yes|Cache Read (Shard 5)|BytesPerSecond|Maximum||No Dimensions|
+|cacheRead6|Yes|Cache Read (Shard 6)|BytesPerSecond|Maximum||No Dimensions|
+|cacheRead7|Yes|Cache Read (Shard 7)|BytesPerSecond|Maximum||No Dimensions|
+|cacheRead8|Yes|Cache Read (Shard 8)|BytesPerSecond|Maximum||No Dimensions|
+|cacheRead9|Yes|Cache Read (Shard 9)|BytesPerSecond|Maximum||No Dimensions|
+|cacheWrite|Yes|Cache Write|BytesPerSecond|Maximum||ShardId|
+|cacheWrite0|Yes|Cache Write (Shard 0)|BytesPerSecond|Maximum||No Dimensions|
+|cacheWrite1|Yes|Cache Write (Shard 1)|BytesPerSecond|Maximum||No Dimensions|
+|cacheWrite2|Yes|Cache Write (Shard 2)|BytesPerSecond|Maximum||No Dimensions|
+|cacheWrite3|Yes|Cache Write (Shard 3)|BytesPerSecond|Maximum||No Dimensions|
+|cacheWrite4|Yes|Cache Write (Shard 4)|BytesPerSecond|Maximum||No Dimensions|
+|cacheWrite5|Yes|Cache Write (Shard 5)|BytesPerSecond|Maximum||No Dimensions|
+|cacheWrite6|Yes|Cache Write (Shard 6)|BytesPerSecond|Maximum||No Dimensions|
+|cacheWrite7|Yes|Cache Write (Shard 7)|BytesPerSecond|Maximum||No Dimensions|
+|cacheWrite8|Yes|Cache Write (Shard 8)|BytesPerSecond|Maximum||No Dimensions|
+|cacheWrite9|Yes|Cache Write (Shard 9)|BytesPerSecond|Maximum||No Dimensions|
+|connectedclients|Yes|Connected Clients|Count|Maximum||ShardId|
+|connectedclients0|Yes|Connected Clients (Shard 0)|Count|Maximum||No Dimensions|
+|connectedclients1|Yes|Connected Clients (Shard 1)|Count|Maximum||No Dimensions|
+|connectedclients2|Yes|Connected Clients (Shard 2)|Count|Maximum||No Dimensions|
+|connectedclients3|Yes|Connected Clients (Shard 3)|Count|Maximum||No Dimensions|
+|connectedclients4|Yes|Connected Clients (Shard 4)|Count|Maximum||No Dimensions|
+|connectedclients5|Yes|Connected Clients (Shard 5)|Count|Maximum||No Dimensions|
+|connectedclients6|Yes|Connected Clients (Shard 6)|Count|Maximum||No Dimensions|
+|connectedclients7|Yes|Connected Clients (Shard 7)|Count|Maximum||No Dimensions|
+|connectedclients8|Yes|Connected Clients (Shard 8)|Count|Maximum||No Dimensions|
+|connectedclients9|Yes|Connected Clients (Shard 9)|Count|Maximum||No Dimensions|
+|errors|Yes|Errors|Count|Maximum||ShardId, ErrorType|
+|evictedkeys|Yes|Evicted Keys|Count|Total||ShardId|
+|evictedkeys0|Yes|Evicted Keys (Shard 0)|Count|Total||No Dimensions|
+|evictedkeys1|Yes|Evicted Keys (Shard 1)|Count|Total||No Dimensions|
+|evictedkeys2|Yes|Evicted Keys (Shard 2)|Count|Total||No Dimensions|
+|evictedkeys3|Yes|Evicted Keys (Shard 3)|Count|Total||No Dimensions|
+|evictedkeys4|Yes|Evicted Keys (Shard 4)|Count|Total||No Dimensions|
+|evictedkeys5|Yes|Evicted Keys (Shard 5)|Count|Total||No Dimensions|
+|evictedkeys6|Yes|Evicted Keys (Shard 6)|Count|Total||No Dimensions|
+|evictedkeys7|Yes|Evicted Keys (Shard 7)|Count|Total||No Dimensions|
+|evictedkeys8|Yes|Evicted Keys (Shard 8)|Count|Total||No Dimensions|
+|evictedkeys9|Yes|Evicted Keys (Shard 9)|Count|Total||No Dimensions|
+|expiredkeys|Yes|Expired Keys|Count|Total||ShardId|
+|expiredkeys0|Yes|Expired Keys (Shard 0)|Count|Total||No Dimensions|
+|expiredkeys1|Yes|Expired Keys (Shard 1)|Count|Total||No Dimensions|
+|expiredkeys2|Yes|Expired Keys (Shard 2)|Count|Total||No Dimensions|
+|expiredkeys3|Yes|Expired Keys (Shard 3)|Count|Total||No Dimensions|
+|expiredkeys4|Yes|Expired Keys (Shard 4)|Count|Total||No Dimensions|
+|expiredkeys5|Yes|Expired Keys (Shard 5)|Count|Total||No Dimensions|
+|expiredkeys6|Yes|Expired Keys (Shard 6)|Count|Total||No Dimensions|
+|expiredkeys7|Yes|Expired Keys (Shard 7)|Count|Total||No Dimensions|
+|expiredkeys8|Yes|Expired Keys (Shard 8)|Count|Total||No Dimensions|
+|expiredkeys9|Yes|Expired Keys (Shard 9)|Count|Total||No Dimensions|
+|getcommands|Yes|Gets|Count|Total||ShardId|
+|getcommands0|Yes|Gets (Shard 0)|Count|Total||No Dimensions|
+|getcommands1|Yes|Gets (Shard 1)|Count|Total||No Dimensions|
+|getcommands2|Yes|Gets (Shard 2)|Count|Total||No Dimensions|
+|getcommands3|Yes|Gets (Shard 3)|Count|Total||No Dimensions|
+|getcommands4|Yes|Gets (Shard 4)|Count|Total||No Dimensions|
+|getcommands5|Yes|Gets (Shard 5)|Count|Total||No Dimensions|
+|getcommands6|Yes|Gets (Shard 6)|Count|Total||No Dimensions|
+|getcommands7|Yes|Gets (Shard 7)|Count|Total||No Dimensions|
+|getcommands8|Yes|Gets (Shard 8)|Count|Total||No Dimensions|
+|getcommands9|Yes|Gets (Shard 9)|Count|Total||No Dimensions|
+|operationsPerSecond|Yes|Operations Per Second|Count|Maximum||ShardId|
+|operationsPerSecond0|Yes|Operations Per Second (Shard 0)|Count|Maximum||No Dimensions|
+|operationsPerSecond1|Yes|Operations Per Second (Shard 1)|Count|Maximum||No Dimensions|
+|operationsPerSecond2|Yes|Operations Per Second (Shard 2)|Count|Maximum||No Dimensions|
+|operationsPerSecond3|Yes|Operations Per Second (Shard 3)|Count|Maximum||No Dimensions|
+|operationsPerSecond4|Yes|Operations Per Second (Shard 4)|Count|Maximum||No Dimensions|
+|operationsPerSecond5|Yes|Operations Per Second (Shard 5)|Count|Maximum||No Dimensions|
+|operationsPerSecond6|Yes|Operations Per Second (Shard 6)|Count|Maximum||No Dimensions|
+|operationsPerSecond7|Yes|Operations Per Second (Shard 7)|Count|Maximum||No Dimensions|
+|operationsPerSecond8|Yes|Operations Per Second (Shard 8)|Count|Maximum||No Dimensions|
+|operationsPerSecond9|Yes|Operations Per Second (Shard 9)|Count|Maximum||No Dimensions|
+|percentProcessorTime|Yes|CPU|Percent|Maximum||ShardId|
+|percentProcessorTime0|Yes|CPU (Shard 0)|Percent|Maximum||No Dimensions|
+|percentProcessorTime1|Yes|CPU (Shard 1)|Percent|Maximum||No Dimensions|
+|percentProcessorTime2|Yes|CPU (Shard 2)|Percent|Maximum||No Dimensions|
+|percentProcessorTime3|Yes|CPU (Shard 3)|Percent|Maximum||No Dimensions|
+|percentProcessorTime4|Yes|CPU (Shard 4)|Percent|Maximum||No Dimensions|
+|percentProcessorTime5|Yes|CPU (Shard 5)|Percent|Maximum||No Dimensions|
+|percentProcessorTime6|Yes|CPU (Shard 6)|Percent|Maximum||No Dimensions|
+|percentProcessorTime7|Yes|CPU (Shard 7)|Percent|Maximum||No Dimensions|
+|percentProcessorTime8|Yes|CPU (Shard 8)|Percent|Maximum||No Dimensions|
+|percentProcessorTime9|Yes|CPU (Shard 9)|Percent|Maximum||No Dimensions|
+|serverLoad|Yes|Server Load|Percent|Maximum||ShardId|
+|serverLoad0|Yes|Server Load (Shard 0)|Percent|Maximum||No Dimensions|
+|serverLoad1|Yes|Server Load (Shard 1)|Percent|Maximum||No Dimensions|
+|serverLoad2|Yes|Server Load (Shard 2)|Percent|Maximum||No Dimensions|
+|serverLoad3|Yes|Server Load (Shard 3)|Percent|Maximum||No Dimensions|
+|serverLoad4|Yes|Server Load (Shard 4)|Percent|Maximum||No Dimensions|
+|serverLoad5|Yes|Server Load (Shard 5)|Percent|Maximum||No Dimensions|
+|serverLoad6|Yes|Server Load (Shard 6)|Percent|Maximum||No Dimensions|
+|serverLoad7|Yes|Server Load (Shard 7)|Percent|Maximum||No Dimensions|
+|serverLoad8|Yes|Server Load (Shard 8)|Percent|Maximum||No Dimensions|
+|serverLoad9|Yes|Server Load (Shard 9)|Percent|Maximum||No Dimensions|
+|setcommands|Yes|Sets|Count|Total||ShardId|
+|setcommands0|Yes|Sets (Shard 0)|Count|Total||No Dimensions|
+|setcommands1|Yes|Sets (Shard 1)|Count|Total||No Dimensions|
+|setcommands2|Yes|Sets (Shard 2)|Count|Total||No Dimensions|
+|setcommands3|Yes|Sets (Shard 3)|Count|Total||No Dimensions|
+|setcommands4|Yes|Sets (Shard 4)|Count|Total||No Dimensions|
+|setcommands5|Yes|Sets (Shard 5)|Count|Total||No Dimensions|
+|setcommands6|Yes|Sets (Shard 6)|Count|Total||No Dimensions|
+|setcommands7|Yes|Sets (Shard 7)|Count|Total||No Dimensions|
+|setcommands8|Yes|Sets (Shard 8)|Count|Total||No Dimensions|
+|setcommands9|Yes|Sets (Shard 9)|Count|Total||No Dimensions|
+|totalcommandsprocessed|Yes|Total Operations|Count|Total||ShardId|
+|totalcommandsprocessed0|Yes|Total Operations (Shard 0)|Count|Total||No Dimensions|
+|totalcommandsprocessed1|Yes|Total Operations (Shard 1)|Count|Total||No Dimensions|
+|totalcommandsprocessed2|Yes|Total Operations (Shard 2)|Count|Total||No Dimensions|
+|totalcommandsprocessed3|Yes|Total Operations (Shard 3)|Count|Total||No Dimensions|
+|totalcommandsprocessed4|Yes|Total Operations (Shard 4)|Count|Total||No Dimensions|
+|totalcommandsprocessed5|Yes|Total Operations (Shard 5)|Count|Total||No Dimensions|
+|totalcommandsprocessed6|Yes|Total Operations (Shard 6)|Count|Total||No Dimensions|
+|totalcommandsprocessed7|Yes|Total Operations (Shard 7)|Count|Total||No Dimensions|
+|totalcommandsprocessed8|Yes|Total Operations (Shard 8)|Count|Total||No Dimensions|
+|totalcommandsprocessed9|Yes|Total Operations (Shard 9)|Count|Total||No Dimensions|
+|totalkeys|Yes|Total Keys|Count|Maximum||ShardId|
+|totalkeys0|Yes|Total Keys (Shard 0)|Count|Maximum||No Dimensions|
+|totalkeys1|Yes|Total Keys (Shard 1)|Count|Maximum||No Dimensions|
+|totalkeys2|Yes|Total Keys (Shard 2)|Count|Maximum||No Dimensions|
+|totalkeys3|Yes|Total Keys (Shard 3)|Count|Maximum||No Dimensions|
+|totalkeys4|Yes|Total Keys (Shard 4)|Count|Maximum||No Dimensions|
+|totalkeys5|Yes|Total Keys (Shard 5)|Count|Maximum||No Dimensions|
+|totalkeys6|Yes|Total Keys (Shard 6)|Count|Maximum||No Dimensions|
+|totalkeys7|Yes|Total Keys (Shard 7)|Count|Maximum||No Dimensions|
+|totalkeys8|Yes|Total Keys (Shard 8)|Count|Maximum||No Dimensions|
+|totalkeys9|Yes|Total Keys (Shard 9)|Count|Maximum||No Dimensions|
+|usedmemory|Yes|Used Memory|Bytes|Maximum||ShardId|
+|usedmemory0|Yes|Used Memory (Shard 0)|Bytes|Maximum||No Dimensions|
+|usedmemory1|Yes|Used Memory (Shard 1)|Bytes|Maximum||No Dimensions|
+|usedmemory2|Yes|Used Memory (Shard 2)|Bytes|Maximum||No Dimensions|
+|usedmemory3|Yes|Used Memory (Shard 3)|Bytes|Maximum||No Dimensions|
+|usedmemory4|Yes|Used Memory (Shard 4)|Bytes|Maximum||No Dimensions|
+|usedmemory5|Yes|Used Memory (Shard 5)|Bytes|Maximum||No Dimensions|
+|usedmemory6|Yes|Used Memory (Shard 6)|Bytes|Maximum||No Dimensions|
+|usedmemory7|Yes|Used Memory (Shard 7)|Bytes|Maximum||No Dimensions|
+|usedmemory8|Yes|Used Memory (Shard 8)|Bytes|Maximum||No Dimensions|
+|usedmemory9|Yes|Used Memory (Shard 9)|Bytes|Maximum||No Dimensions|
+|usedmemorypercentage|Yes|Used Memory Percentage|Percent|Maximum||ShardId|
+|usedmemoryRss|Yes|Used Memory RSS|Bytes|Maximum||ShardId|
+|usedmemoryRss0|Yes|Used Memory RSS (Shard 0)|Bytes|Maximum||No Dimensions|
+|usedmemoryRss1|Yes|Used Memory RSS (Shard 1)|Bytes|Maximum||No Dimensions|
+|usedmemoryRss2|Yes|Used Memory RSS (Shard 2)|Bytes|Maximum||No Dimensions|
+|usedmemoryRss3|Yes|Used Memory RSS (Shard 3)|Bytes|Maximum||No Dimensions|
+|usedmemoryRss4|Yes|Used Memory RSS (Shard 4)|Bytes|Maximum||No Dimensions|
+|usedmemoryRss5|Yes|Used Memory RSS (Shard 5)|Bytes|Maximum||No Dimensions|
+|usedmemoryRss6|Yes|Used Memory RSS (Shard 6)|Bytes|Maximum||No Dimensions|
+|usedmemoryRss7|Yes|Used Memory RSS (Shard 7)|Bytes|Maximum||No Dimensions|
+|usedmemoryRss8|Yes|Used Memory RSS (Shard 8)|Bytes|Maximum||No Dimensions|
+|usedmemoryRss9|Yes|Used Memory RSS (Shard 9)|Bytes|Maximum||No Dimensions|
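The shard-level rows above are ordinary Azure Monitor metrics, so they can be retrieved programmatically. The sketch below is illustrative only: the resource ID is a placeholder, and it assumes the standard Azure Monitor REST metrics endpoint (`{resourceId}/providers/Microsoft.Insights/metrics`) with `metricnames`, `aggregation`, and a `$filter` on the ShardId dimension. It only builds the request URL; it does not send the request.

```python
from urllib.parse import urlencode

def redis_metric_url(resource_id: str, metric: str = "usedmemory") -> str:
    """Build an Azure Monitor REST metrics query URL for one cache metric.

    resource_id is a placeholder, e.g.
    /subscriptions/<sub>/resourceGroups/<rg>/providers/Microsoft.Cache/redis/<cache>.
    Filtering on ShardId with '*' requests one time series per shard,
    matching the per-shard rows in the table above.
    """
    params = {
        "api-version": "2018-01-01",
        "metricnames": metric,
        "aggregation": "Maximum",     # the table's aggregation type for usedmemory
        "$filter": "ShardId eq '*'",  # split the result by the ShardId dimension
    }
    base = "https://management.azure.com"
    return f"{base}{resource_id}/providers/Microsoft.Insights/metrics?{urlencode(params)}"

url = redis_metric_url(
    "/subscriptions/0000/resourceGroups/rg/providers/Microsoft.Cache/redis/mycache"
)
```

Actually sending the request additionally requires an Azure AD bearer token; the parts the table above documents are the metric names, units, aggregation types, and dimensions that go into the query.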
## Microsoft.Cache/redisEnterprise
This latest update adds a new column and reorders the metrics to be alphabetical.
|Ingress|Yes|Ingress|Bytes|Total|The amount of ingress data, in bytes. This number includes ingress from an external client into Azure Storage as well as ingress within Azure.|GeoType, ApiName, Authentication|
|SuccessE2ELatency|Yes|Success E2E Latency|Milliseconds|Average|The end-to-end latency of successful requests made to a storage service or the specified API operation, in milliseconds. This value includes the required processing time within Azure Storage to read the request, send the response, and receive acknowledgment of the response.|GeoType, ApiName, Authentication|
|SuccessServerLatency|Yes|Success Server Latency|Milliseconds|Average|The latency used by Azure Storage to process a successful request, in milliseconds. This value does not include the network latency specified in SuccessE2ELatency.|GeoType, ApiName, Authentication|
-|Transactions|Yes|Transactions|Count|Total|The number of requests made to a storage service or the specified API operation. This number includes successful and failed requests, as well as requests which produced errors. Use ResponseType dimension for the number of different types of response.|ResponseType, GeoType, ApiName, Authentication|
+|Transactions|Yes|Transactions|Count|Total|The number of requests made to a storage service or the specified API operation. This number includes successful and failed requests, as well as requests which produced errors. Use ResponseType dimension for the number of different types of response.|ResponseType, GeoType, ApiName, Authentication|
|UsedCapacity|Yes|Used capacity|Bytes|Average|Account used capacity|No Dimensions|
|Ingress|Yes|Ingress|Bytes|Total|The amount of ingress data, in bytes. This number includes ingress from an external client into Azure Storage as well as ingress within Azure.|GeoType, ApiName, Authentication|
|SuccessE2ELatency|Yes|Success E2E Latency|Milliseconds|Average|The end-to-end latency of successful requests made to a storage service or the specified API operation, in milliseconds. This value includes the required processing time within Azure Storage to read the request, send the response, and receive acknowledgment of the response.|GeoType, ApiName, Authentication|
|SuccessServerLatency|Yes|Success Server Latency|Milliseconds|Average|The latency used by Azure Storage to process a successful request, in milliseconds. This value does not include the network latency specified in SuccessE2ELatency.|GeoType, ApiName, Authentication|
-|Transactions|Yes|Transactions|Count|Total|The number of requests made to a storage service or the specified API operation. This number includes successful and failed requests, as well as requests which produced errors. Use ResponseType dimension for the number of different types of response.|ResponseType, GeoType, ApiName, Authentication|
+|Transactions|Yes|Transactions|Count|Total|The number of requests made to a storage service or the specified API operation. This number includes successful and failed requests, as well as requests which produced errors. Use ResponseType dimension for the number of different types of response.|ResponseType, GeoType, ApiName, Authentication|
## Microsoft.ClassicStorage/storageAccounts/fileServices
|Availability|Yes|Availability|Percent|Average|The percentage of availability for the storage service or the specified API operation. Availability is calculated by taking the TotalBillableRequests value and dividing it by the number of applicable requests, including those that produced unexpected errors. All unexpected errors result in reduced availability for the storage service or the specified API operation.|GeoType, ApiName, Authentication, FileShare|
|Egress|Yes|Egress|Bytes|Total|The amount of egress data, in bytes. This number includes egress to an external client from Azure Storage as well as egress within Azure. As a result, this number does not reflect billable egress.|GeoType, ApiName, Authentication, FileShare|
|FileCapacity|No|File Capacity|Bytes|Average|The amount of storage used by the storage account's File service in bytes.|FileShare|
-|FileCount|No|File Count|Count|Average|The number of files in the storage account's File service.|FileShare|
+|FileCount|No|File Count|Count|Average|The number of files in the storage account's File service.|FileShare|
|FileShareCount|No|File Share Count|Count|Average|The number of file shares in the storage account's File service.|No Dimensions|
|FileShareQuota|No|File share quota size|Bytes|Average|The upper limit on the amount of storage that can be used by Azure Files Service in bytes.|FileShare|
|FileShareSnapshotCount|No|File Share Snapshot Count|Count|Average|The number of snapshots present on the share in storage account's Files Service.|FileShare|
|Ingress|Yes|Ingress|Bytes|Total|The amount of ingress data, in bytes. This number includes ingress from an external client into Azure Storage as well as ingress within Azure.|GeoType, ApiName, Authentication, FileShare|
|SuccessE2ELatency|Yes|Success E2E Latency|Milliseconds|Average|The end-to-end latency of successful requests made to a storage service or the specified API operation, in milliseconds. This value includes the required processing time within Azure Storage to read the request, send the response, and receive acknowledgment of the response.|GeoType, ApiName, Authentication, FileShare|
|SuccessServerLatency|Yes|Success Server Latency|Milliseconds|Average|The latency used by Azure Storage to process a successful request, in milliseconds. This value does not include the network latency specified in SuccessE2ELatency.|GeoType, ApiName, Authentication, FileShare|
-|Transactions|Yes|Transactions|Count|Total|The number of requests made to a storage service or the specified API operation. This number includes successful and failed requests, as well as requests which produced errors. Use ResponseType dimension for the number of different types of response.|ResponseType, GeoType, ApiName, Authentication, FileShare|
+|Transactions|Yes|Transactions|Count|Total|The number of requests made to a storage service or the specified API operation. This number includes successful and failed requests, as well as requests which produced errors. Use ResponseType dimension for the number of different types of response.|ResponseType, GeoType, ApiName, Authentication, FileShare|
## Microsoft.ClassicStorage/storageAccounts/queueServices
|QueueMessageCount|Yes|Queue Message Count|Count|Average|The approximate number of queue messages in the storage account's Queue service.|No Dimensions|
|SuccessE2ELatency|Yes|Success E2E Latency|Milliseconds|Average|The end-to-end latency of successful requests made to a storage service or the specified API operation, in milliseconds. This value includes the required processing time within Azure Storage to read the request, send the response, and receive acknowledgment of the response.|GeoType, ApiName, Authentication|
|SuccessServerLatency|Yes|Success Server Latency|Milliseconds|Average|The latency used by Azure Storage to process a successful request, in milliseconds. This value does not include the network latency specified in SuccessE2ELatency.|GeoType, ApiName, Authentication|
-|Transactions|Yes|Transactions|Count|Total|The number of requests made to a storage service or the specified API operation. This number includes successful and failed requests, as well as requests which produced errors. Use ResponseType dimension for the number of different types of response.|ResponseType, GeoType, ApiName, Authentication|
+|Transactions|Yes|Transactions|Count|Total|The number of requests made to a storage service or the specified API operation. This number includes successful and failed requests, as well as requests which produced errors. Use ResponseType dimension for the number of different types of response.|ResponseType, GeoType, ApiName, Authentication|
## Microsoft.ClassicStorage/storageAccounts/tableServices
|SuccessE2ELatency|Yes|Success E2E Latency|Milliseconds|Average|The end-to-end latency of successful requests made to a storage service or the specified API operation, in milliseconds. This value includes the required processing time within Azure Storage to read the request, send the response, and receive acknowledgment of the response.|GeoType, ApiName, Authentication|
|SuccessServerLatency|Yes|Success Server Latency|Milliseconds|Average|The latency used by Azure Storage to process a successful request, in milliseconds. This value does not include the network latency specified in SuccessE2ELatency.|GeoType, ApiName, Authentication|
|TableCapacity|Yes|Table Capacity|Bytes|Average|The amount of storage used by the storage account's Table service in bytes.|No Dimensions|
-|TableCount|Yes|Table Count|Count|Average|The number of tables in the storage account's Table service.|No Dimensions|
+|TableCount|Yes|Table Count|Count|Average|The number of tables in the storage account's Table service.|No Dimensions|
|TableEntityCount|Yes|Table Entity Count|Count|Average|The number of table entities in the storage account's Table service.|No Dimensions|
-|Transactions|Yes|Transactions|Count|Total|The number of requests made to a storage service or the specified API operation. This number includes successful and failed requests, as well as requests which produced errors. Use ResponseType dimension for the number of different types of response.|ResponseType, GeoType, ApiName, Authentication|
+|Transactions|Yes|Transactions|Count|Total|The number of requests made to a storage service or the specified API operation. This number includes successful and failed requests, as well as requests which produced errors. Use ResponseType dimension for the number of different types of response.|ResponseType, GeoType, ApiName, Authentication|
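Every row in these tables follows the same seven-column layout: metric ID, diagnostic-settings exportability, display name, unit, aggregation type, description, and dimensions. As a small illustrative helper (not part of any Azure tooling), one such markdown row can be split into named fields:

```python
def parse_metric_row(row: str) -> dict:
    """Split one markdown metric-table row into its seven documented columns."""
    cells = [cell.strip() for cell in row.strip().strip("|").split("|")]
    keys = ["metric", "exportable", "display_name", "unit",
            "aggregation", "description", "dimensions"]
    return dict(zip(keys, cells))

parsed = parse_metric_row(
    "|TableEntityCount|Yes|Table Entity Count|Count|Average"
    "|The number of table entities in the storage account's Table service.|No Dimensions|"
)
# parsed["unit"] == "Count", parsed["aggregation"] == "Average"
```

Rows with an empty description column (common in the shard-level cache rows) still produce seven cells, with `description` as the empty string.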
## Microsoft.Cloudtest/hostedpools
|Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions|
||||||||
+|broker.msgs.delivered|Yes|Broker: Messages Delivered (Preview)|Count|Total|Total number of messages delivered by the broker|Result, FailureReasonCategory, QoS, TopicSpaceName|
+|broker.msgs.delivery.throttlingLatency|Yes|Broker: Message delivery latency from throttling (Preview)|Milliseconds|Average|The average egress message delivery latency due to throttling|No Dimensions|
+|broker.msgs.published|Yes|Broker: Messages Published (Preview)|Count|Total|Total number of messages published to the broker|Result, FailureReasonCategory, QoS|
|c2d.commands.egress.abandon.success|Yes|C2D messages abandoned|Count|Total|Number of cloud-to-device messages abandoned by the device|No Dimensions|
|c2d.commands.egress.complete.success|Yes|C2D message deliveries completed|Count|Total|Number of cloud-to-device message deliveries completed successfully by the device|No Dimensions|
|c2d.commands.egress.reject.success|Yes|C2D messages rejected|Count|Total|Number of cloud-to-device messages rejected by the device|No Dimensions|
|jobs.listJobs.success|Yes|Successful calls to list jobs|Count|Total|The count of all successful calls to list jobs.|No Dimensions|
|jobs.queryJobs.failure|Yes|Failed job queries|Count|Total|The count of all failed calls to query jobs.|No Dimensions|
|jobs.queryJobs.success|Yes|Successful job queries|Count|Total|The count of all successful calls to query jobs.|No Dimensions|
+|mqtt.connections|Yes|MQTT: New Connections (Preview)|Count|Total|The number of new connections per IoT Hub|SessionType, MqttEndpoint|
+|mqtt.sessions|Yes|MQTT: New Sessions (Preview)|Count|Total|The number of new sessions per IoT Hub|SessionType, MqttEndpoint|
+|mqtt.sessions.dropped|Yes|MQTT: Dropped Sessions (Preview)|Percent|Average|The rate of dropped sessions per IoT Hub|DropReason|
+|mqtt.subscriptions|Yes|MQTT: New Subscriptions (Preview)|Count|Total|The number of subscriptions|Result, FailureReasonCategory, OperationType, TopicSpaceName|
|RoutingDataSizeInBytesDelivered|Yes|Routing Delivery Message Size in Bytes (preview)|Bytes|Total|The total size in bytes of messages delivered by IoT hub to an endpoint. You can use the EndpointName and EndpointType dimensions to view the size of the messages in bytes delivered to your different endpoints. The metric value increases for every message delivered, including if the message is delivered to multiple endpoints or if the message is delivered to the same endpoint multiple times.|EndpointType, EndpointName, RoutingSource|
|RoutingDeliveries|Yes|Routing Deliveries (preview)|Count|Total|The number of times IoT Hub attempted to deliver messages to all endpoints using routing. To see the number of successful or failed attempts, use the Result dimension. To see the reason of failure, like invalid, dropped, or orphaned, use the FailureReasonCategory dimension. You can also use the EndpointName and EndpointType dimensions to understand how many messages were delivered to your different endpoints. The metric value increases by one for each delivery attempt, including if the message is delivered to multiple endpoints or if the message is delivered to the same endpoint multiple times.|EndpointType, EndpointName, FailureReasonCategory, Result, RoutingSource|
|RoutingDeliveryLatency|Yes|Routing Delivery Latency (preview)|Milliseconds|Average|The average latency (milliseconds) between message ingress to IoT Hub and telemetry message ingress into an endpoint. You can use the EndpointName and EndpointType dimensions to understand the latency to your different endpoints.|EndpointType, EndpointName, RoutingSource|
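The RoutingDeliveries description above calls out splitting the metric by its Result, FailureReasonCategory, EndpointName, and EndpointType dimensions. As a hedged sketch (the helper itself is hypothetical; only the dimension names come from the row above), a multi-dimension `$filter` expression in the form the Azure Monitor metrics API expects can be composed like this:

```python
def dimension_filter(**dims: str) -> str:
    """Join dimension constraints into an Azure Monitor $filter expression.

    Passing '*' for a value asks for one time series per dimension value.
    """
    return " and ".join(f"{name} eq '{value}'" for name, value in dims.items())

flt = dimension_filter(Result="*", EndpointName="*")
# "Result eq '*' and EndpointName eq '*'"
```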
|IntegratedCacheItemHitRate|No|IntegratedCacheItemHitRate|Percent|Average|Number of point reads that used the integrated cache divided by number of point reads routed through the dedicated gateway with eventual consistency|Region, |
|IntegratedCacheQueryExpirationCount|No|IntegratedCacheQueryExpirationCount|Count|Average|Number of queries evicted from the integrated cache due to TTL expiration|Region, |
|IntegratedCacheQueryHitRate|No|IntegratedCacheQueryHitRate|Percent|Average|Number of queries that used the integrated cache divided by number of queries routed through the dedicated gateway with eventual consistency|Region, |
-|MetadataRequests|No|Metadata Requests|Count|Count|Count of metadata requests. Azure Cosmos DB maintains system metadata collection for each account, that allows you to enumerate collections, databases, etc, and their configurations, free of charge.|DatabaseName, CollectionName, Region, StatusCode, |
+|MetadataRequests|No|Metadata Requests|Count|Count|Count of metadata requests. Cosmos DB maintains a system metadata collection for each account, which allows you to enumerate collections, databases, etc., and their configurations, free of charge.|DatabaseName, CollectionName, Region, StatusCode, |
|MongoCollectionCreate|No|Mongo Collection Created|Count|Count|Mongo Collection Created|ResourceName, ChildResourceName, |
|MongoCollectionDelete|No|Mongo Collection Deleted|Count|Count|Mongo Collection Deleted|ResourceName, ChildResourceName, |
|MongoCollectionThroughputUpdate|No|Mongo Collection Throughput Updated|Count|Count|Mongo Collection Throughput Updated|ResourceName, ChildResourceName, |
|Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions|
||||||||
|Availability|Yes|Availability|Percent|Average|The availability rate of the service.|No Dimensions|
-|CosmosDbCollectionSize|Yes|Azure Cosmos DB Collection Size|Bytes|Total|The size of the backing Azure Cosmos DB collection, in bytes.|No Dimensions|
-|CosmosDbIndexSize|Yes|Azure Cosmos DB Index Size|Bytes|Total|The size of the backing Azure Cosmos DB collection's index, in bytes.|No Dimensions|
-|CosmosDbRequestCharge|Yes|Azure Cosmos DB RU usage|Count|Total|The RU usage of requests to the service's backing Azure Cosmos DB.|Operation, ResourceType|
-|CosmosDbRequests|Yes|Service Azure Cosmos DB requests|Count|Sum|The total number of requests made to a service's backing Azure Cosmos DB.|Operation, ResourceType|
-|CosmosDbThrottleRate|Yes|Service Azure Cosmos DB throttle rate|Count|Sum|The total number of 429 responses from a service's backing Azure Cosmos DB.|Operation, ResourceType|
+|CosmosDbCollectionSize|Yes|Cosmos DB Collection Size|Bytes|Total|The size of the backing Cosmos DB collection, in bytes.|No Dimensions|
+|CosmosDbIndexSize|Yes|Cosmos DB Index Size|Bytes|Total|The size of the backing Cosmos DB collection's index, in bytes.|No Dimensions|
+|CosmosDbRequestCharge|Yes|Cosmos DB RU usage|Count|Total|The RU usage of requests to the service's backing Cosmos DB.|Operation, ResourceType|
+|CosmosDbRequests|Yes|Service Cosmos DB requests|Count|Sum|The total number of requests made to a service's backing Cosmos DB.|Operation, ResourceType|
+|CosmosDbThrottleRate|Yes|Service Cosmos DB throttle rate|Count|Sum|The total number of 429 responses from a service's backing Cosmos DB.|Operation, ResourceType|
|IoTConnectorDeviceEvent|Yes|Number of Incoming Messages|Count|Sum|The total number of messages received by the Azure IoT Connector for FHIR prior to any normalization.|Operation, ConnectorName|
|IoTConnectorDeviceEventProcessingLatencyMs|Yes|Average Normalize Stage Latency|Milliseconds|Average|The average time between an event's ingestion time and the time the event is processed for normalization.|Operation, ConnectorName|
|IoTConnectorMeasurement|Yes|Number of Measurements|Count|Sum|The number of normalized value readings received by the FHIR conversion stage of the Azure IoT Connector for FHIR.|Operation, ConnectorName|
|GpuUtilization|Yes|GpuUtilization|Count|Average|Percentage of utilization on a GPU node. Utilization is reported at one minute intervals.|Scenario, runId, NodeId, DeviceId, ClusterName|
|GpuUtilizationMilliGPUs|Yes|GpuUtilizationMilliGPUs|Count|Average|Utilization of a GPU device in milli-GPUs. Utilization is aggregated in one minute intervals.|RunId, InstanceId, DeviceId, ComputeName|
|GpuUtilizationPercentage|Yes|GpuUtilizationPercentage|Count|Average|Utilization percentage of a GPU device. Utilization is aggregated in one minute intervals.|RunId, InstanceId, DeviceId, ComputeName|
-|IBReceiveMegabytes|Yes|IBReceiveMegabytes|Count|Average|Network data received over InfiniBand in megabytes. Metrics are aggregated in one minute intervals.|RunId, InstanceId, ComputeName|
-|IBTransmitMegabytes|Yes|IBTransmitMegabytes|Count|Average|Network data sent over InfiniBand in megabytes. Metrics are aggregated in one minute intervals.|RunId, InstanceId, ComputeName|
+|IBReceiveMegabytes|Yes|IBReceiveMegabytes|Count|Average|Network data received over InfiniBand in megabytes. Metrics are aggregated in one minute intervals.|RunId, InstanceId, ComputeName, DeviceId|
+|IBTransmitMegabytes|Yes|IBTransmitMegabytes|Count|Average|Network data sent over InfiniBand in megabytes. Metrics are aggregated in one minute intervals.|RunId, InstanceId, ComputeName, DeviceId|
|Idle Cores|Yes|Idle Cores|Count|Average|Number of idle cores|Scenario, ClusterName|
|Idle Nodes|Yes|Idle Nodes|Count|Average|Number of idle nodes. Idle nodes are the nodes that are not running any jobs but can accept a new job if available.|Scenario, ClusterName|
|Leaving Cores|Yes|Leaving Cores|Count|Average|Number of leaving cores|Scenario, ClusterName|
|Model Deploy Succeeded|Yes|Model Deploy Succeeded|Count|Total|Number of model deployments that succeeded in this workspace|Scenario|
|Model Register Failed|Yes|Model Register Failed|Count|Total|Number of model registrations that failed in this workspace|Scenario, StatusCode|
|Model Register Succeeded|Yes|Model Register Succeeded|Count|Total|Number of model registrations that succeeded in this workspace|Scenario|
-|NetworkInputMegabytes|Yes|NetworkInputMegabytes|Count|Average|Network data received in megabytes. Metrics are aggregated in one minute intervals.|RunId, InstanceId, ComputeName|
-|NetworkOutputMegabytes|Yes|NetworkOutputMegabytes|Count|Average|Network data sent in megabytes. Metrics are aggregated in one minute intervals.|RunId, InstanceId, ComputeName|
+|NetworkInputMegabytes|Yes|NetworkInputMegabytes|Count|Average|Network data received in megabytes. Metrics are aggregated in one minute intervals.|RunId, InstanceId, ComputeName, DeviceId|
+|NetworkOutputMegabytes|Yes|NetworkOutputMegabytes|Count|Average|Network data sent in megabytes. Metrics are aggregated in one minute intervals.|RunId, InstanceId, ComputeName, DeviceId|
|Not Responding Runs|Yes|Not Responding Runs|Count|Total|Number of runs not responding for this workspace. Count is updated when a run enters Not Responding state.|Scenario, RunType, PublishedPipelineId, ComputeType, PipelineStepType, ExperimentName|
|Not Started Runs|Yes|Not Started Runs|Count|Total|Number of runs in Not Started state for this workspace. Count is updated when a request is received to create a run but run information has not yet been populated.|Scenario, RunType, PublishedPipelineId, ComputeType, PipelineStepType, ExperimentName|
|Preempted Cores|Yes|Preempted Cores|Count|Average|Number of preempted cores|Scenario, ClusterName|
|ScheduledMessages|No|Count of scheduled messages in a Queue/Topic.|Count|Average|Count of scheduled messages in a Queue/Topic.|EntityName|
|ServerErrors|No|Server Errors.|Count|Total|Server Errors for Microsoft.ServiceBus.|EntityName, |
|ServerSendLatency|Yes|Server Send Latency.|Milliseconds|Average|Server Send Latency.|EntityName|
-|Size|No|Size|Bytes|Average|Size of a Queue/Topic in Bytes.|EntityName|
+|Size|No|Size|Bytes|Average|Size of a Queue/Topic in Bytes.|EntityName|
|SuccessfulRequests|No|Successful Requests|Count|Total|Total successful requests for a namespace|EntityName, |
|ThrottledRequests|No|Throttled Requests.|Count|Total|Throttled Requests for Microsoft.ServiceBus.|EntityName, MessagingErrorSubCode|
|UserErrors|No|User Errors.|Count|Total|User Errors for Microsoft.ServiceBus.|EntityName, |
|Availability|Yes|Availability|Percent|Average|The percentage of availability for the storage service or the specified API operation. Availability is calculated by taking the TotalBillableRequests value and dividing it by the number of applicable requests, including those that produced unexpected errors. All unexpected errors result in reduced availability for the storage service or the specified API operation.|GeoType, ApiName, Authentication|
|Egress|Yes|Egress|Bytes|Total|The amount of egress data. This number includes egress to external client from Azure Storage as well as egress within Azure. As a result, this number does not reflect billable egress.|GeoType, ApiName, Authentication|
|Ingress|Yes|Ingress|Bytes|Total|The amount of ingress data, in bytes. This number includes ingress from an external client into Azure Storage as well as ingress within Azure.|GeoType, ApiName, Authentication|
-|SuccessE2ELatency|Yes|Success E2E Latency|MilliSeconds|Average|The average end-to-end latency of successful requests made to a storage service or the specified API operation, in milliseconds. This value includes the required processing time within Azure Storage to read the request, send the response, and receive acknowledgment of the response.|GeoType, ApiName, Authentication|
-|SuccessServerLatency|Yes|Success Server Latency|MilliSeconds|Average|The average time used to process a successful request by Azure Storage. This value does not include the network latency specified in SuccessE2ELatency.|GeoType, ApiName, Authentication|
-|Transactions|Yes|Transactions|Count|Total|The number of requests made to a storage service or the specified API operation. This number includes successful and failed requests, as well as requests which produced errors. Use ResponseType dimension for the number of different types of response.|ResponseType, GeoType, ApiName, Authentication, TransactionType|
+|SuccessE2ELatency|Yes|Success E2E Latency|Milliseconds|Average|The average end-to-end latency of successful requests made to a storage service or the specified API operation, in milliseconds. This value includes the required processing time within Azure Storage to read the request, send the response, and receive acknowledgment of the response.|GeoType, ApiName, Authentication|
+|SuccessServerLatency|Yes|Success Server Latency|Milliseconds|Average|The average time used to process a successful request by Azure Storage. This value does not include the network latency specified in SuccessE2ELatency.|GeoType, ApiName, Authentication|
+|Transactions|Yes|Transactions|Count|Total|The number of requests made to a storage service or the specified API operation. This number includes successful and failed requests, as well as requests which produced errors. Use ResponseType dimension for the number of different types of response.|ResponseType, GeoType, ApiName, Authentication|
|UsedCapacity|Yes|Used capacity|Bytes|Average|The amount of storage used by the storage account. For standard storage accounts, it's the sum of capacity used by blob, table, file, and queue. For premium storage accounts and Blob storage accounts, it is the same as BlobCapacity or FileCapacity.|No Dimensions|
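The Availability description above defines the metric as the TotalBillableRequests value divided by the number of applicable requests, expressed as a percentage. A minimal sketch of that calculation (the function name and the zero-request convention are illustrative assumptions, not part of the metric definition):

```python
def availability_percent(total_billable_requests: int, applicable_requests: int) -> float:
    """Availability as described for the storage Availability metric:
    billable requests divided by all applicable requests, including
    requests that produced unexpected errors."""
    if applicable_requests == 0:
        # Assumption: treat a period with no traffic as fully available.
        return 100.0
    return 100.0 * total_billable_requests / applicable_requests

# 9,990 billable requests out of 10,000 applicable requests -> 99.9% availability
print(availability_percent(9990, 10000))
```

Any unexpected error shrinks the numerator but not the denominator, which is why the description says unexpected errors reduce availability.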
|Ingress|Yes|Ingress|Bytes|Total|The amount of ingress data, in bytes. This number includes ingress from an external client into Azure Storage as well as ingress within Azure.|GeoType, ApiName, Authentication|
|SuccessE2ELatency|Yes|Success E2E Latency|Milliseconds|Average|The average end-to-end latency of successful requests made to a storage service or the specified API operation, in milliseconds. This value includes the required processing time within Azure Storage to read the request, send the response, and receive acknowledgment of the response.|GeoType, ApiName, Authentication|
|SuccessServerLatency|Yes|Success Server Latency|Milliseconds|Average|The average time used to process a successful request by Azure Storage. This value does not include the network latency specified in SuccessE2ELatency.|GeoType, ApiName, Authentication|
-|Transactions|Yes|Transactions|Count|Total|The number of requests made to a storage service or the specified API operation. This number includes successful and failed requests, as well as requests which produced errors. Use ResponseType dimension for the number of different types of response.|ResponseType, GeoType, ApiName, Authentication|
+|Transactions|Yes|Transactions|Count|Total|The number of requests made to a storage service or the specified API operation. This number includes successful and failed requests, as well as requests which produced errors. Use ResponseType dimension for the number of different types of response.|ResponseType, GeoType, ApiName, Authentication|
## Microsoft.Storage/storageAccounts/fileServices
|Ingress|Yes|Ingress|Bytes|Total|The amount of ingress data, in bytes. This number includes ingress from an external client into Azure Storage as well as ingress within Azure.|GeoType, ApiName, Authentication, FileShare|
|SuccessE2ELatency|Yes|Success E2E Latency|Milliseconds|Average|The average end-to-end latency of successful requests made to a storage service or the specified API operation, in milliseconds. This value includes the required processing time within Azure Storage to read the request, send the response, and receive acknowledgment of the response.|GeoType, ApiName, Authentication, FileShare|
|SuccessServerLatency|Yes|Success Server Latency|Milliseconds|Average|The average time used to process a successful request by Azure Storage. This value does not include the network latency specified in SuccessE2ELatency.|GeoType, ApiName, Authentication, FileShare|
-|Transactions|Yes|Transactions|Count|Total|The number of requests made to a storage service or the specified API operation. This number includes successful and failed requests, as well as requests which produced errors. Use ResponseType dimension for the number of different types of response.|ResponseType, GeoType, ApiName, Authentication, FileShare|
+|Transactions|Yes|Transactions|Count|Total|The number of requests made to a storage service or the specified API operation. This number includes successful and failed requests, as well as requests which produced errors. Use ResponseType dimension for the number of different types of response.|ResponseType, GeoType, ApiName, Authentication, FileShare|
## Microsoft.Storage/storageAccounts/queueServices
|QueueMessageCount|Yes|Queue Message Count|Count|Average|The number of unexpired queue messages in the storage account.|No Dimensions|
|SuccessE2ELatency|Yes|Success E2E Latency|Milliseconds|Average|The average end-to-end latency of successful requests made to a storage service or the specified API operation, in milliseconds. This value includes the required processing time within Azure Storage to read the request, send the response, and receive acknowledgment of the response.|GeoType, ApiName, Authentication|
|SuccessServerLatency|Yes|Success Server Latency|Milliseconds|Average|The average time used to process a successful request by Azure Storage. This value does not include the network latency specified in SuccessE2ELatency.|GeoType, ApiName, Authentication|
-|Transactions|Yes|Transactions|Count|Total|The number of requests made to a storage service or the specified API operation. This number includes successful and failed requests, as well as requests which produced errors. Use ResponseType dimension for the number of different types of response.|ResponseType, GeoType, ApiName, Authentication|
+|Transactions|Yes|Transactions|Count|Total|The number of requests made to a storage service or the specified API operation. This number includes successful and failed requests, as well as requests which produced errors. Use ResponseType dimension for the number of different types of response.|ResponseType, GeoType, ApiName, Authentication|
## Microsoft.Storage/storageAccounts/tableServices
|TableCapacity|Yes|Table Capacity|Bytes|Average|The amount of Table storage used by the storage account.|No Dimensions|
|TableCount|Yes|Table Count|Count|Average|The number of tables in the storage account.|No Dimensions|
|TableEntityCount|Yes|Table Entity Count|Count|Average|The number of table entities in the storage account.|No Dimensions|
-|Transactions|Yes|Transactions|Count|Total|The number of requests made to a storage service or the specified API operation. This number includes successful and failed requests, as well as requests which produced errors. Use ResponseType dimension for the number of different types of response.|ResponseType, GeoType, ApiName, Authentication|
+|Transactions|Yes|Transactions|Count|Total|The number of requests made to a storage service or the specified API operation. This number includes successful and failed requests, as well as requests which produced errors. Use ResponseType dimension for the number of different types of response.|ResponseType, GeoType, ApiName, Authentication|
## Microsoft.StorageCache/caches
|LateInputEvents|Yes|Late Input Events|Count|Total|Late Input Events|LogicalName, PartitionId, ProcessorInstance, NodeName|
|OutputEvents|Yes|Output Events|Count|Total|Output Events|LogicalName, PartitionId, ProcessorInstance, NodeName|
|OutputWatermarkDelaySeconds|Yes|Watermark Delay|Seconds|Maximum|Watermark Delay|LogicalName, PartitionId, ProcessorInstance, NodeName|
-|ProcessCPUUsagePercentage|Yes|CPU % Utilization (Preview)|Percent|Maximum|CPU % Utilization (Preview)|LogicalName, PartitionId, ProcessorInstance, NodeName|
+|ProcessCPUUsagePercentage|Yes|CPU % Utilization|Percent|Maximum|CPU % Utilization|LogicalName, PartitionId, ProcessorInstance, NodeName|
|ResourceUtilization|Yes|SU (Memory) % Utilization|Percent|Maximum|SU (Memory) % Utilization|LogicalName, PartitionId, ProcessorInstance, NodeName|
|Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions|
||||||||
|BuiltinSqlPoolDataProcessedBytes|No|Data processed (bytes)|Bytes|Total|Amount of data processed by queries|No Dimensions|
-|BuiltinSqlPoolLoginAttempts|No|Login attempts|Count|Total|Count of login attempts that succeeded or failed|Result|
+|BuiltinSqlPoolLoginAttempts|No|Login attempts|Count|Total|Count of login attempts that succeeded or failed|Result|
|BuiltinSqlPoolRequestsEnded|No|Requests ended|Count|Total|Count of Requests that succeeded, failed, or were cancelled|Result|
|IntegrationActivityRunsEnded|No|Activity runs ended|Count|Total|Count of integration activities that succeeded, failed, or were cancelled|Result, FailureType, Activity, ActivityType, Pipeline|
|IntegrationPipelineRunsEnded|No|Pipeline runs ended|Count|Total|Count of integration pipeline runs that succeeded, failed, or were cancelled|Result, FailureType, Pipeline|
|DWULimit|No|DWU limit|Count|Maximum|Service level objective of the SQL pool|No Dimensions|
|DWUUsed|No|DWU used|Count|Maximum|Represents a high-level representation of usage across the SQL pool. Measured by DWU limit * DWU percentage|No Dimensions|
|DWUUsedPercent|No|DWU used percentage|Percent|Maximum|Represents a high-level representation of usage across the SQL pool. Measured by taking the maximum between CPU percentage and Data IO percentage|No Dimensions|
-|LocalTempDBUsedPercent|No|Local tempdb used percentage|Percent|Maximum|Local tempdb utilization across all compute nodes - values are emitted every five minutes|No Dimensions|
+|LocalTempDBUsedPercent|No|Local tempdb used percentage|Percent|Maximum|Local tempdb utilization across all compute nodes - values are emitted every five minutes|No Dimensions|
|MemoryUsedPercent|No|Memory used percentage|Percent|Maximum|Memory utilization across all nodes in the SQL pool|No Dimensions|
|QueuedQueries|No|Queued queries|Count|Total|Cumulative count of requests queued after the max concurrency limit was reached|IsUserDefined|
|WLGActiveQueries|No|Workload group active queries|Count|Total|The active queries within the workload group. Using this metric unfiltered and unsplit displays all active queries running on the system|IsUserDefined, WorkloadGroup|
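The DWU rows above define DWU used percentage as the maximum of CPU percentage and Data IO percentage, and DWU used as DWU limit multiplied by that percentage. A small sketch of the arithmetic (function names are illustrative, not part of the metric definitions):

```python
def dwu_used_percent(cpu_percent: float, data_io_percent: float) -> float:
    # DWU used percentage: the maximum of CPU % and Data IO %.
    return max(cpu_percent, data_io_percent)

def dwu_used(dwu_limit: float, cpu_percent: float, data_io_percent: float) -> float:
    # DWU used: DWU limit * DWU percentage.
    return dwu_limit * dwu_used_percent(cpu_percent, data_io_percent) / 100.0

# A pool with a DWU limit of 1000 at 45% CPU and 60% Data IO reports DWU used = 600
print(dwu_used(1000, 45.0, 60.0))
```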
|AverageResponseTime|Yes|Average Response Time (deprecated)|Seconds|Average|The average time taken for the app to serve requests, in seconds.|Instance|
|BytesReceived|Yes|Data In|Bytes|Total|The amount of incoming bandwidth consumed by the app, in MiB.|Instance|
|BytesSent|Yes|Data Out|Bytes|Total|The amount of outgoing bandwidth consumed by the app, in MiB.|Instance|
-|CpuTime|Yes|CPU Time|Seconds|Total|The amount of CPU consumed by the app, in seconds. For more information about this metric. Please see - https://aka.ms/website-monitor-cpu-time-vs-cpu-percentage (CPU time vs CPU percentage).|Instance|
+|CpuTime|Yes|CPU Time|Seconds|Total|The amount of CPU consumed by the app, in seconds. For more information about this metric, see https://aka.ms/website-monitor-cpu-time-vs-cpu-percentage (CPU time vs CPU percentage).|Instance|
|CurrentAssemblies|Yes|Current Assemblies|Count|Average|The current number of Assemblies loaded across all AppDomains in this application.|Instance|
|FileSystemUsage|Yes|File System Usage|Bytes|Average|Percentage of filesystem quota consumed by the app.|No Dimensions|
|FunctionExecutionCount|Yes|Function Execution Count|Count|Total|Function Execution Count|Instance|
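The CpuTime row reports CPU consumption as total seconds, while the linked page contrasts it with CPU percentage. A hedged sketch of converting one into the other (the per-core normalization is an assumption for multi-core instances, not stated in the metric description):

```python
def cpu_percentage(cpu_time_seconds: float, interval_seconds: float, core_count: int = 1) -> float:
    # Average CPU percentage over an interval, derived from total CPU-seconds:
    # CPU-seconds consumed divided by the CPU-seconds available in the interval.
    return 100.0 * cpu_time_seconds / (interval_seconds * core_count)

# 30 CPU-seconds consumed over a 60-second interval on one core -> 50.0%
print(cpu_percentage(30.0, 60.0))
```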
|AverageResponseTime|Yes|Average Response Time (deprecated)|Seconds|Average|The average time taken for the app to serve requests, in seconds.|Instance|
|BytesReceived|Yes|Data In|Bytes|Total|The amount of incoming bandwidth consumed by the app, in MiB.|Instance|
|BytesSent|Yes|Data Out|Bytes|Total|The amount of outgoing bandwidth consumed by the app, in MiB.|Instance|
-|CpuTime|Yes|CPU Time|Seconds|Total|The amount of CPU consumed by the app, in seconds. For more information about this metric. Please see - https://aka.ms/website-monitor-cpu-time-vs-cpu-percentage (CPU time vs CPU percentage).|Instance|
+|CpuTime|Yes|CPU Time|Seconds|Total|The amount of CPU consumed by the app, in seconds. For more information about this metric, see https://aka.ms/website-monitor-cpu-time-vs-cpu-percentage (CPU time vs CPU percentage).|Instance|
|CurrentAssemblies|Yes|Current Assemblies|Count|Average|The current number of Assemblies loaded across all AppDomains in this application.|Instance|
|FileSystemUsage|Yes|File System Usage|Bytes|Average|Percentage of filesystem quota consumed by the app.|No Dimensions|
|FunctionExecutionCount|Yes|Function Execution Count|Count|Total|Function Execution Count|Instance|
azure-monitor Resource Logs Categories https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/resource-logs-categories.md
Title: Supported categories for Azure Monitor resource logs description: Understand the supported services and event schemas for Azure Monitor resource logs. Previously updated : 09/07/2022 Last updated : 10/13/2022
If you think something is missing, you can open a GitHub comment at the bottom of this article.
|BlockchainApplication|Blockchain Application|No|
-## microsoft.botservice/botservices
+## Microsoft.BotService/botServices
|Category|Category Display Name|Costs To Export|
||||
-|BotRequest|Requests from the channels to the bot|No|
+|logSpecification.Name.Empty|logSpecification.DisplayName.empty|Yes|
## Microsoft.Cache/redis
|Category|Category Display Name|Costs To Export|
||||
+|admissionsenforcer|AKS Guardrails/Admissions Enforcer|Yes|
|cloud-controller-manager|Kubernetes Cloud Controller Manager|Yes|
|cluster-autoscaler|Kubernetes Cluster Autoscaler|No|
+|csi-azuredisk-controller|Kubernetes CSI Azuredisk Controller|Yes|
+|csi-azuredisk-controller-v2|Kubernetes CSI Azuredisk V2 Controller|Yes|
+|csi-azurefile-controller|Kubernetes CSI Azurefile Controller|Yes|
+|csi-blob-controller|Kubernetes CSI Blob Controller|Yes|
+|csi-snapshot-controller|Kubernetes CSI Snapshot Controller|Yes|
|guard|Kubernetes Guard|No|
|kube-apiserver|Kubernetes API Server|No|
|kube-audit|Kubernetes Audit|No|
|deltaPipelines|Databricks Delta Pipelines|Yes|
|featureStore|Databricks Feature Store|Yes|
|genie|Databricks Genie|Yes|
+|gitCredentials|Databricks Git Credentials|Yes|
|globalInitScripts|Databricks Global Init Scripts|Yes|
|iamRole|Databricks IAM Role|Yes|
|instancePools|Instance Pools|No|
|sqlPermissions|Databricks SQLPermissions|No|
|ssh|Databricks SSH|No|
|unityCatalog|Databricks Unity Catalog|Yes|
+|webTerminal|Databricks Web Terminal|Yes|
|workspace|Databricks Workspace|No|
|Category|Category Display Name|Costs To Export|
||||
-|ExPCompute|ExPCompute|Yes|
-|Request|Request|No|
+|ExPCompute|ExPComput
## Microsoft.HealthcareApis/services
|Category|Category Display Name|Costs To Export|
||||
|AuditEvent|Audit Logs|No|
+|AzurePolicyEvaluationDetails|Azure Policy Evaluation Details|Yes|
## Microsoft.Kusto/Clusters
|Category|Category Display Name|Costs To Export|
||||
+|CollectionCrudLogEvent|CollectionCrud|Yes|
|ScanStatusLogEvent|ScanStatus|No|
azure-monitor Data Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/data-security.md
You can use these additional security features to further secure your Azure Monitor environment.
## Tamper-proofing and immutability
-Azure Monitor is an append-only data platform that includes provisions to delete data for compliance purposes. [Set a lock on your Log Analytics workspace](../../azure-resource-manager/management/lock-resources.md) to block all activities that could delete data: purge, table delete, and table- or workspace-level data retention changes.
+Azure Monitor is an append-only data platform, but includes provisions to delete data for compliance purposes. You can [set a lock on your Log Analytics workspace](../../azure-resource-manager/management/lock-resources.md) to block all activities that could delete data: purge, table delete, and table- or workspace-level data retention changes. However, this lock can still be removed.
-To tamper-proof your monitoring solution, we recommend you [export data to an immutable storage solution](../../storage/blobs/immutable-storage-overview.md).
+To fully tamper-proof your monitoring solution, we recommend you [export your data to an immutable storage solution](../../storage/blobs/immutable-storage-overview.md).
## Next steps
azure-monitor Search Jobs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/search-jobs.md
Search jobs are asynchronous queries that fetch records into a new search table
## When to use search jobs
-Use a search job when the log query timeout of 10 minutes isn't sufficient to search through large volumes of data or if your running a slow query.
+Use a search job when the log query timeout of 10 minutes isn't sufficient to search through large volumes of data or if you're running a slow query.
Search jobs also let you retrieve records from [Archived Logs](data-retention-archive.md) and [Basic Logs](basic-logs-configure.md) tables into a new log table you can use for queries. In this way, running a search job can be an alternative to:
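Concretely, a search job is created by issuing a PUT for a new results table (its name conventionally ends in `_SRCH`) against the workspace tables API, carrying the query and the time range to search. The helper below only builds a candidate request body; the field names and schema are assumptions sketched from the search-job concept and should be checked against the current REST reference:

```python
def build_search_job_body(query: str, start_iso: str, end_iso: str, limit: int = 1000) -> dict:
    # Candidate request body for creating a search-job table; the
    # "searchResults" shape used here is an assumption to verify
    # against the Log Analytics tables REST API reference.
    return {
        "properties": {
            "searchResults": {
                "query": query,
                "limit": limit,
                "startSearchTime": start_iso,
                "endSearchTime": end_iso,
            }
        }
    }

# Hypothetical query and time range, for illustration only.
body = build_search_job_body(
    "Heartbeat | where Computer == 'web-01'",
    "2022-10-01T00:00:00Z",
    "2022-10-08T00:00:00Z",
)
```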
azure-monitor Monitor Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/monitor-reference.md
The following table lists Azure services and the data they collect into Azure Monitor.
| [Azure Spring Apps](../spring-apps/overview.md) | Microsoft.AppPlatform/Spring | [**Yes**](./essentials/metrics-supported.md#microsoftappplatformspring) | [**Yes**](./essentials/resource-logs-categories.md#microsoftappplatformspring) | | |
| [Azure Attestation Service](../attestation/overview.md) | Microsoft.Attestation/attestationProviders | No | [**Yes**](./essentials/resource-logs-categories.md#microsoftattestationattestationproviders) | | |
| [Azure Automation](../automation/index.yml) | Microsoft.Automation/automationAccounts | [**Yes**](./essentials/metrics-supported.md#microsoftautomationautomationaccounts) | [**Yes**](./essentials/resource-logs-categories.md#microsoftautomationautomationaccounts) | | |
- | [Azure VMware Solution](../azure-vmware/index.yml) | Microsoft.AVS/privateClouds | [**Yes**](./essentials/metrics-supported.md#microsoftavsprivateclouds) | [**Yes**](./essentials/resource-logs-categories.md#microsoftavsprivateclouds) | | |
+ | [Azure VMware Solution](../azure-vmware/index.yml) | Microsoft.AVS/privateClouds | [**Yes**](./essentials/metrics-supported.md) | [**Yes**](./essentials/resource-logs-categories.md) | | |
| [Azure Batch](../batch/index.yml) | Microsoft.Batch/batchAccounts | [**Yes**](./essentials/metrics-supported.md#microsoftbatchbatchaccounts) | [**Yes**](./essentials/resource-logs-categories.md#microsoftbatchbatchaccounts) | | |
| [Azure Batch](../batch/index.yml) | Microsoft.BatchAI/workspaces | No | No | | |
- | [Azure Cognitive Services- Bing Search API](../cognitive-services/bing-web-search/index.yml) | Microsoft.Bing/accounts | [**Yes**](./essentials/metrics-supported.md#microsoftbingaccounts) | No | | |
+ | [Azure Cognitive Services- Bing Search API](../cognitive-services/bing-web-search/index.yml) | Microsoft.Bing/accounts | [**Yes**](./essentials/metrics-supported.md#microsoftmapsaccounts) | No | | |
| [Azure Blockchain Service](../blockchain/workbench/index.yml) | Microsoft.Blockchain/blockchainMembers | [**Yes**](./essentials/metrics-supported.md) | [**Yes**](./essentials/resource-logs-categories.md) | | |
| [Azure Blockchain Service](../blockchain/workbench/index.yml) | Microsoft.Blockchain/cordaMembers | No | [**Yes**](./essentials/resource-logs-categories.md) | | |
| [Azure Bot Service](/azure/bot-service/) | Microsoft.BotService/botServices | [**Yes**](./essentials/metrics-supported.md#microsoftbotservicebotservices) | [**Yes**](./essentials/resource-logs-categories.md#microsoftbotservicebotservices) | | |
- | [Azure Cache for Redis](../azure-cache-for-redis/index.yml) | Microsoft.Cache/Redis | [**Yes**](./essentials/metrics-supported.md#microsoftcacheredis) | [**Yes**](./essentials/resource-logs-categories.md#microsoftcacheredis) | [Azure Monitor for Azure Cache for Redis (preview)](../azure-cache-for-redis/redis-cache-insights-overview.md) | |
+ | [Azure Cache for Redis](../azure-cache-for-redis/index.yml) | Microsoft.Cache/Redis | [**Yes**](./essentials/metrics-supported.md) | [**Yes**](./essentials/resource-logs-categories.md) | [Azure Monitor for Azure Cache for Redis (preview)](../azure-cache-for-redis/redis-cache-insights-overview.md) | |
| [Azure Cache for Redis](../azure-cache-for-redis/index.yml) | Microsoft.Cache/redisEnterprise | [**Yes**](./essentials/metrics-supported.md#microsoftcacheredisenterprise) | No | [Azure Monitor for Azure Cache for Redis (preview)](../azure-cache-for-redis/redis-cache-insights-overview.md) | |
| [Azure Content Delivery Network](../cdn/index.yml) | Microsoft.Cdn/CdnWebApplicationFirewallPolicies | [**Yes**](./essentials/metrics-supported.md#microsoftcdncdnwebapplicationfirewallpolicies) | [**Yes**](./essentials/resource-logs-categories.md#microsoftcdncdnwebapplicationfirewallpolicies) | | |
| [Azure Content Delivery Network](../cdn/index.yml) | Microsoft.Cdn/profiles | [**Yes**](./essentials/metrics-supported.md#microsoftcdnprofiles) | [**Yes**](./essentials/resource-logs-categories.md#microsoftcdnprofiles) | | |
| [Azure Files (Classic)](../storage/files/index.yml) | Microsoft.ClassicStorage/storageAccounts/fileServices | [**Yes**](./essentials/metrics-supported.md#microsoftclassicstoragestorageaccountsfileservices) | No | [Storage Insights](/azure/azure-monitor/insights/storage-insights-overview) | |
| [Azure Queue Storage (Classic)](../storage/queues/index.yml) | Microsoft.ClassicStorage/storageAccounts/queueServices | [**Yes**](./essentials/metrics-supported.md#microsoftclassicstoragestorageaccountsqueueservices) | No | [Storage Insights](/azure/azure-monitor/insights/storage-insights-overview) | |
| [Azure Table Storage (Classic)](../storage/tables/index.yml) | Microsoft.ClassicStorage/storageAccounts/tableServices | [**Yes**](./essentials/metrics-supported.md#microsoftclassicstoragestorageaccountstableservices) | No | [Storage Insights](/azure/azure-monitor/insights/storage-insights-overview) | |
- | Microsoft Cloud Test Platform | Microsoft.Cloudtest/hostedpools | [**Yes**](./essentials/metrics-supported.md#microsoftcloudtesthostedpools) | No | | |
- | Microsoft Cloud Test Platform | Microsoft.Cloudtest/pools | [**Yes**](./essentials/metrics-supported.md#microsoftcloudtestpools) | No | | |
- | [Cray ClusterStor in Azure](https://azure.microsoft.com/blog/supercomputing-in-the-cloud-announcing-three-new-cray-in-azure-offers/) | Microsoft.ClusterStor/nodes | [**Yes**](./essentials/metrics-supported.md#microsoftclusterstornodes) | No | | |
+ | Microsoft Cloud Test Platform | Microsoft.Cloudtest/hostedpools | [**Yes**](./essentials/metrics-supported.md) | No | | |
+ | Microsoft Cloud Test Platform | Microsoft.Cloudtest/pools | [**Yes**](./essentials/metrics-supported.md) | No | | |
+ | [Cray ClusterStor in Azure](https://azure.microsoft.com/blog/supercomputing-in-the-cloud-announcing-three-new-cray-in-azure-offers/) | Microsoft.ClusterStor/nodes | [**Yes**](./essentials/metrics-supported.md) | No | | |
| [Azure Cognitive Services](../cognitive-services/index.yml) | Microsoft.CognitiveServices/accounts | [**Yes**](./essentials/metrics-supported.md#microsoftcognitiveservicesaccounts) | [**Yes**](./essentials/resource-logs-categories.md#microsoftcognitiveservicesaccounts) | | |
- | [Azure Communication Services](../communication-services/index.yml) | Microsoft.Communication/CommunicationServices | [**Yes**](./essentials/metrics-supported.md#microsoftcommunicationcommunicationservices) | [**Yes**](./essentials/resource-logs-categories.md#microsoftcommunicationcommunicationservices) | | |
+ | [Azure Communication Services](../communication-services/index.yml) | Microsoft.Communication/CommunicationServices | [**Yes**](./essentials/metrics-supported.md) | [**Yes**](./essentials/resource-logs-categories.md) | | |
| [Azure Cloud Services](../cloud-services-extended-support/index.yml) | Microsoft.Compute/cloudServices | [**Yes**](./essentials/metrics-supported.md#microsoftcomputecloudservices) | No | | Agent required to monitor guest operating system and workflows.|
| [Azure Cloud Services](../cloud-services-extended-support/index.yml) | Microsoft.Compute/cloudServices/roles | [**Yes**](./essentials/metrics-supported.md#microsoftcomputecloudservicesroles) | No | | Agent required to monitor guest operating system and workflows.|
- | [Azure Virtual Machines](../virtual-machines/index.yml)<br />[Azure Virtual Machine Scale Sets](../virtual-machine-scale-sets/index.yml) | Microsoft.Compute/disks | [**Yes**](./essentials/metrics-supported.md#microsoftcomputedisks) | No | [VM Insights](/azure/azure-monitor/insights/vminsights-overview) | |
+ | [Azure Virtual Machines](../virtual-machines/index.yml)<br />[Azure Virtual Machine Scale Sets](../virtual-machine-scale-sets/index.yml) | Microsoft.Compute/disks | [**Yes**](./essentials/metrics-supported.md) | No | [VM Insights](/azure/azure-monitor/insights/vminsights-overview) | |
| [Azure Virtual Machines](../virtual-machines/index.yml)<br />[Azure Virtual Machine Scale Sets](../virtual-machine-scale-sets/index.yml) | Microsoft.Compute/virtualMachines | [**Yes**](./essentials/metrics-supported.md#microsoftcomputevirtualmachines) | No | [VM Insights](/azure/azure-monitor/insights/vminsights-overview) | Agent required to monitor guest operating system and workflows.|
| [Azure Virtual Machines](../virtual-machines/index.yml)<br />[Azure Virtual Machine Scale Sets](../virtual-machine-scale-sets/index.yml) | Microsoft.Compute/virtualMachineScaleSets | [**Yes**](./essentials/metrics-supported.md#microsoftcomputevirtualmachinescalesets) | No | [VM Insights](/azure/azure-monitor/insights/vminsights-overview) | Agent required to monitor guest operating system and workflows.|
| [Azure Virtual Machines](../virtual-machines/index.yml)<br />[Azure Virtual Machine Scale Sets](../virtual-machine-scale-sets/index.yml) | Microsoft.Compute/virtualMachineScaleSets/virtualMachines | [**Yes**](./essentials/metrics-supported.md#microsoftcomputevirtualmachinescalesetsvirtualmachines) | No | [VM Insights](/azure/azure-monitor/insights/vminsights-overview) | Agent required to monitor guest operating system and workflows.|
- | [Microsoft Connected Vehicle Platform](https://azure.microsoft.com/blog/microsoft-connected-vehicle-platform-trends-and-investment-areas/) | Microsoft.ConnectedVehicle/platformAccounts | [**Yes**](./essentials/metrics-supported.md#microsoftconnectedvehicleplatformaccounts) | [**Yes**](./essentials/resource-logs-categories.md#microsoftconnectedvehicleplatformaccounts) | | |
+ | [Microsoft Connected Vehicle Platform](https://azure.microsoft.com/blog/microsoft-connected-vehicle-platform-trends-and-investment-areas/) | Microsoft.ConnectedVehicle/platformAccounts | [**Yes**](./essentials/metrics-supported.md) | [**Yes**](./essentials/resource-logs-categories.md) | | |
| [Azure Container Instances](../container-instances/index.yml) | Microsoft.ContainerInstance/containerGroups | [**Yes**](./essentials/metrics-supported.md#microsoftcontainerinstancecontainergroups) | No | [Container Insights](/azure/azure-monitor/insights/container-insights-overview) | |
| [Azure Container Registry](../container-registry/index.yml) | Microsoft.ContainerRegistry/registries | [**Yes**](./essentials/metrics-supported.md#microsoftcontainerregistryregistries) | [**Yes**](./essentials/resource-logs-categories.md#microsoftcontainerregistryregistries) | | |
| [Azure Kubernetes Service](../aks/index.yml) | Microsoft.ContainerService/managedClusters | [**Yes**](./essentials/metrics-supported.md#microsoftcontainerservicemanagedclusters) | [**Yes**](./essentials/resource-logs-categories.md#microsoftcontainerservicemanagedclusters) | [Container Insights](/azure/azure-monitor/insights/container-insights-overview) | |
The following table lists Azure services and the data they collect into Azure Monitor.
| [Azure Database for MySQL](../mysql/index.yml) | Microsoft.DBforMySQL/flexibleServers | [**Yes**](./essentials/metrics-supported.md#microsoftdbformysqlflexibleservers) | [**Yes**](./essentials/resource-logs-categories.md#microsoftdbformysqlflexibleservers) | | | | [Azure Database for MySQL](../mysql/index.yml) | Microsoft.DBforMySQL/servers | [**Yes**](./essentials/metrics-supported.md#microsoftdbformysqlservers) | [**Yes**](./essentials/resource-logs-categories.md#microsoftdbformysqlservers) | | | | [Azure Database for PostgreSQL](../postgresql/index.yml) | Microsoft.DBforPostgreSQL/flexibleServers | [**Yes**](./essentials/metrics-supported.md#microsoftdbforpostgresqlflexibleservers) | [**Yes**](./essentials/resource-logs-categories.md#microsoftdbforpostgresqlflexibleservers) | | |
- | [Azure Database for PostgreSQL](../postgresql/index.yml) | Microsoft.DBforPostgreSQL/serverGroupsv2 | [**Yes**](./essentials/metrics-supported.md#microsoftdbforpostgresqlservergroupsv2) | [**Yes**](./essentials/resource-logs-categories.md#microsoftdbforpostgresqlservergroupsv2) | | |
+ | [Azure Database for PostgreSQL](../postgresql/index.yml) | Microsoft.DBforPostgreSQL/serverGroupsv2 | [**Yes**](./essentials/metrics-supported.md#microsoftdbforpostgresqlserversv2) | [**Yes**](./essentials/resource-logs-categories.md#microsoftdbforpostgresqlserversv2) | | |
| [Azure Database for PostgreSQL](../postgresql/index.yml) | Microsoft.DBforPostgreSQL/servers | [**Yes**](./essentials/metrics-supported.md#microsoftdbforpostgresqlservers) | [**Yes**](./essentials/resource-logs-categories.md#microsoftdbforpostgresqlservers) | | | | [Azure Database for PostgreSQL](../postgresql/index.yml) | Microsoft.DBforPostgreSQL/serversv2 | [**Yes**](./essentials/metrics-supported.md#microsoftdbforpostgresqlserversv2) | [**Yes**](./essentials/resource-logs-categories.md#microsoftdbforpostgresqlserversv2) | | | | [Microsoft Azure Virtual Desktop](../virtual-desktop/index.yml) | Microsoft.DesktopVirtualization/applicationgroups | No | [**Yes**](./essentials/resource-logs-categories.md#microsoftdesktopvirtualizationapplicationgroups) | [Azure Virtual Desktop Insights](../virtual-desktop/azure-monitor.md) | |
| [Azure Event Grid](../event-grid/index.yml) | Microsoft.EventGrid/topics | [**Yes**](./essentials/metrics-supported.md#microsofteventgridtopics) | [**Yes**](./essentials/resource-logs-categories.md#microsofteventgridtopics) | | | | [Azure Event Hubs](../event-hubs/index.yml) | Microsoft.EventHub/clusters | [**Yes**](./essentials/metrics-supported.md#microsofteventhubclusters) | No | 0 | | | [Azure Event Hubs](../event-hubs/index.yml) | Microsoft.EventHub/namespaces | [**Yes**](./essentials/metrics-supported.md#microsofteventhubnamespaces) | [**Yes**](./essentials/resource-logs-categories.md#microsofteventhubnamespaces) | 0 | |
- | [Microsoft Experimentation Platform](https://www.microsoft.com/research/group/experimentation-platform-exp/) | microsoft.experimentation/experimentWorkspaces | [**Yes**](./essentials/metrics-supported.md) | [**Yes**](./essentials/resource-logs-categories.md#microsoftexperimentationexperimentworkspaces) | | |
+ | [Microsoft Experimentation Platform](https://www.microsoft.com/research/group/experimentation-platform-exp/) | microsoft.experimentation/experimentWorkspaces | [**Yes**](./essentials/metrics-supported.md) | [**Yes**](./essentials/resource-logs-categories.md) | | |
| [Azure HDInsight](../hdinsight/index.yml) | Microsoft.HDInsight/clusters | [**Yes**](./essentials/metrics-supported.md#microsofthdinsightclusters) | No | [Azure HDInsight (preview)](../hdinsight/log-analytics-migration.md#insights) | | | [Azure API for FHIR](../healthcare-apis/index.yml) | Microsoft.HealthcareApis/services | [**Yes**](./essentials/metrics-supported.md#microsofthealthcareapisservices) | [**Yes**](./essentials/resource-logs-categories.md#microsofthealthcareapisservices) | | | | [Azure API for FHIR](../healthcare-apis/index.yml) | Microsoft.HealthcareApis/workspaces/iotconnectors | [**Yes**](./essentials/metrics-supported.md#microsofthealthcareapisworkspacesiotconnectors) | No | | |
- | [Azure StorSimple](../storsimple/index.yml) | microsoft.hybridnetwork/networkfunctions | [**Yes**](./essentials/metrics-supported.md#microsofthybridnetworknetworkfunctions) | No | | |
- | [Azure StorSimple](../storsimple/index.yml) | microsoft.hybridnetwork/virtualnetworkfunctions | [**Yes**](./essentials/metrics-supported.md#microsofthybridnetworkvirtualnetworkfunctions) | No | | |
+ | [Azure StorSimple](../storsimple/index.yml) | microsoft.hybridnetwork/networkfunctions | [**Yes**](./essentials/metrics-supported.md) | No | | |
+ | [Azure StorSimple](../storsimple/index.yml) | microsoft.hybridnetwork/virtualnetworkfunctions | [**Yes**](./essentials/metrics-supported.md) | No | | |
| [Azure Monitor](./index.yml) | microsoft.insights/autoscalesettings | [**Yes**](./essentials/metrics-supported.md#microsoftinsightsautoscalesettings) | [**Yes**](./essentials/resource-logs-categories.md#microsoftinsightsautoscalesettings) | | | | [Azure Monitor](./index.yml) | microsoft.insights/components | [**Yes**](./essentials/metrics-supported.md#microsoftinsightscomponents) | [**Yes**](./essentials/resource-logs-categories.md#microsoftinsightscomponents) | [Azure Monitor Application Insights](./app/app-insights-overview.md) | | | [Azure IoT Central](../iot-central/index.yml) | Microsoft.IoTCentral/IoTApps | [**Yes**](./essentials/metrics-supported.md#microsoftiotcentraliotapps) | No | | | | [Azure Key Vault](../key-vault/index.yml) | Microsoft.KeyVault/managedHSMs | [**Yes**](./essentials/metrics-supported.md#microsoftkeyvaultmanagedhsms) | [**Yes**](./essentials/resource-logs-categories.md#microsoftkeyvaultmanagedhsms) | [Azure Key Vault Insights (preview)](../key-vault/key-vault-insights-overview.md) | | | [Azure Key Vault](../key-vault/index.yml) | Microsoft.KeyVault/vaults | [**Yes**](./essentials/metrics-supported.md#microsoftkeyvaultvaults) | [**Yes**](./essentials/resource-logs-categories.md#microsoftkeyvaultvaults) | [Azure Key Vault Insights (preview)](../key-vault/key-vault-insights-overview.md) | |
- | [Azure Kubernetes Service](../aks/index.yml) | Microsoft.Kubernetes/connectedClusters | [**Yes**](./essentials/metrics-supported.md#microsoftkubernetesconnectedclusters) | No | | |
+ | [Azure Kubernetes Service](../aks/index.yml) | Microsoft.Kubernetes/connectedClusters | [**Yes**](./essentials/metrics-supported.md) | No | | |
| [Azure Data Explorer](/azure/data-explorer/) | Microsoft.Kusto/clusters | [**Yes**](./essentials/metrics-supported.md#microsoftkustoclusters) | [**Yes**](./essentials/resource-logs-categories.md#microsoftkustoclusters) | | | | [Azure Logic Apps](../logic-apps/index.yml) | Microsoft.Logic/integrationAccounts | No | [**Yes**](./essentials/resource-logs-categories.md#microsoftlogicintegrationaccounts) | | | | [Azure Logic Apps](../logic-apps/index.yml) | Microsoft.Logic/integrationServiceEnvironments | [**Yes**](./essentials/metrics-supported.md#microsoftlogicintegrationserviceenvironments) | No | | |
| [Azure Media Services](/azure/media-services/) | Microsoft.Media/mediaservices | [**Yes**](./essentials/metrics-supported.md#microsoftmediamediaservices) | | | | [Azure Media Services](/azure/media-services/) | Microsoft.Media/mediaservices/liveEvents | [**Yes**](./essentials/metrics-supported.md#microsoftmediamediaservicesliveevents) | No | | | | [Azure Media Services](/azure/media-services/) | Microsoft.Media/mediaservices/streamingEndpoints | [**Yes**](./essentials/metrics-supported.md#microsoftmediamediaservicesstreamingendpoints) | No | | |
- | [Azure Media Services](/azure/media-services/) | Microsoft.Media/videoAnalyzers | [**Yes**](./essentials/metrics-supported.md#microsoftmediavideoanalyzers) | | |
+ | [Azure Media Services](/azure/media-services/) | Microsoft.Media/videoAnalyzers | [**Yes**](./essentials/metrics-supported.md) | | |
| [Azure Spatial Anchors](../spatial-anchors/index.yml) | Microsoft.MixedReality/remoteRenderingAccounts | [**Yes**](./essentials/metrics-supported.md#microsoftmixedrealityremoterenderingaccounts) | No | | | | [Azure Spatial Anchors](../spatial-anchors/index.yml) | Microsoft.MixedReality/spatialAnchorsAccounts | [**Yes**](./essentials/metrics-supported.md#microsoftmixedrealityspatialanchorsaccounts) | No | | | | [Azure NetApp Files](../azure-netapp-files/index.yml) | Microsoft.NetApp/netAppAccounts/capacityPools | [**Yes**](./essentials/metrics-supported.md#microsoftnetappnetappaccountscapacitypools) | No | | | | [Azure NetApp Files](../azure-netapp-files/index.yml) | Microsoft.NetApp/netAppAccounts/capacityPools/volumes | [**Yes**](./essentials/metrics-supported.md#microsoftnetappnetappaccountscapacitypoolsvolumes) | No | | | | [Azure Application Gateway](../application-gateway/index.yml) | Microsoft.Network/applicationGateways | [**Yes**](./essentials/metrics-supported.md#microsoftnetworkapplicationgateways) | [**Yes**](./essentials/resource-logs-categories.md#microsoftnetworkapplicationgateways) | | | | [Azure Firewall](../firewall/index.yml) | Microsoft.Network/azureFirewalls | [**Yes**](./essentials/metrics-supported.md#microsoftnetworkazurefirewalls) | [**Yes**](./essentials/resource-logs-categories.md#microsoftnetworkazurefirewalls) | | |
- | [Azure Bastion](../bastion/index.yml) | Microsoft.Network/bastionHosts | [**Yes**](./essentials/metrics-supported.md#microsoftnetworkbastionhosts) | [**Yes**](./essentials/resource-logs-categories.md#microsoftnetworkbastionhosts) | | |
+ | [Azure Bastion](../bastion/index.yml) | Microsoft.Network/bastionHosts | [**Yes**](./essentials/metrics-supported.md) | [**Yes**](./essentials/resource-logs-categories.md) | | |
| [Azure VPN Gateway](../vpn-gateway/index.yml) | Microsoft.Network/connections | [**Yes**](./essentials/metrics-supported.md#microsoftnetworkconnections) | No | | | | [Azure DNS](../dns/index.yml) | Microsoft.Network/dnszones | [**Yes**](./essentials/metrics-supported.md#microsoftnetworkdnszones) | No | | | | [Azure ExpressRoute](../expressroute/index.yml) | Microsoft.Network/expressRouteCircuits | [**Yes**](./essentials/metrics-supported.md#microsoftnetworkexpressroutecircuits) | [**Yes**](./essentials/resource-logs-categories.md#microsoftnetworkexpressroutecircuits) | | |
| [Azure Private Link](../private-link/private-link-overview.md) | Microsoft.Network/privateLinkServices | [**Yes**](./essentials/metrics-supported.md#microsoftnetworkprivatelinkservices) | No | | | | [Azure Virtual Network](../virtual-network/index.yml) | Microsoft.Network/publicIPAddresses | [**Yes**](./essentials/metrics-supported.md#microsoftnetworkpublicipaddresses) | [**Yes**](./essentials/resource-logs-categories.md#microsoftnetworkpublicipaddresses) | [Azure Network Insights](../network-watcher/network-insights-overview.md) | | | [Azure Traffic Manager](../traffic-manager/traffic-manager-overview.md) | Microsoft.Network/trafficmanagerprofiles | [**Yes**](./essentials/metrics-supported.md#microsoftnetworktrafficmanagerprofiles) | [**Yes**](./essentials/resource-logs-categories.md#microsoftnetworktrafficmanagerprofiles) | | |
- | [Azure Virtual WAN](../virtual-wan/virtual-wan-about.md) | Microsoft.Network/virtualHubs | [**Yes**](./essentials/metrics-supported.md#microsoftnetworkvirtualhubs) | No | | |
+ | [Azure Virtual WAN](../virtual-wan/virtual-wan-about.md) | Microsoft.Network/virtualHubs | [**Yes**](./essentials/metrics-supported.md) | No | | |
| [Azure VPN Gateway](../vpn-gateway/index.yml) | Microsoft.Network/virtualNetworkGateways | [**Yes**](./essentials/metrics-supported.md#microsoftnetworkvirtualnetworkgateways) | [**Yes**](./essentials/resource-logs-categories.md#microsoftnetworkvirtualnetworkgateways) | | | | [Azure Virtual Network](../virtual-network/index.yml) | Microsoft.Network/virtualNetworks | [**Yes**](./essentials/metrics-supported.md#microsoftnetworkvirtualnetworks) | [**Yes**](./essentials/resource-logs-categories.md#microsoftnetworkvirtualnetworks) | [Azure Network Insights](../network-watcher/network-insights-overview.md) | | | [Azure Virtual Network](../virtual-network/index.yml) | Microsoft.Network/virtualRouters | [**Yes**](./essentials/metrics-supported.md#microsoftnetworkvirtualrouters) | No | | |
| [Microsoft Power BI](/power-bi/power-bi-overview) | Microsoft.PowerBI/tenants/workspaces | No | [**Yes**](./essentials/resource-logs-categories.md#microsoftpowerbitenantsworkspaces) | | | | [Power BI Embedded](/azure/power-bi-embedded/) | Microsoft.PowerBIDedicated/capacities | [**Yes**](./essentials/metrics-supported.md#microsoftpowerbidedicatedcapacities) | [**Yes**](./essentials/resource-logs-categories.md#microsoftpowerbidedicatedcapacities) | | | | [Microsoft Purview](../purview/index.yml) | Microsoft.Purview/accounts | [**Yes**](./essentials/metrics-supported.md#microsoftpurviewaccounts) | [**Yes**](./essentials/resource-logs-categories.md#microsoftpurviewaccounts) | | |
- | [Azure Site Recovery](../site-recovery/index.yml) | Microsoft.RecoveryServices/vaults | [**Yes**](./essentials/metrics-supported.md#microsoftrecoveryservicesvaults) | [**Yes**](./essentials/resource-logs-categories.md#microsoftrecoveryservicesvaults) | | |
+ | [Azure Site Recovery](../site-recovery/index.yml) | Microsoft.RecoveryServices/vaults | [**Yes**](./essentials/metrics-supported.md) | [**Yes**](./essentials/resource-logs-categories.md) | | |
| [Azure Relay](../azure-relay/relay-what-is-it.md) | Microsoft.Relay/namespaces | [**Yes**](./essentials/metrics-supported.md#microsoftrelaynamespaces) | [**Yes**](./essentials/resource-logs-categories.md#microsoftrelaynamespaces) | | |
- | [Azure Resource Manager](../azure-resource-manager/index.yml) | Microsoft.Resources/subscriptions | [**Yes**](./essentials/metrics-supported.md#microsoftresourcessubscriptions) | No | | |
+ | [Azure Resource Manager](../azure-resource-manager/index.yml) | Microsoft.Resources/subscriptions | [**Yes**](./essentials/metrics-supported.md) | No | | |
| [Azure Cognitive Search](../search/index.yml) | Microsoft.Search/searchServices | [**Yes**](./essentials/metrics-supported.md#microsoftsearchsearchservices) | [**Yes**](./essentials/resource-logs-categories.md#microsoftsearchsearchservices) | | | | [Azure Service Bus](/azure/service-bus/) | Microsoft.ServiceBus/namespaces | [**Yes**](./essentials/metrics-supported.md#microsoftservicebusnamespaces) | [**Yes**](./essentials/resource-logs-categories.md#microsoftservicebusnamespaces) | [Azure Service Bus](/azure/service-bus/) | | | [Azure Service Fabric](../service-fabric/index.yml) | Microsoft.ServiceFabric | No | No | [Service Fabric](../service-fabric/index.yml) | Agent required to monitor guest operating system and workflows.|
| [Azure Synapse Analytics](/azure/sql-data-warehouse/) | Microsoft.Synapse/workspaces/sqlPools | [**Yes**](./essentials/metrics-supported.md#microsoftsynapseworkspacessqlpools) | [**Yes**](./essentials/resource-logs-categories.md#microsoftsynapseworkspacessqlpools) | | | | [Azure Time Series Insights](../time-series-insights/index.yml) | Microsoft.TimeSeriesInsights/environments | [**Yes**](./essentials/metrics-supported.md#microsofttimeseriesinsightsenvironments) | [**Yes**](./essentials/resource-logs-categories.md#microsofttimeseriesinsightsenvironments) | | | | [Azure Time Series Insights](../time-series-insights/index.yml) | Microsoft.TimeSeriesInsights/environments/eventsources | [**Yes**](./essentials/metrics-supported.md#microsofttimeseriesinsightsenvironmentseventsources) | [**Yes**](./essentials/resource-logs-categories.md#microsofttimeseriesinsightsenvironmentseventsources) | | |
- | [Azure VMware Solution](../azure-vmware/index.yml) | Microsoft.VMwareCloudSimple/virtualMachines | [**Yes**](./essentials/metrics-supported.md#microsoftvmwarecloudsimplevirtualmachines) | No | | |
+ | [Azure VMware Solution](../azure-vmware/index.yml) | Microsoft.VMwareCloudSimple/virtualMachines | [**Yes**](./essentials/metrics-supported.md) | No | | |
| [Azure App Service](../app-service/index.yml)<br />[Azure Functions](../azure-functions/index.yml) | Microsoft.Web/connections | [**Yes**](./essentials/metrics-supported.md#microsoftwebconnections) | No | | | | [Azure App Service](../app-service/index.yml)<br />[Azure Functions](../azure-functions/index.yml) | Microsoft.Web/hostingEnvironments | [**Yes**](./essentials/metrics-supported.md#microsoftwebhostingenvironments) | [**Yes**](./essentials/resource-logs-categories.md#microsoftwebhostingenvironments) | [Azure Monitor Application Insights](./app/app-insights-overview.md) | | | [Azure App Service](../app-service/index.yml)<br />[Azure Functions](../azure-functions/index.yml) | Microsoft.Web/hostingEnvironments/multiRolePools | [**Yes**](./essentials/metrics-supported.md#microsoftwebhostingenvironmentsmultirolepools) | No | [Azure Monitor Application Insights](./app/app-insights-overview.md) | |
azure-monitor Workbooks Commonly Used Components https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/visualize/workbooks-commonly-used-components.md
You can summarize status by using a simple visual indication instead of presenti
The following example shows how to set up a traffic light icon per computer based on the CPU utilization metric. 1. [Create a new empty workbook](workbooks-create-workbook.md).
-1. [Add a parameter](workbooks-create-workbook.md#add-a-parameter-to-an-azure-workbook), make it a [time range parameter](workbooks-time.md), and name it **TimeRange**.
+1. [Add a parameter](workbooks-create-workbook.md#add-parameters), make it a [time range parameter](workbooks-time.md), and name it **TimeRange**.
1. Select **Add query** to add a log query control to the workbook. 1. For **Query type**, select `Logs`, and for **Resource type**, select `Log Analytics`. Select a Log Analytics workspace in your subscription that has VM performance data as a resource. 1. In the query editor, enter:
The following example shows how to enable this scenario. Let's say you want the
### Set up parameters
-1. [Create a new empty workbook](workbooks-create-workbook.md) and [add a parameter component](workbooks-create-workbook.md#add-a-parameter-to-an-azure-workbook).
+1. [Create a new empty workbook](workbooks-create-workbook.md) and [add a parameter component](workbooks-create-workbook.md#add-parameters).
1. Select **Add parameter** to create a new parameter. Use the following settings: - **Parameter name**: `OsFilter` - **Display name**: `Operating system`
azure-monitor Workbooks Create Workbook https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/visualize/workbooks-create-workbook.md
This video walks you through creating workbooks.
To create a new Azure workbook: 1. From the Azure Workbooks page, select an empty template or select **New** in the top toolbar. 1. Combine any of these elements to add to your workbook:
- - [Text](#adding-text)
- - [Parameters](#adding-parameters)
- - [Queries](#adding-queries)
- - [Metric charts](#adding-metric-charts)
- - [Links](#adding-links)
- - [Groups](#adding-groups)
+ - [Text](#add-text)
+ - [Parameters](#add-parameters)
+ - [Queries](#add-queries)
+ - [Metric charts](#add-metric-charts)
+ - [Links](#add-links)
+ - [Groups](#add-groups)
- Configuration options
-## Adding text
+## Add text
Workbooks allow authors to include text blocks in their workbooks. The text can be human analysis of the telemetry, information to help users interpret the data, section headings, etc.
Text is added through a markdown control into which an author can add their cont
**Preview mode**: :::image type="content" source="media/workbooks-create-workbook/workbooks-text-control-edit-mode-preview.png" alt-text="Screenshot showing adding text to a workbook in preview mode.":::
-### Add text to an Azure workbook
+To add text to an Azure workbook:
1. Make sure you are in **Edit** mode by selecting **Edit** in the toolbar. Add text by doing either of these steps: - Select **Add**, and then **Add text** below an existing element, or at the bottom of the workbook.
You can also choose a text parameter as the source of the style. The parameter v
**Warning style example**: :::image type="content" source="media/workbooks-create-workbook/workbooks-text-example-warning.png" alt-text="Screenshot of a text visualization in warning style.":::
-## Adding queries
+## Add queries
Azure Workbooks allow you to query any of the supported workbook [data sources](workbooks-data-sources.md). For example, you can query Azure Resource Health to help you view any service problems affecting your resources. You can also query Azure Monitor metrics, which are numeric data collected at regular intervals. Azure Monitor metrics provide information about an aspect of a system at a particular time.
-### Add a query to an Azure Workbook
+To add a query to an Azure Workbook:
1. Make sure you are in **Edit** mode by selecting **Edit** in the toolbar. Add a query by doing either of these steps: - Select **Add**, and then **Add query** below an existing element, or at the bottom of the workbook.
For example, you can query Azure Resource Health to help you view any service pr
``` In this case, the query returns no rows if the **AzureDiagnostics** table is missing, or if the **ResourceId** column is missing from the table.
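The defensive pattern described above, returning no rows instead of failing when a table or column is absent, can be sketched in Python. The dataset and function names below are hypothetical stand-ins, not a real Log Analytics API:

```python
# Illustrative sketch of the defensive-query pattern: return an empty result
# instead of raising when the table or column is missing.
# "tables" is a hypothetical in-memory stand-in for a workspace.
def safe_select(tables: dict, table: str, column: str) -> list:
    rows = tables.get(table, [])  # missing table -> no rows
    # missing column -> that row contributes nothing, so no rows overall
    return [row[column] for row in rows if column in row]

data = {"AzureDiagnostics": [{"ResourceId": "/subscriptions/example"}]}
safe_select(data, "AzureDiagnostics", "ResourceId")  # ["/subscriptions/example"]
safe_select(data, "MissingTable", "ResourceId")      # []
```

Downstream consumers of the result then render nothing rather than erroring, which mirrors how the workbook query behaves when **AzureDiagnostics** or **ResourceId** is absent.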
-## Adding parameters
+## Add parameters
-You can collect input from consumers and reference it in other parts of the workbook using parameters. Often, you would use parameters to scope the result set or to set the right visual. Parameters help you build interactive reports and experiences.
+You can collect input from consumers and reference it in other parts of the workbook using parameters. Use parameters to scope the result set or to set the right visual. Parameters help you build interactive reports and experiences. For more information on how parameters can be used, see [workbook parameters](workbooks-parameters.md).
Workbooks allow you to control how your parameter controls are presented to consumers: text box vs. drop-down, single- vs. multi-select, values from text, JSON, KQL, or Azure Resource Graph, and so on.
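As a rough model of how a consumer's input feeds back into the workbook, a query can reference a parameter by name and have the selected value expanded into the query text. This Python sketch only mimics that substitution; the placeholder syntax, query, and values here are illustrative:

```python
# Sketch of parameter substitution: a query references a parameter by name,
# and the value the consumer picks is expanded into the query text.
def expand_parameters(query: str, params: dict) -> str:
    for name, value in params.items():
        query = query.replace("{" + name + "}", str(value))
    return query

expand_parameters(
    "Perf | where TimeGenerated {TimeRange}",
    {"TimeRange": "> ago(1h)"},
)
# -> "Perf | where TimeGenerated > ago(1h)"
```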
-### Add a parameter to an Azure Workbook
+Watch this video to learn how to use parameters and log data in Azure Workbooks.
+> [!VIDEO https://www.microsoft.com/en-us/videoplayer/embed/RE59Wee]
+
+To add a parameter to an Azure Workbook:
1. Make sure you are in **Edit** mode by selecting **Edit** in the toolbar. Add a parameter by doing either of these steps: - Select **Add**, and then **Add parameter** below an existing element, or at the bottom of the workbook.
Workbooks allow you to control how your parameter controls are presented to cons
:::image type="content" source="media/workbooks-parameters/workbooks-time-settings.png" alt-text="Screenshot showing the creation of a time range parameter.":::
-## Adding metric charts
+## Add metric charts
Most Azure resources emit metric data about state and health such as CPU utilization, storage availability, count of database transactions, failing app requests, etc. Using workbooks, you can create visualizations of the metric data as time-series charts.
The example below shows the number of transactions in a storage account over the
:::image type="content" source="media/workbooks-create-workbook/workbooks-metric-chart-storage-area.png" alt-text="Screenshot showing a metric area chart for storage transactions in a workbook.":::
-### Add a metric chart to an Azure Workbook
+To add a metric chart to an Azure Workbook:
1. Make sure you are in **Edit** mode by selecting **Edit** in the toolbar. Add a metric chart by doing either of these steps: - Select **Add**, and then **Add metric** below an existing element, or at the bottom of the workbook.
This is a metric chart in edit mode:
:::image type="content" source="media/workbooks-create-workbook/workbooks-metric-chart-storage-scatter.png" alt-text="Screenshot showing a metric scatter chart for storage latency.":::
-## Adding links
+## Add links
You can use links to link to other views, workbooks, or items inside a workbook, or to create tabbed views within a workbook. Links can be styled as hyperlinks, buttons, or tabs. :::image type="content" source="media/workbooks-create-workbook/workbooks-empty-links.png" alt-text="Screenshot of adding a link to a workbook.":::
+Watch this video to learn how to use tabs, groups, and contextual links in Azure Workbooks:
+> [!VIDEO https://www.microsoft.com/en-us/videoplayer/embed/RE59YTe]
### Link styles You can apply styles to the link element itself and to individual links.
You can apply styles to the link element itself and to individual links.
|List |:::image type="content" source="media/workbooks-create-workbook/workbooks-link-style-list.png" alt-text="Screenshot of list style workbook link."::: | Links appear as a list of links, with no bullets. | |Paragraph | :::image type="content" source="media/workbooks-create-workbook/workbooks-link-style-paragraph.png" alt-text="Screenshot of paragraph style workbook link."::: |Links appear as a paragraph of links, wrapped like a paragraph of text. | |Navigation | :::image type="content" source="media/workbooks-create-workbook/workbooks-link-style-navigation.png" alt-text="Screenshot of navigation style workbook link."::: | Links appear as links, with vertical dividers, or pipes (`|`) between each link. |
-|Tabs | :::image type="content" source="media/workbooks-create-workbook/workbooks-link-style-tabs.png" alt-text="Screenshot of tabs style workbook link."::: |Links appear as tabs. Each link appears as a tab, no link styling options apply to individual links. See the [tabs](#using-tabs) section below for how to configure tabs. |
-|Toolbar | :::image type="content" source="media/workbooks-create-workbook/workbooks-link-style-toolbar.png" alt-text="Screenshot of toolbar style workbook link."::: | Links appear an Azure portal styled toolbar, with icons and text. Each link appears as a toolbar button. See the [toolbar](#using-toolbars) section below for how to configure toolbars. |
+|Tabs | :::image type="content" source="media/workbooks-create-workbook/workbooks-link-style-tabs.png" alt-text="Screenshot of tabs style workbook link."::: |Links appear as tabs. Each link appears as a tab, no link styling options apply to individual links. See the [tabs](#tabs) section below for how to configure tabs. |
+|Toolbar | :::image type="content" source="media/workbooks-create-workbook/workbooks-link-style-toolbar.png" alt-text="Screenshot of toolbar style workbook link."::: | Links appear an Azure portal styled toolbar, with icons and text. Each link appears as a toolbar button. See the [toolbar](#toolbars) section below for how to configure toolbars. |
**Link styles**
Links can use all of the link actions available in [link actions](workbooks-link
|Set a parameter value | A parameter can be set to a value when selecting a link, button, or tab. Tabs are often configured to set a parameter to a value, which hides and shows other parts of the workbook based on that value.| |Scroll to a step| When selecting a link, the workbook will move focus and scroll to make another step visible. This action can be used to create a "table of contents", or a "go back to the top" style experience. |
-### Using tabs
+### Tabs
Most of the time, tab links are combined with the **Set a parameter value** action. Here's an example showing the links step configured to create two tabs, where selecting either tab sets a **selectedTab** parameter to a different value (the example shows a third tab being edited to show the parameter name and parameter value placeholders):
- The first tab is selected by default, invoking whatever action that tab has specified. If the first tab's action opens another view, a view appears as soon as the tabs are created. - You can use tabs to open other views, but use this functionality sparingly, since most users won't expect to navigate by selecting a tab. If other tabs are setting a parameter to a specific value, a tab that opens a view wouldn't change that value, so the rest of the workbook content will continue to show the view or data for the previous tab.
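The tab behavior described above, where selecting a tab sets a parameter and other items show or hide based on its value, can be modeled in a few lines of Python. The state shape and names here are hypothetical, chosen only to illustrate the mechanism:

```python
# Hypothetical model of tab selection: choosing a tab sets the selectedTab
# parameter, and each workbook item is visible only when the parameter matches
# the value that item requires.
def select_tab(state: dict, value: str) -> dict:
    state["selectedTab"] = value
    return state

def is_visible(state: dict, required_value: str) -> bool:
    return state.get("selectedTab") == required_value

state = select_tab({}, "tab1")
is_visible(state, "tab1")  # True
is_visible(state, "tab2")  # False
```

This also shows why a tab that opens a view instead of setting the parameter leaves the rest of the workbook unchanged: the parameter keeps its previous value.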
-### Using toolbars
+### Toolbars
Use the Toolbar style to have your links appear styled as a toolbar. In toolbar style, the author must fill in fields for:
If any required parameters are used in button text, tooltip text, or value field
A sample workbook with toolbars, globals parameters, and ARM Actions is available in [sample Azure Workbooks with links](workbooks-sample-links.md#sample-workbook-with-toolbar-links).
-## Adding groups
+## Add groups
A group item lets you logically group a set of steps in a workbook.
Groups in workbooks are useful for several things:
- **Visibility**: When you want several items to hide or show together, you can set the visibility of the entire group of items, instead of setting visibility settings on each individual item. This can be useful in templates that use tabs, as you can use a group as the content of the tab, and the entire group can be hidden/shown based on a parameter set by the selected tab. - **Performance**: When you have a large template with many sections or tabs, you can convert each section into its own subtemplate, and use groups to load all the subtemplates within the top-level template. The content of the subtemplates won't load or run until a user makes those groups visible. Learn more about [how to split a large template into many templates](#splitting-a-large-template-into-many-templates).
-### Add a group to your workbook
+To add a group to your workbook:
1. Make sure you are in **Edit** mode by selecting **Edit** in the toolbar. Add a group by doing either of these steps: - Select **Add**, and then **Add group** below an existing element, or at the bottom of the workbook.
azure-monitor Workbooks Graph Visualizations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/visualize/workbooks-graph-visualizations.md
The following graph shows data flowing in and out of a computer via various port
[![Screenshot that shows a tile summary view.](./media/workbooks-graph-visualizations/graph.png)](./media/workbooks-graph-visualizations/graph.png#lightbox)
+Watch this video to learn how to create graphs and use links in Azure Workbooks.
+> [!VIDEO https://www.microsoft.com/en-us/videoplayer/embed/RE5ah5O]
+ ## Add a graph 1. Switch the workbook to edit mode by selecting **Edit**.
azure-resource-manager Bicep Functions Resource https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/bicep-functions-resource.md
The possible uses of `list*` are shown in the following table.
| Microsoft.Logic/workflows/versions/triggers | [listCallbackUrl](/rest/api/logic/workflowversions/listcallbackurl) | | Microsoft.MachineLearning/webServices | [listkeys](/rest/api/machinelearning/webservices/listkeys) | | Microsoft.MachineLearning/Workspaces | listworkspacekeys |
-| Microsoft.MachineLearningServices/workspaces/computes | [listKeys](/rest/api/azureml/2022-10-01/compute/list-keys) |
-| Microsoft.MachineLearningServices/workspaces/computes | [listNodes](/rest/api/azureml/2022-10-01/compute/list-nodes) |
-| Microsoft.MachineLearningServices/workspaces | [listKeys](/rest/api/azureml/2022-10-01/workspaces/list-keys) |
+| Microsoft.MachineLearningServices/workspaces/computes | [listKeys](/rest/api/azureml/compute/list-keys) |
+| Microsoft.MachineLearningServices/workspaces/computes | [listNodes](/rest/api/azureml/compute/list-nodes) |
+| Microsoft.MachineLearningServices/workspaces | [listKeys](/rest/api/azureml/workspaces/list-keys) |
| Microsoft.Maps/accounts | [listKeys](/rest/api/maps-management/accounts/listkeys) | | Microsoft.Media/mediaservices/assets | [listContainerSas](/rest/api/media/assets/listcontainersas) | | Microsoft.Media/mediaservices/assets | [listStreamingLocators](/rest/api/media/assets/liststreaminglocators) |
azure-resource-manager Resource Name Rules https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/resource-name-rules.md
In the following tables, the term alphanumeric refers to:
> | locks | scope of assignment | 1-90 | Alphanumerics, periods, underscores, hyphens, and parenthesis.<br><br>Can't end in period. |
> | policyAssignments | scope of assignment | 1-128 display name<br><br>1-64 resource name<br><br>1-24 resource name at management group scope | Display name can contain any characters.<br><br>Resource name can't use:<br>`<>*%&:\?.+/` or control characters. <br><br>Can't end with period or space. |
> | policyDefinitions | scope of definition | 1-128 display name<br><br>1-64 resource name | Display name can contain any characters.<br><br>Resource name can't use:<br>`<>*%&:\?.+/` or control characters. <br><br>Can't end with period or space. |
+> | policyExemptions | scope of exemption | 1-128 display name<br><br>1-64 resource name | Display name can contain any characters.<br><br>Resource name can't use:<br>`<>*%&:\?.+/` or control characters. <br><br>Can't end with period or space. |
> | policySetDefinitions | scope of definition | 1-128 display name<br><br>1-64 resource name | Display name can contain any characters.<br><br>Resource name can't use:<br>`<>*%&:\?.+/` or control characters. <br><br>Can't end with period or space. |
> | roleAssignments | tenant | 36 | Must be a globally unique identifier (GUID). |
> | roleDefinitions | tenant | 36 | Must be a globally unique identifier (GUID). |
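Rules like these can be validated client-side before a deployment is submitted. The following is a minimal illustrative sketch, not an official SDK API: the function names and the rule subset mirrored here (policy resource-name length and forbidden characters, plus the GUID requirement for roleAssignments and roleDefinitions) are assumptions drawn only from the table above.

```python
import uuid

# Characters the rules table forbids in policy resource names (assumption:
# mirrored verbatim from the table above).
_FORBIDDEN = set('<>*%&:\\?.+/')

def is_valid_policy_resource_name(name: str, max_len: int = 64) -> bool:
    """Illustrative check of a policy resource name against the table's rules."""
    if not 1 <= len(name) <= max_len:
        return False
    # Reject forbidden characters and control characters.
    if any(c in _FORBIDDEN or ord(c) < 32 for c in name):
        return False
    # Names also can't end with a period or space.
    return not name.endswith(('.', ' '))

def is_valid_role_assignment_name(name: str) -> bool:
    """roleAssignments and roleDefinitions names must be GUIDs per the table."""
    try:
        uuid.UUID(name)
    except ValueError:
        return False
    return True
```

A pre-flight check like this only catches the documented character and length rules; the service remains the authority on whether a name is accepted.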
azure-resource-manager Template Functions Resource https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/template-functions-resource.md
The possible uses of `list*` are shown in the following table.
| Microsoft.Logic/workflows/versions/triggers | [listCallbackUrl](/rest/api/logic/workflowversions/listcallbackurl) |
| Microsoft.MachineLearning/webServices | [listkeys](/rest/api/machinelearning/webservices/listkeys) |
| Microsoft.MachineLearning/Workspaces | listworkspacekeys |
-| Microsoft.MachineLearningServices/workspaces/computes | [listKeys](/rest/api/azureml/2022-10-01/compute/list-keys) |
-| Microsoft.MachineLearningServices/workspaces/computes | [listNodes](/rest/api/azureml/2022-10-01/compute/list-nodes) |
-| Microsoft.MachineLearningServices/workspaces | [listKeys](/rest/api/azureml/2022-10-01/workspaces/list-keys) |
+| Microsoft.MachineLearningServices/workspaces/computes | [listKeys](/rest/api/azureml/compute/list-keys) |
+| Microsoft.MachineLearningServices/workspaces/computes | [listNodes](/rest/api/azureml/compute/list-nodes) |
+| Microsoft.MachineLearningServices/workspaces | [listKeys](/rest/api/azureml/workspaces/list-keys) |
| Microsoft.Maps/accounts | [listKeys](/rest/api/maps-management/accounts/listkeys) |
| Microsoft.Media/mediaservices/assets | [listContainerSas](/rest/api/media/assets/listcontainersas) |
| Microsoft.Media/mediaservices/assets | [listStreamingLocators](/rest/api/media/assets/liststreaminglocators) |
azure-video-indexer Accounts Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/accounts-overview.md
This section talks about limited access features in Azure Video Indexer.
|When did I create the account?|Trial account (free)| Paid account <br/>(classic or ARM-based)|
||||
-|Existing VI accounts <br/><br/>created before June 21, 2022|Able to access face identification, customization and celebrities recognition till June 2023. <br/><br/>**Recommended**: Move to a paid account and afterward fill in the [intake form](https://aka.ms/facerecognition) and based on the eligibility criteria we will enable the features also after the grace period. |Able to access face identification, customization and celebrities recognition till June 2023\*.<br/><br/>**Recommended**: fill in the [intake form](https://aka.ms/facerecognition) and based on the eligibility criteria we will enable the features also after the grace period. <br/><br/>We proactively sent emails to these customers + AEs.|
+|Existing VI accounts <br/><br/>created before June 21, 2022|Able to access face identification, customization and celebrities recognition till June 2023. <br/><br/>**Recommended**: Move to a paid account and afterward fill in the [intake form](https://aka.ms/facerecognition) and based on the eligibility criteria we will enable the features also after the grace period. |Able to access face identification, customization and celebrities recognition till June 2023\*.<br/><br/>**Recommended**: fill in the [intake form](https://aka.ms/facerecognition) and based on the eligibility criteria we will enable the features also after the grace period.|
|New VI accounts <br/><br/>created after June 21, 2022 |Not able to access face identification, customization and celebrities recognition as of today. <br/><br/>**Recommended**: Move to a paid account and afterward fill in the [intake form](https://aka.ms/facerecognition). Based on the eligibility criteria we will enable the features (after max 10 days).|Azure Video Indexer disables access to face identification, customization and celebrities recognition as of today by default, but gives the option to enable it. <br/><br/>**Recommended**: Fill in the [intake form](https://aka.ms/facerecognition) and based on the eligibility criteria we will enable the features (after max 10 days).|

\*In Brazil South we also disabled face detection.
azure-video-indexer Limited Access Features https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/limited-access-features.md
This section talks about limited access features in Azure Video Indexer.
|When did I create the account?|Trial Account (Free)| Paid Account <br/>(classic or ARM-based)|
||||
-|Existing VI accounts <br/><br/>created before June 21, 2022|Able to access face identification, customization and celebrities recognition till June 2023. <br/><br/>**Recommended**: Move to a paid account and afterward fill in the [intake form](https://aka.ms/facerecognition) and based on the eligibility criteria we will enable the features also after the grace period. |Able to access face identification, customization and celebrities recognition till June 2023\*.<br/><br/>**Recommended**: fill in the [intake form](https://aka.ms/facerecognition) and based on the eligibility criteria we will enable the features also after the grace period. <br/><br/>We proactively sent emails to these customers + AEs.|
+|Existing VI accounts <br/><br/>created before June 21, 2022|Able to access face identification, customization and celebrities recognition till June 2023. <br/><br/>**Recommended**: Move to a paid account and afterward fill in the [intake form](https://aka.ms/facerecognition) and based on the eligibility criteria we will enable the features also after the grace period. |Able to access face identification, customization and celebrities recognition till June 2023\*.<br/><br/>**Recommended**: fill in the [intake form](https://aka.ms/facerecognition) and based on the eligibility criteria we will enable the features also after the grace period.|
|New VI accounts <br/><br/>created after June 21, 2022 |Not able to access face identification, customization and celebrities recognition as of today. <br/><br/>**Recommended**: Move to a paid account and afterward fill in the [intake form](https://aka.ms/facerecognition). Based on the eligibility criteria we will enable the features (after max 10 days).|Azure Video Indexer disables access to face identification, customization and celebrities recognition as of today by default, but gives the option to enable it. <br/><br/>**Recommended**: Fill in the [intake form](https://aka.ms/facerecognition) and based on the eligibility criteria we will enable the features (after max 10 days).|

\*In Brazil South we also disabled face detection.
azure-vmware Attach Azure Netapp Files To Azure Vmware Solution Hosts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/attach-azure-netapp-files-to-azure-vmware-solution-hosts.md
Before you begin the prerequisites, review the [Performance best practices](#per
Azure VMware Solution currently supports the following regions:
-**America** : East US, East US 2, West US, Central US, South Central US, North Central US, Canada East, Canada Central .
-
-**Europe** : West Europe, North Europe, UK West, UK South, France Central, Switzerland West, Germany West Central.
-
-**Asia** : East Asia, Southeast Asia, Japan East, Japan West.
+**Asia** : East Asia, Japan East, Japan West, Southeast Asia.
**Australia** : Australia East, Australia Southeast.

**Brazil** : Brazil South.
+**Europe** : France Central, Germany West Central, North Europe, Switzerland West, UK South, UK West, West Europe.
+
+**North America** : Canada Central, Canada East, Central US, East US, East US 2, North Central US, South Central US, West US.
+ The list of supported regions will expand as the preview progresses.

## Performance best practices
backup Backup Azure Reports Data Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-reports-data-model.md
Title: Data model for Azure Backup diagnostics events description: This data model is in reference to the Resource Specific Mode of sending diagnostic events to Log Analytics (LA). Previously updated : 10/30/2019 Last updated : 10/19/2022+++ # Data Model for Azure Backup Diagnostics Events
This table provides information about core backup entities, such as vaults and b
| BackupItemFriendlyName | Text | Friendly name of the backup item | | BackupItemName | Text | Name of the backup item | | BackupItemProtectionState | Text | Protection State of the Backup Item |
-| BackupItemFrontEndSize | Text | Front-end size of the backup item |
+| BackupItemFrontEndSize | Text | Front-end size (in MBs) of the backup item |
| BackupItemType | Text | Type of backup item. For example: VM, FileFolder | | BackupItemUniqueId | Text | Unique identifier of the backup item | | BackupManagementServerType | Text | Type of the Backup Management Server, as in MABS, SC DPM |
backup Backup Vault Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-vault-overview.md
Title: Overview of Backup vaults description: An overview of Backup vaults. Previously updated : 09/26/2022 Last updated : 10/19/2022
This section explains how to move a Backup vault (configured for Azure Backup) a
### Supported regions
-The vault move across subscriptions and resource groups is supported in all public regions.
+The vault move across subscriptions and resource groups is supported in all public and national regions.
### Use Azure portal to move Backup vault to a different resource group
baremetal-infrastructure Concepts Baremetal Infrastructure Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/baremetal-infrastructure/concepts-baremetal-infrastructure-overview.md
Last updated 09/27/2021
Microsoft Azure offers a cloud infrastructure with a wide range of integrated cloud services to meet your business needs. In some cases, though, you may need to run services on bare metal servers without a virtualization layer. You may need root access and control over the operating system (OS). To meet this need, Azure offers BareMetal Infrastructure for several high-value, mission-critical applications. BareMetal Infrastructure is made up of dedicated BareMetal instances (compute instances). It features:-- High-performance storage appropriate to the application (NFS, ISCSI, and Fiber Channel). Storage can also be shared across BareMetal instances to enable features like scale-out clusters or high availability pairs with STONITH.
+- High-performance storage appropriate to the application (NFS, ISCSI, and Fiber Channel). Storage can also be shared across BareMetal instances to enable features like scale-out clusters or high availability pairs with failed-node-fencing capability.
- A set of function-specific virtual LANs (VLANs) in an isolated environment. This environment also has special VLANs you can access if you're running virtual machines (VMs) on one or more Azure Virtual Networks (VNets) in your Azure subscription. The entire environment is represented as a resource group in your Azure subscription.
bastion Bastion Connect Vm Rdp Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/bastion/bastion-connect-vm-rdp-linux.md
description: Learn how to use Azure Bastion to connect to Linux VM using RDP.
Previously updated : 08/08/2022 Last updated : 10/18/2022
In order to make a connection, the following roles are required:
* Reader role on the virtual machine * Reader role on the NIC with private IP of the virtual machine * Reader role on the Azure Bastion resource
+* Reader role on the virtual network of the target virtual machine (if the Bastion deployment is in a peered virtual network).
+ ### Ports
To connect to the Linux VM via RDP, you must have the following ports open on yo
## Next steps
-Read the [Bastion FAQ](bastion-faq.md).
+Read the [Bastion FAQ](bastion-faq.md) for more information.
bastion Bastion Connect Vm Ssh Windows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/bastion/bastion-connect-vm-ssh-windows.md
description: Learn how to use Azure Bastion to connect to Windows VM using SSH.
Previously updated : 08/18/2022 Last updated : 10/18/2022
In order to make a connection, the following roles are required:
* Reader role on the virtual machine * Reader role on the NIC with private IP of the virtual machine * Reader role on the Azure Bastion resource
+* Reader role on the virtual network of the target virtual machine (if the Bastion deployment is in a peered virtual network).
### Ports
In order to connect to the Windows VM via SSH, you must have the following ports
* Inbound port: SSH (22) *or*
* Inbound port: Custom value (you will then need to specify this custom port when you connect to the VM via Azure Bastion)
+See the [Azure Bastion FAQ](bastion-faq.md) for additional requirements.
+ ### Supported configurations

Currently, Azure Bastion only supports connecting to Windows VMs via SSH using **OpenSSH**.
center-sap-solutions Deploy S4hana https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/center-sap-solutions/deploy-s4hana.md
Title: Deploy S/4HANA infrastructure (preview)
-description: Learn how to deploy S/4HANA infrastructure with Azure Center for SAP solutions (ACSS) through the Azure portal. You can deploy High Availability (HA), non-HA, and single-server configurations.
+description: Learn how to deploy S/4HANA infrastructure with Azure Center for SAP solutions through the Azure portal. You can deploy High Availability (HA), non-HA, and single-server configurations.
Previously updated : 07/19/2022 Last updated : 10/19/2022 #Customer intent: As a developer, I want to deploy S/4HANA infrastructure using Azure Center for SAP solutions so that I can manage SAP workloads in the Azure portal.
[!INCLUDE [Preview content notice](./includes/preview.md)]
-In this how-to guide, you'll learn how to deploy S/4HANA infrastructure in *Azure Center for SAP solutions (ACSS)*. There are [three deployment options](#deployment-types): distributed with High Availability (HA), distributed non-HA, and single server.
+In this how-to guide, you'll learn how to deploy S/4HANA infrastructure in *Azure Center for SAP solutions*. There are [three deployment options](#deployment-types): distributed with High Availability (HA), distributed non-HA, and single server.
## Prerequisites
There are three deployment options that you can select for your infrastructure,
1. In the search bar, enter and select **Azure Center for SAP solutions**.
-1. On the ACSS landing page, select **Create a new SAP system**.
+1. On the Azure Center for SAP solutions landing page, select **Create a new SAP system**.
1. On the **Create Virtual Instance for SAP solutions** page, on the **Basics** tab, fill in the details for your project.
There are three deployment options that you can select for your infrastructure,
1. For **SAP FQDN**, provide the FQDN for your system, such as "sap.contoso.com".
-1. Under **User assigned managed identity**, provide the identity which ACSS will use to deploy infrastructure.
+1. Under **User assigned managed identity**, provide the identity which Azure Center for SAP solutions will use to deploy infrastructure.
1. For **Managed identity source**, choose if you want to create a new identity or use an existing identity.
There are three deployment options that you can select for your infrastructure,
1. Select **Next: Virtual machines**.
-1. In the **Virtual machines** tab, generate SKU size and total VM count recommendations for each SAP instance from ACSS.
+1. In the **Virtual machines** tab, generate SKU size and total VM count recommendations for each SAP instance from Azure Center for SAP solutions.
1. For **Generate Recommendation based on**, under **Get virtual machine recommendations**, select **SAP Application Performance Standard (SAPS)**.
There are three deployment options that you can select for your infrastructure,
The number of VMs for ASCS and Database instances isn't editable. The default number for each is **2**.
- ACSS automatically configures a database disk layout for the deployment. To view the layout for a single database server, make sure to select a VM SKU. Then, select **View disk configuration**. If there's more than one database server, the layout applies to each server.
+ Azure Center for SAP solutions automatically configures a database disk layout for the deployment. To view the layout for a single database server, make sure to select a VM SKU. Then, select **View disk configuration**. If there's more than one database server, the layout applies to each server.
1. Select **Next: Tags**.
-1. Optionally, enter tags to apply to all resources created by the ACSS process. These resources include the VIS, ASCS instances, Application Server instances, Database instances, VMs, disks, and NICs.
+1. Optionally, enter tags to apply to all resources created by the Azure Center for SAP solutions process. These resources include the VIS, ASCS instances, Application Server instances, Database instances, VMs, disks, and NICs.
1. Select **Review + Create**.
center-sap-solutions Get Quality Checks Insights https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/center-sap-solutions/get-quality-checks-insights.md
Title: Get quality checks and insights for a Virtual Instance for SAP solutions (preview)
-description: Learn how to get quality checks and insights for a Virtual Instance for SAP solutions (VIS) resource in Azure Center for SAP solutions (ACSS) through the Azure portal.
+description: Learn how to get quality checks and insights for a Virtual Instance for SAP solutions (VIS) resource in Azure Center for SAP solutions through the Azure portal.
Previously updated : 07/19/2022 Last updated : 10/19/2022 #Customer intent: As a developer, I want to use the quality checks feature so that I can learn more insights about virtual machines within my Virtual Instance for SAP resource.
[!INCLUDE [Preview content notice](./includes/preview.md)]
-The *Quality Insights* Azure workbook in *Azure Center for SAP solutions (ACSS)* provides insights about the SAP system resources. The feature is part of the monitoring capabilities built in to the *Virtual Instance for SAP solutions (VIS)*. These quality checks make sure that your SAP system uses Azure and SAP best practices for reliability and performance.
+The *Quality Insights* Azure workbook in *Azure Center for SAP solutions* provides insights about the SAP system resources. The feature is part of the monitoring capabilities built in to the *Virtual Instance for SAP solutions (VIS)*. These quality checks make sure that your SAP system uses Azure and SAP best practices for reliability and performance.
In this how-to guide, you'll learn how to use quality checks and insights to get more information about virtual machine (VM) configurations within your SAP system.

## Prerequisites

-- An SAP system that you've [created with ACSS](deploy-s4hana.md) or [registered with ACSS](register-existing-system.md).
+- An SAP system that you've [created with Azure Center for SAP solutions](deploy-s4hana.md) or [registered with Azure Center for SAP solutions](register-existing-system.md).
## Open Quality Insights workbook
To open the workbook:
:::image type="content" source="media/get-quality-checks-insights/quality-insights.png" lightbox="media/get-quality-checks-insights/quality-insights.png" alt-text="Screenshot of Azure portal, showing the Quality Insights workbook page selected in the sidebar menu for a virtual Instance for SAP solutions.":::

There are multiple sections in the workbook:
-- Select the default **Advisor Recommendations** tab to [see the list of recommendations made by ACSS for the different instances in your VIS](#get-advisor-recommendations)
+- Select the default **Advisor Recommendations** tab to [see the list of recommendations made by Azure Center for SAP solutions for the different instances in your VIS](#get-advisor-recommendations)
- Select the **Virtual Machine** tab to [find information about the VMs in your VIS](#get-vm-information)
- Select the **Configuration Checks** tab to [see configuration checks for your VIS](#run-configuration-checks)

## Get Advisor Recommendations
-The **Quality checks** feature in ACSS runs validation checks for all VIS resources. These quality checks validate the SAP system configurations follow the best practices recommended by SAP and Azure. If a VIS doesn't follow these best practices, you receive a recommendation from Azure Advisor.
+The **Quality checks** feature in Azure Center for SAP solutions runs validation checks for all VIS resources. These quality checks validate that the SAP system configurations follow the best practices recommended by SAP and Azure. If a VIS doesn't follow these best practices, you receive a recommendation from Azure Advisor.
The table in the **Advisor Recommendations** tab shows all the recommendations for ASCS, Application and Database instances in the VIS.
The following checks are run for each VIS:
> [!NOTE]
> These quality checks run on all VIS instances at a regular frequency of 12 hours. The corresponding recommendations in Azure Advisor also refresh at the same 12-hour frequency.
-If you take action on one or more recommendations from ACSS, wait for the next refresh to see any new recommendations from Azure Advisor.
+If you take action on one or more recommendations from Azure Center for SAP solutions, wait for the next refresh to see any new recommendations from Azure Advisor.
## Get VM information
center-sap-solutions Install Software https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/center-sap-solutions/install-software.md
Title: Install SAP software (preview)
-description: Learn how to install software on your SAP system created using Azure Center for SAP solutions (ACSS).
+description: Learn how to install software on your SAP system created using Azure Center for SAP solutions.
Previously updated : 07/19/2022 Last updated : 10/19/2022 #Customer intent: As a developer, I want to install SAP software so that I can use Azure Center for SAP solutions.
[!INCLUDE [Preview content notice](./includes/preview.md)]
-After you've created infrastructure for your new SAP system using *Azure Center for SAP solutions (ACSS)*, you need to install the SAP software.
+After you've created infrastructure for your new SAP system using *Azure Center for SAP solutions*, you need to install the SAP software.
In this how-to guide, you'll learn how to upload and install all the required components in your Azure account. You can either [run a pre-installation script to automate the upload process](#option-1-upload-software-components-with-script) or [manually upload the components](#option-2-upload-software-components-manually). Then, you can [run the software installation wizard](#install-software).
In this how-to guide, you'll learn how to upload and install all the required co
## Supported software
-ACSS supports the following SAP software version: **S/4HANA 1909 SPS 03, S/4HANA 2020 SPS 03, S/4HANA 2021 ISS 00**.
+Azure Center for SAP solutions supports the following SAP software versions: **S/4HANA 1909 SPS 03, S/4HANA 2020 SPS 03, S/4HANA 2021 ISS 00**.
The following operating system (OS) software versions are compatible with each SAP software version:

| Publisher | Version | Generation SKU | Patch version name | Supported SAP Software Version |
The following components are necessary for the SAP installation:
- `jq` version 1.6
- `ansible` version 2.9.27
- `netaddr` version 0.8.0
-- The SAP Bill of Materials (BOM), as generated by ACSS. These YAML files list all required SAP packages for the SAP software installation. There's a main BOM (`S41909SPS03_v0011ms.yaml`, `S42020SPS03_v0003ms.yaml`, `S4HANA_2021_ISS_v0001ms.yaml`) and there are dependent BOMs (`HANA_2_00_059_v0003ms.yaml`, `HANA_2_00_064_v0001ms.yaml` `SUM20SP14_latest.yaml`, `SWPM20SP12_latest.yaml`). They provide the following information:
+- The SAP Bill of Materials (BOM), as generated by Azure Center for SAP solutions. These YAML files list all required SAP packages for the SAP software installation. There's a main BOM (`S41909SPS03_v0011ms.yaml`, `S42020SPS03_v0003ms.yaml`, `S4HANA_2021_ISS_v0001ms.yaml`) and there are dependent BOMs (`HANA_2_00_059_v0003ms.yaml`, `HANA_2_00_064_v0001ms.yaml`, `SUM20SP14_latest.yaml`, `SWPM20SP12_latest.yaml`). They provide the following information:
- The full name of the SAP package (`name`)
- The package name with its file extension as downloaded (`archive`)
- The checksum of the package as specified by SAP (`checksum`)
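The `checksum` field lets an installer verify each downloaded archive before use. Below is a minimal sketch of that verification step; the SHA-256 algorithm, the function name, and the example entry values are illustrative assumptions, not documented ACSS details.

```python
import hashlib

def verify_package(path: str, expected_checksum: str, chunk_size: int = 1 << 20) -> bool:
    """Hash a downloaded archive and compare it to the BOM checksum.

    Assumes a SHA-256 hex digest; the real BOM may specify a different algorithm.
    """
    digest = hashlib.sha256()
    with open(path, 'rb') as f:
        # Read in chunks so large SAP archives don't need to fit in memory.
        for chunk in iter(lambda: f.read(chunk_size), b''):
            digest.update(chunk)
    return digest.hexdigest().lower() == expected_checksum.strip().lower()

# Hypothetical BOM entry shaped after the fields listed above.
bom_entry = {
    'name': 'SAP_EXAMPLE_PACKAGE',   # full SAP package name (hypothetical)
    'archive': 'example.sar',        # file name as downloaded (hypothetical)
    'checksum': 'e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855',
}
```

Verifying against the BOM before installation catches truncated or corrupted downloads early, before the installation wizard consumes them.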
You can use the following method to download and upload the SAP components to yo
You also can [run scripts to automate this process](#option-1-upload-software-components-with-script) instead.

1. Create a new Azure storage account for storing the software components.
-1. Grant the ACSS application *Azure SAP Workloads Management* **Storage Blob Data Reader** and **Reader and Data Access** role access to this storage account.
+1. Grant the Azure Center for SAP solutions application *Azure SAP Workloads Management* **Storage Blob Data Reader** and **Reader and Data Access** role access to this storage account.
1. Create a container within the storage account. You can choose any container name; for example, **sapbits**.
1. Create two folders within the container, named **deployervmpackages** and **sapfiles**.

> [!WARNING]
Now, you can [install the SAP software](#install-software) using the installatio
## Install software
-To install the SAP software on Azure, use the ACSS installation wizard.
+To install the SAP software on Azure, use the Azure Center for SAP solutions installation wizard.
1. Sign in to the [Azure portal](https://portal.azure.com).
center-sap-solutions Manage Virtual Instance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/center-sap-solutions/manage-virtual-instance.md
Title: Manage a Virtual Instance for SAP solutions (preview)
-description: Learn how to configure a Virtual Instance for SAP solutions (VIS) resource in Azure Center for SAP solutions (ACSS) through the Azure portal.
+description: Learn how to configure a Virtual Instance for SAP solutions (VIS) resource in Azure Center for SAP solutions through the Azure portal.
Previously updated : 07/19/2022 Last updated : 10/19/2022 #Customer intent: As a developer, I want to configure my Virtual Instance for SAP solutions resource so that I can find system properties and connect to databases.
[!INCLUDE [Preview content notice](./includes/preview.md)]
-In this article, you'll learn how to view the *Virtual Instance for SAP solutions (VIS)* resource created in *Azure Center for SAP solutions (ACSS)* through the Azure portal. You can use these steps to find your SAP system's properties and connect parts of the VIS to other resources like databases.
+In this article, you'll learn how to view the *Virtual Instance for SAP solutions (VIS)* resource created in *Azure Center for SAP solutions* through the Azure portal. You can use these steps to find your SAP system's properties and connect parts of the VIS to other resources like databases.
## Prerequisites
To configure your VIS in the Azure portal:
1. On the **Azure Center for SAP solutions** overview page, search for and select **Virtual Instances for SAP solutions** in the sidebar menu. 1. On the **Virtual Instances for SAP solutions** page, select the VIS that you want to view.
- :::image type="content" source="media/configure-virtual-instance/select-vis.png" lightbox="media/configure-virtual-instance/select-vis.png" alt-text="Screenshot of Azure portal, showing the VIS page in the ACSS service with a table of available VIS resources.":::
+ :::image type="content" source="media/configure-virtual-instance/select-vis.png" lightbox="media/configure-virtual-instance/select-vis.png" alt-text="Screenshot of Azure portal, showing the VIS page in the Azure Center for SAP solutions service with a table of available VIS resources.":::
## Monitor VIS
In the sidebar menu, look under the section **SAP resources**:
## Connect to HANA database
-If you've deployed an SAP system using ACSS, [find the SAP system's main password and HANA database passwords](#find-sap-and-hana-passwords).
+If you've deployed an SAP system using Azure Center for SAP solutions, [find the SAP system's main password and HANA database passwords](#find-sap-and-hana-passwords).
The HANA database username is either `system` or `SYSTEM` for:
To delete a VIS:
1. Select **Delete** to delete the VIS.
1. Wait for the deletion operation to complete for the VIS and related resources.
-After you delete a VIS, you can register the SAP system again. Open ACSS in the Azure portal, and select **Register an existing SAP system**.
+After you delete a VIS, you can register the SAP system again. Open Azure Center for SAP solutions in the Azure portal, and select **Register an existing SAP system**.
## Next steps
center-sap-solutions Monitor Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/center-sap-solutions/monitor-portal.md
Title: Monitor SAP system from the Azure portal (preview)
-description: Learn how to monitor the health and status of your SAP system, along with important SAP metrics, using the Azure Center for SAP solutions (ACSS) within the Azure portal.
+description: Learn how to monitor the health and status of your SAP system, along with important SAP metrics, using the Azure Center for SAP solutions within the Azure portal.
Previously updated : 07/19/2022 Last updated : 10/19/2022 #Customer intent: As a developer, I want to set up monitoring for my Virtual Instance for SAP solutions, so that I can monitor the health and status of my SAP system in Azure Center for SAP solutions.
[!INCLUDE [Preview content notice](./includes/preview.md)]
-In this how-to guide, you'll learn how to monitor the health and status of your SAP system with *Azure Center for SAP solutions (ACSS)* through the Azure portal. The following capabilities are available for your *Virtual Instance for SAP solutions* resource:
+In this how-to guide, you'll learn how to monitor the health and status of your SAP system with *Azure Center for SAP solutions* through the Azure portal. The following capabilities are available for your *Virtual Instance for SAP solutions* resource:
- Monitor your SAP system, along with its instances and VMs.
- Analyze important SAP infrastructure metrics.
-- Create and/or register an instance of Azure Monitor for SAP solutions (AMS) to monitor SAP platform metrics.
+- Create and/or register an instance of Azure Monitor for SAP solutions to monitor SAP platform metrics.
## System health
-The *health* of an SAP system within ACSS is based on the status of its underlying instances. Codes for health are also determined by the collective impact of these instances on the performance of the SAP system.
+The *health* of an SAP system within Azure Center for SAP solutions is based on the status of its underlying instances. Codes for health are also determined by the collective impact of these instances on the performance of the SAP system.
Possible values for health are:
Possible values for health are:
## System status
-The *status* of an SAP system within ACSS indicates the current state of the system.
+The *status* of an SAP system within Azure Center for SAP solutions indicates the current state of the system.
Possible values for status are:
To check basic health and status settings:
1. On the page for the VIS, review the table of instances. There is an overview of health and status information for each VIS.
- :::image type="content" source="media/monitor-portal/all-vis-statuses.png" lightbox="media/monitor-portal/all-vis-statuses.png" alt-text="Screenshot of the ACSS service in the Azure portal, showing a page of all VIS resources with their health and status information.":::
+ :::image type="content" source="media/monitor-portal/all-vis-statuses.png" lightbox="media/monitor-portal/all-vis-statuses.png" alt-text="Screenshot of the Azure Center for SAP solutions service in the Azure portal, showing a page of all VIS resources with their health and status information.":::
1. Select the VIS you want to check.
To see information about SAP application server instances:
## Monitor SAP infrastructure
-ACSS enables you to analyze important SAP infrastructure metrics from the Azure portal.
+Azure Center for SAP solutions enables you to analyze important SAP infrastructure metrics from the Azure portal.
1. Sign in to the [Azure portal](https://portal.azure.com).
ACSS enables you to analyze important SAP infrastructure metrics from the Azure
## Configure Azure Monitor
-You can also set up or register AMS to monitor SAP platform-level metrics.
+You can also set up or register Azure Monitor for SAP solutions to monitor SAP platform-level metrics.
1. Sign in to the [Azure portal](https://portal.azure.com).
You can also set up or register AMS to monitor SAP platform-level metrics.
1. On the page for the VIS, select the VIS from the table.
-1. In the sidebar menu for the VIS, under **Monitoring**, select **Azure Monitor for SAP**.
+1. In the sidebar menu for the VIS, under **Monitoring**, select **Azure Monitor for SAP solutions**.
-1. Select whether you want to [create a new AMS instance](#create-new-ams-resource), or [register an existing AMS instance](#register-existing-ams-resource). If you don't see this option, you've already configured this setting.
+1. Select whether you want to [create a new Azure Monitor for SAP solutions instance](#create-new-azure-monitor-for-sap-solutions-resource), or [register an existing Azure Monitor for SAP solutions instance](#register-existing-azure-monitor-for-sap-solutions-resource). If you don't see this option, you've already configured this setting.
- :::image type="content" source="media/monitor-portal/monitoring-setup.png" lightbox="media/monitor-portal/monitoring-setup.png" alt-text="Screenshot of AMS page inside a VIS resource in the Azure portal, showing the option to create or register a new instance.":::
+ :::image type="content" source="media/monitor-portal/monitoring-setup.png" lightbox="media/monitor-portal/monitoring-setup.png" alt-text="Screenshot of Azure Monitor for SAP solutions page inside a VIS resource in the Azure portal, showing the option to create or register a new instance.":::
-1. After you create or register your AMS instance, you are redirected to the AMS instance.
+1. After you create or register your Azure Monitor for SAP solutions instance, you are redirected to the Azure Monitor for SAP solutions instance.
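The links in the step above target section headings by their generated anchors, which is why the anchor must be the heading text lowercased and hyphenated, with no spaces. A minimal Python sketch of that slug rule (an approximation for illustration, not the exact Docs rendering algorithm):

```python
import re

def heading_anchor(heading: str) -> str:
    """Approximate docs-style heading-to-anchor slug:
    lowercase, drop punctuation, spaces become hyphens."""
    slug = heading.strip().lower()
    slug = re.sub(r"[^\w\s-]", "", slug)  # drop punctuation
    slug = re.sub(r"\s+", "-", slug)      # collapse whitespace to hyphens
    return slug

print(heading_anchor("Create new Azure Monitor for SAP solutions resource"))
# -> create-new-azure-monitor-for-sap-solutions-resource
```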
-### Create new AMS resource
+### Create new Azure Monitor for SAP solutions resource
-To configure a new AMS resource:
+To configure a new Azure Monitor for SAP solutions resource:
-1. On the **Create new AMS resource** page, select the **Basics** tab.
+1. On the **Create new Azure Monitor for SAP solutions resource** page, select the **Basics** tab.
- :::image type="content" source="media/monitor-portal/ams-creation.png" lightbox="media/monitor-portal/ams-creation.png" alt-text="Screenshot of AMS creation page, showing the Basics tab and required fields.":::
+ :::image type="content" source="media/monitor-portal/ams-creation.png" lightbox="media/monitor-portal/ams-creation.png" alt-text="Screenshot of Azure Monitor for SAP solutions creation page, showing the Basics tab and required fields.":::
1. Under **Project details**, configure your resource.
    1. For **Subscription**, select your Azure subscription.
- 1. For **AMS resource group**, select the same resource group as the VIS.
+ 1. For **Azure Monitor for SAP solutions resource group**, select the same resource group as the VIS.
> [!IMPORTANT]
> If you select a resource group that's different from the resource group of the VIS, the deployment fails.
-1. Under **AMS instance details**, configure your AMS instance.
+1. Under **Azure Monitor for SAP solutions instance details**, configure your Azure Monitor for SAP solutions instance.
- 1. For **Resource name**, enter a name for your AMS resource.
+ 1. For **Resource name**, enter a name for your Azure Monitor for SAP solutions resource.
1. For **Workload region**, select an Azure region for your workload.
To configure a new AMS resource:
1. Select the **Review + Create** tab.
-### Register existing AMS resource
+### Register existing Azure Monitor for SAP solutions resource
-To register an existing **AMS resource**, select the instance from the drop-down menu on the **Register AMS** page.
+To register an existing Azure Monitor for SAP solutions resource, select the instance from the drop-down menu on the registration page.
> [!NOTE]
-> You can only view and select the current version of AMS resources. AMS (classic) resources aren't available.
+> You can only view and select the current version of Azure Monitor for SAP solutions resources. Azure Monitor for SAP solutions (classic) resources aren't available.
- :::image type="content" source="media/monitor-portal/ams-registration.png" lightbox="media/monitor-portal/ams-registration.png" alt-text="Screenshot of AMS registration page, showing the selection of an existing AMS resource.":::
+ :::image type="content" source="media/monitor-portal/ams-registration.png" lightbox="media/monitor-portal/ams-registration.png" alt-text="Screenshot of Azure Monitor for SAP solutions registration page, showing the selection of an existing Azure Monitor for SAP solutions resource.":::
-## Unregister AMS from VIS
+## Unregister Azure Monitor for SAP solutions from VIS
> [!NOTE]
-> This operation only unregisters the AMS resource from the VIS. To delete the AMS resource, you need to delete the AMS instance.
+> This operation only unregisters the Azure Monitor for SAP solutions resource from the VIS. To delete the Azure Monitor for SAP solutions resource, you need to delete the Azure Monitor for SAP solutions instance.
-To remove the link between your AMS resource and your VIS:
+To remove the link between your Azure Monitor for SAP solutions resource and your VIS:
1. Sign in to the [Azure portal](https://portal.azure.com).
-1. In the sidebar menu, under **Monitoring**, select **Azure Monitor for SAP**.
+1. In the sidebar menu, under **Monitoring**, select **Azure Monitor for SAP solutions**.
-1. On the AMS page, select **Delete** to unregister the resource.
+1. On the Azure Monitor for SAP solutions page, select **Delete** to unregister the resource.
1. Wait for the confirmation message, **Azure Monitor for SAP solutions has been unregistered successfully**.
center-sap-solutions Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/center-sap-solutions/overview.md
Title: Azure Center for SAP solutions (preview)
-description: Azure Center for SAP solutions (ACSS) is an Azure offering that makes SAP a top-level workload on Azure. You can use ACSS to deploy or manage SAP systems on Azure seamlessly.
+description: Azure Center for SAP solutions is an Azure offering that makes SAP a top-level workload on Azure. You can use Azure Center for SAP solutions to deploy or manage SAP systems on Azure seamlessly.
Previously updated : 07/19/2022 Last updated : 10/19/2022 #Customer intent: As a developer, I want to learn about Azure Center for SAP solutions so that I can decide to use the service with a new or existing SAP system.
[!INCLUDE [Preview content notice](./includes/preview.md)]
-*Azure Center for SAP solutions (ACSS)* is an Azure offering that makes SAP a top-level workload on Azure. ACSS is an end-to-end solution that enables you to create and run SAP systems as a unified workload on Azure and provides a more seamless foundation for innovation. You can take advantage of the management capabilities for both new and existing Azure-based SAP systems.
+*Azure Center for SAP solutions* is an Azure offering that makes SAP a top-level workload on Azure. Azure Center for SAP solutions is an end-to-end solution that enables you to create and run SAP systems as a unified workload on Azure and provides a more seamless foundation for innovation. You can take advantage of the management capabilities for both new and existing Azure-based SAP systems.
-The guided deployment experience takes care of creating the necessary compute, storage and networking components needed to run your SAP system. ACSS then helps automate the installation of the SAP software according to Microsoft best practices.
+The guided deployment experience takes care of creating the necessary compute, storage and networking components needed to run your SAP system. Azure Center for SAP solutions then helps automate the installation of the SAP software according to Microsoft best practices.
-In ACSS, you either create a new SAP system or register an existing one, which then creates a *Virtual Instance for SAP solutions (VIS)*. The VIS brings SAP awareness to Azure by providing management capabilities, such as being able to see the status and health of your SAP systems. Another example is quality checks and insights, which allow you to know when your system isn't following documented best practices and standards.
+In Azure Center for SAP solutions, you either create a new SAP system or register an existing one, which then creates a *Virtual Instance for SAP solutions (VIS)*. The VIS brings SAP awareness to Azure by providing management capabilities, such as being able to see the status and health of your SAP systems. Another example is quality checks and insights, which allow you to know when your system isn't following documented best practices and standards.
-You can use ACSS to deploy the following types of SAP systems:
+You can use Azure Center for SAP solutions to deploy the following types of SAP systems:
- Single server
- Distributed
For existing SAP systems that run on Azure, there's a simple registration experi
- SAP systems that run on Windows, SUSE and RHEL Linux operating systems
- SAP systems that run on HANA, DB2, SQL Server, Oracle, Max DB, or SAP ASE databases
-ACSS brings services, tools and frameworks together to provide an end-to-end unified experience for deployment and management of SAP workloads on Azure, creating the foundation for you to build innovative solutions for your unique requirements.
+Azure Center for SAP solutions brings services, tools and frameworks together to provide an end-to-end unified experience for deployment and management of SAP workloads on Azure, creating the foundation for you to build innovative solutions for your unique requirements.
## What is a Virtual Instance for SAP solutions?
-When you use ACSS, you'll create a *Virtual Instance for SAP solutions (VIS)* resource. The VIS is a logical representation of an SAP system on Azure.
+When you use Azure Center for SAP solutions, you'll create a *Virtual Instance for SAP solutions (VIS)* resource. The VIS is a logical representation of an SAP system on Azure.
-Every time that you [create a new SAP system through ACSS](deploy-s4hana.md), or [register an existing SAP system to ACSS](register-existing-system.md), Azure creates a VIS. A VIS contains the metadata for the entire SAP system.
+Every time that you [create a new SAP system through Azure Center for SAP solutions](deploy-s4hana.md), or [register an existing SAP system to Azure Center for SAP solutions](register-existing-system.md), Azure creates a VIS. A VIS contains the metadata for the entire SAP system.
Each VIS consists of:
Each VIS consists of:
Inside the VIS, the SID is the parent resource. Your VIS resource is named after the SID of your SAP system. Any ASCS, Application Server, or database instances are child resources of the SID. The child resources are associated with one or more VM resources outside of the VIS. A standalone system has all three instances mapped to a single VM. A distributed system has one ASCS and one Database instance, with each mapped to a VM. High Availability (HA) deployments have the ASCS and Database instances mapped to multiple VMs to enable HA. A distributed or HA type SAP system can have multiple Application Server instances linked to their respective VMs.
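The parent-child layout described above can be pictured with a small illustrative model (the types and names here are hypothetical, for explanation only, not the actual Azure resource schema):

```python
from dataclasses import dataclass, field

@dataclass
class Instance:
    kind: str        # "ASCS", "App", or "DB" -- child resources of the SID
    vm_names: list   # one VM, or several for HA clusters

@dataclass
class VirtualInstanceForSap:
    sid: str         # the VIS is named after the system's SID
    instances: list = field(default_factory=list)

# A distributed system: one ASCS, one Database, two Application Server instances.
vis = VirtualInstanceForSap(sid="S4H", instances=[
    Instance("ASCS", ["s4h-ascs-vm"]),
    Instance("DB", ["s4h-db-vm"]),
    Instance("App", ["s4h-app-vm-0"]),
    Instance("App", ["s4h-app-vm-1"]),
])
print(len(vis.instances))  # 4
```

A standalone system would instead map all three instance kinds to a single VM name, and an HA deployment would list multiple VMs under the ASCS and DB instances.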
-## What can you do with ACSS?
+## What can you do with Azure Center for SAP solutions?
After you create a VIS, you can:
After you create a VIS, you can:
## Next steps

- [Create a network for a new VIS deployment](prepare-network.md)
-- [Register an existing SAP system in ACSS](register-existing-system.md)
+- [Register an existing SAP system in Azure Center for SAP solutions](register-existing-system.md)
center-sap-solutions Prepare Network https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/center-sap-solutions/prepare-network.md
Title: Prepare network for infrastructure deployment (preview)
-description: Learn how to prepare a network for use with an S/4HANA infrastructure deployment with Azure Center for SAP solutions (ACSS) through the Azure portal.
+description: Learn how to prepare a network for use with an S/4HANA infrastructure deployment with Azure Center for SAP solutions through the Azure portal.
Previously updated : 07/19/2022 Last updated : 10/19/2022 #Customer intent: As a developer, I want to create a virtual network so that I can deploy S/4HANA infrastructure in Azure Center for SAP solutions.
[!INCLUDE [Preview content notice](./includes/preview.md)]
-In this how-to guide, you'll learn how to prepare a virtual network to deploy S/4 HANA infrastructure using *Azure Center for SAP solutions (ACSS)*. This article provides general guidance about creating a virtual network. Your individual environment and use case will determine how you need to configure your own network settings for use with a *Virtual Instance for SAP (VIS)* resource.
+In this how-to guide, you'll learn how to prepare a virtual network to deploy S/4 HANA infrastructure using *Azure Center for SAP solutions*. This article provides general guidance about creating a virtual network. Your individual environment and use case will determine how you need to configure your own network settings for use with a *Virtual Instance for SAP (VIS)* resource.
-If you have an existing network that you're ready to use with ACSS, [go to the deployment guide](deploy-s4hana.md) instead of following this guide.
+If you have an existing network that you're ready to use with Azure Center for SAP solutions, [go to the deployment guide](deploy-s4hana.md) instead of following this guide.
## Prerequisites

- An Azure subscription.
- [Review the quotas for your Azure subscription](../azure-portal/supportability/view-quotas.md). If the quotas are low, you might need to create a support request before creating your infrastructure deployment. Otherwise, you might experience deployment failures or an **Insufficient quota** error.
- It's recommended to have multiple IP addresses in the subnet or subnets before you begin deployment. For example, it's always better to have a `/26` mask instead of `/29`.
-- Note the SAP Application Performance Standard (SAPS) and database memory size that you need to allow ACSS to size your SAP system. If you're not sure, you can also select the VMs. There are:
+- Note the SAP Application Performance Standard (SAPS) and database memory size that you need to allow Azure Center for SAP solutions to size your SAP system. If you're not sure, you can also select the VMs. There are:
+- Note the SAP Application Performance Standard (SAPS) and database memory size that you need to allow Azure Center for SAP solutions to size your SAP system. If you're not sure, you can also select the VMs. There are:
    - A single or cluster of ASCS VMs, which make up a single ASCS instance in the VIS.
    - A single or cluster of Database VMs, which make up a single Database instance in the VIS.
    - A single Application Server VM, which makes up a single Application instance in the VIS. Depending on the number of Application Servers being deployed or registered, there can be multiple application instances.
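On the subnet-sizing point above, the standard-library `ipaddress` module makes the `/26` versus `/29` difference concrete. Keep in mind that Azure reserves five addresses in every subnet, so a `/29` leaves only three usable:

```python
import ipaddress

AZURE_RESERVED = 5  # network, broadcast, and 3 Azure-internal addresses per subnet

for mask in ("/26", "/29"):
    net = ipaddress.ip_network("10.0.0.0" + mask)
    usable = net.num_addresses - AZURE_RESERVED
    print(f"{mask}: {net.num_addresses} addresses, {usable} usable on Azure")
# /26: 64 addresses, 59 usable on Azure
# /29: 8 addresses, 3 usable on Azure
```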
If you're using Red Hat for the VMs, [allowlist the Red Hat endpoints](../virtua
### Allowlist storage accounts
-ACSS needs access to the following storage accounts to install SAP software correctly:
+Azure Center for SAP solutions needs access to the following storage accounts to install SAP software correctly:
- The storage account where you're storing the SAP media that is required during software installation.
-- The storage account created by ACSS in a managed resource group, which ACSS also owns and manages.
+- The storage account created by Azure Center for SAP solutions in a managed resource group, which Azure Center for SAP solutions also owns and manages.
There are multiple options to allow access to these storage accounts:
There are multiple options to allow access to these storage accounts:
### Allowlist Key Vault
-ACSS creates a key vault to store and access the secret keys during software installation. This key vault also stores the SAP system password. To allow access to this key vault, you can:
+Azure Center for SAP solutions creates a key vault to store and access the secret keys during software installation. This key vault also stores the SAP system password. To allow access to this key vault, you can:
- Allow internet connectivity
- Configure an [**AzureKeyVault** service tag](../virtual-network/service-tags-overview.md#available-service-tags)
ACSS creates a key vault to store and access the secret keys during software ins
### Allowlist Azure AD
-ACSS uses Azure AD to get the authentication token for obtaining secrets from a managed key vault during SAP installation. To allow access to Azure AD, you can:
+Azure Center for SAP solutions uses Azure AD to get the authentication token for obtaining secrets from a managed key vault during SAP installation. To allow access to Azure AD, you can:
- Allow internet connectivity
- Configure an [**AzureActiveDirectory** service tag](../virtual-network/service-tags-overview.md#available-service-tags).

### Allowlist Azure Resource Manager
-ACSS uses a managed identity for software installation. Managed identity authentication requires a call to the Azure Resource Manager endpoint. To allow access to this endpoint, you can:
+Azure Center for SAP solutions uses a managed identity for software installation. Managed identity authentication requires a call to the Azure Resource Manager endpoint. To allow access to this endpoint, you can:
- Allow internet connectivity
- Configure an [**AzureResourceManager** service tag](../virtual-network/service-tags-overview.md#available-service-tags).
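If you choose the service-tag route for the three endpoints above, each tag can be allowed with an outbound network security group rule. A hedged Azure CLI sketch (`myResourceGroup` and `myNSG` are placeholder names; check the priorities against your existing rule set before applying):

```shell
# Allow outbound HTTPS to the service tags the SAP installation needs.
priority=100
for tag in AzureKeyVault AzureActiveDirectory AzureResourceManager; do
  az network nsg rule create \
    --resource-group myResourceGroup \
    --nsg-name myNSG \
    --name "Allow-${tag}-Outbound" \
    --priority "${priority}" \
    --direction Outbound \
    --access Allow \
    --protocol Tcp \
    --destination-address-prefixes "${tag}" \
    --destination-port-ranges 443
  priority=$((priority + 10))
done
```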
center-sap-solutions Register Existing System https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/center-sap-solutions/register-existing-system.md
Title: Register existing SAP system (preview)
-description: Learn how to register an existing SAP system in Azure Center for SAP solutions (ACSS) through the Azure portal. You can visualize, manage, and monitor your existing SAP system through ACSS.
+description: Learn how to register an existing SAP system in Azure Center for SAP solutions through the Azure portal. You can visualize, manage, and monitor your existing SAP system through Azure Center for SAP solutions.
Previously updated : 07/19/2022 Last updated : 10/19/2022
-#Customer intent: As a developer, I want to register my existing SAP system so that I can use the system with Azure Center for SAP solutions (ACSS).
+#Customer intent: As a developer, I want to register my existing SAP system so that I can use the system with Azure Center for SAP solutions.
# Register existing SAP system (preview)

[!INCLUDE [Preview content notice](./includes/preview.md)]
-In this how-to guide, you'll learn how to register an existing SAP system with *Azure Center for SAP solutions (ACSS)*. After you register an SAP system with ACSS, you can use its visualization, management and monitoring capabilities through the Azure portal. For example, you can:
+In this how-to guide, you'll learn how to register an existing SAP system with *Azure Center for SAP solutions*. After you register an SAP system with Azure Center for SAP solutions, you can use its visualization, management and monitoring capabilities through the Azure portal. For example, you can:
- View and track the SAP system as an Azure resource, called the *Virtual Instance for SAP solutions (VIS)*.
- Get recommendations for your SAP infrastructure, based on quality checks that evaluate best practices for SAP on Azure.
In this how-to guide, you'll learn how to register an existing SAP system with *
- Check that you're trying to register a [supported SAP system configuration](#supported-systems)
- Check that your Azure account has **Contributor** role access on the subscription or resource groups where you have the SAP system resources.
- Register the **Microsoft.Workloads** Resource Provider in the subscription where you have the SAP system.
-- A **User-assigned managed identity** which has **Contributor** role access to the Compute, Network and Storage resource groups of the SAP system. ACSS service uses this identity to discover your SAP system resources and register the system as a VIS resource.
+- A **User-assigned managed identity** which has **Contributor** role access to the Compute, Network and Storage resource groups of the SAP system. Azure Center for SAP solutions service uses this identity to discover your SAP system resources and register the system as a VIS resource.
- Make sure each virtual machine (VM) in the SAP system is currently running on Azure. These VMs include:
    - The ABAP SAP Central Services (ASCS) Server instance
    - The Application Server instance or instances
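The **Microsoft.Workloads** Resource Provider mentioned in the prerequisites can be registered from the Azure CLI as well as the portal. A minimal sketch (registration is asynchronous and can take a few minutes):

```shell
# Register the resource provider in the current subscription.
az provider register --namespace Microsoft.Workloads

# Poll the state until it reports "Registered".
az provider show --namespace Microsoft.Workloads \
  --query registrationState --output tsv
```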
In this how-to guide, you'll learn how to register an existing SAP system with *
## Supported systems
-You can register SAP systems with ACSS that run on the following configurations:
+You can register SAP systems with Azure Center for SAP solutions that run on the following configurations:
- SAP NetWeaver or ABAP stacks
- Windows, SUSE and RHEL Linux operating systems
- HANA, DB2, SQL Server, Oracle, Max DB, and SAP ASE databases
-The following SAP system configurations aren't supported in ACSS:
+The following SAP system configurations aren't supported in Azure Center for SAP solutions:
- HANA Large Instance (HLI)
- Systems with HANA Scale-out configuration
The following SAP system configurations aren't supported in ACSS:
- Systems distributed across peered virtual networks
- Systems using IPv6 addresses
-## Enable ACSS resource permissions
+## Enable resource permissions
-When you register an existing SAP system as a VIS, ACSS service needs a **User-assigned managed identity** which has **Contributor** role access to the Compute, Network and Storage resource groups of the SAP system. Before you register an SAP system with ACSS, either [create a new user-assigned managed identity or update role access for an existing managed identity](#setup-user-assigned-managed-identity).
+When you register an existing SAP system as a VIS, Azure Center for SAP solutions service needs a **User-assigned managed identity** which has **Contributor** role access to the Compute, Network and Storage resource groups of the SAP system. Before you register an SAP system with Azure Center for SAP solutions, either [create a new user-assigned managed identity or update role access for an existing managed identity](#setup-user-assigned-managed-identity).
-ACSS uses this user-assigned managed identity to install VM extensions on the ASCS, Application Server and DB VMs. This step allows ACSS to discover the SAP system components, and other SAP system metadata. ACSS also needs this user-assigned managed identity to enable SAP system monitoring and management capabilities.
+Azure Center for SAP solutions uses this user-assigned managed identity to install VM extensions on the ASCS, Application Server and DB VMs. This step allows Azure Center for SAP solutions to discover the SAP system components, and other SAP system metadata. Azure Center for SAP solutions also needs this user-assigned managed identity to enable SAP system monitoring and management capabilities.
### Setup User-assigned managed identity

To provide permissions to the SAP system resources to a user-assigned managed identity:
-1. [Create a new user-assigned managed identity](https://learn.microsoft.com/azure/active-directory/managed-identities-azure-resources/how-manage-user-assigned-managed-identities?pivots=identity-mi-methods-azp#create-a-user-assigned-managed-identity) if needed or use an existing one.
-1. [Assign **Contributor** role access](https://learn.microsoft.com/azure/active-directory/managed-identities-azure-resources/how-manage-user-assigned-managed-identities?pivots=identity-mi-methods-azp#manage-access-to-user-assigned-managed-identities) to the user-assigned managed identity on all Resource Groups in which the SAP system resources exist. That is, Compute, Network and Storage Resource Groups.
-1. Once the permissions are assigned, this managed identity can be used in ACSS to register and manage SAP systems.
+1. [Create a new user-assigned managed identity](../active-directory/managed-identities-azure-resources/how-manage-user-assigned-managed-identities.md#create-a-user-assigned-managed-identity) if needed or use an existing one.
+1. [Assign **Contributor** role access](../active-directory/managed-identities-azure-resources/how-manage-user-assigned-managed-identities.md#manage-access-to-user-assigned-managed-identities) to the user-assigned managed identity on all Resource Groups in which the SAP system resources exist. That is, Compute, Network and Storage Resource Groups.
+1. Once the permissions are assigned, this managed identity can be used in Azure Center for SAP solutions to register and manage SAP systems.
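The steps above can also be scripted with the Azure CLI. A hedged sketch (`myResourceGroup`, `acss-registration-id`, the resource-group names, and `<subscription-id>` are placeholders for your own values):

```shell
# 1. Create the user-assigned managed identity, or reuse an existing one.
az identity create --resource-group myResourceGroup --name acss-registration-id

# 2. Look up its principal ID for role assignment.
principal_id=$(az identity show --resource-group myResourceGroup \
  --name acss-registration-id --query principalId --output tsv)

# 3. Grant Contributor on each resource group that holds SAP system resources
#    (Compute, Network and Storage resource groups).
for rg in computeRG networkRG storageRG; do
  az role assignment create \
    --assignee-object-id "${principal_id}" \
    --assignee-principal-type ServicePrincipal \
    --role Contributor \
    --scope "/subscriptions/<subscription-id>/resourceGroups/${rg}"
done
```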
## Register SAP system
-To register an existing SAP system in ACSS:
+To register an existing SAP system in Azure Center for SAP solutions:
-1. Sign in to the [Azure portal](https://portal.azure.com). Make sure to sign in with an Azure account that has **Contributor** role access to the subscription or resource groups where the SAP system exists. For more information, see the [resource permissions explanation](#enable-acss-resource-permissions).
+1. Sign in to the [Azure portal](https://portal.azure.com). Make sure to sign in with an Azure account that has **Contributor** role access to the subscription or resource groups where the SAP system exists. For more information, see the [resource permissions explanation](#enable-resource-permissions).
1. Search for and select **Azure Center for SAP solutions** in the Azure portal's search bar.
1. On the **Azure Center for SAP solutions** page, select **Register an existing SAP system**.
- :::image type="content" source="media/register-existing-system/register-button.png" alt-text="Screenshot of ACSS service overview page in the Azure portal, showing button to register an existing SAP system." lightbox="media/register-existing-system/register-button.png":::
+ :::image type="content" source="media/register-existing-system/register-button.png" alt-text="Screenshot of Azure Center for SAP solutions service overview page in the Azure portal, showing button to register an existing SAP system." lightbox="media/register-existing-system/register-button.png":::
1. On the **Basics** tab of the **Register existing SAP system** page, provide information about the SAP system.
    1. For **ASCS virtual machine**, select **Select ASCS virtual machine** and select the ASCS VM resource.
To register an existing SAP system in ACSS:
    1. For **SAP product**, select the SAP system product from the drop-down menu.
    1. For **Environment**, select the environment type from the drop-down menu. For example, production or non-production environments.
    1. For **Managed identity source**, select the **Use existing user-assigned managed identity** option.
- 1. For **Managed identity name**, select a **User-assigned managed identity** which has **Contributor** role access to the [resources of this SAP system.](#enable-acss-resource-permissions)
+ 1. For **Managed identity name**, select a **User-assigned managed identity** which has **Contributor** role access to the [resources of this SAP system.](#enable-resource-permissions)
1. Select **Review + register** to discover the SAP system and begin the registration process.
- :::image type="content" source="media/register-existing-system/registration-page.png" alt-text="Screenshot of ACSS registration page, highlighting mandatory fields to identify the existing SAP system." lightbox="media/register-existing-system/registration-page.png":::
+ :::image type="content" source="media/register-existing-system/registration-page.png" alt-text="Screenshot of Azure Center for SAP solutions registration page, highlighting mandatory fields to identify the existing SAP system." lightbox="media/register-existing-system/registration-page.png":::
1. On the **Review + register** pane, make sure your settings are correct. Then, select **Register**.
To register an existing SAP system in ACSS:
You can now review the VIS resource in the Azure portal. The resource page shows the SAP system resources, and information about the system.
-If the registration doesn't succeed, see [what to do when an SAP system registration fails in ACSS](#fix-registration-failure).
+If the registration doesn't succeed, see [what to do when an SAP system registration fails in Azure Center for SAP solutions](#fix-registration-failure).
## Fix registration failure
-The process of registering an SAP system in ACSS might fail for the following reasons:
+The process of registering an SAP system in Azure Center for SAP solutions might fail for the following reasons:
- The selected ASCS VM and SID don't match. Make sure to select the correct ASCS VM for the SAP system that you chose, and vice versa. - The ASCS instance or VM isn't running. Make sure the instance and VM are in the **Running** state.
The process of registering an SAP system in ACSS might fail for the following re
    - Command to start up sapstartsrv process on SAP VMs: `/usr/sap/hostctrl/exe/hostexecstart -start`
- At least one Application Server and the Database aren't running for the SAP system that you chose. Make sure the Application Servers and Database VMs are in the **Running** state.
- The user trying to register the SAP system doesn't have **Contributor** role permissions. For more information, see the [prerequisites for registering an SAP system](#prerequisites).
-- The user-assigned managed identity doesn't have **Contributor** role access to the Azure subscription or resource groups where the SAP system exists. For more information, see [how to enable ACSS resource permissions](#enable-acss-resource-permissions).
+- The user-assigned managed identity doesn't have **Contributor** role access to the Azure subscription or resource groups where the SAP system exists. For more information, see [how to enable Azure Center for SAP solutions resource permissions](#enable-resource-permissions).
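Several of the failure causes above come down to instances not running. Before retrying registration, you can verify from inside a VM that the host agent and instance processes are up. A hedged sketch (the instance number `00` is an example; `sapcontrol` ships with the SAP kernel, and the host agent path may differ on your installation):

```shell
# Start the sapstartsrv host agent if it is not running (command from the list above).
sudo /usr/sap/hostctrl/exe/hostexecstart -start

# Ask the instance for its process list; healthy processes report GREEN.
# Replace 00 with your instance number.
sudo /usr/sap/hostctrl/exe/sapcontrol -nr 00 -function GetProcessList
```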
There's also a known issue with registering *S/4HANA 2021* version SAP systems. You might receive the error message: **Failed to discover details from the Db VM**. This error happens when the Database identifier is incorrectly configured on the SAP system. One possible cause is that the Application Server profile parameter `rsdb/dbid` has an incorrect identifier for the HANA Database. To fix the error:
center-sap-solutions Start Stop Sap Systems https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/center-sap-solutions/start-stop-sap-systems.md
Title: Start and stop SAP systems (preview)
-description: Learn how to start or stop an SAP system through the Virtual Instance for SAP solutions (VIS) resource in Azure Center for SAP solutions (ACSS) through the Azure portal.
+description: Learn how to start or stop an SAP system through the Virtual Instance for SAP solutions (VIS) resource in Azure Center for SAP solutions through the Azure portal.
Previously updated : 07/19/2022 Last updated : 10/19/2022
-#Customer intent: As a developer, I want to start and stop SAP systems in ACSS so that I can control instances through the Virtual Instance for SAP resource.
+#Customer intent: As a developer, I want to start and stop SAP systems in Azure Center for SAP solutions so that I can control instances through the Virtual Instance for SAP resource.
# Start and stop SAP systems (preview)

[!INCLUDE [Preview content notice](./includes/preview.md)]
-In this how-to guide, you'll learn to start and stop your SAP systems through the *Virtual Instance for SAP solutions (VIS)* resource in *Azure Center for SAP solutions (ACSS)*.
+In this how-to guide, you'll learn to start and stop your SAP systems through the *Virtual Instance for SAP solutions (VIS)* resource in *Azure Center for SAP solutions*.
Through the Azure portal, you can start and stop:
## Prerequisites

-- An SAP system that you've [created in ACSS](prepare-network.md) or [registered with ACSS](register-existing-system.md).
+- An SAP system that you've [created in Azure Center for SAP solutions](prepare-network.md) or [registered with Azure Center for SAP solutions](register-existing-system.md).
- For the start operation to work, all virtual machines (VMs) inside the SAP system must be running. This capability starts or stops the SAP application instances, not the VMs that make up the SAP system resources.
- The `sapstartsrv` service must be running on all VMs related to the SAP system.
- For HA deployments, the HA interface cluster connector for SAP (`sap_vendor_cluster_connector`) must be installed on the ASCS instance. For more information, see the [SUSE connector specifications](https://www.suse.com/c/sap-netweaver-suse-cluster-integration-new-sap_suse_cluster_connector-version-3-0-0/) and [RHEL connector specifications](https://access.redhat.com/solutions/3606101).
center-sap-solutions View Cost Analysis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/center-sap-solutions/view-cost-analysis.md
Title: View post-deployment cost analysis in Azure Center for SAP solutions (preview)
-description: Learn how to view the cost of running an SAP system through the Virtual Instance for SAP solutions (VIS) resource in Azure Center for SAP solutions (ACSS).
+description: Learn how to view the cost of running an SAP system through the Virtual Instance for SAP solutions (VIS) resource in Azure Center for SAP solutions.
Previously updated : 09/23/2022 Last updated : 10/19/2022 #Customer intent: As an SAP Basis Admin, I want to understand the cost incurred for running SAP systems on Azure.
[!INCLUDE [Preview content notice](./includes/preview.md)]
-In this how-to guide, you'll learn how to view the running cost of your SAP systems through the *Virtual Instance for SAP solutions (VIS)* resource in *Azure Center for SAP solutions (ACSS)*.
+In this how-to guide, you'll learn how to view the running cost of your SAP systems through the *Virtual Instance for SAP solutions (VIS)* resource in *Azure Center for SAP solutions*.
After you deploy or register an SAP system as a VIS resource, you can [view the cost of running that SAP system on the VIS resource's page](#view-cost-analysis). This feature shows the post-deployment running costs in the context of your SAP system. When you have Azure resources of multiple SAP systems in a single resource group, you no longer need to analyze the cost for each system. Instead, you can easily view the system-level cost from the VIS resource.

## How does cost analysis work?
-When you deploy infrastructure for a new SAP system with ACSS or register an existing system with ACSS, the **costanalysis-parent** tag is added to all virtual machines (VMs), disks, and load balancers related to that SAP system. The cost is determined by the total cost of all the Azure resources in the system with the **costanalysis-parent** tag.
+When you deploy infrastructure for a new SAP system with Azure Center for SAP solutions or register an existing system with Azure Center for SAP solutions, the **costanalysis-parent** tag is added to all virtual machines (VMs), disks, and load balancers related to that SAP system. The cost is determined by the total cost of all the Azure resources in the system with the **costanalysis-parent** tag.
Whenever there are changes to the SAP system, such as the addition or removal of Application Server Instance VMs, tags are updated on the relevant Azure resources.

> [!NOTE]
cognitive-services Migrate Face Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/how-to/migrate-face-data.md
Title: "Migrate your face data across subscriptions - Face"
description: This guide shows you how to migrate your stored face data from one Face subscription to another. -+ Last updated 02/22/2021-+ ms.devlang: csharp
cognitive-services Intro To Spatial Analysis Public Preview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/intro-to-spatial-analysis-public-preview.md
Title: What is Spatial Analysis?
description: This document explains the basic concepts and features of the Azure Spatial Analysis container. -+ -+
cognitive-services Overview Image Analysis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/overview-image-analysis.md
For a more structured approach, follow a Learn module for Image Analysis.
You can analyze images to provide insights about their visual features and characteristics. All of the features in the list below are provided by the [Analyze Image](https://westcentralus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-2/operations/56f91f2e778daf14a499f21b) API. Follow a [quickstart](./quickstarts-sdk/image-analysis-client-library.md) to get started.
+### Extract text from images (preview)
+
+Version 4.0 preview of Image Analysis offers the ability to extract text from images. Contextual information like line number and position is also returned. Text reading is also available through the main [OCR service](overview-ocr.md), but in Image Analysis this feature is optimized for image inputs as opposed to documents. [Reading text in images](concept-ocr.md)
+
+### Detect people in images (preview)
+
+Version 4.0 preview of Image Analysis offers the ability to detect people appearing in images. The bounding box coordinates of each detected person are returned, along with a confidence score. [People detection](concept-people-detection.md)
### Tag visual features
Analyze the contents of an image to return the coordinates of the *area of inter
You can use Computer Vision to [detect adult content](concept-detecting-adult-content.md) in an image and return confidence scores for different classifications. The threshold for flagging content can be set on a sliding scale to accommodate your preferences.
-### Read text in images (preview)
-
-Version 4.0 of Image Analysis offers the ability to extract text from images. Contextual information like line number and position is also returned. Text reading is also available through the main [OCR service](overview-ocr.md), but in Image Analysis this feature is optimized for image inputs as opposed to documents. [Reading text in images](concept-ocr.md)
-
-### Detect people in images (preview)
-
-Version 4.0 of Image Analysis offers the ability to detect people appearing in images. The bounding box coordinates of each detected person are returned, along with a confidence score. [People detection](concept-people-detection.md)
-
## Image requirements

#### [Version 3.2](#tab/3-2)
cognitive-services Speech Container Howto https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/speech-container-howto.md
Use the [docker run](https://docs.docker.com/engine/reference/commandline/run/)
## Run the container in disconnected environments
-Starting in container version 3.0.0, select customers can run speech-to-text containers in an environment without internet accessibility. For more information, see [Run Cognitive Services containers in disconnected environments](../containers/disconnected-containers.md).
+You must request access to use containers disconnected from the internet. For more information, see [Request access to use containers in disconnected environments](../containers/disconnected-containers.md#request-access-to-use-containers-in-disconnected-environments).
-Starting in container version 2.0.0, select customers can run neural-text-to-speech containers in an environment without internet accessibility. For more information, see [Run Cognitive Services containers in disconnected environments](../containers/disconnected-containers.md).
+> [!NOTE]
+> For general container requirements, see [Container requirements and recommendations](#container-requirements-and-recommendations).
# [Speech-to-text](#tab/stt)
cognitive-services Cognitive Services Limited Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/cognitive-services-limited-access.md
Title: Limited Access features for Cognitive Services
description: Azure Cognitive Services that are available with Limited Access are described below. -+ Last updated 06/16/2022-+ # Limited Access features for Cognitive Services
cognitive-services Create Account Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/create-account-bicep.md
Title: Create an Azure Cognitive Services resource using Bicep | Microsoft Docs
description: Create an Azure Cognitive Service resource with Bicep. keywords: cognitive services, cognitive solutions, cognitive intelligence, cognitive artificial intelligence -+ Last updated 04/29/2022-+
cognitive-services Authoring https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/question-answering/how-to/authoring.md
Last updated 11/23/2021
The question answering Authoring API is used to automate common tasks like adding new question answer pairs, as well as creating, publishing, and maintaining projects/knowledge bases.

> [!NOTE]
-> Authoring functionality is available via the REST API and [Authoring SDK (preview)](/dotnet/api/overview/azure/ai.language.questionanswering-readme-pre). This article provides examples of using the REST API with cURL. For full documentation of all parameters and functionality available consult the [REST API reference content](/rest/api/cognitiveservices/questionanswering/question-answering-projects).
+> Authoring functionality is available via the REST API and [Authoring SDK (preview)](/dotnet/api/overview/azure/ai.language.questionanswering-readme). This article provides examples of using the REST API with cURL. For full documentation of all parameters and functionality available consult the [REST API reference content](/rest/api/cognitiveservices/questionanswering/question-answering-projects).
## Prerequisites
cognitive-services Power Virtual Agents https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/question-answering/tutorials/power-virtual-agents.md
In this tutorial, you learn how to:
> * Publish Power Virtual Agents
> * Test Power Virtual Agents, and receive an answer from your Question Answering project
-> [!Note]
+> [!NOTE]
> The QnA Maker service is being retired on the 31st of March, 2025. A newer version of the question and answering capability is now available as part of [Azure Cognitive Service for Language](/azure/cognitive-services/language-service/). For question answering capabilities within the Language Service, see [question answering](../overview.md). Starting 1st October, 2022 you won't be able to create new QnA Maker resources. For information on migrating existing QnA Maker knowledge bases to question answering, consult the [migration guide](../how-to/migrate-qnamaker.md).

## Create and publish a project
cognitive-services Call Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/text-analytics-for-health/how-to/call-api.md
By default, Text Analytics for health will use the latest available AI model on
| Supported Versions | latest version |
|--|--|
+| `2022-08-15-preview` | `2022-08-15-preview` |
| `2022-03-01` | `2022-03-01` |
| `2021-05-15` | `2021-05-15` |
The [Text Analytics for health container](use-containers.md) uses separate model
### Input languages
-Currently the Text Analytics for health hosted API only [supports](../language-support.md) the English language. Additional languages are currently in preview when deploying the API in a container, as detailed [under Text Analytics for health languages support](../language-support.md).
+Text Analytics for health supports English in addition to multiple languages that are currently in preview. You can use the hosted API or deploy the API in a container, as detailed [under Text Analytics for health languages support](../language-support.md).
## Submitting data
Analysis is performed upon receipt of the request. If you send a request using t
[!INCLUDE [asynchronous-result-availability](../../includes/async-result-availability.md)]
+## Submitting a Fast Healthcare Interoperability Resources (FHIR) request
+
+To receive your result using the **FHIR** structure, you must send the FHIR version in the API request body. You can also send the **document type** as a parameter to the FHIR API request body. If the request does not specify a document type, the value is set to none.
+
+| Parameter Name | Type | Value |
+|--|--|--|
+| fhirVersion | string | `4.0.1` |
+| documentType | string | `ClinicalTrial`, `Consult`, `DischargeSummary`, `HistoryAndPhysical`, `Imaging`, `None`, `Pathology`, `ProcedureNote`, `ProgressNote`|
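Putting the parameters above together with the request shape used in this service's other examples, a request body that asks for FHIR output might look like the following sketch. The placement of the `parameters` object inside the `Healthcare` task, and the sample document text, are assumptions for illustration; consult the REST API reference for the authoritative shape.

```json
{
  "analysisInput": {
    "documents": [
      {
        "id": "1",
        "language": "en",
        "text": "The patient was prescribed 200 mg of ibuprofen."
      }
    ]
  },
  "tasks": [
    {
      "taskName": "analyze 1",
      "kind": "Healthcare",
      "parameters": {
        "fhirVersion": "4.0.1",
        "documentType": "DischargeSummary"
      }
    }
  ]
}
```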
## Getting results from the feature

Depending on your API request, and the data you submit to Text Analytics for health, you will get:
cognitive-services Language Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/text-analytics-for-health/language-support.md
Use this article to learn which natural languages are supported by Text Analytic
## Hosted API Service
-The hosted API service supports English language, model version 03-01-2022.
+The hosted API service supports the English language with model version 03-01-2022. Model version 2022-08-15-preview also supports Spanish, French, German, Italian, Portuguese, and Hebrew, in addition to English.
+
+When structuring the API request, the relevant language tags must be added for these languages:
+
+```
+English - "en"
+Spanish - "es"
+French - "fr"
+German - "de"
+Italian - "it"
+Portuguese - "pt"
+Hebrew - "he"
+```
+```json
+
+{
+ "analysisInput": {
+ "documents": [
+ {
+ "text": "El médico prescrió 200 mg de ibuprofeno.",
+ "language": "es",
+ "id": "1"
+ }
+ ]
+ },
+ "tasks": [
+ {
+ "taskName": "analyze 1",
+ "kind": "Healthcare",
+ }
+ ]
+}
+```
## Docker container
cognitive-services Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/text-analytics-for-health/quickstart.md
zone_pivot_groups: programming-languages-text-analytics
# Quickstart: Using Text Analytics for health client library and REST API

> [!IMPORTANT]
-> Fast Healthcare Interoperability Resources (FHIR) structuring is available for preview using the Language REST API. The client libraries are not currently supported.
+> Fast Healthcare Interoperability Resources (FHIR) structuring is available for preview using the Language REST API. The client libraries are not currently supported. [Learn more](./how-to/call-api.md) on how to use FHIR structuring in your API call.
::: zone pivot="programming-language-csharp"
cognitive-services Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/whats-new.md
Azure Cognitive Service for Language is updated on an ongoing basis. To stay up-
* Expanded language support for: * [Sentiment analysis](./sentiment-opinion-mining/language-support.md) * [Key phrase extraction](./key-phrase-extraction/language-support.md)
- * [Named entity recognition](./key-phrase-extraction/language-support.md)
+ * [Named entity recognition](./named-entity-recognition/language-support.md)
+ * [Text Analytics for health](./text-analytics-for-health/language-support.md)
* [Multi-region deployment](./concepts/custom-features/multi-region-deployment.md) and [project asset versioning](./concepts/custom-features/project-versioning.md) for: * [Conversational language understanding](./conversational-language-understanding/overview.md) * [Orchestration workflow](./orchestration-workflow/overview.md)
Azure Cognitive Service for Language is updated on an ongoing basis. To stay up-
* [Conversational language understanding](./conversational-language-understanding/service-limits.md#regional-availability) * [Orchestration workflow](./orchestration-workflow/service-limits.md#regional-availability) * [Custom text classification](./custom-text-classification/service-limits.md#regional-availability)
- * [Custom named entity recognition](./custom-named-entity-recognition/service-limits.md#regional-availability)
+ * [Custom named entity recognition](./custom-named-entity-recognition/service-limits.md#regional-availability)
+* Document type as an input supported for [Text Analytics for health](./text-analytics-for-health/how-to/call-api.md) FHIR requests
## September 2022
cognitive-services Manage Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/manage-resources.md
Title: Recover deleted Cognitive Services resource
description: This article provides instructions on how to recover an already-deleted Cognitive Services resource. -+ Last updated 07/02/2021-+ # Recover deleted Cognitive Services resources
cognitive-services Responsible Use Of Ai Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/responsible-use-of-ai-overview.md
Title: Overview of Responsible use of AI
description: Azure Cognitive Services provides information and guidelines on how to responsibly use our AI services in applications. Below are the links to articles that provide this guidance for the different services within the Cognitive Services suite. -+ Last updated 1/10/2022-+ # Responsible use of AI with Cognitive Services
cognitive-services Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Cognitive Services
description: Lists Azure Policy Regulatory Compliance controls available for Azure Cognitive Services. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Last updated 10/12/2022 --++
communication-services Credentials Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/credentials-best-practices.md
leaveChatBtn.addEventListener('click', function() {
});
```
+If you want to cancel subsequent refresh tasks, [dispose](#clean-up-resources) of the Credential object.
+
### Clean up resources
-Since the Credential object can be passed to multiple Chat or Calling client instances, the SDK will make no assumptions about its lifetime and leaves the responsibility of its disposal to the developer. It's up to the Communication Services applications to dispose the Credential instance when it's no longer needed. Disposing the credential is also the recommended way of canceling scheduled refresh actions when the proactive refreshing is enabled.
+Since the Credential object can be passed to multiple Chat or Calling client instances, the SDK will make no assumptions about its lifetime and leaves the responsibility of its disposal to the developer. It's up to the Communication Services applications to dispose the Credential instance when it's no longer needed. Disposing the credential will also cancel scheduled refresh actions when the proactive refreshing is enabled.
Call the `.dispose()` function.
const chatClient = new ChatClient("<endpoint-url>", tokenCredential);
tokenCredential.dispose() ```
+## Handle a sign-out
+
+Depending on your scenario, you may want to sign a user out from one or more services:
+
+- To sign a user out from a single service, [dispose](#clean-up-resources) of the Credential object.
+- To sign a user out from multiple services, implement a signaling mechanism to notify all services to [dispose](#clean-up-resources) of the Credential object, and additionally, [revoke all access tokens](../quickstarts/access-tokens.md?tabs=windows&pivots=programming-language-javascript#revoke-access-tokens) for a given identity.
+
## Next steps
communication-services Manage Video https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/how-tos/calling-sdk/manage-video.md
Last updated 08/10/2021
-zone_pivot_groups: acs-web-ios-android
+zone_pivot_groups: acs-plat-web-ios-android-windows
#Customer intent: As a developer, I want to manage video calls with the acs sdks so that I can create a calling application that provides video capabilities.
Learn how to manage video calls with the Azure Communication Services SDKS. We'l
[!INCLUDE [Manage Video Calls iOS](./includes/manage-video/manage-video-ios.md)]
::: zone-end
+
## Next steps
- [Learn how to manage calls](./manage-calls.md)
- [Learn how to record calls](./record-calls.md)
confidential-computing Quick Create Marketplace https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/confidential-computing/quick-create-marketplace.md
If you don't have an Azure subscription, [create an account](https://azure.micro
1. Fill in the following information in the Basics tab:
- * **Authentication type**: Select **SSH public key** if you're creating a Linux VM.
+ * **Authentication type**: Select **SSH public key** if you're creating a Linux VM.
> [!NOTE]
> You have the choice of using an SSH public key or a Password for authentication. SSH is more secure. For instructions on how to generate an SSH key, see [Create SSH keys on Linux and Mac for Linux VMs in Azure](../virtual-machines/linux/mac-create-ssh-keys.md).
If you don't have an Azure subscription, [create an account](https://azure.micro
## Connect to the Linux VM
-If you already use a BASH shell, connect to the Azure VM using the **ssh** command. In the following command, replace the VM user name and IP address to connect to your Linux VM.
+Open your SSH client of choice, like Bash on Linux or PowerShell on Windows. The `ssh` command is typically included in Linux, macOS, and Windows. If you are using Windows 7 or older, where Win32 OpenSSH is not included by default, consider installing [WSL](/windows/wsl/about) or using [Azure Cloud Shell](../cloud-shell/overview.md) from the browser. In the following command, replace the VM user name and IP address to connect to your Linux VM.
```bash ssh azureadmin@40.55.55.555
You can find the Public IP address of your VM in the Azure portal, under the Ove
:::image type="content" source="media/quick-create-portal/public-ip-virtual-machine.png" alt-text="IP address in Azure portal":::
-If you're running on Windows and don't have a BASH shell, install an SSH client, such as PuTTY.
-
-1. [Download and install PuTTY](https://www.chiark.greenend.org.uk/~sgtatham/putty/download.html).
-
-1. Run PuTTY.
-
-1. On the PuTTY configuration screen, enter your VM's public IP address.
-
-1. Select **Open** and enter your username and password at the prompts.
-
-For more information about connecting to Linux VMs, see [Create a Linux VM on Azure using the Portal](../virtual-machines/linux/quick-create-portal.md).
-
-> [!NOTE]
-> If you see a PuTTY security alert about the server's host key not being cached in the registry, choose from the following options. If you trust this host, select **Yes** to add the key to PuTTy's cache and continue connecting. If you want to carry on connecting just once, without adding the key to the cache, select **No**. If you don't trust this host, select **Cancel** to abandon the connection.
-
## Intel SGX Drivers

> [!NOTE]
confidential-computing Quick Create Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/confidential-computing/quick-create-portal.md
If you don't have an Azure subscription, [create an account](https://azure.micro
## Connect to the Linux VM
-If you already use a BASH shell, connect to the Azure VM using the **ssh** command. In the following command, replace the VM user name and IP address to connect to your Linux VM.
+Open your SSH client of choice, like Bash on Linux or PowerShell on Windows. The `ssh` command is typically included in Linux, macOS, and Windows. If you are using Windows 7 or older, where Win32 OpenSSH is not included by default, consider installing [WSL](/windows/wsl/about) or using [Azure Cloud Shell](../cloud-shell/overview.md) from the browser. In the following command, replace the VM user name and IP address to connect to your Linux VM.
```bash ssh azureadmin@40.55.55.555
You can find the Public IP address of your VM in the Azure portal, under the Ove
:::image type="content" source="media/quick-create-portal/public-ip-virtual-machine.png" alt-text="IP address in Azure portal":::
-If you're running on Windows and don't have a BASH shell, install an SSH client, such as PuTTY.
-
-1. [Download and install PuTTY](https://www.chiark.greenend.org.uk/~sgtatham/putty/latest.html).
-
-1. Run PuTTY.
-
-1. On the PuTTY configuration screen, enter your VM's public IP address.
-
-1. Select **Open** and enter your username and password at the prompts.
For more information about connecting to Linux VMs, see [Create a Linux VM on Azure using the Portal](../virtual-machines/linux/quick-create-portal.md).
-> [!NOTE]
-> If you see a PuTTY security alert about the server's host key not being cached in the registry, choose from the following options. If you trust this host, select **Yes** to add the key to PuTTy's cache and continue connecting. If you want to carry on connecting just once, without adding the key to the cache, select **No**. If you don't trust this host, select **Cancel** to abandon the connection.
-
## Install Azure DCAP Client

> [!NOTE]
container-apps Environment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/environment.md
Previously updated : 12/05/2021 Last updated : 10/18/2022
Reasons to deploy container apps to the same environment include situations when
- Manage related services
- Deploy different applications to the same virtual network
-- Have applications communicate with each other using Dapr
+- Instrument Dapr applications that communicate via the Dapr service invocation API
- Have applications share the same Dapr configuration
- Have applications share the same log analytics workspace

Reasons to deploy container apps to different environments include situations when you want to ensure:

- Two applications never share the same compute resources
-- Two applications can't communicate with each other via Dapr
+- Two Dapr applications can't communicate via the Dapr service invocation API
## Logs
container-registry Container Registry Content Trust https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-content-trust.md
Important to any distributed system designed with security in mind is verifying
As an image publisher, content trust allows you to **sign** the images you push to your registry. Consumers of your images (people or systems pulling images from your registry) can configure their clients to pull *only* signed images. When an image consumer pulls a signed image, their Docker client verifies the integrity of the image. In this model, consumers are assured that the signed images in your registry were indeed published by you, and that they've not been modified since being published.
+> [!NOTE]
+> Azure Container Registry (ACR) does not support `acr import` to import images signed with Docker Content Trust (DCT). By design, the signatures are not visible after the import, and the notary v2 stores these signatures as artifacts.
+
### Trusted images

Content trust works with the **tags** in a repository. Image repositories can contain images with both signed and unsigned tags. For example, you might sign only the `myimage:stable` and `myimage:latest` images, but not `myimage:dev`.
cosmos-db Index Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/index-policy.md
In some situations, you may want to override this automatic behavior to better s
Azure Cosmos DB supports two indexing modes:

- **Consistent**: The index is updated synchronously as you create, update or delete items. This means that the consistency of your read queries will be the [consistency configured for the account](consistency-levels.md).
-- **None**: Indexing is disabled on the container. This is commonly used when a container is used as a pure key-value store without the need for secondary indexes. It can also be used to improve the performance of bulk operations. After the bulk operations are complete, the index mode can be set to Consistent and then monitored using the [IndexTransformationProgress](how-to-manage-indexing-policy.md#dotnet-sdk) until complete.
+- **None**: Indexing is disabled on the container. This mode is commonly used when a container is used as a pure key-value store without the need for secondary indexes. It can also be used to improve the performance of bulk operations. After the bulk operations are complete, the index mode can be set to Consistent and then monitored using the [IndexTransformationProgress](how-to-manage-indexing-policy.md#dotnet-sdk) until complete.
> [!NOTE]
> Azure Cosmos DB also supports a Lazy indexing mode. Lazy indexing performs updates to the index at a much lower priority level when the engine is not doing any other work. This can result in **inconsistent or incomplete** query results. If you plan to query an Azure Cosmos DB container, you should not select lazy indexing. New containers cannot select lazy indexing. You can request an exemption by contacting cosmoslazyindexing@microsoft.com (except if you are using an Azure Cosmos DB account in [serverless](serverless.md) mode which doesn't support lazy indexing).
-By default, indexing policy is set to `automatic`. It's achieved by setting the `automatic` property in the indexing policy to `true`. Setting this property to `true` allows Azure Cosmos DB to automatically index documents as they are written.
+By default, indexing policy is set to `automatic`. It's achieved by setting the `automatic` property in the indexing policy to `true`. Setting this property to `true` allows Azure Cosmos DB to automatically index documents as they're written.
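As a sketch of what such a policy looks like, the following shows an automatic, consistent indexing policy; the `_etag` exclusion mirrors what new containers commonly start with, so verify it against your own container's policy:

```json
{
  "indexingMode": "consistent",
  "automatic": true,
  "includedPaths": [
    { "path": "/*" }
  ],
  "excludedPaths": [
    { "path": "/\"_etag\"/?" }
  ]
}
```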
## <a id="index-size"></a>Index size
Taking the same example again:
- the path to anything under `headquarters` is `/headquarters/*`
-For example, we could include the `/headquarters/employees/?` path. This path would ensure that we index the employees property but would not index additional nested JSON within this property.
+For example, we could include the `/headquarters/employees/?` path. This path would ensure that we index the employees property but wouldn't index additional nested JSON within this property.
## Include/exclude strategy

Any indexing policy has to include the root path `/*` as either an included or an excluded path.

-- Include the root path to selectively exclude paths that don't need to be indexed. This is the recommended approach as it lets Azure Cosmos DB proactively index any new property that may be added to your model.
-- Exclude the root path to selectively include paths that need to be indexed.
+- Include the root path to selectively exclude paths that don't need to be indexed. This approach is recommended as it lets Azure Cosmos DB proactively index any new property that may be added to your model.
-- For paths with regular characters that include: alphanumeric characters and _ (underscore), you don't have to escape the path string around double quotes (for example, "/path/?"). For paths with other special characters, you need to escape the path string around double quotes (for example, "/\"path-abc\"/?"). If you expect special characters in your path, you can escape every path for safety. Functionally, it doesn't make any difference if you escape every path Vs just the ones that have special characters.
+- Exclude the root path to selectively include paths that need to be indexed. The partition key property path isn't indexed by default with the exclude strategy and should be explicitly included if needed.
+
+- For paths with regular characters that include: alphanumeric characters and _ (underscore), you don't have to escape the path string around double quotes (for example, "/path/?"). For paths with other special characters, you need to escape the path string around double quotes (for example, "/\"path-abc\"/?"). If you expect special characters in your path, you can escape every path for safety. Functionally, it doesn't make any difference if you escape every path or just the ones that have special characters.
- The system property `_etag` is excluded from indexing by default, unless the etag is added to the included path for indexing.
When including and excluding paths, you may encounter the following attributes:
- `precision` is a number defined at the index level for included paths. A value of `-1` indicates maximum precision. We recommend always setting this value to `-1`.
-- `dataType` can be either `String` or `Number`. This indicates the types of JSON properties which will be indexed.
+- `dataType` can be either `String` or `Number`. This indicates the types of JSON properties that will be indexed.
-It is no longer necessary to set these properties. When not specified, these properties will have the following default values:
+It's no longer necessary to set these properties. When not specified, these properties will have the following default values:
| **Property Name** | **Default Value** |
| -- | -- |
Here's an example:
**Excluded Path**: `/food/ingredients/*`
-In this case, the included path takes precedence over the excluded path because it is more precise. Based on these paths, any data in the `food/ingredients` path or nested within would be excluded from the index. The exception would be data within the included path: `/food/ingredients/nutrition/*`, which would be indexed.
+In this case, the included path takes precedence over the excluded path because it's more precise. Based on these paths, any data in the `food/ingredients` path or nested within would be excluded from the index. The exception would be data within the included path: `/food/ingredients/nutrition/*`, which would be indexed.
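Putting those paths together, a policy sketch for this example could look like the following (the root path is kept included so everything else stays indexed):

```json
{
  "indexingMode": "consistent",
  "automatic": true,
  "includedPaths": [
    { "path": "/*" },
    { "path": "/food/ingredients/nutrition/*" }
  ],
  "excludedPaths": [
    { "path": "/food/ingredients/*" }
  ]
}
```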
Here are some rules for included and excluded paths precedence in Azure Cosmos DB:
When you define a spatial path in the indexing policy, you should define which i
* LineString
-Azure Cosmos DB, by default, will not create any spatial indexes. If you would like to use spatial SQL built-in functions, you should create a spatial index on the required properties. See [this section](sql-query-geospatial-index.md) for indexing policy examples for adding spatial indexes.
+Azure Cosmos DB, by default, won't create any spatial indexes. If you would like to use spatial SQL built-in functions, you should create a spatial index on the required properties. See [this section](sql-query-geospatial-index.md) for indexing policy examples for adding spatial indexes.
## Composite indexes

Queries that have an `ORDER BY` clause with two or more properties require a composite index. You can also define a composite index to improve the performance of many equality and range queries. By default, no composite indexes are defined, so you should [add composite indexes](how-to-manage-indexing-policy.md#composite-index) as needed.
-Unlike with included or excluded paths, you can't create a path with the `/*` wildcard. Every composite path has an implicit `/?` at the end of the path that you don't need to specify. Composite paths lead to a scalar value and this is the only value that is included in the composite index.
+Unlike with included or excluded paths, you can't create a path with the `/*` wildcard. Every composite path has an implicit `/?` at the end of the path that you don't need to specify. Composite paths lead to a scalar value that is the only value included in the composite index.
When defining a composite index, you specify:
The following considerations are used when using composite indexes for queries with an `ORDER BY` clause with two or more properties:

-- If the composite index paths do not match the sequence of the properties in the `ORDER BY` clause, then the composite index can't support the query.
+- If the composite index paths don't match the sequence of the properties in the `ORDER BY` clause, then the composite index can't support the query.
- The order of composite index paths (ascending or descending) should also match the `order` in the `ORDER BY` clause.
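As a sketch, a composite index that supports `ORDER BY c.name ASC, c.age DESC` (and its exact reverse, `ORDER BY c.name DESC, c.age ASC`) could be defined like this:

```json
{
  "compositeIndexes": [
    [
      { "path": "/name", "order": "ascending" },
      { "path": "/age", "order": "descending" }
    ]
  ]
}
```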
You should customize your indexing policy so you can serve all necessary `ORDER
If a query has filters on two or more properties, it may be helpful to create a composite index for these properties.
-For example, consider the following query which has both an equality and range filter:
+For example, consider the following query that has both an equality and range filter:
```sql
SELECT *
FROM c
WHERE c.name = "John" AND c.age > 18
```
-This query will be more efficient, taking less time and consuming fewer RU's, if it is able to leverage a composite index on `(name ASC, age ASC)`.
+This query will be more efficient, taking less time and consuming fewer RUs, if it's able to leverage a composite index on `(name ASC, age ASC)`.
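A sketch of the `(name ASC, age ASC)` composite index mentioned above, in indexing-policy form:

```json
{
  "compositeIndexes": [
    [
      { "path": "/name", "order": "ascending" },
      { "path": "/age", "order": "ascending" }
    ]
  ]
}
```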
Queries with multiple range filters can also be optimized with a composite index. However, each individual composite index can only optimize a single range filter. Range filters include `>`, `<`, `<=`, `>=`, and `!=`. The range filter should be defined last in the composite index.
FROM c
WHERE c.name = "John" AND c.age > 18 AND c._ts > 1612212188
```
-This query will be more efficient with a composite index on `(name ASC, age ASC)` and `(name ASC, _ts ASC)`. However, the query would not utilize a composite index on `(age ASC, name ASC)` because the properties with equality filters must be defined first in the composite index. Two separate composite indexes are required instead of a single composite index on `(name ASC, age ASC, _ts ASC)` since each composite index can only optimize a single range filter.
+This query will be more efficient with a composite index on `(name ASC, age ASC)` and `(name ASC, _ts ASC)`. However, the query wouldn't utilize a composite index on `(age ASC, name ASC)` because the properties with equality filters must be defined first in the composite index. Two separate composite indexes are required instead of a single composite index on `(name ASC, age ASC, _ts ASC)` since each composite index can only optimize a single range filter.
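Both of those composite indexes can be sketched in a single indexing policy, with one entry per index:

```json
{
  "compositeIndexes": [
    [
      { "path": "/name", "order": "ascending" },
      { "path": "/age", "order": "ascending" }
    ],
    [
      { "path": "/name", "order": "ascending" },
      { "path": "/_ts", "order": "ascending" }
    ]
  ]
}
```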
The following considerations are used when creating composite indexes for queries with filters on multiple properties:

- Filter expressions can use multiple composite indexes.
-- The properties in the query's filter should match those in composite index. If a property is in the composite index but is not included in the query as a filter, the query will not utilize the composite index.
-- If a query has additional properties in the filter that were not defined in a composite index, then a combination of composite and range indexes will be used to evaluate the query. This will require fewer RU's than exclusively using range indexes.
+- The properties in the query's filter should match those in composite index. If a property is in the composite index but isn't included in the query as a filter, the query won't utilize the composite index.
+- If a query has other properties in the filter that aren't defined in a composite index, then a combination of composite and range indexes will be used to evaluate the query. This will require fewer RUs than exclusively using range indexes.
- If a property has a range filter (`>`, `<`, `<=`, `>=`, or `!=`), then this property should be defined last in the composite index. If a query has more than one range filter, it may benefit from multiple composite indexes.
- When creating a composite index to optimize queries with multiple filters, the `ORDER` of the composite index will have no impact on the results. This property is optional.
ORDER BY c.firstName, c.lastName
The following considerations apply when creating composite indexes to optimize a query with a filter and `ORDER BY` clause:
-* If you do not define a composite index on a query with a filter on one property and a separate `ORDER BY` clause using a different property, the query will still succeed. However, the RU cost of the query can be reduced with a composite index, particularly if the property in the `ORDER BY` clause has a high cardinality.
-* If the query filters on properties, these should be included first in the `ORDER BY` clause.
+* If you don't define a composite index on a query with a filter on one property and a separate `ORDER BY` clause using a different property, the query will still succeed. However, the RU cost of the query can be reduced with a composite index, particularly if the property in the `ORDER BY` clause has a high cardinality.
+* If the query filters on properties, these properties should be included first in the `ORDER BY` clause.
* If the query filters on multiple properties, the equality filters must be the first properties in the `ORDER BY` clause.
* If the query filters on multiple properties, you can have a maximum of one range filter or system function utilized per composite index. The property used in the range filter or system function should be defined last in the composite index.
* All considerations for creating composite indexes for `ORDER BY` queries with multiple properties as well as queries with filters on multiple properties still apply.
The following considerations apply when creating composite indexes to optimize a
* If the query filters on multiple properties, the equality filters must be the first properties in the composite index.
* You can have a maximum of one range filter per composite index, and it must be on the property in the aggregate system function.
* The property in the aggregate system function should be defined last in the composite index.
-* The `order` (`ASC` or `DESC`) does not matter.
+* The `order` (`ASC` or `DESC`) doesn't matter.
| **Composite Index** | **Sample Query** | **Supported by Composite Index?** |
| --- | --- | --- |
A container's indexing policy can be updated at any time [by using the Azure por
> [!NOTE]
> You can track the progress of index transformation in the Azure portal or [by using one of the SDKs](how-to-manage-indexing-policy.md).
-There is no impact to write availability during any index transformations. The index transformation uses your provisioned RUs but at a lower priority than your CRUD operations or queries.
+There's no impact to write availability during any index transformations. The index transformation uses your provisioned RUs but at a lower priority than your CRUD operations or queries.
-There is no impact to read availability when adding new indexed paths. Queries will only utilize new indexed paths once an index transformation is complete. In other words, when adding a new indexed paths, queries that benefit from that indexed path will have the same performance before and during the index transformation. After the index transformation is complete, the query engine will begin to use the new indexed paths.
+There's no impact to read availability when adding new indexed paths. Queries will only utilize new indexed paths once an index transformation is complete. In other words, when adding a new indexed path, queries that benefit from that indexed path will have the same performance before and during the index transformation. After the index transformation is complete, the query engine will begin to use the new indexed paths.
-When removing indexed paths, you should group all your changes into one indexing policy transformation. If you remove multiple indexes and do so in one single indexing policy change, the query engine provides consistent and complete results throughout the index transformation. However, if you remove indexes through multiple indexing policy changes, the query engine will not provide consistent or complete results until all index transformations complete. Most developers do not drop indexes and then immediately try to run queries that utilize these indexes so, in practice, this situation is unlikely.
+When removing indexed paths, you should group all your changes into one indexing policy transformation. If you remove multiple indexes and do so in one single indexing policy change, the query engine provides consistent and complete results throughout the index transformation. However, if you remove indexes through multiple indexing policy changes, the query engine won't provide consistent or complete results until all index transformations complete. Most developers don't drop indexes and then immediately try to run queries that utilize these indexes so, in practice, this situation is unlikely.
-When you drop an indexed path, the query engine will immediately stop using it and instead do a full scan.
+When you drop an indexed path, the query engine will immediately stop using it, and will do a full scan instead.
> [!NOTE]
> Where possible, you should always try to group multiple indexing changes into one single indexing policy modification.
When you drop an indexed path, the query engine will immediately stop using it a
Using the [Time-to-Live (TTL) feature](time-to-live.md) requires indexing. This means that:

-- it is not possible to activate TTL on a container where the indexing mode is set to `none`,
-- it is not possible to set the indexing mode to None on a container where TTL is activated.
+- it isn't possible to activate TTL on a container where the indexing mode is set to `none`,
+- it isn't possible to set the indexing mode to None on a container where TTL is activated.
For scenarios where no property path needs to be indexed, but TTL is required, you can use an indexing policy with an indexing mode set to `consistent`, no included paths, and `/*` as the only excluded path.
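That policy can be sketched as follows: the indexing mode stays `consistent` so TTL keeps working, while excluding `/*` means no property path is actually indexed:

```json
{
  "indexingMode": "consistent",
  "automatic": true,
  "includedPaths": [],
  "excludedPaths": [
    { "path": "/*" }
  ]
}
```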
cosmos-db How To Use Stored Procedures Triggers Udfs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/how-to-use-stored-procedures-triggers-udfs.md
The following code shows how to register a pre-trigger using the JavaScript SDK:
```javascript
const container = client.database("myDatabase").container("myContainer");
const triggerId = "trgPreValidateToDoItemTimestamp";
-await container.triggers.create({
+await container.scripts.triggers.create({
  id: triggerId,
  body: require(`../js/${triggerId}`),
  triggerOperation: "create",
The following code shows how to register a post-trigger using the JavaScript SDK
```javascript const container = client.database("myDatabase").container("myContainer"); const triggerId = "trgPostUpdateMetadata";
-await container.triggers.create({
+await container.scripts.triggers.create({
  id: triggerId,
  body: require(`../js/${triggerId}`),
  triggerOperation: "create",
cosmos-db Javascript Query Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/javascript-query-api.md
# JavaScript query API in Azure Cosmos DB [!INCLUDE[NoSQL](../includes/appliesto-nosql.md)]
-In addition to issuing queries using the API for NoSQL in Azure Cosmos DB, the [Azure Cosmos DB server-side SDK](https://github.com/Azure/azure-cosmosdb-js-server/) provides a JavaScript interface for performing optimized queries in Azure Cosmos DB Stored Procedures and Triggers. You don't have to be aware of the SQL language to use this JavaScript interface. The JavaScript query API allows you to programmatically build queries by passing predicate functions into sequence of function calls, with a syntax familiar to ECMAScript5's array built-ins and popular JavaScript libraries like Lodash. Queries are parsed by the JavaScript runtime and efficiently executed using Azure Cosmos DB indices.
+In addition to issuing queries using the API for NoSQL in Azure Cosmos DB, the [Azure Cosmos DB server-side SDK](https://github.com/Azure/azure-cosmosdb-js-server/) provides a JavaScript interface for performing optimized queries in Azure Cosmos DB Stored Procedures and Triggers. You don't have to be aware of the SQL language to use this JavaScript interface. The JavaScript query API allows you to programmatically build queries by passing predicate functions into a sequence of function calls, with a syntax similar to ECMAScript5's array built-ins and popular JavaScript libraries like Lodash. Queries are parsed by the JavaScript runtime and efficiently executed using Azure Cosmos DB indices.
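As a rough analogy, the chained style of the JavaScript query API mirrors plain ECMAScript5 array built-ins. The sketch below uses plain Node.js arrays (not the server-side SDK itself, where the chain would be written against the built-in `__` object inside a stored procedure):

```javascript
// Plain ES5 array built-ins; the server-side query API chains in the same style,
// e.g. __.chain().filter(fn).map(fn).value() inside a stored procedure.
var docs = [
  { name: "apple", price: 1 },
  { name: "banana", price: 2 },
  { name: "cherry", price: 6 }
];

// Filter with a predicate function, then project with a map function.
var names = docs
  .filter(function (doc) { return doc.price < 5; })
  .map(function (doc) { return doc.name; });

console.log(names); // [ 'apple', 'banana' ]
```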
## Supported JavaScript functions
cosmos-db Getting Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/query/getting-started.md
Here are some examples of how to do **Point reads** with each SDK:
- [Java SDK](/java/api/com.azure.cosmos.cosmoscontainer.readitem#com-azure-cosmos-cosmoscontainer-(t)readitem(java-lang-string-com-azure-cosmos-models-partitionkey-com-azure-cosmos-models-cosmositemrequestoptions-java-lang-class(t)))
- [Node.js SDK](/javascript/api/@azure/cosmos/item#@azure-cosmos-item-read)
- [Python SDK](/python/api/azure-cosmos/azure.cosmos.containerproxy#azure-cosmos-containerproxy-read-item)
+- [Go SDK](https://pkg.go.dev/github.com/Azure/azure-sdk-for-go/sdk/data/azcosmos#ContainerClient.ReadItem)
**SQL queries** - You can query data by writing queries using the Structured Query Language (SQL) as a JSON query language. Queries always cost at least 2.3 request units and, in general, will have a higher and more variable latency than point reads. Queries can return many items.
Here are some examples of how to do **SQL queries** with each SDK:
- [Java SDK](../samples-java.md#query-examples)
- [Node.js SDK](../samples-nodejs.md#item-examples)
- [Python SDK](../samples-python.md#item-examples)
+- [Go SDK](../samples-go.md#item-examples)
The remainder of this doc shows how to get started writing SQL queries in Azure Cosmos DB. SQL queries can be run through either the SDK or Azure portal.
cosmos-db Samples Go https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/samples-go.md
Sample solutions that do CRUD operations and other common operations on Azure Co
## Database examples
-The [cosmos_client.go](https://github.com/Azure/azure-sdk-for-go/blob/sdk/dat) conceptual article.
+To learn about the Azure Cosmos DB databases before running the following samples, see [Working with databases, containers, and items](../account-databases-containers-items.md) conceptual article.
| Task | API reference |
| --- | --- |
The [cosmos_client.go](https://github.com/Azure/azure-sdk-for-go/blob/sdk/data/a
## Container examples
-The [cosmos_database.go](https://github.com/Azure/azure-sdk-for-go/blob/sdk/dat) conceptual article.
+To learn about the Azure Cosmos DB collections before running the following samples, see [Working with databases, containers, and items](../account-databases-containers-items.md) conceptual article.
| Task | API reference |
| --- | --- |
cosmos-db Tutorial Deploy App Bicep Aks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/tutorial-deploy-app-bicep-aks.md
+
+ Title: 'Tutorial: Deploy an ASP.NET web application using Azure Cosmos DB for NoSQL, managed identity, and Azure Kubernetes Service using Bicep'
+description: Deploy an ASP.NET MVC web application with Azure Cosmos DB for NoSQL, managed identity, and Azure Kubernetes Service using Bicep.
+ Last updated : 10/17/2022
+# Tutorial: Deploy an ASP.NET web application using Azure Cosmos DB for NoSQL, managed identity, and Azure Kubernetes Service using Bicep
++
+In this tutorial, you'll deploy a reference ASP.NET web application on an Azure Kubernetes Service (AKS) cluster that connects to Azure Cosmos DB for NoSQL.
+
+**[Azure Cosmos DB](../introduction.md)** is a fully managed distributed database platform for modern application development with NoSQL or relational databases.
+
+**[Azure Kubernetes Service (AKS)](../../aks/intro-kubernetes.md)** is a managed Kubernetes service that lets you quickly deploy and manage clusters.
+
+> [!IMPORTANT]
+>
+> - This article requires the latest version of Azure CLI. For more information, see [install Azure CLI](/cli/azure/install-azure-cli). If you are using the Azure Cloud Shell, the latest version is already installed.
+> - This article also requires the latest version of the Bicep CLI within Azure CLI. For more information, see [install Bicep tools](../../azure-resource-manager/bicep/install.md#azure-cli).
+> - If you are running the commands in this tutorial locally instead of in the Azure Cloud Shell, ensure you run the commands using an administrator account.
+>
+
+## Prerequisites
+
+The following tools are required to compile the ASP.NET web application and create its container image.
+
+- [Docker Desktop](https://docs.docker.com/desktop/)
+- [Visual Studio Code](https://code.visualstudio.com/)
+ - [C# extension for Visual Studio Code](https://marketplace.visualstudio.com/items?itemName=ms-dotnettools.csharp)
+ - [Docker extension for Visual Studio Code](https://code.visualstudio.com/docs/containers/overview)
+ - [Azure Account extension for Visual Studio Code](https://marketplace.visualstudio.com/items?itemName=ms-vscode.azure-account)
+
+## Overview
+
+This tutorial uses an [Infrastructure as Code (IaC)](/devops/deliver/what-is-infrastructure-as-code) approach to deploy the resources to Azure. We'll use **[Bicep](../../azure-resource-manager/bicep/overview.md)**, which is a new declarative language that offers the same capabilities as [ARM templates](../../azure-resource-manager/templates/overview.md). However, Bicep includes a syntax that is more concise and easier to use.
+
+The Bicep modules will deploy the following Azure resources within the targeted subscription scope.
+
+1. A [resource group](../../azure-resource-manager/management/overview.md#resource-groups) to organize the resources
+1. A [managed identity](../../active-directory/managed-identities-azure-resources/overview.md) for authentication
+1. An [Azure Container Registry (ACR)](../../container-registry/container-registry-intro.md) for storing container images
+1. An [Azure Kubernetes Service (AKS)](../../aks/intro-kubernetes.md) cluster
+1. An [Azure Virtual Network (VNET)](../../virtual-network/network-overview.md) required for configuring AKS
+1. An [Azure Cosmos DB for NoSQL account](../introduction.md) along with a database, container, and the [SQL role](/cli/azure/cosmosdb/sql/role)
+1. An [Azure Key Vault](../../key-vault/general/overview.md) to store secure keys
+1. (Optional) An [Azure Log Analytics workspace](../../azure-monitor/logs/log-analytics-overview.md)
+
+This tutorial uses the following security best practices with Azure Cosmos DB.
+
+1. Implements access control using [role-based access control](../../role-based-access-control/overview.md) and [managed identity](../../active-directory/managed-identities-azure-resources/overview.md). These features eliminate the need for developers to manage secrets, credentials, certificates, and keys used to secure communication between services.
+1. Limits Azure Cosmos DB access to the AKS subnet by [configuring a virtual network service endpoint](../how-to-configure-vnet-service-endpoint.md).
+1. Sets `disableLocalAuth = true` in the **databaseAccount** resource to [enforce role-based access control as the only authentication method](../how-to-setup-rbac.md#disable-local-auth).
+
+> [!TIP]
+> The steps in this tutorial use [Azure Cosmos DB for NoSQL](./quickstart-dotnet.md). However, the same concepts can also be applied to **[Azure Cosmos DB for MongoDB](../mongodb/introduction.md)**.
+
+## Download the Bicep modules
+
+Download or [clone](https://docs.github.com/repositories/creating-and-managing-repositories/cloning-a-repository) the Bicep modules from the **Bicep** folder of the [azure-samples/cosmos-aks-samples](https://github.com/Azure-Samples/cosmos-aks-samples/tree/main/Bicep) GitHub repository.
+
+```bash
+git clone https://github.com/Azure-Samples/cosmos-aks-samples.git
+
+cd cosmos-aks-samples/Bicep/
+```
+
+## Connect to your Azure subscription
+
+Use [`az login`](/cli/azure/authenticate-azure-cli) to connect to your default Azure subscription.
+
+```azurecli
+az login
+```
+
+Optionally, use [`az account set`](/cli/azure/account#az-account-set) with the name or ID of a specific subscription to set the active subscription if you have multiple subscriptions.
+
+```azurecli
+az account set \
+ --subscription <subscription-id>
+```
+
+## Initialize the deployment parameters
+
+Create a **param.json** file by using the JSON in this example. Replace the `{resource group name}`, `{Azure Cosmos DB account name}`, and `{Azure Container Registry instance name}` placeholders with your own values for resource group name, Azure Cosmos DB account name, and Azure Container Registry instance name respectively.
+
+> [!IMPORTANT]
+> All resource names used in the steps below should be compliant with the **[naming rules and restrictions for Azure resources](../../azure-resource-manager/management/resource-name-rules.md)**. Also ensure that the placeholder values are replaced consistently and match the values supplied in **param.json**.
+
+```json
+{
+ "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentParameters.json#",
+ "contentVersion": "1.0.0.0",
+ "parameters": {
+ "rgName": {
+ "value": "{resource group name}"
+ },
+ "cosmosName" :{
+ "value": "{Azure Cosmos DB account name}"
+ },
+ "acrName" :{
+ "value": "{Azure Container Registry instance name}"
+ }
+ }
+}
+```
+
+## Create a Bicep deployment
+
+Set shell variables using these commands, replacing the `{deployment name}` and `{location}` placeholders with your own values.
+
+```bash
+deploymentName='{deployment name}' # Name of the Deployment
+location='{location}' # Location for deploying the resources
+```
+
+Within the **Bicep** folder, use [`az deployment sub create`](/cli/azure/deployment/sub#az-deployment-sub-create) to deploy the template to the current subscription scope.
+
+```azurecli
+az deployment sub create \
+ --name $deploymentName \
+ --location $location \
+ --template-file main.bicep \
+ --parameters @param.json
+```
+
+During deployment, the console will output a message indicating that the deployment is still running:
+
+```output
+ / Running ..
+```
+
+The deployment can take around 20 to 30 minutes. Once provisioning is complete, the console will output JSON with `Succeeded` as the provisioning state.
+
+```output
+ }
+ ],
+ "provisioningState": "Succeeded",
+ "templateHash": "0000000000000000",
+ "templateLink": null,
+ "timestamp": "2022-01-01T00:00:00.000000+00:00",
+ "validatedResources": null
+ },
+ "tags": null,
+ "type": "Microsoft.Resources/deployments"
+}
+```
+
+You can also see the deployment status in the resource group.
++
+> [!NOTE]
+> When creating an AKS cluster, a second resource group is automatically created to store the AKS resources. For more information, see [why are two resource groups created with AKS?](../../aks/faq.md#why-are-two-resource-groups-created-with-aks)
+
+## Link Azure Container Registry with AKS
+
+Replace the `{Azure Container Registry instance name}` and `{resource group name}` placeholders with your own values.
+
+```bash
+acrName='{Azure Container Registry instance name}'
+rgName='{resource group name}'
+aksName=$rgName'aks'
+```
+
+Run [`az aks update`](/cli/azure/aks#az-aks-update) to attach the existing ACR resource with the AKS cluster.
+
+```azurecli
+az aks update \
+ --resource-group $rgName \
+ --name $aksName \
+ --attach-acr $acrName
+```
+
+## Connect to the AKS cluster
+
+To manage a Kubernetes cluster, you use [kubectl](https://kubernetes.io/docs/reference/kubectl/overview/), the Kubernetes command-line client. If you use Azure Cloud Shell, `kubectl` is already installed. To install `kubectl` locally, use [`az aks install-cli`](/cli/azure/aks#az-aks-install-cli):
+
+```azurecli
+az aks install-cli
+```
+
+To configure `kubectl` to connect to your Kubernetes cluster, use [`az aks get-credentials`](/cli/azure/aks#az-aks-get-credentials). This command downloads credentials and configures the Kubernetes CLI to use them.
+
+```azurecli
+az aks get-credentials \
+ --resource-group $rgName \
+ --name $aksName
+```
+
+## Connect the AKS pods to Azure Key Vault
+
+Azure Active Directory (Azure AD) pod-managed identities use AKS primitives to associate managed identities for Azure resources and identities in Azure AD with pods. We'll use these identities to grant access to the Azure Key Vault Secrets Provider for Secrets Store CSI driver.
+
+Use the following command to find the value of the Tenant ID (`homeTenantId`):
+
+```azurecli
+az account show
+```
+
+Use this YAML template to create a **secretproviderclass.yml** file. Make sure to replace the `{Tenant Id}` and `{resource group name}` placeholders with your own values, and ensure that the resource group name matches the value supplied in **param.json**.
+
+```yml
+# This is a SecretProviderClass example using aad-pod-identity to access the key vault
+apiVersion: secrets-store.csi.x-k8s.io/v1
+kind: SecretProviderClass
+metadata:
+ name: azure-kvname-podid
+spec:
+ provider: azure
+ parameters:
+ usePodIdentity: "true"
+ keyvaultName: "{resource group name}kv" # Replace resource group name. Key Vault name is generated by Bicep
+ tenantId: "{Tenant Id}" # The tenant ID of your account, use 'homeTenantId' attribute value from the 'az account show' command output
+```
+
+## Apply the SecretProviderClass to the AKS cluster
+
+Use [`kubectl apply`](https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#apply) to install the Secrets Store CSI Driver using the YAML.
+
+```bash
+kubectl apply \
+ --filename secretproviderclass.yml
+```
+
+## Build the ASP.NET web application
+
+Download or clone the web application source code from the **Application** folder of the [azure-samples/cosmos-aks-samples](https://github.com/Azure-Samples/cosmos-aks-samples/tree/main/Application) GitHub repository.
+
+```bash
+git clone https://github.com/Azure-Samples/cosmos-aks-samples.git
+
+cd Application/
+```
+
+Open the **Application** folder in **Visual Studio Code**. Run the application by using either **F5** or the **Debug: Start Debugging** command.
+
+## Push the Docker container image to Azure Container Registry
+
+1. To create a container image from the Explorer tab on **Visual Studio Code**, open the context menu on the **Dockerfile** and select **Build Image...**. You'll then get a prompt asking for the name and version to tag the image. Enter the name `todo:latest`.
+
+ :::image type="content" source="./media/tutorial-deploy-app-bicep-aks/context-menu-build-docker-image.png" alt-text="Screenshot of the context menu in Visual Studio Code with the Build Image option selected.":::
+
+1. Use the Docker pane to push the built image to ACR. You'll find the built image under the **Images** node. Open the `todo` node, open the context menu for the `latest` tag, and then select **Push...**.
+
+1. You'll then get prompts to select your Azure subscription, ACR resource, and image tags. The image tag format should be `{acrname}.azurecr.io/todo:latest`.
+
+ :::image type="content" source="./media/tutorial-deploy-app-bicep-aks/context-menu-push-docker-image.png" alt-text="Screenshot of the context menu in Visual Studio Code with the Push option selected.":::
+
+1. Wait for **Visual Studio Code** to push the container image to ACR.
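
If you prefer the command line over Visual Studio Code, `az acr build` can build and push the image in one step. Either way, the full tag must follow the `{acrname}.azurecr.io/todo:latest` shape; here's a small shell sanity check (the ACR name is hypothetical):

```shell
# Hypothetical tag built from an example ACR name
acrName="myacrname"
tag="${acrName}.azurecr.io/todo:latest"

# Validate the {acrname}.azurecr.io/{image}:{version} shape before pushing
if echo "$tag" | grep -qE '^[a-z0-9]+\.azurecr\.io/[a-z0-9._-]+:[A-Za-z0-9._-]+$'; then
  echo "tag looks valid: $tag"
else
  echo "tag is malformed: $tag" >&2
fi
```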
+
+## Prepare Deployment YAML
+
+Use this YAML template to create an **akstododeploy.yml** file. Make sure to replace the values for `{ACR name}`, `{Image name}`, `{Version}`, and `{resource group name}` placeholders.
+
+```yml
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+  name: todo
+  labels:
+    aadpodidbinding: "cosmostodo-apppodidentity"
+    app: todo
+spec:
+  replicas: 2
+  selector:
+    matchLabels:
+      app: todo
+  template:
+    metadata:
+      labels:
+        app: todo
+        aadpodidbinding: "cosmostodo-apppodidentity"
+    spec:
+      containers:
+      - name: mycontainer
+        image: "{ACR name}/{Image name}:{Version}" # update as per your environment, example myacrname.azurecr.io/todo:latest. Do NOT add https:// in ACR Name
+        ports:
+        - containerPort: 80
+        env:
+        - name: KeyVaultName
+          value: "{resource group name}kv" # Replace resource group name. Key Vault name is generated by Bicep
+        volumeMounts:
+        - name: secrets-store01-inline
+          mountPath: "/mnt/secrets-store"
+          readOnly: true
+      nodeSelector:
+        kubernetes.io/os: linux
+      volumes:
+      - name: secrets-store01-inline
+        csi:
+          driver: secrets-store.csi.k8s.io
+          readOnly: true
+          volumeAttributes:
+            secretProviderClass: "azure-kvname-podid"
+
+---
+
+kind: Service
+apiVersion: v1
+metadata:
+  name: todo
+spec:
+  selector:
+    app: todo
+    aadpodidbinding: "cosmostodo-apppodidentity"
+  type: LoadBalancer
+  ports:
+  - protocol: TCP
+    port: 80
+    targetPort: 80
+```
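
Before applying the file, it's worth confirming that no `{placeholder}` tokens remain. A minimal sketch, shown here against a one-line stand-in file that still contains an unresolved placeholder:

```shell
# Stand-in deployment file with one unresolved placeholder
cat > akstododeploy.yml <<'EOF'
        image: "{ACR name}/todo:latest"
EOF

# List any remaining {placeholder} tokens before running kubectl apply
if grep -nE '\{[^}]+\}' akstododeploy.yml; then
  echo "fix the placeholders above before applying" >&2
fi
```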
+
+## Apply deployment YAML
+
+Use `kubectl apply` again to deploy the application pods and expose the pods via a load balancer.
+
+```bash
+kubectl apply \
+ --filename akstododeploy.yml \
+ --namespace 'my-app'
+```
+
+## Test the application
+
+When the application runs, a Kubernetes service exposes the application front end to the internet. This process can take a few minutes to complete.
+
+Use [`kubectl get`](https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#get) to view the external IP exposed by the load balancer.
+
+```bash
+kubectl get services \
+ --namespace "my-app"
+```
+
+Open the IP received as output in a browser to access the application.
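
You can also pull the external IP out of the `kubectl get services` output with `awk`. This is a sketch against sample (hypothetical) output; the `todo` service name comes from the deployment YAML applied earlier:

```shell
# Hypothetical output saved from `kubectl get services --namespace my-app`
cat > services.txt <<'EOF'
NAME   TYPE           CLUSTER-IP    EXTERNAL-IP    PORT(S)        AGE
todo   LoadBalancer   10.0.10.200   20.62.100.10   80:30000/TCP   5m
EOF

# Print the EXTERNAL-IP column for the todo service
awk '$1 == "todo" { print $4 }' services.txt
```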
+
+## Clean up the resources
+
+To avoid Azure charges, you should clean up unneeded resources when the cluster is no longer needed. Use [`az group delete`](/cli/azure/group#az-group-delete) and [`az deployment sub delete`](/cli/azure/deployment/sub#az-deployment-sub-delete) to delete the resource group and subscription deployment respectively.
+
+```azurecli
+az group delete \
+  --resource-group $rgName \
+  --yes
+
+az deployment sub delete \
+ --name $deploymentName
+```
+
+## Next steps
+
+- Learn how to [Develop a web application with Azure Cosmos DB](./tutorial-dotnet-web-app.md)
+- Learn how to [Query Azure Cosmos DB for NoSQL](./tutorial-query.md)
+- Learn how to [upgrade your cluster](../../aks/tutorial-kubernetes-upgrade-cluster.md)
+- Learn how to [scale your cluster](../../aks/tutorial-kubernetes-scale.md)
+- Learn how to [enable continuous deployment](../../aks/deployment-center-launcher.md)
cosmos-db Howto Ingest Azure Blob Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/postgresql/howto-ingest-azure-blob-storage.md
+
+ Title: Data ingestion with Azure Blob Storage - Azure Cosmos DB for PostgreSQL
+description: How to ingest data using Azure Blob Storage as a staging area
+ Last updated : 10/19/2022
+# How to ingest data using Azure Blob Storage
+[Azure Blob Storage](https://azure.microsoft.com/services/storage/blobs/#features) (ABS) is a cloud-native, scalable, durable, and secure storage service. These characteristics make ABS a good choice for storing and moving existing data into the cloud.
+
+This article shows how to use the pg_azure_storage PostgreSQL extension to
+manipulate and load data into your Azure Cosmos DB for PostgreSQL cluster
+directly from Azure Blob Storage.
+
+## Prepare database and blob storage
+
+To load data from Azure Blob Storage, install the `pg_azure_storage` PostgreSQL
+extension in your database:
+
+```sql
+SELECT * FROM create_extension('azure_storage');
+```
+
+We've prepared a public demonstration dataset for this article. To use your own
+dataset, follow [migrate your on-premises data to cloud
+storage](../../storage/common/storage-use-azcopy-migrate-on-premises-data.md)
+to learn how to get your datasets efficiently into Azure Blob Storage.
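
As the loading examples later in this article show, pg_azure_storage decompresses gzip files transparently, so compressing CSV files before uploading them saves bandwidth and storage. A minimal local sketch with a hypothetical one-row file:

```shell
# Hypothetical one-row users file
cat > users.csv <<'EOF'
21,https://api.github.com/users/technoweenie,technoweenie
EOF

# Compress before upload; --keep retains the original file
gzip --keep users.csv

# Confirm the archive is intact
gzip --test users.csv.gz && echo "users.csv.gz is valid"
```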
+
+> [!NOTE]
+>
+> Selecting **Container (anonymous read access for containers and blobs)** lets you ingest files from Azure Blob Storage using their public URLs, and enumerate the container contents, without configuring an account key in pg_azure_storage. Containers set to the **Private (no anonymous access)** or **Blob (anonymous read access for blobs only)** access level require an access key.
+
+## List container contents
+
+There's a demonstration Azure Blob Storage account and container pre-created for this how-to. The container's name is `github`, and it's in the `pgquickstart` account. We can easily see which files are present in the container by using the `azure_storage.blob_list(account, container)` function.
+
+```sql
+SELECT path, bytes, pg_size_pretty(bytes), content_type
+ FROM azure_storage.blob_list('pgquickstart','github');
+```
+
+```
+-[ RECORD 1 ]--+-
+path | events.csv.gz
+bytes | 41691786
+pg_size_pretty | 40 MB
+content_type | application/x-gzip
+-[ RECORD 2 ]--+-
+path | users.csv.gz
+bytes | 5382831
+pg_size_pretty | 5257 kB
+content_type | application/x-gzip
+```
+
+You can filter the output either by using a regular SQL `WHERE` clause, or by using the `prefix` parameter of the `blob_list` UDF. The latter will filter the returned rows on the Azure Blob Storage side.
+
+> [!NOTE]
+>
+> Listing container contents requires an account and access key or a container with enabled anonymous access.
+
+```sql
+SELECT * FROM azure_storage.blob_list('pgquickstart','github','e');
+```
+
+```
+-[ RECORD 1 ]-+
+path | events.csv.gz
+bytes | 41691786
+last_modified | 2022-10-12 18:49:51+00
+etag | 0x8DAAC828B970928
+content_type | application/x-gzip
+content_encoding |
+content_hash | 473b6ad25b7c88ff6e0a628889466aed
+```
+
+```sql
+SELECT *
+ FROM azure_storage.blob_list('pgquickstart','github')
+ WHERE path LIKE 'e%';
+```
+
+```
+-[ RECORD 1 ]-+
+path | events.csv.gz
+bytes | 41691786
+last_modified | 2022-10-12 18:49:51+00
+etag | 0x8DAAC828B970928
+content_type | application/x-gzip
+content_encoding |
+content_hash | 473b6ad25b7c88ff6e0a628889466aed
+```
+
+## Load data from ABS
+
+### Load data with the COPY command
+
+Start by creating a sample schema.
+
+```sql
+CREATE TABLE github_users
+(
+ user_id bigint,
+ url text,
+ login text,
+ avatar_url text,
+ gravatar_id text,
+ display_login text
+);
+
+CREATE TABLE github_events
+(
+ event_id bigint,
+ event_type text,
+ event_public boolean,
+ repo_id bigint,
+ payload jsonb,
+ repo jsonb,
+ user_id bigint,
+ org jsonb,
+ created_at timestamp
+);
+
+CREATE INDEX event_type_index ON github_events (event_type);
+CREATE INDEX payload_index ON github_events USING GIN (payload jsonb_path_ops);
+
+SELECT create_distributed_table('github_users', 'user_id');
+SELECT create_distributed_table('github_events', 'user_id');
+```
+
+Loading data into the tables becomes as simple as calling the `COPY` command.
+
+```sql
+-- download users and store in table
+
+COPY github_users
+FROM 'https://pgquickstart.blob.core.windows.net/github/users.csv.gz';
+
+-- download events and store in table
+
+COPY github_events
+FROM 'https://pgquickstart.blob.core.windows.net/github/events.csv.gz';
+```
+
+Notice how the extension recognized that the URLs provided to the `COPY` command are from Azure Blob Storage, and that the files we pointed to were gzip-compressed, which it also handled automatically for us.
+
+The `COPY` command supports more parameters and formats. In the above example, the format and compression were selected automatically based on the file extensions. You can, however, provide the format directly, similar to the regular `COPY` command.
+
+```sql
+COPY github_users
+FROM 'https://pgquickstart.blob.core.windows.net/github/users.csv.gz'
+WITH (FORMAT 'csv');
+```
+
+Currently the extension supports the following file formats:
+
+|format|description|
+||--|
+|csv|Comma-separated values format used by PostgreSQL COPY|
+|tsv|Tab-separated values, the default PostgreSQL COPY format|
+|binary|Binary PostgreSQL COPY format|
+|text|A file containing a single text value (for example, large JSON or XML)|
+
+### Load data with blob_get()
+
+The `COPY` command is convenient but limited in flexibility. Internally, `COPY` uses the `blob_get` function, which you can use directly to manipulate data in much more complex scenarios.
+
+```sql
+SELECT *
+ FROM azure_storage.blob_get(
+ 'pgquickstart', 'github',
+ 'users.csv.gz', NULL::github_users
+ )
+ LIMIT 3;
+```
+
+```
+-[ RECORD 1 ]-+--
+user_id | 21
+url | https://api.github.com/users/technoweenie
+login | technoweenie
+avatar_url | https://avatars.githubusercontent.com/u/21?
+gravatar_id |
+display_login | technoweenie
+-[ RECORD 2 ]-+--
+user_id | 22
+url | https://api.github.com/users/macournoyer
+login | macournoyer
+avatar_url | https://avatars.githubusercontent.com/u/22?
+gravatar_id |
+display_login | macournoyer
+-[ RECORD 3 ]-+--
+user_id | 38
+url | https://api.github.com/users/atmos
+login | atmos
+avatar_url | https://avatars.githubusercontent.com/u/38?
+gravatar_id |
+display_login | atmos
+```
+
+> [!NOTE]
+>
+> In the above query, the file is fully fetched before `LIMIT 3` is applied.
+
+With this function, you can manipulate data on the fly in complex queries and do imports as `INSERT ... SELECT`.
+
+```sql
+INSERT INTO github_users
+ SELECT user_id, url, UPPER(login), avatar_url, gravatar_id, display_login
+ FROM azure_storage.blob_get('pgquickstart', 'github', 'users.csv.gz', NULL::github_users)
+ WHERE gravatar_id IS NOT NULL;
+```
+
+```
+INSERT 0 264308
+```
+
+In the above command, we filtered the data to accounts with a `gravatar_id` present and uppercased their logins on the fly.
+
+#### Options for blob_get()
+
+In some situations, you may need to control exactly what `blob_get` attempts to do by using the `decoder`, `compression` and `options` parameters.
+
+Decoder can be set to `auto` (default) or any of the following values:
+
+|format|description|
+||--|
+|csv|Comma-separated values format used by PostgreSQL COPY|
+|tsv|Tab-separated values, the default PostgreSQL COPY format|
+|binary|Binary PostgreSQL COPY format|
+|text|A file containing a single text value (for example, large JSON or XML)|
+
+`compression` can be either `auto` (default), `none` or `gzip`.
+
+Finally, the `options` parameter is of type `jsonb`. Four utility functions help build values for it;
+each utility function is designated for the decoder matching its name.
+
+|decoder|options function |
+|-||
+|csv |`options_csv_get` |
+|tsv |`options_tsv` |
+|binary |`options_binary` |
+|text |`options_copy` |
+
+By looking at the function definitions, you can see which parameters are supported by which decoder.
+
+- `options_csv_get` - delimiter, null_string, header, quote, escape, force_not_null, force_null, content_encoding
+- `options_tsv` - delimiter, null_string, content_encoding
+- `options_copy` - delimiter, null_string, header, quote, escape, force_quote, force_not_null, force_null, content_encoding
+- `options_binary` - content_encoding
+
+Knowing the above, we can discard records with a null `gravatar_id` during parsing.
+
+```sql
+INSERT INTO github_users
+ SELECT user_id, url, UPPER(login), avatar_url, gravatar_id, display_login
+ FROM azure_storage.blob_get('pgquickstart', 'github', 'users.csv.gz', NULL::github_users,
+ options := azure_storage.options_csv_get(force_not_null := ARRAY['gravatar_id']));
+```
+
+```
+INSERT 0 264308
+```
+
+## Access private storage
+
+1. Obtain your account name and access key
+
+ Without an access key, we won't be allowed to list containers that are set to Private or Blob access levels.
+
+ ```sql
+ SELECT * FROM azure_storage.blob_list('mystorageaccount','privdatasets');
+ ```
+
+ ```
+ ERROR: azure_storage: missing account access key
+ HINT: Use SELECT azure_storage.account_add('<account name>', '<access key>')
+ ```
+
+ In your storage account, open **Access keys**. Copy the **Storage account name** and the **Key** from the **key1** section (select **Show** next to the key first).
+
+ :::image type="content" source="media/howto-ingestion/azure-blob-storage-account-key.png" alt-text="Screenshot of Security + networking > Access keys section of an Azure Blob Storage page in the Azure portal." border="true":::
+
+1. Add the account to pg_azure_storage
+
+ ```sql
+ SELECT azure_storage.account_add('mystorageaccount', 'SECRET_ACCESS_KEY');
+ ```
+
+ Now you can list containers set to Private and Blob access levels for that storage but only as the `citus` user, which has the `azure_storage_admin` role granted to it. If you create a new user named `support`, it won't be allowed to access container contents by default.
+
+ ```sql
+ SELECT * FROM azure_storage.blob_list('pgabs','dataverse');
+ ```
+
+ ```
+ ERROR: azure_storage: current user support is not allowed to use storage account pgabs
+ ```
+
+1. Allow the `support` user to use a specific Azure Blob Storage account
+
+ Granting the permission is as simple as calling `account_user_add`.
+
+ ```sql
+ SELECT * FROM azure_storage.account_user_add('mystorageaccount', 'support');
+ ```
+
+ We can see the allowed users in the output of `account_list`, which shows all accounts with access keys defined.
+
+ ```sql
+ SELECT * FROM azure_storage.account_list();
+ ```
+
+ ```
    account_name   | allowed_users
 ------------------+---------------
  mystorageaccount | {support}
 (1 row)
+ ```
+
 If you ever decide that the user should no longer have access, just call `account_user_remove`.
+
+
+ ```sql
+ SELECT * FROM azure_storage.account_user_remove('mystorageaccount', 'support');
+ ```
+
+## Next steps
+
+Congratulations, you just learned how to load data into Azure Cosmos DB for PostgreSQL directly from Azure Blob Storage.
+
+Learn how to create a [real-time dashboard](tutorial-design-database-realtime.md) with Azure Cosmos DB for PostgreSQL.
cosmos-db Quickstart Distribute Tables https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/postgresql/quickstart-distribute-tables.md
Previously updated : 08/11/2022 Last updated : 10/14/2022 # Create and distribute tables
SELECT create_distributed_table('github_events', 'user_id');
We're ready to fill the tables with sample data. For this quickstart, we'll use a dataset previously captured from the GitHub API.
-Run the following commands to download example CSV files and load them into the
-database tables. (The `curl` command downloads the files, and comes
-pre-installed in the Azure Cloud Shell.)
+We're going to use the pg_azure_storage extension to load the data directly from a public container in Azure Blob Storage. First, we need to create the extension in our database:
-```
+```sql
+SELECT * FROM create_extension('azure_storage');
+```
+
+Run the following commands to have the database fetch the example CSV files and load them into the
+database tables.
+
+```sql
-- download users and store in table
-\COPY github_users FROM PROGRAM 'curl https://examples.citusdata.com/users.csv' WITH (FORMAT CSV)
+COPY github_users FROM 'https://pgquickstart.blob.core.windows.net/github/users.csv.gz';
-- download events and store in table
-\COPY github_events FROM PROGRAM 'curl https://examples.citusdata.com/events.csv' WITH (FORMAT CSV)
+COPY github_events FROM 'https://pgquickstart.blob.core.windows.net/github/events.csv.gz';
```
+Notice how the extension recognized that the URLs provided to the `COPY` command are from Azure Blob Storage, and that the files we pointed to were gzip-compressed, which it also handled automatically for us.
+ We can review details of our distributed tables, including their sizes, with the `citus_tables` view:
cost-management-billing Reporting Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/costs/reporting-get-started.md
Title: Get started with Cost Management + Billing reporting - Azure
description: This article helps you to get started with Cost Management + Billing to understand, report on, and analyze your invoiced Microsoft Cloud and AWS costs. Previously updated : 07/26/2022 Last updated : 10/18/2022
While cost analysis offers a rich, interactive experience for analyzing and surf
Need to go beyond the basics with Power BI? The Cost Management connector for Power BI lets you choose the data you need to help you seamlessly integrate costs with your own datasets or easily build out more complete dashboards and reports to meet your organization's needs. For more information about the connector, see [Connect to Cost Management data in Power BI Desktop](/power-bi/connect-data/desktop-connect-azure-cost-management).
-## Usage details and exports
+## Cost details and exports
If you're looking for raw data to automate business processes or integrate with other systems, start by exporting data to a storage account. Scheduled exports allow you to automatically publish your raw cost data to a storage account on a daily, weekly, or monthly basis. With special handling for large datasets, scheduled exports are the most scalable option for building first-class cost data integration. For more information, see [Create and manage exported data](tutorial-export-acm-data.md).
-If you need more fine-grained control over your data requests, the Usage Details API offers a bit more flexibility to pull raw data the way you need it. For more information, see the [Usage Details REST API](/rest/api/consumption/usage-details/list).
- :::image type="content" source="./media/reporting-get-started/exports.png" alt-text="Screenshot showing the list of exports." lightbox="./media/reporting-get-started/exports.png" :::
+If you need more fine-grained control over your data requests, the Cost Details API offers a bit more flexibility to pull raw data the way you need it. For more information, see [Cost Details API](../automate/usage-details-best-practices.md#cost-details-api).
+ ## Invoices and credits Cost analysis is a great tool for reviewing estimated, unbilled charges or for tracking historical cost trends, but it may not show your total billed amount because credits, taxes, and other refunds and charges not available in Cost Management. To estimate your projected bill at the end of the month, start in cost analysis to understand your forecasted costs, then review any available credit or prepaid commitment balance from **Credits** or **Payment methods** for your billing account or billing profile within the Azure portal. To review your final billed charges after the invoice is available, see **Invoices** for your billing account or billing profile.
cost-management-billing Understand Cost Mgt Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/costs/understand-cost-mgt-data.md
The following examples illustrate how billing periods could end:
* Enterprise Agreement (EA) subscriptions – If the billing month ends on March 31, estimated charges are updated up to 72 hours later. In this example, by midnight (UTC) April 4.
* Pay-as-you-go subscriptions – If the billing month ends on May 15, then the estimated charges might get updated up to 72 hours later. In this example, by midnight (UTC) May 19.
-Once cost and usage data becomes available in Cost Management, it will be retained for at least seven years. Only the last 13 months is available from the portal. For historical data before 13 months, please use [Exports](tutorial-export-acm-data.md) or the [UsageDetails API](/rest/api/consumption/usage-details/list).
+Once cost and usage data becomes available in Cost Management, it will be retained for at least seven years. Only the last 13 months is available from the portal. For historical data before 13 months, please use [Exports](tutorial-export-acm-data.md) or the [Cost Details API](../automate/usage-details-best-practices.md#cost-details-api).
### Rerated data
cost-management-billing Subscription Transfer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/subscription-transfer.md
Dev/Test products aren't shown in the following table. Transfers for Dev/Test pr
| EA | MCA - individual | • For details, see [Transfer Azure subscription billing ownership for a Microsoft Customer Agreement](mca-request-billing-ownership.md).<br><br> • Self-service reservation transfers are supported. |
| EA | EA | • Transferring between EA enrollments requires a [billing support ticket](https://azure.microsoft.com/support/create-ticket/).<br><br> • Self-service reservation transfers are supported.<br><br> • Transfer within the same enrollment is the same action as changing the account owner. For details, see [Change EA subscription or account ownership](ea-portal-administration.md#change-azure-subscription-or-account-ownership). |
| EA | MCA - Enterprise | • Transferring all enrollment products is completed as part of the MCA transition process from an EA. For more information, see [Complete Enterprise Agreement tasks in your billing account for a Microsoft Customer Agreement](mca-enterprise-operations.md).<br><br> • If you want to transfer specific products, not all of the products in an enrollment, see [Transfer Azure subscription billing ownership for a Microsoft Customer Agreement](mca-request-billing-ownership.md).<br><br> • Self-service reservation transfers are supported. |
-| EA | MPA | • Transfer is only allowed for direct EA to MPA. A direct EA is signed between Microsoft and an EA customer.<br><br>• Only CSP direct bill partners certified as an [Azure Expert Managed Services Provider (MSP)](https://partner.microsoft.com/membership/azure-expert-msp) can request to transfer Azure products for their customers that have a Direct Enterprise Agreement (EA). For more information, see [Get billing ownership of Azure subscriptions to your MPA account](mpa-request-ownership.md). Product transfers are allowed only for customers who have accepted a Microsoft Customer Agreement (MCA) and purchased an Azure plan with the CSP Program.<br><br> • Transfer from EA Government to MPA isn't supported.<br><br>• There are limitations and restrictions. For more information, see [Transfer EA subscriptions to a CSP partner](transfer-subscriptions-subscribers-csp.md#transfer-ea-subscriptions-to-a-csp-partner). |
+| EA | MPA | • Transfer is only allowed for direct EA to MPA. A direct EA is signed between Microsoft and an EA customer.<br><br>• Only CSP direct bill partners certified as an [Azure Expert Managed Services Provider (MSP)](https://partner.microsoft.com/membership/azure-expert-msp) can request to transfer Azure products for their customers that have a Direct Enterprise Agreement (EA). For more information, see [Get billing ownership of Azure subscriptions to your MPA account](mpa-request-ownership.md). Product transfers are allowed only for customers who have accepted a Microsoft Customer Agreement (MCA) and purchased an Azure plan with the CSP Program.<br><br> • Transfer from EA Government to MPA isn't supported.<br><br>• There are limitations and restrictions. For more information, see [Transfer EA subscriptions to a CSP partner](transfer-subscriptions-subscribers-csp.md#transfer-ea-or-mca-enterprise-subscriptions-to-a-csp-partner). |
| MCA - individual | MOSP (PAYG) | • For details, see [Transfer billing ownership of an Azure subscription to another account](billing-subscription-transfer.md).<br><br> • Reservations don't automatically transfer and transferring them isn't supported. |
| MCA - individual | MCA - individual | • For details, see [Transfer Azure subscription billing ownership for a Microsoft Customer Agreement](mca-request-billing-ownership.md).<br><br> • Self-service reservation transfers are supported. |
| MCA - individual | EA | • For details, see [Transfer a subscription to an EA](mosp-ea-transfer.md#transfer-the-subscription-to-the-ea).<br><br> • Self-service reservation transfers are supported. |
Dev/Test products aren't shown in the following table. Transfers for Dev/Test pr
| MCA - Enterprise | MOSP | • Requires a [billing support ticket](https://azure.microsoft.com/support/create-ticket/).<br><br> • Reservations don't automatically transfer and transferring them isn't supported. |
| MCA - Enterprise | MCA - individual | • For details, see [Transfer Azure subscription billing ownership for a Microsoft Customer Agreement](mca-request-billing-ownership.md).<br><br> • Self-service reservation transfers are supported. |
| MCA - Enterprise | MCA - Enterprise | • For details, see [Transfer Azure subscription billing ownership for a Microsoft Customer Agreement](mca-request-billing-ownership.md).<br><br> • Self-service reservation transfers are supported. |
-| MCA - Enterprise | MPA | • Only CSP direct bill partners certified as an [Azure Expert Managed Services Provider (MSP)](https://partner.microsoft.com/membership/azure-expert-msp) can request to transfer Azure products for their customers that have a Microsoft Customer Agreement with a Microsoft representative. For more information, see [Get billing ownership of Azure subscriptions to your MPA account](mpa-request-ownership.md). Product transfers are allowed only for customers who have accepted a Microsoft Customer Agreement (MCA) and purchased an Azure plan with the CSP Program.<br><br> • Self-service reservation transfers are supported.<br><br> • There are limitations and restrictions. For more information, see [Transfer EA subscriptions to a CSP partner](transfer-subscriptions-subscribers-csp.md#transfer-ea-subscriptions-to-a-csp-partner). |
+| MCA - Enterprise | MPA | • Only CSP direct bill partners certified as an [Azure Expert Managed Services Provider (MSP)](https://partner.microsoft.com/membership/azure-expert-msp) can request to transfer Azure products for their customers that have a Microsoft Customer Agreement with a Microsoft representative. For more information, see [Get billing ownership of Azure subscriptions to your MPA account](mpa-request-ownership.md). Product transfers are allowed only for customers who have accepted a Microsoft Customer Agreement (MCA) and purchased an Azure plan with the CSP Program.<br><br> • Self-service reservation transfers are supported.<br><br> • There are limitations and restrictions. For more information, see [Transfer EA subscriptions to a CSP partner](transfer-subscriptions-subscribers-csp.md#transfer-ea-or-mca-enterprise-subscriptions-to-a-csp-partner). |
| Previous Azure offer in CSP | Previous Azure offer in CSP | • Requires a [billing support ticket](https://azure.microsoft.com/support/create-ticket/).<br><br> • Reservations don't automatically transfer and transferring them isn't supported. |
| Previous Azure offer in CSP | MPA | For details, see [Transfer a customer's Azure subscriptions to a different CSP (under an Azure plan)](/partner-center/transfer-azure-subscriptions-under-azure-plan). |
| MPA | EA | • Automatic transfer isn't supported. Any transfer requires resources to move from the existing MPA product manually to a newly created or an existing EA product.<br><br> • Use the information in the [Perform resource transfers](#perform-resource-transfers) section.<br><br> • Reservations don't automatically transfer and transferring them isn't supported. |
cost-management-billing Transfer Subscriptions Subscribers Csp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/transfer-subscriptions-subscribers-csp.md
Title: Transfer Azure subscriptions between subscribers and CSPs description: Learn how you can transfer Azure subscriptions between subscribers and CSPs. Previously updated : 09/23/2022 Last updated : 10/19/2022 # Transfer Azure subscriptions between subscribers and CSPs
-This article provides high-level steps used to transfer Azure subscriptions to and from Cloud Solution Provider (CSP) partners and their customers. The information here is intended for the Azure subscriber to help them coordinate with their partner. Information that Microsoft partners use for the transfer process is documented at [Learn how to transfer a customer's Azure subscriptions to another partner](/partner-center/switch-azure-subscriptions-to-a-different-partner).
+This article provides high-level steps used to transfer Azure subscriptions to and from Cloud Solution Provider (CSP) partners and their customers. This information is intended for the Azure subscriber to help them coordinate with their partner. Information that Microsoft partners use for the transfer process is documented at [Transfer subscriptions under an Azure plan from one partner to another](azure-plan-subscription-transfer-partners.md).
-Before you start a transfer request, you should download or export any cost and billing information that you want to keep. Billing and utilization information doesn't transfer with the subscription. For more information about exporting cost management data, see [Create and manage exported data](../costs/tutorial-export-acm-data.md). For more information about downloading your invoice and usage data, see [Download or view your Azure billing invoice and daily usage data](download-azure-invoice-daily-usage-date.md).
+Download or export cost and billing information that you want to keep before you start a transfer request. Billing and utilization information doesn't transfer with the subscription. For more information about exporting cost management data, see [Create and manage exported data](../costs/tutorial-export-acm-data.md). For more information about downloading your invoice and usage data, see [Download or view your Azure billing invoice and daily usage data](download-azure-invoice-daily-usage-date.md).
-## Transfer EA subscriptions to a CSP partner
+## Transfer EA or MCA enterprise subscriptions to a CSP partner
-CSP direct bill partners certified as an [Azure Expert Managed Services Provider (MSP)](https://partner.microsoft.com/membership/azure-expert-msp) can request to transfer Azure subscriptions for their customers that have a Direct Enterprise Agreement (EA). Subscription transfers are allowed only for customers who have accepted a Microsoft Customer Agreement (MCA) and purchased an Azure plan with the CSP Program.
+CSP direct bill partners certified as an [Azure Expert Managed Services Provider (MSP)](https://partner.microsoft.com/membership/azure-expert-msp) can request to transfer Azure subscriptions for their customers. The customers must have a Direct Enterprise Agreement (EA) or a Microsoft account team (Microsoft Customer Agreement enterprise). Subscription transfers are allowed only for customers who have accepted an MCA and purchased an Azure plan with the CSP Program.
When the request is approved, the CSP can then provide a combined invoice to their customers. To learn more about CSPs transferring subscriptions, see [Get billing ownership of Azure subscriptions for your MPA account](mpa-request-ownership.md). >[!IMPORTANT]
-> After transfering an EA subscription to a CSP partner, any quota increases previously applied to the EA subscription will be reset to the default value. If additional quota is required after the subscription transfer, have your CSP provider submit a [quota increase](../../azure-portal/supportability/regional-quota-requests.md) request.
+> After transferring an EA or MCA enterprise subscription to a CSP partner, any quota increases previously applied to the EA subscription will be reset to the default value. If additional quota is required after the subscription transfer, have your CSP provider submit a [quota increase](../../azure-portal/supportability/regional-quota-requests.md) request.
## Other subscription transfers to a CSP partner
-To transfer any other Azure subscriptions to a CSP partner, the subscriber needs to move resources from source subscriptions to CSP subscriptions. Use the following guidance to move resources between subscriptions.
+To transfer any other Azure subscriptions that aren't supported for billing transfer to MPA as documented in the [Azure subscription transfer hub](subscription-transfer.md#product-transfer-support) article, the subscriber needs to move resources from source subscriptions to CSP subscriptions. Use the following guidance to move resources between subscriptions.
1. Establish a [reseller relationship](/partner-center/request-a-relationship-with-a-customer) with the customer. Review the [CSP Regional Authorization Overview](/partner-center/regional-authorization-overview) to ensure both customer and Partner tenant are within the same authorized regions. 1. Work with your CSP partner to create target Azure CSP subscriptions.
To transfer any other Azure subscriptions to a CSP partner, the subscriber needs
> [!IMPORTANT] > - Moving Azure resources between subscriptions might result in service downtime, based on resources in the subscriptions.
-## Transfer CSP subscription to other offer
+## Transfer CSP subscription to other offers
-To transfer any other subscriptions from a CSP Partner to any other Azure offer, the subscriber needs to move resources between source CSP subscriptions and target subscriptions. This is work done by a partner and a customer - it is not work done by a Microsoft representative.
+It's possible to transfer other subscriptions from a CSP Partner to other Azure offers that aren't supported for billing transfer from MPA as documented in the [Azure subscription transfer hub](subscription-transfer.md#product-transfer-support) article. However, the subscriber needs to manually move resources between source CSP subscriptions and target subscriptions. All of this work is done by a partner and a customer; it isn't done by a Microsoft representative.
1. The customer creates target Azure subscriptions. 1. Ensure that the source and target subscriptions are in the same Azure Active Directory (Azure AD) tenant. For more information about changing an Azure AD tenant, see [Associate or add an Azure subscription to your Azure Active Directory tenant](../../active-directory/fundamentals/active-directory-how-subscriptions-associated-directory.md).
- Note that the change directory option isn't supported for the CSP subscription. For example, you're transferring from a CSP to a pay-as-you-go subscription. You need change the directory of the pay-as-you-go subscription to match the directory.
+ The change directory option isn't supported for the CSP subscription. For example, you're transferring from a CSP to a pay-as-you-go subscription. You need to change the directory of the pay-as-you-go subscription to match the directory.
> [!IMPORTANT] > - When you associate a subscription to a different directory, users that have roles assigned using [Azure RBAC](../../role-based-access-control/role-assignments-portal.md) lose their access. Classic subscription administrators, including Service Administrator and Co-Administrators, also lose access.
To transfer any other subscriptions from a CSP Partner to any other Azure offer,
> - Moving Azure resources between subscriptions might result in service downtime, based on resources in the subscriptions. ## Next steps+ - [Get billing ownership of Azure subscriptions for your MPA account](mpa-request-ownership.md). - Read about how to [Manage accounts and subscriptions with Azure Billing](../index.yml).
cost-management-billing Discount Application https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/savings-plan/discount-application.md
The benefit is first applied to the product that has the greatest savings plan d
A savings plan discount only applies to resources associated with Enterprise Agreement, Microsoft Partner Agreement, and Microsoft Customer Agreements. Resources that run in a subscription with other offer types don't receive the discount.
+## Savings plans and VM reserved instances
+
+If you have both dynamic and stable workloads, you'll likely have both Azure savings plans and VM reserved instances. Because reservation benefits are more restrictive than savings plans, and usually have greater discounts, Azure applies reservation benefits first.
+
+For example, VM *X* has the highest savings plan discount of all savings plan-eligible resources you used in a particular hour. If you have an available VM reservation that's compatible with *X*, the reservation is consumed instead of the savings plan. The approach reduces the possibility of waste and ensures that you're always getting the best benefit.
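The reservation-first ordering described above can be sketched in a few lines. This is an illustrative model only, not Azure's actual billing code; the function name and return labels are made up for the example:

```python
def pick_benefit(has_compatible_reservation: bool, has_savings_plan: bool) -> str:
    """Choose which benefit covers an hour of usage: reservation first,
    then savings plan, otherwise pay-as-you-go rates apply."""
    if has_compatible_reservation:
        return "reservation"
    if has_savings_plan:
        return "savings plan"
    return "pay-as-you-go"

# When both benefits could apply, the reservation is consumed,
# leaving the savings plan commitment free for other usage.
choice = pick_benefit(has_compatible_reservation=True, has_savings_plan=True)
```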
+
+## Savings plan and Azure consumption discounts
+
+In most situations, an Azure savings plan provides the best combination of flexibility and pricing. If you're operating under an Azure consumption discount (ACD), on rare occasions you may have some pay-as-you-go rates that are lower than the savings plan rate. In these cases, Azure uses the lower of the two rates.
+
+For example, VM *X* has the highest savings plan discount of all savings plan-eligible resources you used in a particular hour. If you have an ACD rate that is lower than the savings plan rate, the ACD rate is applied to your hourly usage. The result is decremented from your hourly commitment. The approach ensures you always get the best available rate.
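The lower-of-two-rates behavior can be sketched as follows. This is a simplified illustration with made-up numbers, not the billing system's implementation:

```python
def effective_rate(acd_rate: float, savings_plan_rate: float) -> float:
    """Azure bills usage at the lower of the ACD rate and the savings plan rate."""
    return min(acd_rate, savings_plan_rate)

def draw_down_commitment(hourly_commitment: float, usage_cost: float) -> float:
    """Usage billed at the chosen rate is still decremented from the hourly commitment."""
    return max(hourly_commitment - usage_cost, 0.0)

rate = effective_rate(acd_rate=0.08, savings_plan_rate=0.10)  # ACD rate is lower, so it wins
remaining = draw_down_commitment(hourly_commitment=5.0, usage_cost=rate * 10)  # 10 units used
```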
+ ## Benefit allocation window With an Azure savings plan, you get significant and flexible discounts off your pay-as-you-go rates in exchange for a one or three-year spend commitment. When you use an Azure resource, usage details are periodically reported to the Azure billing system. The billing system is tasked with quickly applying your savings plan in the most beneficial manner possible. The plan benefits are applied to usage that has the largest discount percentage first. For the application to be most effective, the billing system needs visibility to your usage in a timely manner.
-The Azure savings plan benefit application operates under a best fit benefit model. When your benefit application is evaluated for a given hour, the billing system incorporates usage arriving up to 48 hours after the given hour. During the sliding 48-hour window, you may see changes to charges, including the possibility of savings plan utilization that's greater than 100%. This situation happens because the system is constantly working to provide the best possible benefit application. Keep the 48-hour window in mind when you inspect your usage.
+The Azure savings plan benefit application operates under a best fit benefit model. When your benefit application is evaluated for a given hour, the billing system incorporates usage arriving up to 48 hours after the given hour. During the sliding 48-hour window, you may see changes to charges, including the possibility of savings plan utilization that's greater than 100%. The situation happens because the system is constantly working to provide the best possible benefit application. Keep the 48-hour window in mind when you inspect your usage.
+
+## Utilize multiple savings plans
+
+Azure's intent is to always maximize the benefit you receive from savings plans. When you have multiple savings plans with different term lengths, Azure applies the benefits from the three-year plan first, to ensure that the best rates are applied first. If you have multiple savings plans that have different benefit scopes, Azure applies benefits from the more restrictively scoped plan first, to reduce the possibility of waste.
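The ordering above (longer term first, then narrower scope) amounts to a sort with a compound key. A minimal sketch, assuming a made-up scope ranking and plan records that aren't Azure's real data model:

```python
# Hypothetical ranking: smaller number = more restrictive scope (applied first).
SCOPE_RANK = {"resource group": 0, "subscription": 1, "management group": 2, "shared": 3}

def application_order(plans):
    """Sort savings plans: longer term first, then more restrictively scoped first."""
    return sorted(plans, key=lambda p: (-p["term_years"], SCOPE_RANK[p["scope"]]))

plans = [
    {"name": "A", "term_years": 1, "scope": "shared"},
    {"name": "B", "term_years": 3, "scope": "subscription"},
    {"name": "C", "term_years": 3, "scope": "resource group"},
]
# C (three-year, resource group scope) applies first, then B, then A.
ordered = application_order(plans)
```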
## When the savings plan term expires
data-factory Continuous Integration Delivery Improvements https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/continuous-integration-delivery-improvements.md
npm run build export C:\DataFactories\DevDataFactory /subscriptions/xxxxxxxx-xxx
- `FactoryId` is a mandatory field that represents the Data Factory resource ID in the format `/subscriptions/<subId>/resourceGroups/<rgName>/providers/Microsoft.DataFactory/factories/<dfName>`. - `OutputFolder` is an optional parameter that specifies the relative path to save the generated ARM template.
-If you would like to stop/ start only the updated triggers, instead use the below command (currently this capability is in preview and the functionality will be transparently merged into the above command during GA):
-```dos
-npm run build-preview export C:\DataFactories\DevDataFactory /subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/testResourceGroup/providers/Microsoft.DataFactory/factories/DevDataFactory ArmTemplateOutput
-```
-- `RootFolder` is a mandatory field that represents where the Data Factory resources are located.-- `FactoryId` is a mandatory field that represents the Data Factory resource ID in the format `/subscriptions/<subId>/resourceGroups/<rgName>/providers/Microsoft.DataFactory/factories/<dfName>`.-- `OutputFolder` is an optional parameter that specifies the relative path to save the generated ARM template.
+The ability to stop/start only the updated triggers is now generally available and is merged into the command shown above.
+ > [!NOTE] > The ARM template generated isn't published to the live version of the factory. Deployment should be done by using a CI/CD pipeline. - ### Validate Run `npm run build validate <rootFolder> <factoryId>` to validate all the resources of a given folder. Here's an example:
Follow these steps to get started:
{ "scripts":{ "build":"node node_modules/@microsoft/azure-data-factory-utilities/lib/index",
- "build-preview":"node node_modules/@microsoft/azure-data-factory-utilities/lib/index --preview"
}, "dependencies":{ "@microsoft/azure-data-factory-utilities":"^1.0.0"
data-factory Solution Templates Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/solution-templates-introduction.md
Previously updated : 09/22/2022 Last updated : 10/18/2022 # Templates
After checking the **My templates** box in the **Template gallery** page, you ca
> [!NOTE] > To use the My Templates feature, you have to enable GIT integration. Both Azure DevOps GIT and GitHub are supported.+
+### Community Templates
+
+Community members are now welcome to contribute to the Template Gallery. You will be able to see these templates when you filter by **Contributor**.
++
+To learn how you can contribute to the template gallery, please read our [introduction](https://techcommunity.microsoft.com/t5/azure-data-factory-blog/introducing-azure-data-factory-community-templates/ba-p/3650989) and [instructions](https://github.com/Azure/Azure-DataFactory/tree/main/community%20templates).
+
+> [!NOTE]
+> Community template submissions will be reviewed by the Azure Data Factory team. If your submission does not meet our guidelines or quality checks, we will not merge your template into the gallery.
++
databox-online Azure Stack Edge Gpu Configure Gpu Modules https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-gpu-configure-gpu-modules.md
Previously updated : 03/12/2021 Last updated : 10/19/2022 # Configure and run a module on GPU on Azure Stack Edge Pro device [!INCLUDE [applies-to-GPU-and-pro-r-skus](../../includes/azure-stack-edge-applies-to-gpu-pro-r-sku.md)]
+> [!NOTE]
+> We strongly recommend that you deploy the latest IoT Edge version in a Linux VM. The managed IoT Edge on Azure Stack Edge uses an older version of IoT Edge runtime that doesn't have the latest features and patches. For instructions, see how to [Deploy an Ubuntu VM](azure-stack-edge-gpu-deploy-iot-edge-linux-vm.md). For more information on other supported Linux distributions that can run IoT Edge, see [Azure IoT Edge supported systems - Container engines](../iot-edge/support.md#linux-containers).
+ Your Azure Stack Edge Pro device contains one or more Graphics Processing Unit (GPU). GPUs are a popular choice for AI computations as they offer parallel processing capabilities and are faster at image rendering than Central Processing Units (CPUs). For more information on the GPU contained in your Azure Stack Edge Pro device, go to [Azure Stack Edge Pro device technical specifications](azure-stack-edge-gpu-technical-specifications-compliance.md). This article describes how to configure and run a module on the GPU on your Azure Stack Edge Pro device. In this article, you will use a publicly available container module **Digits** written for Nvidia T4 GPUs. This procedure can be used to configure any other modules published by Nvidia for these GPUs. - ## Prerequisites Before you begin, make sure that:
databox-online Azure Stack Edge Gpu Deploy Compute Module Simple https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-gpu-deploy-compute-module-simple.md
Previously updated : 02/22/2021 Last updated : 10/19/2022 # Customer intent: As an IT admin, I need to understand how to configure compute on Azure Stack Edge Pro so I can use it to transform the data before sending it to Azure.
[!INCLUDE [applies-to-GPU-and-pro-r-skus](../../includes/azure-stack-edge-applies-to-gpu-pro-r-sku.md)]
+> [!NOTE]
+> We strongly recommend that you deploy the latest IoT Edge version in a Linux VM. The managed IoT Edge on Azure Stack Edge uses an older version of IoT Edge runtime that doesn't have the latest features and patches. For instructions, see how to [Deploy an Ubuntu VM](azure-stack-edge-gpu-deploy-iot-edge-linux-vm.md). For more information on other supported Linux distributions that can run IoT Edge, see [Azure IoT Edge supported systems - Container engines](../iot-edge/support.md#linux-containers).
+ This tutorial describes how to run a compute workload using an IoT Edge module on your Azure Stack Edge Pro GPU device. After you configure the compute, the device will transform the data before sending it to Azure. This procedure can take around 10 to 15 minutes to complete.
databox-online Azure Stack Edge Gpu Deploy Sample Module Marketplace https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-gpu-deploy-sample-module-marketplace.md
Previously updated : 02/22/2021 Last updated : 10/19/2022
[!INCLUDE [applies-to-GPU-and-pro-r-skus](../../includes/azure-stack-edge-applies-to-gpu-pro-r-sku.md)]
+> [!NOTE]
+> We strongly recommend that you deploy the latest IoT Edge version in a Linux VM. The managed IoT Edge on Azure Stack Edge uses an older version of IoT Edge runtime that doesn't have the latest features and patches. For instructions, see how to [Deploy an Ubuntu VM](azure-stack-edge-gpu-deploy-iot-edge-linux-vm.md). For more information on other supported Linux distributions that can run IoT Edge, see [Azure IoT Edge supported systems - Container engines](../iot-edge/support.md#linux-containers).
+ This article describes how to deploy a Graphics Processing Unit (GPU) enabled IoT Edge module from Azure Marketplace on your Azure Stack Edge Pro device. In this article, you learn how to:
databox-online Azure Stack Edge Gpu Deploy Sample Module https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-gpu-deploy-sample-module.md
Previously updated : 06/28/2022 Last updated : 10/19/2022
[!INCLUDE [applies-to-gpu-pro-pro2-and-pro-r-skus](../../includes/azure-stack-edge-applies-to-gpu-pro-pro-2-pro-r-sku.md)]
+> [!NOTE]
+> We strongly recommend that you deploy the latest IoT Edge version in a Linux VM. The managed IoT Edge on Azure Stack Edge uses an older version of IoT Edge runtime that doesn't have the latest features and patches. For instructions, see how to [Deploy an Ubuntu VM](azure-stack-edge-gpu-deploy-iot-edge-linux-vm.md). For more information on other supported Linux distributions that can run IoT Edge, see [Azure IoT Edge supported systems - Container engines](../iot-edge/support.md#linux-containers).
+ This article describes how to deploy a GPU enabled IoT Edge module on your Azure Stack Edge Pro GPU device. In this article, you learn how to:
databox-online Azure Stack Edge Gpu Modify Fpga Modules Gpu https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-gpu-modify-fpga-modules-gpu.md
Previously updated : 02/22/2021 Last updated : 10/18/2021
[!INCLUDE [applies-to-GPU-and-pro-r-skus](../../includes/azure-stack-edge-applies-to-gpu-pro-r-sku.md)]
+> [!NOTE]
+> We strongly recommend that you deploy the latest IoT Edge version in a Linux VM. The managed IoT Edge on Azure Stack Edge uses an older version of IoT Edge runtime that doesn't have the latest features and patches. For instructions, see how to [Deploy an Ubuntu VM](azure-stack-edge-gpu-deploy-iot-edge-linux-vm.md). For more information on other supported Linux distributions that can run IoT Edge, see [Azure IoT Edge supported systems - Container engines](../iot-edge/support.md#linux-containers).
+ This article details the changes needed for a docker-based IoT Edge module that runs on Azure Stack Edge Pro FPGA so it can run on a Kubernetes-based IoT Edge platform on Azure Stack Edge Pro GPU device. ## About IoT Edge implementation
ddos-protection Ddos Protection Sku Comparison https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ddos-protection/ddos-protection-sku-comparison.md
Title: 'Azure DDoS Protection SKU Comparison'
+ Title: 'About Azure DDoS Protection SKU Comparison'
description: Learn about the available SKUs for Azure DDoS Protection. Previously updated : 10/13/2022 Last updated : 10/19/2022
devops-project Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devops-project/overview.md
# Overview of DevOps Starter
+> [!IMPORTANT]
+> DevOps Starter will be retired on March 31, 2023. [Learn more](/azure/devops-project/retirement-and-migration).
+ DevOps Starter makes it easy to get started on Azure using either GitHub actions or Azure DevOps. It helps you launch your favorite app on the Azure service of your choice in just a few quick steps from the Azure portal. DevOps Starter sets up everything you need for developing, deploying, and monitoring your application. You can use the DevOps Starter dashboard to monitor code commits, builds, and deployments, all from a single view in the Azure portal.
devtest-labs Automate Add Lab User https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/automate-add-lab-user.md
The role definition ID is the string identifier for the existing role definition
The subscription ID is obtained by using `subscription().subscriptionId` template function.
-You need to get the role definition for the `DevTest Labs User` built-in role. To get the GUID for the [DevTest Labs User](../role-based-access-control/built-in-roles.md#devtest-labs-user) role, you can use the [Role Assignments REST API](/rest/api/authorization/roleassignments) or the [Get-AzRoleDefinition](/powershell/module/az.resources/get-azroledefinition) cmdlet.
+You need to get the role definition for the `DevTest Labs User` built-in role. To get the GUID for the [DevTest Labs User](../role-based-access-control/built-in-roles.md#devtest-labs-user) role, you can use the [Role Assignments REST API](/rest/api/authorization/role-assignments) or the [Get-AzRoleDefinition](/powershell/module/az.resources/get-azroledefinition) cmdlet.
```powershell $dtlUserRoleDefId = (Get-AzRoleDefinition -Name "DevTest Labs User").Id
education-hub Access Education Hub https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/education-hub/access-education-hub.md
Previously updated : 06/30/2020 Last updated : 10/19/2022
To access the Azure Education Hub, you should have already received an email not
> [!IMPORTANT] > Confirm that you are signing-in with an Organizational/Work Account (like your institution's @domain.edu). If so, select this option on the left-side of the window first. This will take you to a different login screen.
- :::image type="content" source="media/access-education-hub/sign-in.png" alt-text="Organization sign-in dialog box." border="false":::
+ :::image type="content" source="media/access-education-hub/modern-sign-in.png" alt-text="Organization sign-in dialog box." border="false":::
1. After you're signed in, you'll be directed to the Azure portal. To find the Education Hub, go to the **All Services** menu and search for **Education**. The first time you log in, the Get Started page is displayed.
expressroute Expressroute Locations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/expressroute-locations.md
If you are remote and do not have fiber connectivity or you want to explore othe
| **[Data Foundry](https://www.datafoundry.com/services/cloud-connect)** | Megaport | Dallas | | **[Epsilon Telecommunications Limited](https://www.epsilontel.com/solutions/cloud-connect/)** | Equinix | London, Singapore, Washington DC | | **[Eurofiber](https://eurofiber.nl/microsoft-azure/)** | Equinix | Amsterdam |
-| **[Exponential E](https://www.exponential-e.com/services/connectivity-services/cloud-connect-exchange)** | Equinix | London |
+| **[Exponential E](https://www.exponential-e.com/services/connectivity-services/)** | Equinix | London |
| **[Fastweb S.p.A](https://www.fastweb.it/grandi-aziende/connessione-voce-e-wifi/scheda-prodotto/rete-privata-virtuale/)** | Equinix | Amsterdam | | **[Fibrenoire](https://www.fibrenoire.ca/en/cloudextn)** | Megaport | Quebec City | | **[FPT Telecom International](https://cloudconnect.vn/en)** |Equinix |Singapore|
firewall Tutorial Firewall Deploy Portal Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/tutorial-firewall-deploy-portal-policy.md
Previously updated : 08/26/2021 Last updated : 10/18/2022 #Customer intent: As an administrator new to this service, I want to control outbound network access from resources located in an Azure subnet.
Network traffic is subjected to the configured firewall rules when you route you
For this tutorial, you create a simplified single VNet with two subnets for easy deployment.
-For production deployments, a [hub and spoke model](/azure/architecture/reference-architectures/hybrid-networking/hub-spoke) is recommended, where the firewall is in its own VNet. The workload servers are in peered VNets in the same region with one or more subnets.
- * **AzureFirewallSubnet** - the firewall is in this subnet. * **Workload-SN** - the workload server is in this subnet. This subnet's network traffic goes through the firewall. ![Tutorial network infrastructure](media/tutorial-firewall-deploy-portal/tutorial-network.png)
+For production deployments, a [hub and spoke model](/azure/architecture/reference-architectures/hybrid-networking/hub-spoke) is recommended, where the firewall is in its own VNet. The workload servers are in peered VNets in the same region with one or more subnets.
+ In this tutorial, you learn how to: > [!div class="checklist"]
First, create a resource group to contain the resources needed to deploy the fir
The resource group contains all the resources for the tutorial. 1. Sign in to the Azure portal at [https://portal.azure.com](https://portal.azure.com).
-2. On the Azure portal menu, select **Resource groups** or search for and select *Resource groups* from any page. Then select **Add**.
-4. For **Subscription**, select your subscription.
-1. For **Resource group name**, enter *Test-FW-RG*.
-1. For **Region**, select a region. All other resources that you create must be in the same region.
+1. On the Azure portal menu, select **Resource groups** or search for and select *Resource groups* from any page, then select **Add**. Enter or select the following values:
+
+ | Setting | Value |
+ | -- | |
+ | Subscription | Select your Azure subscription. |
+ | Resource group | Enter *Test-FW-RG*. |
+ | Region | Select a region. All other resources that you create must be in the same region. |
+
1. Select **Review + create**. 1. Select **Create**.
This VNet will have two subnets.
1. On the Azure portal menu or from the **Home** page, select **Create a resource**. 1. Select **Networking**. 1. Search for **Virtual network** and select it.
-1. Select **Create**.
-1. For **Subscription**, select your subscription.
-1. For **Resource group**, select **Test-FW-RG**.
-1. For **Name**, type **Test-FW-VN**.
-1. For **Region**, select the same location that you used previously.
+1. Select **Create**, then enter or select the following values:
+
+ | Setting | Value |
+ | -- | |
+ | Subscription | Select your Azure subscription. |
+ | Resource group | Select **Test-FW-RG**. |
+ | Name | Enter *Test-FW-VN*. |
+ | Region | Select the same location that you used previously. |
+ 1. Select **Next: IP addresses**. 1. For **IPv4 Address space**, accept the default **10.0.0.0/16**. 1. Under **Subnet**, select **default**.
This VNet will have two subnets.
Next, create a subnet for the workload server. 1. Select **Add subnet**.
-4. For **Subnet name**, type **Workload-SN**.
-5. For **Subnet address range**, type **10.0.2.0/24**.
-6. Select **Add**.
-7. Select **Review + create**.
-8. Select **Create**.
+1. For **Subnet name**, type **Workload-SN**.
+1. For **Subnet address range**, type **10.0.2.0/24**.
+1. Select **Add**.
+1. Select **Review + create**.
+1. Select **Create**.
### Create a virtual machine Now create the workload virtual machine, and place it in the **Workload-SN** subnet. 1. On the Azure portal menu or from the **Home** page, select **Create a resource**.
-2. Select **Windows Server 2016 Datacenter**.
-4. Enter these values for the virtual machine:
-
- |Setting |Value |
- |||
- |Resource group |**Test-FW-RG**|
- |Virtual machine name |**Srv-Work**|
- |Region |Same as previous|
- |Image|Windows Server 2016 Datacenter|
- |Administrator user name |Type a user name|
- |Password |Type a password|
-
-4. Under **Inbound port rules**, **Public inbound ports**, select **None**.
-6. Accept the other defaults and select **Next: Disks**.
-7. Accept the disk defaults and select **Next: Networking**.
-8. Make sure that **Test-FW-VN** is selected for the virtual network and the subnet is **Workload-SN**.
-9. For **Public IP**, select **None**.
-11. Accept the other defaults and select **Next: Management**.
-12. Select **Disable** to disable boot diagnostics. Accept the other defaults and select **Review + create**.
-13. Review the settings on the summary page, and then select **Create**.
+1. Select **Windows Server 2019 Datacenter**.
+1. Enter or select these values for the virtual machine:
+
+ | Setting | Value |
+ | - | -- |
+ | Subscription | Select your Azure subscription. |
+ | Resource group | Select **Test-FW-RG**. |
+ | Virtual machine name | Enter *Srv-Work*.|
+ | Region | Select the same location that you used previously. |
+ | Username | Enter a username. |
+ | Password | Enter a password. |
+
+1. Under **Inbound port rules**, **Public inbound ports**, select **None**.
+1. Accept the other defaults and select **Next: Disks**.
+1. Accept the disk defaults and select **Next: Networking**.
+1. Make sure that **Test-FW-VN** is selected for the virtual network and the subnet is **Workload-SN**.
+1. For **Public IP**, select **None**.
+1. Accept the other defaults and select **Next: Management**.
+1. Select **Disable** to disable boot diagnostics. Accept the other defaults and select **Review + create**.
+1. Review the settings on the summary page, and then select **Create**.
1. After the deployment completes, select the **Srv-Work** resource and note the private IP address for later use. ## Deploy the firewall and policy
Deploy the firewall into the VNet.
3. Select **Firewall** and then select **Create**. 4. On the **Create a Firewall** page, use the following table to configure the firewall:
- |Setting |Value |
- |||
- |Subscription |\<your subscription\>|
- |Resource group |**Test-FW-RG** |
- |Name |**Test-FW01**|
- |Region |Select the same location that you used previously|
- |Firewall management|**Use a Firewall Policy to manage this firewall**|
- |Firewall policy|**Add new**:<br>**fw-test-pol**<br>your selected region
- |Choose a virtual network |**Use existing**: **Test-FW-VN**|
- |Public IP address |**Add new**:<br>**Name**: **fw-pip**|
+ | Setting | Value |
+ | - | -- |
+ | Subscription | Select your Azure subscription. |
+ | Resource group | Select **Test-FW-RG**. |
+ | Name | Enter *Test-FW01*. |
+ | Region | Select the same location that you used previously. |
+ | Firewall management | Select **Use a Firewall Policy to manage this firewall**. |
+ | Firewall policy | Select **Add new**, and enter *fw-test-pol*. <br> Select the same region that you used previously. |
+ | Choose a virtual network | Select **Use existing**, and then select **Test-FW-VN**. |
+ | Public IP address | Select **Add new**, and enter *fw-pip* for the **Name**. |
5. Accept the other default values, then select **Review + create**. 6. Review the summary, and then select **Create** to create the firewall.
Deploy the firewall into the VNet.
For the **Workload-SN** subnet, configure the outbound default route to go through the firewall. 1. On the Azure portal menu, select **All services** or search for and select *All services* from any page.
-2. Under **Networking**, select **Route tables**.
-3. Select **Add**.
-5. For **Subscription**, select your subscription.
-6. For **Resource group**, select **Test-FW-RG**.
-7. For **Region**, select the same location that you used previously.
-4. For **Name**, type **Firewall-route**.
+1. Under **Networking**, select **Route tables**.
+1. Select **Create**, then enter or select the following values:
+
+ | Setting | Value |
+ | - | -- |
+ | Subscription | Select your Azure subscription. |
+ | Resource group | Select **Test-FW-RG**. |
+ | Region | Select the same location that you used previously. |
+ | Name | Enter *Firewall-route*. |
+ 1. Select **Review + create**. 1. Select **Create**. After deployment completes, select **Go to resource**.
-1. On the Firewall-route page, select **Subnets** and then select **Associate**.
+1. On the **Firewall-route** page, select **Subnets** and then select **Associate**.
1. Select **Virtual network** > **Test-FW-VN**. 1. For **Subnet**, select **Workload-SN**. Make sure that you select only the **Workload-SN** subnet for this route, otherwise your firewall won't work correctly.-
-13. Select **OK**.
-14. Select **Routes** and then select **Add**.
-15. For **Route name**, type **fw-dg**.
-16. For **Address prefix**, type **0.0.0.0/0**.
-17. For **Next hop type**, select **Virtual appliance**.
-
+1. Select **OK**.
+1. Select **Routes** and then select **Add**.
+1. For **Route name**, enter *fw-dg*.
+1. For **Address prefix**, enter *0.0.0.0/0*.
+1. For **Next hop type**, select **Virtual appliance**.
Azure Firewall is actually a managed service, but virtual appliance works in this situation.
-18. For **Next hop address**, type the private IP address for the firewall that you noted previously.
-19. Select **OK**.
+1. For **Next hop address**, enter the private IP address for the firewall that you noted previously.
+1. Select **OK**.
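As a rough sketch, the route created by these steps corresponds to a route resource like the following (the next hop address `10.0.1.4` is a placeholder for your firewall's private IP address):

```json
{
  "name": "fw-dg",
  "properties": {
    "addressPrefix": "0.0.0.0/0",
    "nextHopType": "VirtualAppliance",
    "nextHopIpAddress": "10.0.1.4"
  }
}
```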
## Configure an application rule

This is the application rule that allows outbound access to `www.google.com`.
-1. Open the **Test-FW-RG**, and select the **fw-test-pol** firewall policy.
+1. Open the **Test-FW-RG** resource group, and select the **fw-test-pol** firewall policy.
1. Select **Application rules**.
1. Select **Add a rule collection**.
-1. For **Name**, type **App-Coll01**.
-1. For **Priority**, type **200**.
+1. For **Name**, enter *App-Coll01*.
+1. For **Priority**, enter *200*.
1. For **Rule collection action**, select **Allow**.
-1. Under **Rules**, for **Name**, type **Allow-Google**.
+1. Under **Rules**, for **Name**, enter *Allow-Google*.
1. For **Source type**, select **IP address**.
-1. For **Source**, type **10.0.2.0/24**.
-1. For **Protocol:port**, type **http, https**.
+1. For **Source**, enter *10.0.2.0/24*.
+1. For **Protocol:port**, enter *http, https*.
1. For **Destination Type**, select **FQDN**.
-1. For **Destination**, type **`www.google.com`**
+1. For **Destination**, enter *`www.google.com`*.
1. Select **Add**.

Azure Firewall includes a built-in rule collection for infrastructure FQDNs that are allowed by default. These FQDNs are specific for the platform and can't be used for other purposes. For more information, see [Infrastructure FQDNs](infrastructure-fqdns.md).
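As an illustrative sketch, the resulting rule might be represented in the firewall policy's rule collection group like this (property names follow the firewall policy rule schema; treat the values as examples from this tutorial):

```json
{
  "ruleType": "ApplicationRule",
  "name": "Allow-Google",
  "sourceAddresses": [ "10.0.2.0/24" ],
  "protocols": [
    { "protocolType": "Http", "port": 80 },
    { "protocolType": "Https", "port": 443 }
  ],
  "targetFqdns": [ "www.google.com" ]
}
```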
This is the network rule that allows outbound access to two IP addresses at port
1. Select **Network rules**.
2. Select **Add a rule collection**.
-3. For **Name**, type **Net-Coll01**.
-4. For **Priority**, type **200**.
+3. For **Name**, enter *Net-Coll01*.
+4. For **Priority**, enter *200*.
5. For **Rule collection action**, select **Allow**.
1. For **Rule collection group**, select **DefaultNetworkRuleCollectionGroup**.
-1. Under **Rules**, for **Name**, type **Allow-DNS**.
+1. Under **Rules**, for **Name**, enter *Allow-DNS*.
1. For **Source type**, select **IP Address**.
-1. For **Source**, type **10.0.2.0/24**.
+1. For **Source**, enter *10.0.2.0/24*.
1. For **Protocol**, select **UDP**.
-1. For **Destination Ports**, type **53**.
+1. For **Destination Ports**, enter *53*.
1. For **Destination type** select **IP address**.
-1. For **Destination**, type **209.244.0.3,209.244.0.4**.<br>These are public DNS servers operated by CenturyLink.
+1. For **Destination**, enter *209.244.0.3,209.244.0.4*.<br>These are public DNS servers operated by CenturyLink.
2. Select **Add**.

## Configure a DNAT rule
-This rule allows you to connect a remote desktop to the Srv-Work virtual machine through the firewall.
+This rule allows you to connect a remote desktop to the **Srv-Work** virtual machine through the firewall.
1. Select **DNAT rules**.
2. Select **Add a rule collection**.
-3. For **Name**, type **rdp**.
-1. For **Priority**, type **200**.
+3. For **Name**, enter *rdp*.
+1. For **Priority**, enter *200*.
1. For **Rule collection group**, select **DefaultDnatRuleCollectionGroup**.
-1. Under **Rules**, for **Name**, type **rdp-nat**.
+1. Under **Rules**, for **Name**, enter *rdp-nat*.
1. For **Source type**, select **IP address**.
-1. For **Source**, type **\***.
+1. For **Source**, enter *\**.
1. For **Protocol**, select **TCP**.
-1. For **Destination Ports**, type **3389**.
+1. For **Destination Ports**, enter *3389*.
1. For **Destination Type**, select **IP Address**.
-1. For **Destination**, type the firewall public IP address.
-1. For **Translated address**, type the **Srv-work** private IP address.
-1. For **Translated port**, type **3389**.
+1. For **Destination**, enter the firewall public IP address.
+1. For **Translated address**, enter the **Srv-work** private IP address.
+1. For **Translated port**, enter *3389*.
1. Select **Add**.
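Sketched as a firewall policy rule resource, the DNAT rule above might look like the following (the angle-bracket values are placeholders for the addresses you noted earlier):

```json
{
  "ruleType": "NatRule",
  "name": "rdp-nat",
  "sourceAddresses": [ "*" ],
  "ipProtocols": [ "TCP" ],
  "destinationAddresses": [ "<firewall-public-ip>" ],
  "destinationPorts": [ "3389" ],
  "translatedAddress": "<srv-work-private-ip>",
  "translatedPort": "3389"
}
```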
For testing purposes in this tutorial, configure the server's primary and second
2. Select the network interface for the **Srv-Work** virtual machine.
3. Under **Settings**, select **DNS servers**.
4. Under **DNS servers**, select **Custom**.
-5. Type **209.244.0.3** in the **Add DNS server** text box, and **209.244.0.4** in the next text box.
+5. Enter *209.244.0.3* in the **Add DNS server** text box, and *209.244.0.4* in the next text box.
6. Select **Save**.
7. Restart the **Srv-Work** virtual machine.
governance Agent Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/machine-configuration/agent-notes.md
The guest configuration agent receives improvements on an ongoing basis. To stay
- Known issues
- Bug fixes
-For information on release notes for the connected machine agent, please see [What's new with the connected machine agent](/azure/azure-arc/servers/agent-release-notes).
+For information on release notes for the connected machine agent, see [What's new with the connected machine agent](../../azure-arc/servers/agent-release-notes.md).
## Release notes
az vm extension set --publisher Microsoft.GuestConfiguration --name Configurati
- [Assign your custom policy definition](../policy/assign-policy-portal.md) using Azure portal.
- Learn how to view
- [compliance details for machine configuration](../policy/how-to/determine-non-compliance.md) policy assignments.
+ [compliance details for machine configuration](../policy/how-to/determine-non-compliance.md) policy assignments.
governance Assignment Structure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/concepts/assignment-structure.md
resource properties with different needs for compliance.
You use JavaScript Object Notation (JSON) to create a policy assignment. The policy assignment contains elements for:

-- display name
-- description
-- metadata
-- enforcement mode
-- excluded scopes
-- policy definition
-- non-compliance messages
-- parameters
-- identity
-- resource selectors (preview)
-- overrides (preview)
+- [display name](#display-name-and-description)
+- [description](#display-name-and-description)
+- [metadata](#metadata)
+- [resource selectors (preview)](#resource-selectors-preview)
+- [overrides (preview)](#overrides-preview)
+- [enforcement mode](#enforcement-mode)
+- [excluded scopes](#excluded-scopes)
+- [policy definition](#policy-definition-id)
+- [non-compliance messages](#non-compliance-messages)
+- [parameters](#parameters)
+- [identity](#identity)
For example, the following JSON shows a policy assignment in _DoNotEnforce_ mode with dynamic parameters:
shows our policy assignment with two additional Azure regions added to the **SDP
Resource selectors have the following properties:

- `name`: The name of the resource selector.
-- `selectors`: The factor used to determine which subset of resources applicable to the policy assignment should be evaluated for compliance.
- - `kind`: The property of a `selector` that describes what characteristic will narrow down the set of evaluated resources. Each 'kind' can only be used once in a single resource selector. Allowed values are:
- - `resourceLocation`: This is used to select resources based on their type. Can be used in up to 10 resource selectors. Cannot be used in the same resource selector as `resourceWithoutLocation`.
+
+- `selectors`: (Optional) The property used to determine which subset of resources applicable to the policy assignment should be evaluated for compliance.
+
+ - `kind`: The property of a selector that describes what characteristic will narrow down the set of evaluated resources. Each kind can only be used once in a single resource selector. Allowed values are:
+
+ - `resourceLocation`: This is used to select resources based on their location. Cannot be used in the same resource selector as `resourceWithoutLocation`.
+ - `resourceType`: This is used to select resources based on their type.
+ - `resourceWithoutLocation`: This is used to select resources at the subscription level which do not have a location. Currently only supports `subscriptionLevelResources`. Cannot be used in the same resource selector as `resourceLocation`.
+ - `in`: The list of allowed values for the specified `kind`. Cannot be used with `notIn`. Can contain up to 50 values.
+ - `notIn`: The list of not-allowed values for the specified `kind`. Cannot be used with `in`. Can contain up to 50 values.
-A **resource selector** can contain multiple **selectors**. To be applicable to a resource selector, a resource must meet requirements specified by all its selectors. Further, multiple **resource selectors** can be specified in a single assignment. In-scope resources are evaluated when they satisfy any one of these resource selectors.
+A **resource selector** can contain multiple **selectors**. To be applicable to a resource selector, a resource must meet requirements specified by all its selectors. Further, up to 10 **resource selectors** can be specified in a single assignment. In-scope resources are evaluated when they satisfy any one of these resource selectors.
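As a sketch, a `resourceSelectors` property honoring these rules might look like the following (the selector name and regions are illustrative):

```json
"resourceSelectors": [
  {
    "name": "SDPRegions",
    "selectors": [
      {
        "kind": "resourceLocation",
        "in": [ "eastus", "westus" ]
      }
    ]
  }
]
```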
## Overrides (preview)
Let's take a look at an example. Imagine you have a policy initiative named _Cos
}
```
-Note that one override can be used to replace the effect of many policies by specifying multiple values in the policyDefinitionReferenceId array. A single override can be used for up to 50 policyDefinitionReferenceIds, and a single policy assignment can contain up to 10 overrides, evaluated in the order in which they are specified. Before the assignment is created, the effect chosen in the override is validated against the policy rule and parameter allowed value list, in cases where the effect is [parameterized](definition-structure.md#parameters).
+Overrides have the following properties:
+- `kind`: The property the assignment will override. The supported kind is `policyEffect`.
+
+- `value`: The new value which will override the existing value. The supported values are [effects](effects.md).
+
+- `selectors`: (Optional) The property used to determine what scope of the policy assignment should take on the override.
+
+ - `kind`: The property of a selector that describes what characteristic will narrow down the scope of the override. Allowed value for `kind: policyEffect` is:
+
+ - `policyDefinitionReferenceId`: This specifies which policy definitions within an initiative assignment should take on the effect override.
+
+ - `in`: The list of allowed values for the specified `kind`. Cannot be used with `notIn`. Can contain up to 50 values.
+
+ - `notIn`: The list of not-allowed values for the specified `kind`. Cannot be used with `in`. Can contain up to 50 values.
+
+Note that one override can be used to replace the effect of many policies by specifying multiple values in the policyDefinitionReferenceId array. A single override can be used for up to 50 policyDefinitionReferenceIds, and a single policy assignment can contain up to 10 overrides, evaluated in the order in which they are specified. Before the assignment is created, the effect chosen in the override is validated against the policy rule and parameter allowed value list (in cases where the effect is [parameterized](definition-structure.md#parameters)).
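Putting these properties together, an `overrides` entry might be sketched as follows (the effect value and policy definition reference IDs are illustrative):

```json
"overrides": [
  {
    "kind": "policyEffect",
    "value": "Audit",
    "selectors": [
      {
        "kind": "policyDefinitionReferenceId",
        "in": [ "limitSku", "limitType" ]
      }
    ]
  }
]
```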
## Enforcement mode
governance Attestation Structure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/concepts/attestation-structure.md
Attestations are used by Azure Policy to set compliance states of resources or scopes targeted by [manual policies](effects.md#manual-preview). They also allow users to provide additional metadata or link to evidence which accompanies the attested compliance state.

> [!NOTE]
-> In preview, Attestations are available only through the Azure Resource Manager (ARM) API.
+> In preview, Attestations are available only through the [Azure Resource Manager (ARM) API](/rest/api/policy/attestations).
-Below is an example of creating a new attestation resource which sets the compliance state for resources within a desired resource group:
+## Best practices
+
+Attestations can be used to set the compliance state of an individual resource for a given manual policy. This means that each applicable resource requires one attestation per manual policy assignment. For ease of management, manual policies should be designed to target the scope which defines the boundary of resources whose compliance state needs to be attested.
+
+For example, suppose an organization divides teams by resource group, and each team is required to attest to development of procedures for handling resources within that resource group. In this scenario, the conditions of the policy rule should specify that type equals `Microsoft.Resources/resourceGroups`. This way, one attestation is required for the resource group, rather than for each individual resource within. Similarly, if the organization divides teams by subscriptions, the policy rule should target `Microsoft.Resources/subscriptions`.
+
+Typically, the provided evidence should correspond with relevant scopes of the organizational structure. This pattern prevents the need to duplicate evidence across many attestations. Such duplications would make manual policies difficult to manage, and indicate that the policy definition targets the wrong resource(s).
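For instance, a manual policy rule scoped to resource groups, as described above, could be sketched like this (an illustrative fragment of a policy definition, not a complete one):

```json
"policyRule": {
  "if": {
    "field": "type",
    "equals": "Microsoft.Resources/resourceGroups"
  },
  "then": {
    "effect": "manual"
  }
}
```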
+
+## Example attestation
+
+Below is an example of creating a new attestation resource which sets the compliance state for a resource group targeted by a manual policy assignment:
```http
PUT https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.PolicyInsights/attestations/{name}?api-version=2019-10-01
```
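As a sketch, the body of this PUT call might resemble the following (property names are assumed from the attestations preview API; see the Request body section for the authoritative schema, and treat the values as placeholders):

```json
{
  "properties": {
    "policyAssignmentId": "/subscriptions/{subscriptionId}/providers/Microsoft.Authorization/policyAssignments/{assignmentName}",
    "complianceState": "Compliant",
    "comments": "Procedures documented and reviewed.",
    "evidence": [
      {
        "description": "Link to the team's runbook.",
        "sourceUri": "https://contoso.example/runbook"
      }
    ]
  }
}
```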
-Attestations can be used to set the compliance state of an individual resource or a scope. A resource can have one attestation for an individual manual policy assignment.
## Request body
governance Effects https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/concepts/effects.md
Example: Gatekeeper v2 admission control rule to allow only the specified contai
## Manual (preview)
-The new `manual` (preview) effect enables you to define and track your own custom attestation
-resources. Unlike other Policy definitions that actively scan for evaluation, the Manual effect
-allows for manual changes to the compliance state. To change the compliance for a manual policy,
-you'll need to create an attestation for that compliance state.
+The new `manual` (preview) effect enables you to self-attest the compliance of resources or scopes. Unlike other policy definitions that actively scan for evaluation, the Manual effect allows for manual changes to the compliance state. To change the compliance of a resource or scope targeted by a manual policy, you'll need to create an [attestation](attestation-structure.md). The [best practice](attestation-structure.md#best-practices) is to design manual policies that target the scope which defines the boundary of resources whose compliance needs attesting.
> [!NOTE]
> During Public Preview, support for manual policy is available through various Microsoft Defender
governance Exemption Structure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/concepts/exemption-structure.md
see [Understand scope in Azure Policy](./scope.md). Azure Policy exemptions only
You use JavaScript Object Notation (JSON) to create a policy exemption. The policy exemption contains elements for:

-- display name
-- description
-- metadata
-- policy assignment
-- policy definitions within an initiative
-- exemption category
-- expiration
+- [display name](#display-name-and-description)
+- [description](#display-name-and-description)
+- [metadata](#metadata)
+- [policy assignment](#policy-assignment-id)
+- [policy definitions within an initiative](#policy-definition-ids)
+- [exemption category](#exemption-category)
+- [expiration](#expiration)
+- [resource selectors](#resource-selectors-preview)
+- [assignment scope validation](#assignment-scope-validation-preview)
> [!NOTE]
> A policy exemption is created as a child object on the resource hierarchy or the individual
two of the policy definitions in the initiative, the `customOrgPolicy` custom po
      "allowedLocations"
    ],
    "exemptionCategory": "waiver",
- "expiresOn": "2020-12-31T23:59:00.0000000Z"
+ "expiresOn": "2020-12-31T23:59:00.0000000Z",
+ "assignmentScopeValidation": "Default"
  }
}
```
format `yyyy-MM-ddTHH:mm:ss.fffffffZ`.
> The policy exemption isn't deleted when the `expiresOn` date is reached. The object is preserved
> for record-keeping, but the exemption is no longer honored.
+## Resource selectors (preview)
+
+Exemptions support an optional property `resourceSelectors`. This property works the same way in exemptions as it does in assignments, allowing for gradual rollout or rollback of an _exemption_ to certain subsets of resources in a controlled manner based on resource type, resource location, or whether the resource has a location. More details about how to use resource selectors can be found in the [assignment structure](assignment-structure.md#resource-selectors-preview). Below is an example exemption JSON which leverages resource selectors. In this example, only resources in `westcentralus` will be exempt from the policy assignment:
+
+```json
+{
+ "properties": {
+ "policyAssignmentId": "/subscriptions/{subId}/providers/Microsoft.Authorization/policyAssignments/CostManagement",
+ "policyDefinitionReferenceIds": [
+ "limitSku", "limitType"
+ ],
+ "exemptionCategory": "Waiver",
+ "resourceSelectors": [
+ {
+ "name": "TemporaryMitigation",
+ "selectors": [
+ {
+ "kind": "resourceLocation",
+ "in": [ "westcentralus" ]
+ }
+ ]
+ }
+ ]
+ },
+ "systemData": { ... },
+ "id": "/subscriptions/{subId}/resourceGroups/demoCluster/providers/Microsoft.Authorization/policyExemptions/DemoExpensiveVM",
+ "type": "Microsoft.Authorization/policyExemptions",
+ "name": "DemoExpensiveVM"
+}
+```
+
+Regions can be added or removed from the `resourceLocation` list in the example above. Resource selectors allow for greater flexibility of where and how exemptions can be created and managed.
+
+## Assignment scope validation (preview)
+
+In most scenarios, the exemption scope is validated to ensure it is at or under the policy assignment scope. The optional `assignmentScopeValidation` property can allow an exemption to bypass this validation and be created outside of the assignment scope. This is intended for situations where a subscription needs to be moved from one management group (MG) to another, but the move would be blocked by policy due to properties of resources within the subscription. In this scenario, an exemption could be created for the subscription in its current MG to exempt its resources from a policy assignment on the destination MG. That way, when the subscription is moved into the destination MG, the operation is not blocked because resources are already exempt from the policy assignment in question. The use of this property is illustrated below:
+
+```json
+{
+ "properties": {
+ "policyAssignmentId": "/providers/Microsoft.Management/managementGroups/{mgB}/providers/Microsoft.Authorization/policyAssignments/CostManagement",
+ "policyDefinitionReferenceIds": [
+ "limitSku", "limitType"
+ ],
+ "exemptionCategory": "Waiver",
+ "assignmentScopeValidation": "DoNotValidate"
+ },
+ "systemData": { ... },
+ "id": "/subscriptions/{subIdA}/providers/Microsoft.Authorization/policyExemptions/DemoExpensiveVM",
+ "type": "Microsoft.Authorization/policyExemptions",
+ "name": "DemoExpensiveVM"
+}
+```
+
+Allowed values for `assignmentScopeValidation` are `Default` and `DoNotValidate`. If not specified, the default validation process will occur.
+
## Required permissions

The Azure RBAC permissions needed to manage Policy exemption objects are in the
assignment.
- Learn how to [get compliance data](../how-to/get-compliance-data.md).
- Learn how to [remediate non-compliant resources](../how-to/remediate-resources.md).
- Review what a management group is with
- [Organize your resources with Azure management groups](../../management-groups/overview.md).
+ [Organize your resources with Azure management groups](../../management-groups/overview.md).
governance Policy Applicability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/concepts/policy-applicability.md
Condition(s) in the `if` block of the policy rule are evaluated for applicabilit
> [!NOTE]
> Applicability is different from compliance, and the logic used to determine each is different. If a resource is **applicable** that means it is relevant to the policy. If a resource is **compliant** that means it adheres to the policy. Sometimes only certain conditions from the policy rule impact applicability, while all conditions of the policy rule impact compliance state.
-## Applicability logic for Append/Modify/Audit/Deny/DataPlane effects
+## Applicability logic for Append/Modify/Audit/Deny/RP Mode specific effects
Azure Policy evaluates only `type`, `name`, and `kind` conditions in the policy rule `if` expression and treats other conditions as true (or false when negated). If the final evaluation result is true, the policy is applicable. Otherwise, it's not applicable.
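As an illustrative model (not the actual service implementation), the applicability logic described above can be sketched in a few lines:

```python
# Sketch of the applicability rule: only type/name/kind conditions are
# evaluated; every other condition is treated as true (false when negated).
APPLICABILITY_FIELDS = {"type", "name", "kind"}

def is_applicable(conditions, resource):
    """conditions: iterable of (field, expected_value, negated) tuples, ANDed."""
    for field, expected, negated in conditions:
        if field in APPLICABILITY_FIELDS:
            match = resource.get(field) == expected
        else:
            # Conditions on other fields never narrow applicability...
            match = True
        if negated:
            # ...unless they're negated, in which case they become false.
            match = not match
        if not match:
            return False
    return True

resource = {"type": "Microsoft.Storage/storageAccounts", "name": "mysa"}
# The location condition is ignored for applicability, so this is applicable:
print(is_applicable(
    [("type", "Microsoft.Storage/storageAccounts", False),
     ("location", "eastus", False)],
    resource))  # prints: True
```

Note the asymmetry this models: a non-applicability condition such as `location` can never exclude a resource, but its negation always does, which matches the "true (or false when negated)" wording above.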
governance Policy For Kubernetes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/concepts/policy-for-kubernetes.md
Azure Policy for Kubernetes supports the following cluster environments:
- [Azure Arc enabled Kubernetes](../../../azure-arc/kubernetes/overview.md)

> [!IMPORTANT]
-> The Azure Policy Add-on Helm model and the add-on for AKS Engine have been _deprecated_. Instructions can be found below for [removal of those add-ons](#remove-the-add-on). The Azure Policy Extension for Azure Arc enabled Kubernetes is in _preview_.
+> The Azure Policy Add-on Helm model and the add-on for AKS Engine have been _deprecated_. Instructions can be found below for [removal of those add-ons](#remove-the-add-on).
## Overview
similar to the following output:
  "identity": null
}
```
-## <a name="install-azure-policy-extension-for-azure-arc-enabled-kubernetes"></a>Install Azure Policy Extension for Azure Arc enabled Kubernetes (preview)
+## <a name="install-azure-policy-extension-for-azure-arc-enabled-kubernetes"></a>Install Azure Policy Extension for Azure Arc enabled Kubernetes
[Azure Policy for Kubernetes](./policy-for-kubernetes.md) makes it possible to manage and report on the compliance state of your Kubernetes clusters from one place.
hdinsight Benefits Of Migrating To Hdinsight 40 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/benefits-of-migrating-to-hdinsight-40.md
Hive metastore operations take much time and thus slow down Hive compilation. In
## Troubleshooting guide
-[HDInsight 3.6 to 4.0 troubleshooting guide for Hive workloads](/azure/hdinsight/interactive-query/interactive-query-troubleshoot-migrate-36-to-40.md) provides answers to common issues faced when migrating Hive workloads from HDInsight 3.6 to HDInsight 4.0.
+[HDInsight 3.6 to 4.0 troubleshooting guide for Hive workloads](./interactive-query/interactive-query-troubleshoot-migrate-36-to-40.md) provides answers to common issues faced when migrating Hive workloads from HDInsight 3.6 to HDInsight 4.0.
## References
https://hadoop.apache.org/docs/r3.1.1/hadoop-project-dist/hadoop-common/release/
## Further reading
-* [HDInsight 4.0 Announcement](/azure/hdinsight/hdinsight-version-release.md)
-* [HDInsight 4.0 deep dive](https://azure.microsoft.com/blog/deep-dive-into-azure-hdinsight-4-0.md)
+* [HDInsight 4.0 Announcement](./hdinsight-version-release.md)
+* [HDInsight 4.0 deep dive](https://azure.microsoft.com/blog/deep-dive-into-azure-hdinsight-4-0/)
hdinsight Hdinsight Overview Before You Start https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-overview-before-you-start.md
HDInsight has two options to configure the databases in the clusters.
During cluster creation, the default configuration uses the internal database. Once the cluster is created, customers can't change the database type. Hence, it's recommended to create and use an external database. You can create custom databases for Ambari, Hive, and Ranger.
-For more information, see how to [Set up HDInsight clusters with a custom Ambari DB](/azure/hdinsight/hdinsight-custom-ambari-db.md)
+For more information, see how to [Set up HDInsight clusters with a custom Ambari DB](./hdinsight-custom-ambari-db.md).
## Keep your clusters up to date
As part of the best practices, we recommend you keep your clusters updated on re
HDInsight releases happen every 30 to 60 days. It's always good to move to the latest release as early as possible. The recommended maximum duration for cluster upgrades is less than six months.
-For more information, see how to [Migrate HDInsight cluster to a newer version](/azure/hdinsight/hdinsight-upgrade-cluster.md)
+For more information, see how to [Migrate HDInsight cluster to a newer version](./hdinsight-upgrade-cluster.md).
## Next steps

* [Create Apache Hadoop cluster in HDInsight](./hadoop/apache-hadoop-linux-create-cluster-get-started-portal.md)
* [Create Apache Spark cluster - Portal](./spark/apache-spark-jupyter-spark-sql-use-portal.md)
-* [Enterprise security in Azure HDInsight](./domain-joined/hdinsight-security-overview.md)
+* [Enterprise security in Azure HDInsight](./domain-joined/hdinsight-security-overview.md)
hdinsight Hdinsight Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-release-notes.md
This new feature allows you to add more disks in cluster, which will be used as
> The added disks are only configured for node manager local directories.
>
-For more information, [see here](/azure/hdinsight/hdinsight-hadoop-provision-linux-clusters#configuration--pricing)
+For more information, [see here](./hdinsight-hadoop-provision-linux-clusters.md#configuration--pricing)
**2. Selective logging analysis**
Selective logging analysis is now available on all regions for public preview. Y
1. Selective Logging uses script action to disable/enable tables and their log types. Since it doesn't open any new ports or change any existing security settings, there are no security changes.
1. Script Action runs in parallel on all specified nodes and changes the configuration files for disabling/enabling tables and their log types.
-For more information, [see here](/azure/hdinsight/selective-logging-analysis)
+For more information, [see here](./selective-logging-analysis.md)
![Icon_showing_bug_fixes](media/hdinsight-release-notes/icon-for-bugfix.png)
https://hdiconfigactions.blob.core.windows.net/log-analytics-patch/OMSUPGRADE14.
### Known issues
-HDInsight is compatible with Apache HIVE 3.1.2. Due to a bug in this release, the Hive version is shown as 3.1.0 in hive interfaces. However, there's no impact on the functionality.
+HDInsight is compatible with Apache Hive 3.1.2. Due to a bug in this release, the Hive version is shown as 3.1.0 in Hive interfaces. However, there's no impact on the functionality.
hdinsight Manage Clusters Runbooks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/manage-clusters-runbooks.md
If you don't have an Azure subscription, create a [free account](https://azure
## Prerequisites
-* An existing [Azure Automation account](/azure/automation/quickstarts/create-azure-automation-account-portal).
+* An existing [Azure Automation account](../automation/quickstarts/create-azure-automation-account-portal.md).
* An existing [Azure Storage account](../storage/common/storage-account-create.md), which will be used as cluster storage.

## Install HDInsight modules
When no longer needed, delete the Azure Automation Account that was created to a
## Next steps

> [!div class="nextstepaction"]
-> [Manage Apache Hadoop clusters in HDInsight by using Azure PowerShell](hdinsight-administer-use-powershell.md)
+> [Manage Apache Hadoop clusters in HDInsight by using Azure PowerShell](hdinsight-administer-use-powershell.md)
healthcare-apis Deploy Iot Connector In Azure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/deploy-iot-connector-in-azure.md
For more information about the Quickstart template and the Deploy to Azure butto
Azure provides Azure PowerShell and Azure CLI to speed up your configurations when used in enterprise environments. Deploying MedTech service with Azure PowerShell or Azure CLI can be useful for adding automation so that you can scale your deployment for a large number of developers. This method is more detailed but provides extra speed and efficiency because it allows you to automate your deployment.
-For more information about Using an ARM template with Azure PowerShell and Azure CLI, see [Using Azure PowerShell and Azure CLI to deploy the MedTech service using Azure Resource Manager templates](/deploy-08-new-ps-cli.md).
+For more information about using an ARM template with Azure PowerShell and Azure CLI, see [Using Azure PowerShell and Azure CLI to deploy the MedTech service using Azure Resource Manager templates](/azure/healthcare-apis/iot/deploy-08-new-ps-cli).
## Manual deployment

The manual deployment method uses the Azure portal to implement each deployment task individually. There are no shortcuts. Because you can see all the details of how to complete each task in sequence, this procedure can be beneficial if you need to customize or troubleshoot your deployment process. This is the most complex method, but it provides valuable technical information and development options that enable you to fine-tune your deployment precisely.
-For more information about manual deployment with portal, see [Overview of how to manually deploy the MedTech service using the Azure portal](/deploy-03-new-manual.md).
+For more information about manual deployment with portal, see [Overview of how to manually deploy the MedTech service using the Azure portal](/azure/healthcare-apis/iot/deploy-03-new-manual).
## Deployment architecture overview
For information about granting access to the FHIR service, see [Granting access
In this article, you learned about the different types of deployment for MedTech service. To learn more about MedTech service, see

>[!div class="nextstepaction"]
->[What is MedTech service?](/iot-connector-overview.md).
+>[What is MedTech service?](/rest/api/healthcareapis/iot-connectors).
FHIR&#174; is a registered trademark of Health Level Seven International, registered in the U.S. Trademark Office and is used with their permission.
healthcare-apis Device Data Through Iot Hub https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/device-data-through-iot-hub.md
Use your device (real or simulated) to send the sample heart rate message shown
This message will get routed to MedTech service, where the message will be transformed into a FHIR Observation resource and stored into FHIR service.

> [!IMPORTANT]
-> To avoid device spoofing in device-to-cloud messages, Azure IoT Hub enriches all messages with additional properties. To learn more about these properties, see [Anti-spoofing properties](/azure/iot-hub/iot-hub-devguide-messages-construct#anti-spoofing-properties).
+> To avoid device spoofing in device-to-cloud messages, Azure IoT Hub enriches all messages with additional properties. To learn more about these properties, see [Anti-spoofing properties](../../iot-hub/iot-hub-devguide-messages-construct.md#anti-spoofing-properties).
>
> To learn about IoT Hub device message enrichment and IotJsonPathContentTemplate mappings usage with the MedTech service device mapping, see [How to use IotJsonPathContentTemplate mappings](how-to-use-iot-jsonpath-content-mappings.md).
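The sample heart rate message itself isn't shown in this excerpt, but a minimal device-to-cloud message of that shape might look like the following (field names and values are illustrative, not the exact sample from the article):

```json
{
  "heartRate": "78",
  "endDate": "2022-10-20T01:09:57Z",
  "deviceId": "device01"
}
```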
To learn about the different stages of data flow within MedTech service, see
>[!div class="nextstepaction"]
>[MedTech service data flow](iot-data-flow.md)
healthcare-apis How To Display Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/how-to-display-metrics.md
Metric category|Metric name|Metric description|
> [!TIP] >
- > To learn more about advanced metrics display and sharing options, see [Getting started with Azure Metrics Explorer](/azure/azure-monitor/essentials/metrics-getting-started)
+ > To learn more about advanced metrics display and sharing options, see [Getting started with Azure Metrics Explorer](../../azure-monitor/essentials/metrics-getting-started.md)
> [!IMPORTANT] >
To learn about the MedTech service frequently asked questions (FAQs), see
> [!div class="nextstepaction"] > [Frequently asked questions about the MedTech service](iot-connector-faqs.md)
healthcare-apis How To Use Iot Jsonpath Content Mappings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/how-to-use-iot-jsonpath-content-mappings.md
With each of these examples, you're provided with:
* An example of what the MedTech service device message will look like after normalization. > [!IMPORTANT]
-> To avoid device spoofing in device-to-cloud messages, Azure IoT Hub enriches all messages with additional properties. To learn more about these properties, see [Anti-spoofing properties](/azure/iot-hub/iot-hub-devguide-messages-construct#anti-spoofing-properties).
+> To avoid device spoofing in device-to-cloud messages, Azure IoT Hub enriches all messages with additional properties. To learn more about these properties, see [Anti-spoofing properties](../../iot-hub/iot-hub-devguide-messages-construct.md#anti-spoofing-properties).
> [!TIP] > [Visual Studio Code with the Azure IoT Hub extension](https://marketplace.visualstudio.com/items?itemName=vsciot-vscode.azure-iot-tools) is a recommended method for sending IoT device messages to your IoT Hub for testing and troubleshooting.
In this article, you learned how to use IotJsonPathContentTemplate mappings with
>[!div class="nextstepaction"] >[How to use the FHIR destination mapping](how-to-use-fhir-mappings.md)
healthcare-apis Iot Metrics Diagnostics Export https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/iot-metrics-diagnostics-export.md
# How to configure diagnostic settings for exporting the MedTech service metrics
-In this article, you'll learn how to configure diagnostic settings for the MedTech service to export metrics to different destinations (for example: to [Azure storage](/azure/storage/) or an [Azure event hub](/azure/event-hubs/)) for audit, analysis, or backup.
+In this article, you'll learn how to configure diagnostic settings for the MedTech service to export metrics to different destinations (for example: to [Azure storage](../../storage/index.yml) or an [Azure event hub](../../event-hubs/index.yml)) for audit, analysis, or backup.
## Create a diagnostic setting for the MedTech service 1. To enable metrics export for your MedTech service, select **MedTech service** in your workspace under **Services**.
In this article, you'll learn how to configure diagnostic settings for the MedTe
To view the frequently asked questions (FAQs) about the MedTech service, see >[!div class="nextstepaction"]
>[MedTech service FAQs](iot-connector-faqs.md)
industrial-iot Overview What Is Industrial Iot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/industrial-iot/overview-what-is-industrial-iot.md
Azure IIoT solutions are built from specific components:
The [Azure IoT Hub](https://azure.microsoft.com/services/iot-hub/) acts as a central message hub for secure, bi-directional communications between any IoT application and the devices it manages. It's an open and flexible cloud platform as a service (PaaS) that supports open-source SDKs and multiple protocols.
-Gathering your industrial and business data onto an IoT Hub lets you store your data securely, perform business and efficiency analyses on it, and generate reports from it. You can process your combined data with Microsoft Azure services and tools, for example [Azure Stream Analytics](/azure/stream-analytics), or visualize in your Business Intelligence platform of choice such as [Power BI](https://powerbi.microsoft.com).
+Gathering your industrial and business data onto an IoT Hub lets you store your data securely, perform business and efficiency analyses on it, and generate reports from it. You can process your combined data with Microsoft Azure services and tools, for example [Azure Stream Analytics](../stream-analytics/index.yml), or visualize in your Business Intelligence platform of choice such as [Power BI](https://powerbi.microsoft.com).
### IoT Edge devices
You can read more about the OPC Publisher or get started with deploying the IIoT
> [!div class="nextstepaction"] > [Deploy the Industrial IoT Platform](tutorial-deploy-industrial-iot-platform.md)
iot-develop Iot Device Selection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-develop/iot-device-selection.md
Please submit an issue!
Other helpful resources include:
-- [Overview of Azure IoT device types](/azure/iot-develop/concepts-iot-device-types)
-- [Overview of Azure IoT Device SDKs](/azure/iot-develop/about-iot-sdks)
-- [Quickstart: Send telemetry from an IoT Plug and Play device to Azure IoT Hub](/azure/iot-develop/quickstart-send-telemetry-iot-hub?pivots=programming-language-ansi-c)
-- [AzureRTOS ThreadX Documentation](/azure/rtos/threadx/)
+- [Overview of Azure IoT device types](./concepts-iot-device-types.md)
+- [Overview of Azure IoT Device SDKs](./about-iot-sdks.md)
+- [Quickstart: Send telemetry from an IoT Plug and Play device to Azure IoT Hub](./quickstart-send-telemetry-iot-hub.md?pivots=programming-language-ansi-c)
+- [AzureRTOS ThreadX Documentation](/azure/rtos/threadx/)
iot-edge Configure Connect Verify Gpu https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/configure-connect-verify-gpu.md
This tutorial shows you how to build a GPU-enabled virtual machine (VM). From th
We'll use the Azure portal, the Azure Cloud Shell, and your VM's command line to: * Build a GPU-capable VM
-* Install the [NVIDIA driver extension](/azure/virtual-machines/extensions/hpccompute-gpu-linux) on the VM
+* Install the [NVIDIA driver extension](../virtual-machines/extensions/hpccompute-gpu-linux.md) on the VM
* Configure a module on an IoT Edge device to allocate work to a GPU ## Prerequisites
We'll use the Azure portal, the Azure Cloud Shell, and your VM's command line to
* Azure IoT Edge device
- If you don't already have an IoT Edge device and need to quickly create one, run the following command. Use the [Azure Cloud Shell](/azure/cloud-shell/overview) located in the Azure portal. Create a new device name for `<DEVICE-NAME>` and replace the IoT `<IOT-HUB-NAME>` with your own.
+ If you don't already have an IoT Edge device and need to quickly create one, run the following command. Use the [Azure Cloud Shell](../cloud-shell/overview.md) located in the Azure portal. Create a new device name for `<YOUR-DEVICE-NAME>` and replace `<YOUR-IOT-HUB-NAME>` with the name of your IoT hub.
```azurecli az iot hub device-identity create --device-id <YOUR-DEVICE-NAME> --edge-enabled --hub-name <YOUR-IOT-HUB-NAME>
We'll use the Azure portal, the Azure Cloud Shell, and your VM's command line to
## Create a GPU-optimized virtual machine
-To create a GPU-optimized virtual machine (VM), choosing the right size is important. Not all VM sizes will accommodate GPU processing. In addition, there are different VM sizes for different workloads. For more information, see [GPU optimized virtual machine sizes](/azure/virtual-machines/sizes-gpu) or try the [Virtual machines selector](https://azure.microsoft.com/pricing/vm-selector/).
+To create a GPU-optimized virtual machine (VM), choosing the right size is important. Not all VM sizes will accommodate GPU processing. In addition, there are different VM sizes for different workloads. For more information, see [GPU optimized virtual machine sizes](../virtual-machines/sizes-gpu.md) or try the [Virtual machines selector](https://azure.microsoft.com/pricing/vm-selector/).
-Let's create an IoT Edge VM with the [Azure Resource Manager (ARM)](/azure/azure-resource-manager/management/overview) template in GitHub, then configure it to be GPU-optimized.
+Let's create an IoT Edge VM with the [Azure Resource Manager (ARM)](../azure-resource-manager/management/overview.md) template in GitHub, then configure it to be GPU-optimized.
1. Go to the IoT Edge VM deployment template in GitHub: [Azure/iotedge-vm-deploy](https://github.com/Azure/iotedge-vm-deploy/tree/1.4).
Let's create an IoT Edge VM with the [Azure Resource Manager (ARM)](/azure/azure
> > Check which GPU VMs are supported in each region: [Products available by region](https://azure.microsoft.com/global-infrastructure/services/?regions=us-central,us-east,us-east-2,us-north-central,us-south-central,us-west-central,us-west,us-west-2,us-west-3&products=virtual-machines). >
- > To check [which region your Azure subscription allows](/azure/azure-resource-manager/troubleshooting/error-sku-not-available?tabs=azure-cli#solution), try this Azure command from the Azure portal. The `N` in `Standard_N` means it's a GPU-enabled VM.
+ > To check [which region your Azure subscription allows](../azure-resource-manager/troubleshooting/error-sku-not-available.md?tabs=azure-cli#solution), try this Azure command from the Azure portal. The `N` in `Standard_N` means it's a GPU-enabled VM.
> ```azurecli > az vm list-skus --location <YOUR-REGION> --size Standard_N --all --output table > ```
Let's create an IoT Edge VM with the [Azure Resource Manager (ARM)](/azure/azure
1. Select the **Review + create** button at the bottom, then the **Create** button. Deployment can take up to one minute to complete. ## Install the NVIDIA extension
-Now that we have a GPU-optimized VM, let's install the [NVIDIA extension](/azure/virtual-machines/extensions/hpccompute-gpu-linux) on the VM using the Azure portal.
+Now that we have a GPU-optimized VM, let's install the [NVIDIA extension](../virtual-machines/extensions/hpccompute-gpu-linux.md) on the VM using the Azure portal.
1. Open your VM in the Azure portal and select **Extensions + applications** from the left menu.
Now that we have a GPU-optimized VM, let's install the [NVIDIA extension](/azure
:::image type="content" source="media/configure-connect-verify-gpu/nvidia-driver-installed.png" alt-text="Screenshot of the NVIDIA driver table."::: > [!NOTE]
-> The NVIDIA extension is a simplified way to install the NVIDIA drivers, but you may need more customization. For more information about custom installations on N-series VMs, see [Install NVIDIA GPU drivers on N-series VMs running Linux](/azure/virtual-machines/linux/n-series-driver-setup).
+> The NVIDIA extension is a simplified way to install the NVIDIA drivers, but you may need more customization. For more information about custom installations on N-series VMs, see [Install NVIDIA GPU drivers on N-series VMs running Linux](../virtual-machines/linux/n-series-driver-setup.md).
## Enable a module with GPU acceleration
az group list
## Next steps
This article helped you set up your virtual machine and IoT Edge device to be GPU-accelerated. To run an application with a similar setup, try the learning path for [NVIDIA DeepStream development with Microsoft Azure](/training/paths/nvidia-deepstream-development-with-microsoft-azure/?WT.mc_id=iot-47680-cxa). The Learn tutorial shows you how to develop optimized Intelligent Video Applications that can consume multiple video, image, and audio sources.
iot-edge How To Provision Devices At Scale Linux Symmetric https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/how-to-provision-devices-at-scale-linux-symmetric.md
Have the following information ready:
1. Update the values of `id_scope`, `registration_id`, and `symmetric_key` with your DPS and device information.
- The symmetric key parameter can accept a value of an inline key, a file URI, or a PKCS#11 URI. Uncomment just one symmetric key line, based on which format you're using.
+ The symmetric key parameter can accept a value of an inline key, a file URI, or a PKCS#11 URI. Uncomment just one symmetric key line, based on which format you're using. When using an inline key, use a base64-encoded key like the example. When using a file URI, your file should contain the raw bytes of the key.
If you use any PKCS#11 URIs, find the **PKCS#11** section in the config file and provide information about your PKCS#11 configuration.
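The two non-PKCS#11 formats described above differ in encoding: an inline value is base64 text, while a `file:` URI points at a file containing the raw key bytes. A minimal sketch of that distinction (the helper name is illustrative, not the config parser's actual API):

```python
# Hedged sketch of loading a symmetric key in either supported form:
# inline = base64-encoded text, file URI = raw key bytes on disk.
import base64
import tempfile

def load_symmetric_key(inline_b64=None, file_uri=None) -> bytes:
    if inline_b64 is not None:
        return base64.b64decode(inline_b64)   # inline form: base64-encoded text
    path = file_uri[len("file://"):]
    with open(path, "rb") as f:
        return f.read()                       # file form: raw key bytes

key = b"\x01\x02\x03\x04"
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(key)
assert load_symmetric_key(inline_b64=base64.b64encode(key).decode()) == key
assert load_symmetric_key(file_uri="file://" + f.name) == key
```

The asserts show both routes recover the same key bytes, which is why only one `symmetric_key` line should be uncommented at a time.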
iot-edge How To Vs Code Develop Module https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/how-to-vs-code-develop-module.md
Previously updated : 08/30/2022 Last updated : 10/18/2022
To build and deploy your module image, you need Docker to build the module image
> You can use a local Docker registry for prototype and testing purposes instead of a cloud registry. - Install the [Azure CLI](/cli/azure/install-azure-cli)++ - Install the Python-based [Azure IoT Edge Dev Tool](https://pypi.org/project/iotedgedev/) in order to set up your local development environment to debug, run, and test your IoT Edge solution. If you haven't already done so, install [Python (3.6/3.7/3.8) and Pip3](https://www.python.org/) and then install the IoT Edge Dev Tool (iotedgedev) by running this command in your terminal. ```cmd
To build and deploy your module image, you need Docker to build the module image
> > For more information about setting up your development machine, see [iotedgedev development setup](https://github.com/Azure/iotedgedev/blob/main/docs/environment-setup/manual-dev-machine-setup.md). + Install prerequisites specific to the language you're developing in: # [C](#tab/c)
iot-edge Troubleshoot Iot Edge For Linux On Windows Common Errors https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/troubleshoot-iot-edge-for-linux-on-windows-common-errors.md
The following section addresses the common errors when installing the EFLOW MSI
## Provisioning and IoT Edge runtime The following section addresses the common errors when provisioning the EFLOW virtual machine and interacting with the IoT Edge runtime. Ensure you have an understanding of the following EFLOW concepts:-- [What is Azure IoT Hub Device Provisioning Service?](/azure/iot-dps/about-iot-dps)
+- [What is Azure IoT Hub Device Provisioning Service?](../iot-dps/about-iot-dps.md)
- [Understand the Azure IoT Edge runtime and its architecture](./iot-edge-runtime.md)
- [Troubleshoot your IoT Edge device](./troubleshoot.md)
The following section addresses the common errors related to EFLOW networking an
Do you think that you found a bug in the IoT Edge for Linux on Windows? [Submit an issue](https://github.com/Azure/iotedge-eflow/issues) so that we can continue to improve.
If you have more questions, create a [Support request](https://portal.azure.com/#create/Microsoft.Support) for help.
iot-hub-device-update Device Update Configure Repo https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub-device-update/device-update-configure-repo.md
Following this document, learn how to configure a package repository using [OSCo
You need an Azure account with an [IoT Hub](../iot-hub/iot-concepts-and-iot-hub.md) and Microsoft Azure Portal or Azure CLI to interact with devices via your IoT Hub. Follow the next steps to get started:
- Create a Device Update account and instance in your IoT Hub. See [how to create it](create-device-update-account.md).
-- Install the [IoT Hub Identity Service](https://azure.github.io/iot-identity-service/installation.html) (or skip if [IoT Edge 1.2](/azure/iot-edge/how-to-provision-single-device-linux-symmetric?view=iotedge-2020-11&preserve-view=true&tabs=azure-portal%2Cubuntu#install-iot-edge) or higher is already installed on the device).
+- Install the [IoT Hub Identity Service](https://azure.github.io/iot-identity-service/installation.html) (or skip if [IoT Edge 1.2](../iot-edge/how-to-provision-single-device-linux-symmetric.md?preserve-view=true&tabs=azure-portal%2cubuntu&view=iotedge-2020-11#install-iot-edge) or higher is already installed on the device).
- Install the Device Update agent on the device. See [how to](device-update-ubuntu-agent.md#manually-prepare-a-device).
- Install the OSConfig agent on the device. See [how to](/azure/osconfig/howto-install?tabs=package#step-11-connect-a-device-to-packagesmicrosoftcom).
- Now that both the agent and IoT Hub Identity Service are present on the device, the next step is to configure the device with an identity so it can connect to Azure. See the example [here](/azure/osconfig/howto-install?tabs=package#job-2--connect-to-azure).
Follow the below steps to update Azure IoT Edge on Ubuntu Server 18.04 x64 by co
2. Upload your packages to the above configured repository.
3. Create an [APT manifest](device-update-apt-manifest.md) to provide the Device Update agent with the information it needs to download and install the packages (and their dependencies) from the repository.
4. Follow the steps from [here](device-update-ubuntu-agent.md#prerequisites) to do a package update with Device Update. Device Update is used to deploy package updates to a large number of devices at scale.
5. Monitor results of the package update by following these [steps](device-update-ubuntu-agent.md#monitor-the-update-deployment).
iot-hub Iot Hub Bulk Identity Mgmt https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-bulk-identity-mgmt.md
The **ImportDevicesAsync** method takes two parameters:
SharedAccessBlobPermissions.Read ```
-* A *string* that contains a URI of an [Azure Storage](/azure/storage/) blob container to use as *output* from the job. The job creates a block blob in this container to store any error information from the completed import **Job**. The SAS token must include these permissions:
+* A *string* that contains a URI of an [Azure Storage](../storage/index.yml) blob container to use as *output* from the job. The job creates a block blob in this container to store any error information from the completed import **Job**. The SAS token must include these permissions:
```csharp SharedAccessBlobPermissions.Write | SharedAccessBlobPermissions.Read
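The import job reads its input from the source blob container as newline-delimited JSON, one device registration per line. A hedged sketch of building that blob content (the property set here is trimmed for illustration and the helper name is hypothetical):

```python
# Sketch of the newline-delimited JSON input an IoT Hub bulk import job
# consumes: one serialized device registration per line.
import json

def to_import_lines(device_ids, import_mode="create"):
    """Build the blob text for a bulk device import, one JSON object per line."""
    return "\n".join(
        json.dumps({"id": d, "importMode": import_mode, "status": "enabled"})
        for d in device_ids
    )

blob_text = to_import_lines(["dev-001", "dev-002"])
first = json.loads(blob_text.splitlines()[0])
print(first["id"])  # dev-001
```

Errors from the completed job land as a block blob in the *output* container named by the second URI, which is why that SAS token needs write as well as read permission.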
To further explore the capabilities of IoT Hub, see:
To explore using the IoT Hub Device Provisioning Service to enable zero-touch, just-in-time provisioning, see:
* [Azure IoT Hub Device Provisioning Service](../iot-dps/index.yml)
iot-hub Iot Hub Create Use Iot Toolkit https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-create-use-iot-toolkit.md
This article shows you how to use the [Azure IoT Tools for Visual Studio Code](h
- [Azure IoT Tools](https://marketplace.visualstudio.com/items?itemName=vsciot-vscode.azure-iot-tools) installed for Visual Studio Code -- An Azure resource group: [create a resource group](/azure/azure-resource-manager/management/manage-resource-groups-portal#create-resource-groups) in the Azure portal
+- An Azure resource group: [create a resource group](../azure-resource-manager/management/manage-resource-groups-portal.md#create-resource-groups) in the Azure portal
## Create an IoT hub without an IoT Project
Now that you've deployed an IoT hub using the Azure IoT Tools for Visual Studio
* [Use the Azure IoT Tools for Visual Studio Code for Azure IoT Hub device management](iot-hub-device-management-iot-toolkit.md)
* [See the Azure IoT Hub for VS Code wiki page](https://github.com/microsoft/vscode-azure-iot-toolkit/wiki).
iot-hub Iot Hub Create Using Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-create-using-cli.md
For a complete list of options to update an IoT hub, see the [**az iot hub updat
## Register a new device in the IoT hub
-In this section, you create a device identity in the identity registry in your IoT hub. A device can't connect to a hub unless it has an entry in the identity registry. For more information, see [Understand the identity registry in your IoT hub](iot-hub-devguide-identity-registry.md). This device identity is [IoT Edge](/azure/iot-edge) enabled.
+In this section, you create a device identity in the identity registry in your IoT hub. A device can't connect to a hub unless it has an entry in the identity registry. For more information, see [Understand the identity registry in your IoT hub](iot-hub-devguide-identity-registry.md). This device identity is [IoT Edge](../iot-edge/index.yml) enabled.
Run the following command to create a device identity. Use your IoT hub name and create a new device ID name in place of `{iothub_name}` and `{device_id}`. This command creates a device identity with default authorization (shared private key).
iot-hub Iot Hub Devguide Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-devguide-endpoints.md
The following list describes the endpoints:
* **Service endpoints**. Each IoT hub exposes a set of endpoints for your solution back end to communicate with your devices. With one exception, these endpoints are only exposed using the [AMQP](https://www.amqp.org/) and AMQP over WebSockets protocols. The direct method invocation endpoint is exposed over the HTTPS protocol.
- * *Receive device-to-cloud messages*. This endpoint is compatible with [Azure Event Hubs](/azure/event-hubs/). A back-end service can use it to read the [device-to-cloud messages](iot-hub-devguide-messages-d2c.md) sent by your devices. You can create custom endpoints on your IoT hub in addition to this built-in endpoint.
+ * *Receive device-to-cloud messages*. This endpoint is compatible with [Azure Event Hubs](../event-hubs/index.yml). A back-end service can use it to read the [device-to-cloud messages](iot-hub-devguide-messages-d2c.md) sent by your devices. You can create custom endpoints on your IoT hub in addition to this built-in endpoint.
* *Send cloud-to-device messages and receive delivery acknowledgments*. These endpoints enable your solution back end to send reliable [cloud-to-device messages](iot-hub-devguide-messages-c2d.md), and to receive the corresponding delivery or expiration acknowledgments.
Other reference topics in this IoT Hub developer guide include:
* [IoT Hub query language for device twins, jobs, and message routing](iot-hub-devguide-query-language.md) * [Quotas and throttling](iot-hub-devguide-quotas-throttling.md) * [IoT Hub MQTT support](iot-hub-mqtt-support.md)
* [Understand your IoT hub IP address](iot-hub-understand-ip-address.md)
iot-hub Iot Hub Devguide Identity Registry https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-devguide-identity-registry.md
Device identities can also be exported and imported from an IoT Hub via the Serv
The device data that a given IoT solution stores depends on the specific requirements of that solution. But, as a minimum, a solution must store device identities and authentication keys. Azure IoT Hub includes an identity registry that can store values for each device such as IDs, authentication keys, and status codes. A solution can use other Azure services such as Table storage, Blob storage, or Azure Cosmos DB to store any additional device data.
-*Device provisioning* is the process of adding the initial device data to the stores in your solution. To enable a new device to connect to your hub, you must add a device ID and keys to the IoT Hub identity registry. As part of the provisioning process, you might need to initialize device-specific data in other solution stores. You can also use the Azure IoT Hub Device Provisioning Service to enable zero-touch, just-in-time provisioning to one or more IoT hubs without requiring human intervention. To learn more, see the [provisioning service documentation](/azure/iot-dps).
+*Device provisioning* is the process of adding the initial device data to the stores in your solution. To enable a new device to connect to your hub, you must add a device ID and keys to the IoT Hub identity registry. As part of the provisioning process, you might need to initialize device-specific data in other solution stores. You can also use the Azure IoT Hub Device Provisioning Service to enable zero-touch, just-in-time provisioning to one or more IoT hubs without requiring human intervention. To learn more, see the [provisioning service documentation](../iot-dps/index.yml).
## Device heartbeat
To try out some of the concepts described in this article, see the following IoT
To explore using the IoT Hub Device Provisioning Service to enable zero-touch, just-in-time provisioning, see:
-* [Azure IoT Hub Device Provisioning Service](/azure/iot-dps)
+* [Azure IoT Hub Device Provisioning Service](../iot-dps/index.yml)
iot-hub Iot Hub Devguide Messages Read Builtin https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-devguide-messages-read-builtin.md
# Read device-to-cloud messages from the built-in endpoint
-By default, messages are routed to the built-in service-facing endpoint (**messages/events**) that is compatible with [Event Hubs](/azure/event-hubs/). This endpoint is currently only exposed using the [AMQP](https://www.amqp.org/) protocol on port 5671 and [AMQP over WebSockets](http://docs.oasis-open.org/amqp-bindmap/amqp-wsb/v1.0/cs01/amqp-wsb-v1.0-cs01.html) on port 443. An IoT hub exposes the following properties to enable you to control the built-in Event Hub-compatible messaging endpoint **messages/events**.
+By default, messages are routed to the built-in service-facing endpoint (**messages/events**) that is compatible with [Event Hubs](../event-hubs/index.yml). This endpoint is currently only exposed using the [AMQP](https://www.amqp.org/) protocol on port 5671 and [AMQP over WebSockets](http://docs.oasis-open.org/amqp-bindmap/amqp-wsb/v1.0/cs01/amqp-wsb-v1.0-cs01.html) on port 443. An IoT hub exposes the following properties to enable you to control the built-in Event Hub-compatible messaging endpoint **messages/events**.
| Property | Description | | - | -- |
You can use the Event Hubs SDKs to read from the built-in endpoint in environmen
For more detail, see the [Process IoT Hub device-to-cloud messages using routes](tutorial-routing.md) tutorial.
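Because the built-in endpoint is Event Hubs-compatible, a back-end reader addresses it with an Event Hubs-style connection string assembled from the hub's Event Hub-compatible endpoint and name. A sketch of that string's shape (all values are placeholders):

```python
# Hedged sketch: the Event Hubs-style connection string a service-side reader
# uses for the built-in endpoint. Values here are placeholders, not real keys.
def event_hub_connection_string(endpoint, policy, key, entity_path):
    return (
        f"Endpoint={endpoint};"
        f"SharedAccessKeyName={policy};"
        f"SharedAccessKey={key};"
        f"EntityPath={entity_path}"
    )

conn = event_hub_connection_string(
    "sb://example-ns.servicebus.windows.net/", "service", "<key>", "example-hub")
print(conn.split(";")[0])  # Endpoint=sb://example-ns.servicebus.windows.net/
```

Any Event Hubs SDK that accepts a connection string can then consume device-to-cloud messages from **messages/events** with it.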
* If you want to route your device-to-cloud messages to custom endpoints, see [Use message routes and custom endpoints for device-to-cloud messages](iot-hub-devguide-messages-read-custom.md).
iot-hub Iot Hub How To Order Connection State Events https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-how-to-order-connection-state-events.md
# Order device connection events from Azure IoT Hub using Azure Cosmos DB
-[Azure Event Grid](/azure/event-grid/overview) helps you build event-based applications and easily integrates IoT events in your business solutions. This article walks you through a setup using Cosmos DB, Logic App, IoT Hub Events, and a simulated Raspberry Pi to collect and store connection and disconnection events of a device.
+[Azure Event Grid](../event-grid/overview.md) helps you build event-based applications and easily integrates IoT events in your business solutions. This article walks you through a setup using Cosmos DB, Logic App, IoT Hub Events, and a simulated Raspberry Pi to collect and store connection and disconnection events of a device.
From the moment your device runs, an order of operations activates:
This sample app will trigger a device connected event.
:::image type="content" source="media/iot-hub-how-to-order-connection-state-events/raspmsg.png" alt-text="Screenshot of what to expect in your output console when you run the Raspberry Pi." lightbox="media/iot-hub-how-to-order-connection-state-events/raspmsg.png":::
-1. You can check your Logic App **Overview** page to check if your logic is being triggered. It'll say **Succeeded** or **Failed**. Checking here let's you know your logic app state if troubleshooting is needed. Expect a 15-30 second delay from when your trigger runs. If you need to troubleshoot your logic app, view this [Troubleshoot errors](/azure/logic-apps/logic-apps-diagnosing-failures?tabs=consumption) article.
+1. You can check your Logic App **Overview** page to see if your logic is being triggered. It'll say **Succeeded** or **Failed**. Checking here lets you know your logic app's state if troubleshooting is needed. Expect a 15-30 second delay from when your trigger runs. If you need to troubleshoot your logic app, view this [Troubleshoot errors](../logic-apps/logic-apps-diagnosing-failures.md?tabs=consumption) article.
:::image type="content" source="media/iot-hub-how-to-order-connection-state-events/logic-app-log.jpg" alt-text="Screenshot of the status updates on your logic app Overview page." lightbox="media/iot-hub-how-to-order-connection-state-events/logic-app-log.jpg":::
To remove an Azure Cosmos DB account from the Azure portal, go to your resource
* Learn about what else you can do with [Event Grid](../event-grid/overview.md)
* Learn how to use Event Grid and Azure Monitor to [Monitor, diagnose, and troubleshoot device connectivity to IoT Hub](iot-hub-troubleshoot-connectivity.md)
iot-hub Iot Hub Rm Rest https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-rm-rest.md
You can use the [IoT Hub Resource](/rest/api/iothub/iothubresource) REST API to
## Prerequisites
-* [Azure PowerShell module](/powershell/azure/install-az-ps) or [Azure Cloud Shell](/azure/cloud-shell/overview)
+* [Azure PowerShell module](/powershell/azure/install-az-ps) or [Azure Cloud Shell](../cloud-shell/overview.md)
* [Postman](/rest/api/azure/#how-to-call-azure-rest-apis-with-postman) or [cURL](https://curl.se/)
To learn more about developing for IoT Hub, see the following articles:
To further explore the capabilities of IoT Hub, see:
-* [Deploying AI to edge devices with Azure IoT Edge](../iot-edge/quickstart-linux.md)
+* [Deploying AI to edge devices with Azure IoT Edge](../iot-edge/quickstart-linux.md)
iot-hub Tutorial Routing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/tutorial-routing.md
In this tutorial, you perform the following tasks:
* Make sure that port 8883 is open in your firewall. The sample in this tutorial uses MQTT protocol, which communicates over port 8883. This port may be blocked in some corporate and educational network environments. For more information and ways to work around this issue, see [Connecting to IoT Hub (MQTT)](iot-hub-mqtt-support.md#connecting-to-iot-hub).
-* Optionally, install [Azure IoT Explorer](https://github.com/Azure/azure-iot-explorer). This tool helps you observe the messages as they arrive at your IoT hub.
+* Optionally, install [Azure IoT Explorer](https://github.com/Azure/azure-iot-explorer). This tool helps you observe the messages as they arrive at your IoT hub. This tutorial uses Azure IoT Explorer.
# [Azure portal](#tab/portal)
Now that you have a device ID and key, use the sample code to start sending devi
dotnet restore
```
-1. In an editor of your choice, open the `Paramaters.cs` file. This file shows the parameters that are supported by the sample. Only the first three required parameters will be used in this article when running the sample. Review the code in this file. No changes are needed.
+1. In an editor of your choice, open the `Parameters.cs` file. This file shows the parameters that are supported by the sample. Only the `PrimaryConnectionString` parameter will be used in this article when running the sample. Review the code in this file. No changes are needed.
+ 1. Build and run the sample code using the following command:
- * Replace `<myDeviceId>` with the device ID that you assigned when registering the device.
- * Replace `<iotHubUri>` with the hostname of your IoT hub, which takes the format `IOTHUB_NAME.azure-devices.net`.
- * Replace `<deviceKey>` with the device key that you copied from the device identity information.
+ Replace `<myDevicePrimaryConnectionString>` with your primary connection string from your device in your IoT hub.
```cmd
- dotnet run --d <myDeviceId> --u <iotHubUri> --k <deviceKey>
+ dotnet run --PrimaryConnectionString <myDevicePrimaryConnectionString>
```
1. You should start to see messages printed to output as they are sent to IoT Hub. Leave this program running for the duration of the tutorial.
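The primary connection string passed to the sample is a semicolon-separated list of `Key=Value` pairs (`HostName`, `DeviceId`, `SharedAccessKey`). As a quick illustration, here is a minimal sketch of splitting one apart; the helper and the sample values are made up and not part of the routing sample:

```python
def parse_connection_string(cs: str) -> dict:
    """Split an IoT Hub device connection string into its key/value parts."""
    parts = {}
    for segment in cs.split(";"):
        # Keys never contain '='; values (base64 keys) may, so split only once.
        key, _, value = segment.partition("=")
        parts[key] = value
    return parts

props = parse_connection_string(
    "HostName=contoso-hub.azure-devices.net;DeviceId=Contoso-Test-Device;SharedAccessKey=aBcDeFg=="
)
print(props["HostName"])  # contoso-hub.azure-devices.net
```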
Now, use that connection string to configure IoT Explorer for your IoT hub.
1. Select **Save**.
1. Once you connect to your IoT hub, you should see a list of devices. Select the device ID that you created for this tutorial.
1. Select **Telemetry**.
-1. Select **Start**.
+1. With your device still running, select **Start**. If your device isn't running, you won't see telemetry.
![Start monitoring device telemetry in IoT Explorer.](./media/tutorial-routing/iot-explorer-start-monitoring-telemetry.png)
Now, use that connection string to configure IoT Explorer for your IoT hub.
![View messages arriving at IoT hub on the built-in endpoint.](./media/tutorial-routing/iot-explorer-view-messages.png)
-Watch the incoming messages for a few moments to verify that you see three different types of messages: normal, storage, and critical.
+ Watch the incoming messages for a few moments to verify that you see three different types of messages: normal, storage, and critical. After you see all three types, you can stop your device.
These messages are all arriving at the default built-in endpoint for your IoT hub. In the next sections, we're going to create a custom endpoint and route some of these messages to storage based on the message properties. Those messages will stop appearing in IoT Explorer because messages only go to the built-in endpoint when they don't match any other routes in IoT hub.
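The fall-through behavior described here can be sketched as a toy model (hypothetical names; real route queries are evaluated by IoT Hub itself, not by client code):

```python
def route_message(level: str, routes: dict) -> str:
    """Toy model of IoT Hub routing: a message goes to the first custom
    endpoint whose route matches its `level` property; otherwise it
    falls through to the built-in endpoint."""
    for endpoint, wanted_level in routes.items():
        if level == wanted_level:
            return endpoint
    return "built-in"

routes = {"storage-endpoint": "storage"}  # route query: level="storage"
print(route_message("storage", routes))   # storage-endpoint
print(route_message("normal", routes))    # built-in
print(route_message("critical", routes))  # built-in
```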
Create an Azure Storage account and a container within that account, which will
1. In the storage account menu, select **Containers** from the **Data storage** section.
-1. Select **Container** to create a new container.
+1. Select **+ Container** to create a new container.
![Create a storage container](./media/tutorial-routing/create-storage-container.png)
Now set up the routing for the storage account. In this section you define a new
1. Select **Message Routing** from the **Hub settings** section of the menu.
-1. In the **Routes** tab, select **Add**.
+1. In the **Routes** tab, select **+ Add**.
![Add a new message route.](./media/tutorial-routing/add-route.png)
-1. Select **Add endpoint** next to the **Endpoint** field, then select **Storage** from the dropdown menu.
+1. Select **+ Add endpoint** next to the **Endpoint** field, then select **Storage** from the dropdown menu.
![Add a new endpoint for a route.](./media/tutorial-routing/add-storage-endpoint.png)
Once the route is created in IoT Hub and enabled, it will immediately start rout
### Monitor the built-in endpoint with IoT Explorer
-Return to the IoT Explorer session on your development machine. Recall that the IoT Explorer monitors the built-in endpoint for your IoT hub. That means that now you should be seeing only the messages that are *not* being routed by the custom route we created. Watch the incoming messages for a few moments and you should only see messages where `level` is set to `normal` or `critical`.
+Return to the IoT Explorer session on your development machine. Recall that IoT Explorer monitors the built-in endpoint for your IoT hub. That means you should now see only the messages that are *not* being routed by the custom route we created.
+
+Start the sample again by running the code. Watch the incoming messages for a few moments and you should only see messages where `level` is set to `normal` or `critical`.
### View messages in the storage container
Verify that the messages are arriving in the storage container.
![Find routed messages in storage.](./media/tutorial-routing/view-messages-in-storage.png)
-1. Download the JSON file and confirm that it contains messages from your device that have the `level` property set to `storage`.
+1. Select the JSON file, then select **Download**. Confirm that the downloaded file contains messages from your device that have the `level` property set to `storage`.
+
+1. Stop running the sample.
## Clean up resources
key-vault Integrate Databricks Blob Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/general/integrate-databricks-blob-storage.md
az keyvault secret set --vault-name contosoKeyVault10 --name storageKey --value
## Create an Azure Databricks workspace and add Key Vault secret scope
-This section can't be completed through the command line. Follow this [guide](/azure/key-vault/general/integrate-databricks-blob-storage#create-an-azure-databricks-workspace-and-add-a-secret-scope). You'll need to access the [Azure portal](https://portal.azure.com/#home) to:
+This section can't be completed through the command line. You'll need to access the [Azure portal](https://portal.azure.com/#home) to:
1. Create your Azure Databricks resource
1. Launch your workspace
This section can't be completed through the command line. Follow this [guide](/a
## Access your blob container from Azure Databricks workspace
-This section can't be completed through the command line. Follow this [guide](/azure/key-vault/general/integrate-databricks-blob-storage#access-your-blob-container-from-azure-databricks). You'll need to use the Azure Databricks workspace to:
+This section can't be completed through the command line. You'll need to use the Azure Databricks workspace to:
1. Create a **New Cluster**
1. Create a **New Notebook**
key-vault Versions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/general/versions.md
Private endpoints now available in preview. Azure Private Link Service enables y
New features and integrations released this year:
- Integration with Azure Functions. For an example scenario leveraging [Azure Functions](../../azure-functions/index.yml) for key vault operations, see [Automate the rotation of a secret](../secrets/tutorial-rotation.md).
-- [Integration with Azure Databricks](/azure/key-vault/general/integrate-databricks-blob-storage). With this, Azure Databricks now supports two types of secret scopes: Azure Key Vault-backed and Databricks-backed. For more information, see [Create an Azure Key Vault-backed secret scope](/azure/databricks/security/secrets/secret-scopes#--create-an-azure-key-vault-backed-secret-scope)
+- [Integration with Azure Databricks](./integrate-databricks-blob-storage.md). With this, Azure Databricks now supports two types of secret scopes: Azure Key Vault-backed and Databricks-backed. For more information, see [Create an Azure Key Vault-backed secret scope](/azure/databricks/security/secrets/secret-scopes#--create-an-azure-key-vault-backed-secret-scope)
- [Virtual network service endpoints for Azure Key Vault](overview-vnet-service-endpoints.md).
## 2016
First preview version (version 2014-12-08-preview) was announced on January 8, 2
- [About keys, secrets, and certificates](about-keys-secrets-certificates.md)
- [Keys](../keys/index.yml)
- [Secrets](../secrets/index.yml)
-- [Certificates](../certificates/index.yml)
+- [Certificates](../certificates/index.yml)
key-vault Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/general/whats-new.md
Private endpoints now available in preview. Azure Private Link Service enables y
New features and integrations released this year:
- Integration with Azure Functions. For an example scenario leveraging [Azure Functions](../../azure-functions/index.yml) for key vault operations, see [Automate the rotation of a secret](../secrets/tutorial-rotation.md).
-- [Integration with Azure Databricks](/azure/key-vault/general/integrate-databricks-blob-storage). With this, Azure Databricks now supports two types of secret scopes: Azure Key Vault-backed and Databricks-backed. For more information, see [Create an Azure Key Vault-backed secret scope](/azure/databricks/security/secrets/secret-scopes#--create-an-azure-key-vault-backed-secret-scope)
+- [Integration with Azure Databricks](./integrate-databricks-blob-storage.md). With this, Azure Databricks now supports two types of secret scopes: Azure Key Vault-backed and Databricks-backed. For more information, see [Create an Azure Key Vault-backed secret scope](/azure/databricks/security/secrets/secret-scopes#--create-an-azure-key-vault-backed-secret-scope)
- [Virtual network service endpoints for Azure Key Vault](overview-vnet-service-endpoints.md).
## 2016
First preview version (version 2014-12-08-preview) was announced on January 8, 2
## Next steps
-If you have additional questions, please contact us through [support](https://azure.microsoft.com/support/options/).
+If you have additional questions, please contact us through [support](https://azure.microsoft.com/support/options/).
key-vault Access Control https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/managed-hsm/access-control.md
The following table shows the endpoints for the management and data planes.
## Management plane and Azure RBAC
-In the management plane, you use Azure RBAC to authorize the operations a caller can execute. In the Azure RBAC model, each Azure subscription has an instance of Azure Active Directory. You grant access to users, groups, and applications from this directory. Access is granted to manage resources in the Azure subscription that use the Azure Resource Manager deployment model. To grant access, use the [Azure portal](https://portal.azure.com/), the [Azure CLI](/cli/azure/install-classic-cli), [Azure PowerShell](/powershell/azureps-cmdlets-docs), or the [Azure Resource Manager REST APIs](/rest/api/authorization/roleassignments).
+In the management plane, you use Azure RBAC to authorize the operations a caller can execute. In the Azure RBAC model, each Azure subscription has an instance of Azure Active Directory. You grant access to users, groups, and applications from this directory. Access is granted to manage resources in the Azure subscription that use the Azure Resource Manager deployment model. To grant access, use the [Azure portal](https://portal.azure.com/), the [Azure CLI](/cli/azure/install-classic-cli), [Azure PowerShell](/powershell/azureps-cmdlets-docs), or the [Azure Resource Manager REST APIs](/rest/api/authorization/role-assignments).
You create a key vault in a resource group and manage access by using Azure Active Directory. You grant users or groups the ability to manage the key vaults in a resource group. You grant the access at a specific scope level by assigning appropriate Azure roles. To grant access to a user to manage key vaults, you assign a predefined `key vault Contributor` role to the user at a specific scope. The following scope levels can be assigned to an Azure role:
lab-services Azure Polices For Lab Services https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/azure-polices-for-lab-services.md
Azure Policy helps you manage and prevent IT issues by applying policy definitio
1. Lab Services should require non-admin user for labs
1. Lab Services should restrict allowed virtual machine SKU sizes
-For a full list of built-in policies, including policies for Lab Services, see [Azure Policy built-in policy definitions](/azure/governance/policy/samples/built-in-policies#lab-services).
+For a full list of built-in policies, including policies for Lab Services, see [Azure Policy built-in policy definitions](../governance/policy/samples/built-in-policies.md#lab-services).
This policy enforces that all [shutdown options](how-to-configure-auto-shutdown-
|**Effect**|**Behavior**|
|--|--|
-|**Audit**|Labs will show on the [compliance dashboard](/azure/governance/policy/assign-policy-portal#identify-non-compliant-resources) as non-compliant when all shutdown options are not enabled for a lab. |
+|**Audit**|Labs will show on the [compliance dashboard](../governance/policy/assign-policy-portal.md#identify-non-compliant-resources) as non-compliant when all shutdown options are not enabled for a lab. |
|**Deny**|Lab creation will fail if all shutdown options are not enabled. |
## Lab Services should not allow template virtual machines for labs
This policy can be used to restrict [customization of lab templates](tutorial-se
|**Effect**|**Behavior**|
|--|--|
-|**Audit**|Labs will show on the [compliance dashboard](/azure/governance/policy/assign-policy-portal#identify-non-compliant-resources) as non-compliant when a template virtual machine is used for a lab.|
+|**Audit**|Labs will show on the [compliance dashboard](../governance/policy/assign-policy-portal.md#identify-non-compliant-resources) as non-compliant when a template virtual machine is used for a lab.|
|**Deny**|Lab creation will fail if the “create a template virtual machine” option is used for a lab.|
## Lab Services requires non-admin user for labs
During the policy assignment, the lab administrator can choose the following eff
|**Effect**|**Behavior**|
|--|--|
-|**Audit**|Labs show on the [compliance dashboard](/azure/governance/policy/assign-policy-portal#identify-non-compliant-resources) as non-compliant when non-admin accounts are not used while creating the lab.|
+|**Audit**|Labs show on the [compliance dashboard](../governance/policy/assign-policy-portal.md#identify-non-compliant-resources) as non-compliant when non-admin accounts are not used while creating the lab.|
|**Deny**|Lab creation will fail if “Give lab users a non-admin account on their virtual machines” is not checked while creating a lab.|
## Lab Services should restrict allowed virtual machine SKU sizes
During the policy assignment, the Lab Administrator can choose the following eff
|**Effect**|**Behavior**|
|--|--|
-|**Audit**|Labs show on the [compliance dashboard](/azure/governance/policy/assign-policy-portal#identify-non-compliant-resources) as non-compliant when a non-allowed SKU is used while creating the lab.|
+|**Audit**|Labs show on the [compliance dashboard](../governance/policy/assign-policy-portal.md#identify-non-compliant-resources) as non-compliant when a non-allowed SKU is used while creating the lab.|
|**Deny**|Lab creation will fail if the SKU chosen while creating a lab is not allowed per the policy assignment.|
## Custom policies
Learn how to create custom policies:
See the following articles:
- [How to use the Lab Services should restrict allowed virtual machine SKU sizes Azure policy](how-to-use-restrict-allowed-virtual-machine-sku-sizes-policy.md)
-- [Built-in Policies](/azure/governance/policy/samples/built-in-policies#lab-services)
-- [What is Azure policy?](/azure/governance/policy/overview)
+- [Built-in Policies](../governance/policy/samples/built-in-policies.md#lab-services)
+- [What is Azure policy?](../governance/policy/overview.md)
lab-services How To Connect Vnet Injection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/how-to-connect-vnet-injection.md
Before you configure advanced networking for your lab plan, complete the followi
1. [Create a subnet](../virtual-network/virtual-network-manage-subnet.md) for the virtual network.
1. [Delegate the subnet](#delegate-the-virtual-network-subnet-for-use-with-a-lab-plan) to **Microsoft.LabServices/labplans**.
1. [Create a network security group (NSG)](../virtual-network/manage-network-security-group.md).
-1. [Create an inbound rule to allow traffic from SSH and RDP ports](/azure/virtual-network/manage-network-security-group).
+1. [Create an inbound rule to allow traffic from SSH and RDP ports](../virtual-network/manage-network-security-group.md).
1. [Associate the NSG to the delegated subnet](#associate-delegated-subnet-with-nsg).
Now that the prerequisites have been completed, you can [use advanced networking to connect your virtual network during lab plan creation](#connect-the-virtual-network-during-lab-plan-creation).
See the following articles:
- As an admin, [attach a compute gallery to a lab plan](how-to-attach-detach-shared-image-gallery.md).
- As an admin, [configure automatic shutdown settings for a lab plan](how-to-configure-auto-shutdown-lab-plans.md).
-- As an admin, [add lab creators to a lab plan](add-lab-creator.md).
+- As an admin, [add lab creators to a lab plan](add-lab-creator.md).
lab-services How To Use Restrict Allowed Virtual Machine Sku Sizes Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/how-to-use-restrict-allowed-virtual-machine-sku-sizes-policy.md
Now you have a lab plan resource ID, you can use it to exclude the lab plan as y
## Next steps
See the following articles:
- [What’s new with Azure Policy for Lab Services?](azure-polices-for-lab-services.md)
-- [Built-in Policies](/azure/governance/policy/samples/built-in-policies#lab-services)
-- [What is Azure policy?](/azure/governance/policy/overview)
-
+- [Built-in Policies](../governance/policy/samples/built-in-policies.md#lab-services)
+- [What is Azure policy?](../governance/policy/overview.md)
lab-services Reliability In Azure Lab Services https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/reliability-in-azure-lab-services.md
Last updated 08/18/2022
# What is reliability in Azure Lab Services?
-This article describes reliability support in Azure Lab Services, and covers regional resiliency with availability zones. For a more detailed overview of reliability in Azure, see [Azure resiliency](/azure/availability-zones/overview).
+This article describes reliability support in Azure Lab Services, and covers regional resiliency with availability zones. For a more detailed overview of reliability in Azure, see [Azure resiliency](../availability-zones/overview.md).
## Availability zone support
-Azure availability zones are at least three physically separate groups of datacenters within each Azure region. Datacenters within each zone are equipped with independent power, cooling, and networking infrastructure. In the case of a local zone failure, availability zones allow the services to fail over to the other availability zones to provide continuity in service with minimal interruption. Failures can range from software and hardware failures to events such as earthquakes, floods, and fires. Tolerance to failures is achieved with redundancy and logical isolation of Azure services. For more detailed information on availability zones in Azure, see [Regions and availability zones](/azure/availability-zones/az-overview).
+Azure availability zones are at least three physically separate groups of datacenters within each Azure region. Datacenters within each zone are equipped with independent power, cooling, and networking infrastructure. In the case of a local zone failure, availability zones allow the services to fail over to the other availability zones to provide continuity in service with minimal interruption. Failures can range from software and hardware failures to events such as earthquakes, floods, and fires. Tolerance to failures is achieved with redundancy and logical isolation of Azure services. For more detailed information on availability zones in Azure, see [Regions and availability zones](../availability-zones/az-overview.md).
Azure availability zones-enabled services are designed to provide the right level of resiliency and flexibility. They can be configured in two ways. They can be either zone redundant, with automatic replication across zones, or zonal, with instances pinned to a specific zone. You can also combine these approaches. For more information on zonal vs. zone-redundant architecture, see [Build solutions with availability zones](/azure/architecture/high-availability/building-solutions-for-high-availability).
Currently, the service is not zonal. That is, you can’t configure a lab or the
There are no increased SLAs available for availability in Azure Lab Services. For the monthly uptime SLAs for Azure Lab Services, see [SLA for Azure Lab Services](https://azure.microsoft.com/support/legal/sla/lab-services/v1_0/).
-The Azure Lab Services infrastructure uses Azure Cosmos DB storage. The Azure Cosmos DB storage region is the same as the region where the lab plan is located. All the regional Azure Cosmos DB accounts are single region. In the zone-redundant regions listed in this article, the Azure Cosmos DB accounts are single region with Availability Zones. In the other regions, the accounts are single region without Availability Zones. For high availability capabilities for these account types, see [SLAs for Azure Cosmos DB](/azure/cosmos-db/high-availability#slas).
+The Azure Lab Services infrastructure uses Azure Cosmos DB storage. The Azure Cosmos DB storage region is the same as the region where the lab plan is located. All the regional Azure Cosmos DB accounts are single region. In the zone-redundant regions listed in this article, the Azure Cosmos DB accounts are single region with Availability Zones. In the other regions, the accounts are single region without Availability Zones. For high availability capabilities for these account types, see [SLAs for Azure Cosmos DB](../cosmos-db/high-availability.md#slas).
### Zone down experience
In the event of a zone outage in these regions, you can still perform the follow
- Configure lab schedules
- Create/manage labs and VMs in regions unaffected by the zone outage.
-Data loss may occur only with an unrecoverable disaster in the Azure Cosmos DB region. For more information, see [Region outages](/azure/cosmos-db/high-availability#region-outages).
+Data loss may occur only with an unrecoverable disaster in the Azure Cosmos DB region. For more information, see [Region outages](../cosmos-db/high-availability.md#region-outages).
For regions not listed, access to the Azure Lab Services infrastructure is not guaranteed when there is a zone outage in the region containing the lab plan. You will only be able to perform the following tasks:
Azure Lab Services does not provide regional failover support. If you want to pr
### Outage detection, notification, and management
-Azure Lab Services does not provide any service-specific signals about an outage, but is dependent on Azure communications that inform customers about outages. For more information on service health, see [Resource health overview](/azure/service-health/resource-health-overview).
+Azure Lab Services does not provide any service-specific signals about an outage, but is dependent on Azure communications that inform customers about outages. For more information on service health, see [Resource health overview](../service-health/resource-health-overview.md).
## Next steps
> [!div class="nextstepaction"]
-> [Resiliency in Azure](/azure/availability-zones/overview)
+> [Resiliency in Azure](../availability-zones/overview.md)
lab-services Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/troubleshoot.md
This article provides several common reasons why an educator might not be able t
Possible issues:
-- The Azure Compute Gallery is not connected to the lab plan. To connect an Azure Compute Gallery, see [Attach or detach a compute gallery](/azure/lab-services/how-to-attach-detach-shared-image-gallery).
+- The Azure Compute Gallery is not connected to the lab plan. To connect an Azure Compute Gallery, see [Attach or detach a compute gallery](./how-to-attach-detach-shared-image-gallery.md).
- The image is not enabled by the administrator. This applies to both Marketplace images and Azure Compute Gallery images. To enable images, see [Specify marketplace images for labs](specify-marketplace-images.md).
-- The image in the attached Azure Compute Gallery is not replicated to the same location as the lab plan. For more information, see [Store and share images in an Azure Compute Gallery](/azure/virtual-machines/shared-image-galleries).
+- The image in the attached Azure Compute Gallery is not replicated to the same location as the lab plan. For more information, see [Store and share images in an Azure Compute Gallery](../virtual-machines/shared-image-galleries.md).
- Image sizes greater than 127GB or with multiple disks are not supported.
lab-services Tutorial Create Lab With Advanced Networking https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/tutorial-create-lab-with-advanced-networking.md
An Azure account with an active subscription. [Create an account for free](https
[!INCLUDE [resource group definition](../../includes/resource-group.md)]
-The following steps show how to use the Azure portal to [create a resource group](/azure/azure-resource-manager/management/manage-resource-groups-portal). For simplicity, we'll put all resources for this tutorial in the same resource group.
+The following steps show how to use the Azure portal to [create a resource group](../azure-resource-manager/management/manage-resource-groups-portal.md). For simplicity, we'll put all resources for this tutorial in the same resource group.
1. Sign in to the [Azure portal](https://portal.azure.com).
1. Select **Resource groups**.
The following steps show how to use the Azure portal to create a virtual network
1. On the **IP Addresses** tab, create a subnet that will be used by the labs.
    1. Select **+ Add subnet**
    1. For **Subnet name**, enter **labservices-subnet**.
- 1. For **Subnet address range**, enter range in CIDR notation. For example, 10.0.1.0/24 will have enough IP addresses for 251 lab VMs. (Five IP addresses are reserved by Azure for every subnet.) To create a subnet with more available IP addresses for VMs, use a different CIDR prefix length. For example, 10.0.0.0/20 would have room for over 4000 IP addresses for lab VMs. For more information about adding subnets, see [Add a subnet](/azure/virtual-network/virtual-network-manage-subnet).
+ 1. For **Subnet address range**, enter range in CIDR notation. For example, 10.0.1.0/24 will have enough IP addresses for 251 lab VMs. (Five IP addresses are reserved by Azure for every subnet.) To create a subnet with more available IP addresses for VMs, use a different CIDR prefix length. For example, 10.0.0.0/20 would have room for over 4000 IP addresses for lab VMs. For more information about adding subnets, see [Add a subnet](../virtual-network/virtual-network-manage-subnet.md).
1. Select **OK**.
1. Select **Review + Create**.
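The subnet-size arithmetic above can be double-checked with a quick calculation: total host addresses for the prefix length, minus the five addresses Azure reserves in every subnet.

```python
def usable_lab_ips(prefix_length: int) -> int:
    """Addresses available for lab VMs in an Azure subnet: total
    addresses for the prefix, minus the five Azure reserves in
    every subnet."""
    return 2 ** (32 - prefix_length) - 5

print(usable_lab_ips(24))  # 251  (e.g. 10.0.1.0/24)
print(usable_lab_ips(20))  # 4091 (e.g. 10.0.0.0/20, "over 4000")
```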
The following steps show how to use the Azure portal to create a virtual network
## Delegate subnet to Azure Lab Services
-In this section, we'll configure the subnet to be used with Azure Lab Services. To tell Azure Lab Services that a subnet may be used, the subnet must be [delegated to the service](/azure/virtual-network/manage-subnet-delegation).
+In this section, we'll configure the subnet to be used with Azure Lab Services. To tell Azure Lab Services that a subnet may be used, the subnet must be [delegated to the service](../virtual-network/manage-subnet-delegation.md).
1. Open the **MyVirtualNetwork** resource. 1. Select the **Subnets** item on the left menu.
If you're not going to continue to use this application, delete the virtual netw
## Next steps >[!div class="nextstepaction"]
->[Add students to the labs](how-to-configure-student-usage.md)
+>[Add students to the labs](how-to-configure-student-usage.md)
load-balancer Backend Pool Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/backend-pool-management.md
az vm create \
* Limit of 100 IP addresses in the backend pool for IP based LBs
* The backend resources must be in the same virtual network as the load balancer for IP based LBs
* A load balancer with IP based Backend Pool can’t function as a Private Link service
- * [Private endpoint resources](/azure/private-link/private-endpoint-overview) can't be placed in a IP based backend pool
+ * [Private endpoint resources](../private-link/private-endpoint-overview.md) can't be placed in an IP based backend pool
* ACI containers aren't currently supported by IP based LBs
* Load balancers or services such as Application Gateway can’t be placed in the backend pool of the load balancer
* Inbound NAT Rules can’t be specified by IP address
In this article, you learned about Azure Load Balancer backend pool management a
Learn more about [Azure Load Balancer](load-balancer-overview.md).
-Review the [REST API](/rest/api/load-balancer/loadbalancerbackendaddresspools/createorupdate) for IP based backend pool management.
+Review the [REST API](/rest/api/load-balancer/loadbalancerbackendaddresspools/createorupdate) for IP based backend pool management.
load-balancer Move Across Regions Internal Load Balancer Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/move-across-regions-internal-load-balancer-powershell.md
The following steps show how to prepare the internal load balancer for the move
},
```
For more information on the differences between basic and standard SKU load balancers, see [Azure Standard Load Balancer overview](./load-balancer-overview.md)
+
+ * **Availability zone**. You can change the zone(s) of the load balancer's frontend by changing the zone property. If the zone property isn't specified, the frontend will be created as no-zone. You can specify a single zone to create a zonal frontend or all 3 zones for a zone-redundant frontend.
+ ```json
+ "frontendIPConfigurations": [
+ {
+ "name": "myfrontendIPinbound",
+        "id": "[concat(resourceId('Microsoft.Network/loadBalancers', parameters('loadBalancers_myLoadBalancer_name')), '/frontendIPConfigurations/myfrontendIPinbound')]",
+ "type": "Microsoft.Network/loadBalancers/frontendIPConfigurations",
+ "properties": {
+ "provisioningState": "Succeeded",
+ "privateIPAddress": "10.0.0.1",
+ "privateIPAllocationMethod": "Static",
+ "subnet": {
+ "id": "[concat(resourceId('Microsoft.Network/virtualNetworks', parameters('virtualNetworks_myVNET1_name')), '/subnets/subnet-1')]"
+ },
+ "privateIPAddressVersion": "IPv4"
+ },
+ "zones": [
+ "1",
+ "2",
+ "3"
+ ]
+ }
+ ],
+ ```
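For example, a zonal frontend pinned to a single zone would set the zones array to just that zone. The fragment below is an illustrative sketch (the frontend name and address are hypothetical; the other properties follow the snippet above):

```json
"frontendIPConfigurations": [
  {
    "name": "myZonalFrontend",
    "properties": {
      "privateIPAddress": "10.0.0.2",
      "privateIPAllocationMethod": "Static",
      "privateIPAddressVersion": "IPv4"
    },
    "zones": [
      "1"
    ]
  }
],
```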
* **Load balancing rules** - You can add or remove load balancing rules in the configuration by adding or removing entries to the **loadBalancingRules** section of the **\<resource-group-name>.json** file:
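As a sketch, each entry under **loadBalancingRules** has roughly this shape in the exported template (the rule name and port values here are illustrative, and real entries carry additional properties such as probe and backend pool references):

```json
"loadBalancingRules": [
  {
    "name": "myLoadBalancingRule",
    "properties": {
      "frontendIPConfiguration": {
        "id": "[concat(resourceId('Microsoft.Network/loadBalancers', parameters('loadBalancers_myLoadBalancer_name')), '/frontendIPConfigurations/myfrontendIPinbound')]"
      },
      "protocol": "Tcp",
      "frontendPort": 80,
      "backendPort": 80
    }
  }
],
```

Removing an entry from this array (and saving the file) removes the corresponding rule when the template is redeployed.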
load-balancer Upgrade Basic Standard Virtual Machine Scale Sets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/upgrade-basic-standard-virtual-machine-scale-sets.md
The script migrates the following from the Basic load balancer to the Standard l
- Inbound NAT rules:
  - All NAT rules will be migrated to the new Standard load balancer
- Outbound rules:
- - Basic load balancers don't support configured outbound rules. The script will create an outbound rule in the Standard load balancer to preserve the outbound behavior of the Basic load balancer. For more information about outbound rules, see [Outbound rules](/azure/load-balancer/outbound-rules).
+ - Basic load balancers don't support configured outbound rules. The script will create an outbound rule in the Standard load balancer to preserve the outbound behavior of the Basic load balancer. For more information about outbound rules, see [Outbound rules](./outbound-rules.md).
- Network security group
  - Basic load balancer doesn't require a network security group to allow outbound connectivity. If there's no network security group associated with the virtual machine scale set, a new network security group will be created to preserve the same functionality. This new network security group will be associated with the virtual machine scale set backend pool member network interfaces. It will allow the same load balancing rule ports and protocols and preserve the outbound connectivity.
- Backend pools:
- If there's a virtual machine scale set using a Rolling Upgrade policy, the script will update the virtual machine scale set upgrade policy to "Manual" during the migration process and revert it back to "Rolling" after the migration is completed.

>[!NOTE]
-> Network security group are not configured as part of Internal Load Balancer upgrade. To learn more about NSGs, see [Network security groups](/azure/virtual-network/network-security-groups-overview)
+> Network security groups aren't configured as part of the internal load balancer upgrade. To learn more about NSGs, see [Network security groups](../virtual-network/network-security-groups-overview.md).
### What happens if my upgrade fails mid-migration?

The module is designed to accommodate failures, either due to unhandled errors or unexpected script termination. The failure design is a 'fail forward' approach: instead of attempting to move back to the Basic load balancer, you should correct the issue causing the failure (see the error output or log file) and retry the migration, specifying the `-FailedMigrationRetryFilePathLB <BasicLoadBalancerbackupFilePath> -FailedMigrationRetryFilePathVMSS <VMSSBackupFile>` parameters. For public load balancers, because the Public IP Address SKU has been updated to Standard, moving the same IP back to a Basic load balancer won't be possible.

The basic failure recovery procedure is:

1. Address the cause of the migration failure. Check the log file `Start-AzBasicLoadBalancerUpgrade.log` for details.
- 1. [Remove the new Standard load balancer](/azure/load-balancer/update-load-balancer-with-vm-scale-set) (if created). Depending on which stage of the migration failed, you may have to remove the Standard load balancer reference from the virtual machine scale set network interfaces (IP configurations) and health probes in order to remove the Standard load balancer and try again.
+ 1. [Remove the new Standard load balancer](./update-load-balancer-with-vm-scale-set.md) (if created). Depending on which stage of the migration failed, you may have to remove the Standard load balancer reference from the virtual machine scale set network interfaces (IP configurations) and health probes in order to remove the Standard load balancer and try again.
1. Locate the basic load balancer state backup file. This will either be in the directory where the script was executed, or at the path specified with the `-RecoveryBackupPath` parameter during the failed execution. The file will be named: `State_<basicLBName>_<basicLBRGName>_<timestamp>.json`
1. Rerun the migration script, specifying the `-FailedMigrationRetryFilePathLB <BasicLoadBalancerbackupFilePath>` and `-FailedMigrationRetryFilePathVMSS <VMSSBackupFile>` parameters instead of `-BasicLoadBalancerName` or passing the Basic load balancer over the pipeline.

## Next steps
-[Learn about Azure Load Balancer](load-balancer-overview.md)
+[Learn about Azure Load Balancer](load-balancer-overview.md)
logic-apps Create Managed Service Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/create-managed-service-identity.md
[!INCLUDE [logic-apps-sku-consumption-standard](../../includes/logic-apps-sku-consumption-standard.md)]
-In logic app workflows, some triggers and actions support using a managed identity for authenticating access to resources protected by Azure Active Directory (Azure AD). When you use a managed identity to authenticate your connection, you don't have to provide credentials, secrets, or Azure AD tokens. Azure manages this identity and helps keep authentication information secure because you don't have to manage this sensitive information. For more information, see [What are managed identities for Azure resources?](/active-directory/managed-identities-azure-resources/overview.md).
+In logic app workflows, some triggers and actions support using a managed identity for authenticating access to resources protected by Azure Active Directory (Azure AD). When you use a managed identity to authenticate your connection, you don't have to provide credentials, secrets, or Azure AD tokens. Azure manages this identity and helps keep authentication information secure because you don't have to manage this sensitive information. For more information, see [What are managed identities for Azure resources?](../active-directory/managed-identities-azure-resources/overview.md).
Azure Logic Apps supports the [*system-assigned* managed identity](../active-directory/managed-identities-azure-resources/overview.md) and the [*user-assigned* managed identity](../active-directory/managed-identities-azure-resources/overview.md). The following list describes some differences between these identity types:
logic-apps Logic Apps Create Logic Apps From Templates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/logic-apps-create-logic-apps-from-templates.md
Title: Create logic app workflows faster with prebuilt templates
-description: Quickly build logic app workflows with prebuilt templates in Azure Logic Apps.
+description: Quickly build logic app workflows with prebuilt templates in Azure Logic Apps, and find out about available templates.
ms.suite: integration Previously updated : 08/01/2022 Last updated : 10/12/2022
+#Customer intent: As an Azure Logic Apps developer, I want to build a logic app workflow from a template so that I can reduce development time.
-# Create logic app workflows from prebuilt templates
+# Create a logic app workflow from a prebuilt template
[!INCLUDE [logic-apps-sku-consumption](../../includes/logic-apps-sku-consumption.md)]
-To get you started creating workflows more quickly,
-Logic Apps provides templates, which are prebuilt
-logic apps that follow commonly used patterns.
-Use these templates as provided or edit them to fit your scenario.
+To get you started creating workflows quickly, Azure Logic Apps provides templates, which are prebuilt logic app workflows that follow commonly used patterns.
+
+This how-to guide shows how to use these templates as provided or edit them to fit your scenario.
Here are some template categories:

| Template type | Description |
| - | -- |
-| Enterprise cloud templates | For integrating Azure Blob, Dynamics CRM, Salesforce, Box, and includes other connectors for your enterprise cloud needs. For example, you can use these templates to organize business leads or back up your corporate file data. |
-| Personal productivity templates | Improve personal productivity by setting daily reminders, turning important work items into to-do lists, and automating lengthy tasks down to a single user approval step. |
-| Consumer cloud templates | For integrating social media services such as Twitter, Slack, and email. Useful for strengthening social media marketing initiatives. These templates also include tasks such as cloud copying, which increases productivity by saving time on traditionally repetitive tasks. |
-| Enterprise integration pack templates | For configuring VETER (validate, extract, transform, enrich, route) pipelines, receiving an X12 EDI document over AS2 and transforming to XML, and handling X12, EDIFACT, and AS2 messages. |
-| Protocol pattern templates | For implementing protocol patterns such as request-response over HTTP and integrations across FTP and SFTP. Use these templates as provided, or build on them for complex protocol patterns. |
+| Enterprise cloud | For integrating Azure Blob Storage, Dynamics CRM, Salesforce, and Box. Also includes other connectors for your enterprise cloud needs. For example, you can use these templates to organize business leads or back up your corporate file data. |
+| Personal productivity | For improving personal productivity. You can use these templates to set daily reminders, turn important work items into to-do lists, and automate lengthy tasks down to a single user-approval step. |
+| Consumer cloud | For integrating social media services such as Twitter, Slack, and email. Useful for strengthening social media marketing initiatives. These templates also include tasks such as cloud copying, which increases productivity by saving time on traditionally repetitive tasks. |
+| Enterprise integration pack | For configuring validate, extract, transform, enrich, and route (VETER) pipelines. Also for receiving an X12 EDI document over AS2 and transforming it to XML, and for handling X12, EDIFACT, and AS2 messages. |
+| Protocol pattern | For implementing protocol patterns such as request-response over HTTP and integrations across FTP and SFTP. Use these templates as provided, or build on them for complex protocol patterns. |
|||
-If you don't have an Azure subscription,
-[sign up for a free Azure account](https://azure.microsoft.com/free/) before you begin. For more information about building a logic app, see [Create a logic app](../logic-apps/quickstart-create-first-logic-app-workflow.md).
+## Prerequisites
-## Create logic apps from templates
+- An Azure account and subscription. If you don't have a subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+- A basic understanding of how to build a logic app workflow. For more information, see [Create a Consumption logic app workflow](../logic-apps/quickstart-create-first-logic-app-workflow.md).
-1. If you haven't already, sign in to the
-[Azure portal](https://portal.azure.com "Azure portal").
+## Create a logic app workflow from a template
-2. From the main Azure menu, choose
-**Create a resource** > **Enterprise Integration** > **Logic App**.
+1. Sign in to the [Azure portal](https://portal.azure.com).
- ![Azure portal, New, Enterprise Integration, Logic App](./media/logic-apps-create-logic-apps-from-templates/azure-portal-create-logic-app.png)
+1. Select **Create a resource** > **Integration** > **Logic App**.
-3. Create your logic app with the settings in the table under this image:
+ :::image type="content" source="./media/logic-apps-create-logic-apps-from-templates/azure-portal-create-logic-app.png" alt-text="Screenshot of the Azure portal. Under 'Popular Azure services,' 'Logic App' is highlighted. On the navigation menu, 'Integration' is highlighted.":::
- ![Provide logic app details](./media/logic-apps-create-logic-apps-from-templates/logic-app-settings.png)
+1. In the **Create Logic App** page, enter the following values:
| Setting | Value | Description |
| - | -- | -- |
- | **Name** | *your-logic-app-name* | Provide a unique logic app name. |
- | **Subscription** | *your-Azure-subscription-name* | Select the Azure subscription that you want to use. |
- | **Resource group** | *your-Azure-resource-group-name* | Create or select an [Azure resource group](../azure-resource-manager/management/overview.md) for this logic app and to organize all resources associated with this app. |
- | **Location** | *your-Azure-datacenter-region* | Select the datacenter region for deploying your logic app, for example, West US. |
- | **Log Analytics** | **Off** (default) or **On** | Set up [diagnostic logging](../logic-apps/monitor-logic-apps-log-analytics.md) for your logic app by using [Azure Monitor logs](../azure-monitor/logs/log-query-overview.md). Requires that you already have a Log Analytics workspace. |
- ||||
-
-4. When you're ready, select **Pin to dashboard**.
-That way, your logic app automatically appears on
-your Azure dashboard and opens after deployment.
-Choose **Create**.
+ | **Subscription** | <*your-Azure-subscription-name*> | Select the Azure subscription that you want to use. |
+ | **Resource Group** | <*your-Azure-resource-group-name*> | Create or select an [Azure resource group](../azure-resource-manager/management/overview.md) for this logic app resource and its associated resources. |
+ | **Logic App name** | <*your-logic-app-name*> | Provide a unique logic app resource name. |
+ | **Region** | <*your-Azure-datacenter-region*> | Select the datacenter region for deploying your logic app, for example, **West US**. |
+ | **Enable log analytics** | **No** (default) or **Yes** | To set up [diagnostic logging](../logic-apps/monitor-logic-apps-log-analytics.md) for your logic app resource by using [Azure Monitor logs](../azure-monitor/logs/log-query-overview.md), select **Yes**. This selection requires that you already have a Log Analytics workspace. |
+ | **Plan type** | **Consumption** or **Standard** | Select **Consumption** to create a Consumption logic app workflow. |
+ | **Zone redundancy** | **Disabled** (default) or **Enabled** | If this option is available, select **Enabled** if you want to protect your logic app resource from a regional failure. But first [check that zone redundancy is available in your Azure region](/azure/logic-apps/set-up-zone-redundancy-availability-zones?tabs=consumption#considerations). |
+ ||||
- > [!NOTE]
- > If you don't want to pin your logic app,
- > you must manually find and open your logic app
- > after deployment so you can continue.
+ :::image type="content" source="./media/logic-apps-create-logic-apps-from-templates/logic-app-settings.png" alt-text="Screenshot of the 'Create Logic App' page. The 'Consumption' plan type is selected, and values are visible in other input fields.":::
- After Azure deploys your logic app, the Logic Apps Designer
- opens and shows a page with an introduction video.
- Under the video, you can find templates for common logic app patterns.
+1. Select **Review + Create**.
-5. Scroll past the introduction video and common triggers to **Templates**.
-Choose a prebuilt template. For example:
+1. Review the values, and then select **Create**.
- ![Choose a logic app template](./media/logic-apps-create-logic-apps-from-templates/choose-logic-app-template.png)
+ :::image type="content" source="./media/logic-apps-create-logic-apps-from-templates/create-logic-app.png" alt-text="Screenshot of the 'Create Logic App' page. The name, subscription, and other values are visible, and the 'Create' button is highlighted.":::
- > [!TIP]
- > To create your logic app from scratch, choose **Blank Logic App**.
+1. When deployment is complete, select **Go to resource**. The designer opens and shows a page with an introduction video. Under the video, you can find templates for common logic app workflow patterns.
+
+1. Scroll past the introduction video and common triggers to **Templates**. Select a prebuilt template.
+
+ :::image type="content" source="./media/logic-apps-create-logic-apps-from-templates/choose-logic-app-template.png" alt-text="Screenshot of the designer. Under 'Templates,' three templates are visible. One called 'Delete old Azure blobs' is highlighted.":::
- When you select a prebuilt template,
- you can view more information about that template.
- For example:
+ When you select a prebuilt template, you can view more information about that template.
- ![Choose a prebuilt template](./media/logic-apps-create-logic-apps-from-templates/logic-app-choose-prebuilt-template.png)
+ :::image type="content" source="./media/logic-apps-create-logic-apps-from-templates/logic-app-choose-prebuilt-template.png" alt-text="Screenshot that shows information about the 'Delete old Azure blobs' template, including a description and a diagram that shows a recurring schedule.":::
-6. To continue with the selected template,
-choose **Use this template**.
+1. To continue with the selected template, select **Use this template**.
-7. Based on the connectors in the template,
-you are prompted to perform any of these steps:
+1. Based on the connectors in the template, you're prompted to perform any of these steps:
- * Sign in with your credentials to systems or services
- that are referenced by the template.
+ * Sign in with your credentials to systems or services that are referenced by the template.
- * Create connections for any services or systems
- referenced by the template. To create a connection,
- provide a name for your connection, and if necessary,
- select the resource that you want to use.
+ * Create connections for any systems or services that are referenced by the template. To create a connection, provide a name for your connection, and if necessary, select the resource that you want to use.
- * If you already set up these connections,
- choose **Continue**.
+ > [!NOTE]
+ > Many templates include connectors that have required properties that are prepopulated. Other templates require that you provide values before you can properly deploy the logic app workflow. If you try to deploy without completing the missing property fields, you get an error message.
- For example:
+1. After you set up your required connections, select **Continue**.
- ![Create connections](./media/logic-apps-create-logic-apps-from-templates/logic-app-create-connection.png)
+ :::image type="content" source="./media/logic-apps-create-logic-apps-from-templates/logic-app-create-connection.png" alt-text="Screenshot of the designer. A connection for Azure Blob Storage is visible, and the 'Continue' button is highlighted.":::
- When you're done, your logic app opens
- and appears in the Logic Apps Designer.
+ The designer opens and displays your logic app workflow.
> [!TIP]
- > To return to the template viewer, choose **Templates**
- > on the designer toolbar. This action discards any unsaved changes,
- > so a warning message appears to confirm your request.
+ > To return to the template viewer, select **Templates** on the designer toolbar. This action discards any unsaved changes, so a warning message appears to confirm your request.
-8. Continue building your logic app.
+1. Continue building your logic app workflow.
- > [!NOTE]
- > Many templates include connectors that might
- > already have prepopulated required properties.
- > However, some templates might still require that you provide
- > values before you can properly deploy the logic app.
- > If you try to deploy without completing the missing property fields,
- > you get an error message.
+## Update a logic app workflow with a template
-## Update logic apps with templates
+1. In the [Azure portal](https://portal.azure.com), go to your logic app resource.
-1. In the [Azure portal](https://portal.azure.com "Azure portal"),
-find and open your logic app in th Logic App Designer.
+1. On the logic app navigation menu, select **Logic app designer**.
-2. On the designer toolbar, choose **Templates**.
-This action discards any unsaved changes,
-so a warning message appears so you can confirm
-that you want to continue. To confirm, choose **OK**.
-For example:
+1. On the designer toolbar, select **Templates**. This action discards any unsaved changes, so a warning message appears. To confirm that you want to continue, select **OK**.
- ![Choose "Templates"](./media/logic-apps-create-logic-apps-from-templates/logic-app-update-existing-with-template.png)
+ :::image type="content" source="./media/logic-apps-create-logic-apps-from-templates/logic-app-update-existing-with-template.png" alt-text="Screenshot of the designer. The top part of a logic app workflow is visible. On the toolbar, 'Templates' is highlighted.":::
-3. Scroll past the introduction video and common triggers to **Templates**.
-Choose a prebuilt template. For example:
+1. Scroll past the introduction video and common triggers to **Templates**. Select a prebuilt template.
- ![Choose a logic app template](./media/logic-apps-create-logic-apps-from-templates/choose-logic-app-template.png)
+ :::image type="content" source="./media/logic-apps-create-logic-apps-from-templates/choose-logic-app-template.png" alt-text="Screenshot of the designer. Under 'Templates,' three templates are visible. One template called 'Delete old Azure blobs' is highlighted.":::
- When you select a prebuilt template,
- you can view more information about that template.
- For example:
+ When you select a prebuilt template, you can view more information about that template.
- ![Choose a prebuilt template](./media/logic-apps-create-logic-apps-from-templates/logic-app-choose-prebuilt-template.png)
+ :::image type="content" source="./media/logic-apps-create-logic-apps-from-templates/logic-app-choose-prebuilt-template.png" alt-text="Screensho