Updates from: 10/20/2022 01:09:57
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-b2c Enable Authentication Spa App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/enable-authentication-spa-app.md
The resources referenced by the *https://docsupdatetracker.net/index.html* file are detailed in the following
|[`ui.js`](https://github.com/Azure-Samples/ms-identity-b2c-javascript-spa/blob/main/App/ui.js) | Controls the UI elements. | | | |
-To render the SPA index file, in the *myApp* folder, create a file named *https://docsupdatetracker.net/index.html*, which contains the following HTML snippet.
+To render the SPA index file, in the *myApp* folder, create a file named *https://docsupdatetracker.net/index.html*, which contains the following HTML snippet:
```html <!DOCTYPE html>
To specify your Azure AD B2C user flows, do the following:
In this step, you implement the methods that initialize the sign-in flow, acquire an API access token, and sign out the user.
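The step's code isn't included in this excerpt. As a rough illustration only, a minimal sketch of what those methods can look like with MSAL.js 2.x (msal-browser) follows; the `msalConfig` and `loginRequest` objects, the pop-up interaction style, and the function names are assumptions, not the article's exact code.

```javascript
// Minimal sketch (assumes MSAL.js 2.x loaded as the global `msal`, plus msalConfig and loginRequest defined elsewhere).
const msalInstance = new msal.PublicClientApplication(msalConfig);

async function signIn() {
  // Opens a pop-up where the user completes the sign-in user flow.
  const loginResponse = await msalInstance.loginPopup(loginRequest);
  console.log(`Signed in as ${loginResponse.account.username}`);
}

async function getAccessToken(tokenRequest) {
  // Try silent acquisition first; fall back to an interactive pop-up if it fails.
  const account = msalInstance.getAllAccounts()[0];
  try {
    const result = await msalInstance.acquireTokenSilent({ ...tokenRequest, account });
    return result.accessToken;
  } catch (error) {
    const result = await msalInstance.acquireTokenPopup(tokenRequest);
    return result.accessToken;
  }
}

function signOut() {
  msalInstance.logoutPopup();
}
```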
-For more information, see the [MSAL PublicClientApplication class reference](https://azuread.github.io/microsoft-authentication-library-for-js/ref/classes/_azure_msal_browser.publicclientapplication.html), and [Use the Microsoft Authentication Library (MSAL) to sign in the user](../active-directory/develop/tutorial-v2-javascript-spa.md#use-the-microsoft-authentication-library-msal-to-sign-in-the-user) articles.
+For more information, see the [MSAL PublicClientApplication class reference](https://azuread.github.io/microsoft-authentication-library-for-js/ref/classes/_azure_msal_browser.publicclientapplication.html), and [Use the Microsoft Authentication Library (MSAL) to sign in the user](../active-directory/develop/tutorial-v2-javascript-spa.md#use-the-msal-to-sign-in-the-user) articles.
To sign in the user, do the following:
To call your web API by using the token you acquired, do the following:
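The steps themselves aren't included in this excerpt. As a minimal sketch of the general pattern, the acquired token is passed in the `Authorization` header; the endpoint URL and function name here are placeholders, not values from the article.

```javascript
// Sketch only; apiEndpoint is a placeholder for your own web API URL.
async function callApi(apiEndpoint, accessToken) {
  const response = await fetch(apiEndpoint, {
    method: "GET",
    headers: { Authorization: `Bearer ${accessToken}` } // attach the acquired access token
  });
  if (!response.ok) {
    throw new Error(`API call failed with status ${response.status}`);
  }
  return response.json();
}
```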
## Step 10: Add the UI elements reference
-The SPA app uses JavaScript to control the UI elements. For example, it displays the sign-in and sign-out buttons, and renders the users ID token claims to the screen.
+The SPA app uses JavaScript to control the UI elements. For example, it displays the sign-in and sign-out buttons, and renders the users' ID token claims to the screen.
To add the UI elements reference, do the following:
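The steps aren't shown in this excerpt. As a rough illustration of the kind of DOM updates such a UI script performs, here's a sketch; the element IDs are hypothetical, not the sample's actual IDs.

```javascript
// Sketch with hypothetical element IDs.
function updateUI(account) {
  document.getElementById("signInButton").hidden = true;
  document.getElementById("signOutButton").hidden = false;
  // Render the signed-in user's ID token claims as formatted JSON.
  document.getElementById("claims").innerText =
    JSON.stringify(account.idTokenClaims, null, 2);
}
```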
active-directory-b2c Threat Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/threat-management.md
The smart lockout feature uses many factors to determine when an account should
- Passwords such as 12456! and 1234567! (or newAccount1234 and newaccount1234) are so similar that the algorithm interprets them as human error and counts them as a single try.
- Larger variations in pattern, such as 12456! and ABCD2!, are counted as separate tries.
-When testing the smart lockout feature, use a distinctive pattern for each password you enter. Consider using password generation web apps, such as `https://passwordsgenerator.net/`.
+When testing the smart lockout feature, use a distinctive pattern for each password you enter. Consider using password generation web apps, such as `https://password-generator.net/`.
When the smart lockout threshold is reached, you'll see the following message while the account is locked: **Your account is temporarily locked to prevent unauthorized use. Try again later**. The error messages can be [localized](localization-string-ids.md#sign-up-or-sign-in-error-messages).
active-directory Use Scim To Provision Users And Groups https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/use-scim-to-provision-users-and-groups.md
Previously updated : 08/17/2022 Last updated : 10/17/2022
Applications that support the SCIM profile described in this article can be conn
**To connect an application that supports SCIM:**
-1. Sign in to the [Azure AD portal](https://aad.portal.azure.com). You can get access a free trial for Azure AD with P2 licenses by signing up for the [developer program](https://developer.microsoft.com/office/dev-program)
+1. Sign in to the [Azure AD portal](https://aad.portal.azure.com). You can get access to a free trial for Azure AD with P2 licenses by signing up for the [developer program](https://developer.microsoft.com/microsoft-365/dev-program).
1. Select **Enterprise applications** from the left pane. A list of all configured apps is shown, including apps that were added from the gallery.
1. Select **+ New application** > **+ Create your own application**.
1. Enter a name for your application, choose the option "*integrate any other application you don't find in the gallery*" and select **Add** to create an app object. The new app is added to the list of enterprise applications and opens to its app management screen.
active-directory Concept Registration Mfa Sspr Combined https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/concept-registration-mfa-sspr-combined.md
Before combined registration, users registered authentication methods for Azure AD Multi-Factor Authentication and self-service password reset (SSPR) separately. People were confused that similar methods were used for multifactor authentication and SSPR, but they had to register for both features. Now, with combined registration, users can register once and get the benefits of both multifactor authentication and SSPR. We recommend this video on [How to enable and configure SSPR in Azure AD](https://www.youtube.com/watch?v=rA8TvhNcCvQ).

> [!NOTE]
-> Starting on August 15th 2020, all new Azure AD tenants will be automatically enabled for combined registration.
->
-> After Sept. 30th, 2022, all users will register security information through the combined registration experience.
+> Effective Oct. 1st, 2022, we will begin to enable combined registration for all users in Azure AD tenants created before August 15th, 2020. Tenants created after this date are enabled with combined registration.
This article outlines what combined security registration is. To get started with combined security registration, see the following article:
active-directory Fido2 Compatibility https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/fido2-compatibility.md
Last updated 02/02/2021 --++
active-directory Howto Registration Mfa Sspr Combined https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-registration-mfa-sspr-combined.md
Before combined registration, users registered authentication methods for Azure AD Multi-Factor Authentication and self-service password reset (SSPR) separately. People were confused that similar methods were used for Azure AD Multi-Factor Authentication and SSPR, but they had to register for both features. Now, with combined registration, users can register once and get the benefits of both Azure AD Multi-Factor Authentication and SSPR.

> [!NOTE]
-> Starting on August 15th 2020, all new Azure AD tenants will be automatically enabled for combined registration. Tenants created after this date will be unable to utilize the legacy registration workflows.
->
-> After Sept. 30th, 2022, all users will register security information through the combined registration experience.
+> Effective Oct. 1st, 2022, we will begin to enable combined registration for all users in Azure AD tenants created before August 15th, 2020. Tenants created after this date are enabled with combined registration.
To make sure you understand the functionality and effects before you enable the new experience, see the [Combined security information registration concepts](concept-registration-mfa-sspr-combined.md).
active-directory App Resilience Continuous Access Evaluation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/app-resilience-continuous-access-evaluation.md
Title: "How to use Continuous Access Evaluation enabled APIs in your applications" description: How to increase app security and resilience by adding support for Continuous Access Evaluation, enabling long-lived access tokens that can be revoked based on critical events and policy evaluation. ---+ Last updated 07/09/2021-++ # Customer intent: As an application developer, I want to learn how to use Continuous Access Evaluation for building resiliency through long-lived, refreshable tokens that can be revoked based on critical events and policy evaluation.
active-directory Mobile Sso Support Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/mobile-sso-support-overview.md
Title: Support single sign-on and app protection policies in mobile apps you develop description: Explanation and overview of building mobile applications that support single sign-on and app protection policies using the Microsoft identity platform and integrating with Azure Active Directory. ---+ Last updated 10/14/2020-++ #Customer intent: As an app developer, I want to know how to implement an app that supports single sign-on and app protection policies using the Microsoft identity platform and integrating with Azure Active Directory.
Finally, [add the Intune SDK](/mem/intune/developer/app-sdk-get-started) to your
- [Authorization agents and how to enable them](./msal-android-single-sign-on.md) - [Get started with the Microsoft Intune App SDK](/mem/intune/developer/app-sdk-get-started) - [Configure settings for the Intune App SDK](/mem/intune/developer/app-sdk-ios#configure-settings-for-the-intune-app-sdk)-- [Microsoft Intune protected apps](/mem/intune/apps/apps-supported-intune-apps)
+- [Microsoft Intune protected apps](/mem/intune/apps/apps-supported-intune-apps)
active-directory Msal Client Application Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/msal-client-application-configuration.md
Currently, the only way to get an app to sign in users with only personal Micros
## Client ID
-The client ID is the unique **Application (client) ID** assigned to your app by Azure AD when the app was registered.
+The client ID is the unique **Application (client) ID** assigned to your app by Azure AD when the app was registered. You can find the **Application (client) ID** in the Azure portal under **Azure AD** > **Enterprise applications**, in the **Application ID** column.
## Redirect URI
active-directory Scenario Desktop Acquire Token Username Password https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/scenario-desktop-acquire-token-username-password.md
For more information on all the modifiers that can be applied to `AcquireTokenBy
# [Java](#tab/java)
-The following extract is from the [MSAL Java code samples](https://github.com/AzureAD/microsoft-authentication-library-for-java/blob/dev/src/samples/public-client/).
+The following extract is from the [MSAL Java code samples](https://github.com/AzureAD/microsoft-authentication-library-for-java/blob/dev/msal4j-sdk/src/samples/public-client/UsernamePasswordFlow.java).
```java PublicClientApplication pca = PublicClientApplication.builder(clientId)
active-directory Tutorial Blazor Server https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/tutorial-blazor-server.md
Title: Tutorial - Create a Blazor Server app that uses the Microsoft identity platform for authentication description: In this tutorial, you set up authentication using the Microsoft identity platform in a Blazor Server app.---++
active-directory Tutorial Blazor Webassembly https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/tutorial-blazor-webassembly.md
Title: Tutorial - Sign in users and call a protected API from a Blazor WebAssembly app description: In this tutorial, sign in users and call a protected API using the Microsoft identity platform in a Blazor WebAssembly (WASM) app.---++
active-directory Tutorial V2 Javascript Spa https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/tutorial-v2-javascript-spa.md
Title: "Tutorial: Create a JavaScript single-page app that uses the Microsoft identity platform for authentication"
-description: In this tutorial, you build a JavaScript single-page app (SPA) that uses the Microsoft identity platform to sign in users and get an access token to call the Microsoft Graph API on their behalf.
+ Title: "Tutorial: Create a JavaScript single-page application that uses the Microsoft identity platform for authentication"
+description: In this tutorial, you build a JavaScript single-page application (SPA) that uses the Microsoft identity platform to sign in users and get an access token to call the Microsoft Graph API on their behalf.
-# Tutorial: Sign in users and call the Microsoft Graph API from a JavaScript single-page application (SPA)
+# Tutorial: Sign in users and call the Microsoft Graph API from a JavaScript single-page application
-In this tutorial, build a JavaScript single-page application (SPA) that signs in users and calls Microsoft Graph by using the implicit flow of OAuth 2.0. This SPA uses MSAL.js v1.x, which uses the implicit grant flow for SPAs. For all new applications, use [MSAL.js v2.x and the authorization code flow with PKCE and CORS](tutorial-v2-javascript-auth-code.md), which provides more security than the implicit flow
+In this tutorial, you build a JavaScript single-page application (SPA) that signs in users and calls Microsoft Graph by using the implicit flow of OAuth 2.0. This SPA uses MSAL.js v1.x, which uses the implicit grant flow for SPAs. For all new applications, use [MSAL.js v2.x and the authorization code flow with PKCE and CORS](tutorial-v2-javascript-auth-code.md). The authorization code flow provides more security than the implicit flow.
In this tutorial:
> * Create a JavaScript project with `npm`
> * Register the application in the Azure portal
> * Add code to support user sign-in and sign-out
-> * Add code to call Microsoft Graph API
+> * Add code to call the Microsoft Graph API
> * Test the app
-> * Gain understanding of how the process works behind the scenes
+> * Gain an understanding of how the process works behind the scenes
-At the end of this tutorial, you'll have created the folder structure below (listed in order of creation), along with the *.js* and *.html* files by copying the code blocks in the upcoming sections.
+At the end of this tutorial, you'll have the following folder and file structure (listed in order of creation):
```txt sampleApp/
sampleApp/
## Prerequisites

* [Node.js](https://nodejs.org/en/download/) for running a local web server.
-* [Visual Studio Code](https://code.visualstudio.com/download) or other editor for modifying project files.
-* A modern web browser. **Internet Explorer** is **not supported** by the app you build in this tutorial due to the app's use of [ES6](http://www.ecma-international.org/ecma-262/6.0/) conventions.
+* [Visual Studio Code](https://code.visualstudio.com/download) or another editor for modifying project files.
+* A modern web browser. The app that you build in this tutorial uses [ES6](http://www.ecma-international.org/ecma-262/6.0/) conventions and *does not support Internet Explorer*.
-## How the sample app generated by this guide works
+## How the sample app works
-![Shows how the sample app generated by this tutorial works](media/active-directory-develop-guidedsetup-javascriptspa-introduction/javascriptspa-intro.svg)
+![Diagram that shows how the sample app generated by this tutorial works.](media/active-directory-develop-guidedsetup-javascriptspa-introduction/javascriptspa-intro.svg)
-The sample application created by this guide enables a JavaScript SPA to query the Microsoft Graph API. This can also work for a web API that is set up to accept tokens from the Microsoft identity platform. After the user signs in, an access token is requested and added to the HTTP requests through the authorization header. This token will be used to acquire the user's profile and mails via **MS Graph API**.
+The application that you create in this tutorial enables a JavaScript SPA to query the Microsoft Graph API. This querying can also work for a web API that's set up to accept tokens from the Microsoft identity platform. After the user signs in, the SPA requests an access token and adds it to the HTTP requests through the authorization header. The SPA will use this token to acquire the user's profile and emails via the Microsoft Graph API.
-Token acquisition and renewal are handled by the [Microsoft Authentication Library (MSAL) for JavaScript](https://github.com/AzureAD/microsoft-authentication-library-for-js).
+The [Microsoft Authentication Library (MSAL) for JavaScript](https://github.com/AzureAD/microsoft-authentication-library-for-js) handles token acquisition and renewal.
## Set up the web server or project
-> Prefer to download this sample's project instead? [Download the project files](https://github.com/Azure-Samples/active-directory-javascript-graphapi-v2/archive/quickstart.zip).
->
-> To configure the code sample before you execute it, skip to the [registration step](#register-the-application).
+If you prefer, you can [download the project files](https://github.com/Azure-Samples/active-directory-javascript-graphapi-v2/archive/quickstart.zip).
+
+To configure the code sample before you run it, skip to the [registration step](#register-the-application).
## Create the project
-Make sure [*Node.js*](https://nodejs.org/en/download/) is installed, and then create a folder to host the application. Name the folder *sampleApp*. In this folder, an [*Express*](https://expressjs.com/) web server is created to serve the *https://docsupdatetracker.net/index.html* file.
+1. Make sure that [Node.js](https://nodejs.org/en/download/) is installed, and then create a folder to host the application. Name the folder *sampleApp*. In this folder, an [Express](https://expressjs.com/) web server is created to serve the *https://docsupdatetracker.net/index.html* file.
-1. Using a terminal (such as Visual Studio Code integrated terminal), locate the project folder, move into it, then type:
+1. By using a terminal (such as the Visual Studio Code integrated terminal), locate the project folder and move into it. Then enter:
-```console
-npm init
-```
+ ```console
+ npm init
+ ```
-2. A series of prompts will appear in order to create the application. Notice that the folder, *sampleApp* is now all lowercase. The items in brackets `()` are generated by default. Feel free to experiment, however for the purposes of this tutorial, you don't need to enter anything, and can press **Enter** to continue to the next prompt.
+2. A series of prompts appears for creation of the application. Notice that the folder *sampleApp* is now all lowercase. The items in parentheses `()` are generated by default.
```console package name: (sampleapp)
npm init
license: (ISC) ```
+
+ Feel free to experiment. However, for the purposes of this tutorial, you don't need to enter anything. Select the Enter key to continue to the next prompt.
-3. The final consent prompt will contain the following output on the assumption no values were entered in the previous step. Press **Enter** and the JSON written to a file called *package.json*.
+3. The final consent prompt contains the following output if you didn't enter any values in the previous step.
```console {
npm init
Is this OK? (yes) ```
-4. Next, install the required dependencies. Express.js is a Node.js module designed to simplify the creation of web servers and APIs. Morgan.js is used to log HTTP requests and errors. Upon installation, the *package-lock.json* file and *node_modules* folder are created.
+ Select the Enter key, and the JSON is written to a file called *package.json*.
+
+4. Install the required dependencies by entering the following code:
   ```console
   npm install express --save
   npm install morgan --save
   ```
-5. Now, create a *.js* file named *server.js* in your current folder, and add the following code:
+ Express.js is a Node.js module that simplifies the creation of web servers and APIs. Morgan.js is used to log HTTP requests and errors. Installing them creates the *package-lock.json* file and *node_modules* folder.
+
+5. Create a *.js* file named *server.js* in your current folder, and add the following code:
```JavaScript const express = require('express'); const morgan = require('morgan'); const path = require('path');
- //initialize express.
+ //Initialize Express.
const app = express(); // Initialize variables. const port = 3000; // process.env.PORT || 3000;
- // Configure morgan module to log all requests.
+ // Configure the morgan module to log all requests.
app.use(morgan('dev')); // Set the front-end folder to serve public assets.
sampleApp/
└── server.js
```
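The *server.js* code above appears only in fragments in this digest. Purely as an illustration, a complete minimal version of such a server might look like the following sketch; the static folder name and the final shape of the file are assumptions, not the tutorial's verbatim code.

```javascript
// Illustrative sketch of a minimal server.js for this sample.
const express = require('express');
const morgan = require('morgan');
const path = require('path');

const app = express();
const port = 3000; // process.env.PORT || 3000;

// Log all HTTP requests.
app.use(morgan('dev'));

// Serve the SPA's static assets (folder name assumed).
app.use(express.static(path.join(__dirname, 'JavaScriptSPA')));

app.listen(port, () => console.log(`Listening on http://localhost:${port}`));
```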
-In the next steps you'll create a new folder for the JavaScript SPA, and set up the user interface (UI).
+In the next steps, you'll create a new folder for the JavaScript SPA and set up the user interface (UI).
> [!TIP]
-> When you set up an Azure Active Directory (Azure AD) account, you create a tenant. This is a digital representation of your organization, and is primarily associated with a domain, like Microsoft.com. If you wish to learn how applications can work with multiple tenants, refer to the [application model](/articles/active-directory/develop/application-model.md).
+> When you set up an Azure Active Directory (Azure AD) account, you create a tenant. This is a digital representation of your organization. It's primarily associated with a domain, like Microsoft.com. If you want to learn how applications can work with multiple tenants, refer to the [application model](/articles/active-directory/develop/application-model.md).
## Create the SPA UI
-1. Create a new folder, *JavaScriptSPA* and then move into that folder.
+1. Create a new folder, *JavaScriptSPA*, and then move into that folder.
-1. From there, create an *https://docsupdatetracker.net/index.html* file for the SPA. This file implements a UI built with [*Bootstrap 4 Framework*](https://www.javatpoint.com/bootstrap-4-layouts#:~:text=Bootstrap%204%20is%20the%20newest%20version%20of%20Bootstrap.,framework%20directed%20at%20responsive%2C%20mobile-first%20front-end%20web%20development.) and imports script files for configuration, authentication and API call.
+1. Create an *https://docsupdatetracker.net/index.html* file for the SPA. This file implements a UI that's built with the [Bootstrap 4 framework](https://www.javatpoint.com/bootstrap-4-layouts#:~:text=Bootstrap%204%20is%20the%20newest%20version%20of%20Bootstrap.,framework%20directed%20at%20responsive%2C%20mobile-first%20front-end%20web%20development.). The file also imports script files for configuration, authentication, and API calls.
In the *https://docsupdatetracker.net/index.html* file, add the following code:
In the next steps you'll create a new folder for the JavaScript SPA, and set up
<br> <br>
- <!-- importing bootstrap.js and supporting js libraries -->
+ <!-- importing bootstrap.js and supporting .js libraries -->
<script src="https://code.jquery.com/jquery-3.4.1.slim.min.js" integrity="sha384-J6qa4849blE2+poT4WnyKhv5vZF5SrPo0iEjwBvKU7imGFAV0wwj1yYfoRSJoZ+n" crossorigin="anonymous"></script> <script src="https://cdn.jsdelivr.net/npm/popper.js@1.16.0/dist/umd/popper.min.js" integrity="sha384-Q6E9RHvbIyZFJoft+2mJbHaEWldlvI9IOYy5n3zV9zzTtmI3UksdQRVvoxMfooAo" crossorigin="anonymous"></script> <script src="https://stackpath.bootstrapcdn.com/bootstrap/4.4.1/js/bootstrap.min.js" integrity="sha384-wfSDF2E50Y2D1uUdj0O3uMBJnjuUD4Ih7YwaYd1iqfktj0Uod8GCExl3Og8ifwB6" crossorigin="anonymous"></script>
In the next steps you'll create a new folder for the JavaScript SPA, and set up
<script type="text/javascript" src="./graphConfig.js"></script> <script type="text/javascript" src="./ui.js"></script>
- <!-- replace next line with authRedirect.js if you would like to use the redirect flow -->
+ <!-- replace the next line with authRedirect.js if you want to use the redirect flow -->
<!-- <script type="text/javascript" src="./authRedirect.js"></script> --> <script type="text/javascript" src="./authPopup.js"></script> <script type="text/javascript" src="./graph.js"></script>
In the next steps you'll create a new folder for the JavaScript SPA, and set up
</html> ```
-2. Now, create a *.js* file named *ui.js*, which accesses and updates the Document Object Model (DOM) elements, and add the following code:
+2. Create a file named *ui.js*, and add this code to access and update the Document Object Model (DOM) elements:
```JavaScript // Select DOM elements to work with
In the next steps you'll create a new folder for the JavaScript SPA, and set up
## Register the application
-Before proceeding further with authentication, register the application on **Azure Active Directory**.
+Before you proceed with authentication, register the application on Azure AD:
+
+1. Sign in to the [Azure portal](https://portal.azure.com/).
+1. Go to **Azure Active Directory**.
+1. On the left panel, under **Manage**, select **App registrations**. Then, on the top menu bar, select **New registration**.
+1. For **Name**, enter a name for the application (for example, **sampleApp**). You can change the name later if necessary.
+1. Under **Supported account types**, select **Accounts in this organizational directory only**.
+1. In the **Redirect URI** section, select the **Web** platform from the dropdown list.
-1. Sign in to the [*Azure portal*](https://portal.azure.com/).
-1. Navigate to **Azure Active Directory**.
-1. Go to the left panel, and under **Manage**, select **App registrations**, then in the top menu bar, select **New registration**.
-1. Enter a **Name** for the application, for example **sampleApp**. The name can be changed later if necessary.
-1. Under **Supported account types**, select **Accounts in any organizational directory and personal Microsoft accounts**.
-1. In the **Redirect URI** section, select the **Web** platform from the drop-down list. To the right, enter the value of the local host to be used. Enter either of the following options:
- 1. `http://localhost:3000/`
- 1. If you wish to use a custom TCP port, use `http://localhost:<port>/` (where `<port>` is the custom TCP port number).
+ To the right, enter `http://localhost:3000/`.
1. Select **Register**.
-1. This opens the **Overview** page of the application. Note the **Application (client) ID** and **Directory (tenant) ID**. Both of them are needed when the *authConfig.js* file is created in the following steps.
-1. In the left panel, under **Manage**, select **Authentication**.
+
+ The **Overview** page of the application opens. Note the **Application (client) ID** and **Directory (tenant) ID** values. You'll need both of them when you create the *authConfig.js* file in later steps.
+1. Under **Manage**, select **Authentication**.
1. In the **Implicit grant and hybrid flows** section, select **ID tokens** and **Access tokens**. ID tokens and access tokens are required because this app must sign in users and call an API.
-1. Select **Save**. You can navigate back to the **Overview** panel by selecting it in the left panel.
+1. Select **Save**. You can go back to the **Overview** page by selecting it on the left panel.
-The redirect URI can be changed at anytime by going to the **Overview** page, and selecting **Add a Redirect URI**.
+You can change the redirect URI anytime by going to the **Overview** page and selecting **Add a Redirect URI**.
## Configure the JavaScript SPA
-1. In the *JavaScriptSPA* folder, create a new file, *authConfig.js*, and copy the following code. This code contains the configuration parameters for authentication (Client ID, Tenant ID, Redirect URI).
-
-```javascript
- const msalConfig = {
- auth: {
- clientId: "Enter_the_Application_Id_Here",
- authority: "Enter_the_Cloud_Instance_Id_Here/Enter_the_Tenant_Info_Here",
- redirectUri: "Enter_the_Redirect_URI_Here",
- },
- cache: {
- cacheLocation: "sessionStorage", // This configures where your cache will be stored
- storeAuthStateInCookie: false, // Set this to "true" if you are having issues on IE11 or Edge
- }
- };
+1. In the *JavaScriptSPA* folder, create a new file, *authConfig.js*. Then copy the following code. This code contains the configuration parameters for authentication (client ID, tenant ID, redirect URI).
- // Add here scopes for id token to be used at MS Identity Platform endpoints.
- const loginRequest = {
- scopes: ["openid", "profile", "User.Read"]
- };
+ ```javascript
+ const msalConfig = {
+ auth: {
+ clientId: "Enter_the_Application_Id_Here",
+ authority: "Enter_the_Cloud_Instance_Id_Here/Enter_the_Tenant_Info_Here",
+ redirectUri: "Enter_the_Redirect_URI_Here",
+ },
+ cache: {
+ cacheLocation: "sessionStorage", // This configures where your cache will be stored
+ storeAuthStateInCookie: false, // Set this to "true" if you're having issues on Internet Explorer 11 or Edge
+ }
+ };
- // Add here scopes for access token to be used at MS Graph API endpoints.
- const tokenRequest = {
- scopes: ["Mail.Read"]
- };
-```
+ // Add scopes for the ID token to be used at Microsoft identity platform endpoints.
+ const loginRequest = {
+ scopes: ["openid", "profile", "User.Read"]
+ };
-Modify the values in the `msalConfig` section. You can refer to your app's **Overview** page on Azure for some of these values:
- - `Enter_the_Application_Id_Here` is the **Application (client) ID** for the application you registered.
+ // Add scopes for the access token to be used at Microsoft Graph API endpoints.
+ const tokenRequest = {
+ scopes: ["Mail.Read"]
+ };
+ ```
-2. Modify the values in the `msalConfig` section as described below. Refer to the **Overview** page of the application for these values:
- - `Enter_the_Application_Id_Here` is the **Application (client) ID** for the application you registered. You can copy this directly from **Azure**.
+2. Modify the values in the `msalConfig` section. Refer to the **Overview** page of the application for these values:
+ - `Enter_the_Application_Id_Here` is the **Application (client) ID** value for the application that you registered.
- - `Enter_the_Cloud_Instance_Id_Here` is the instance of the Azure cloud. For the main or global Azure cloud, enter `https://login.microsoftonline.com`. For **national** clouds (for example, China), refer to [*National clouds*](./authentication-national-cloud.md).
- - Set `Enter_the_Tenant_info_here` to one of the following options:
- - If your application supports *accounts in this organizational directory*, replace this value with the **Directory (tenant) ID** or **Tenant name** (for example, *contoso.microsoft.com*).
- - `Enter_the_Redirect_URI_Here` is the default URL that you set in the previous section, `http://localhost:3000/`.
+ - `Enter_the_Cloud_Instance_Id_Here` is the instance of the Azure cloud. For the main or global Azure cloud, enter `https://login.microsoftonline.com`. For national clouds (for example, China), refer to [National clouds](./authentication-national-cloud.md).
+ - Replace `Enter_the_Tenant_info_here` with the **Directory (tenant) ID** (a GUID) or **Tenant name** value (for example, *contoso.onmicrosoft.com*).
+ - `Enter_the_Redirect_URI_Here` is the default URL that you set in the previous section: `http://localhost:3000/`.
> [!TIP]
-> There are other options for `Enter_the_Tenant_info_here` depending on what you want your application to support.
+> There are other options for `Enter_the_Tenant_info_here`, depending on what you want your application to support:
> - If your application supports *accounts in any organizational directory*, replace this value with **organizations**.
> - If your application supports *accounts in any organizational directory and personal Microsoft accounts*, replace this value with **common**. To restrict support to *personal Microsoft accounts only*, replace this value with **consumers**.
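For illustration only, the `msalConfig` object might look like the following after the placeholders are replaced; the client ID and tenant name below are hypothetical values, not ones taken from the article.

```javascript
// Hypothetical values; substitute the IDs from your own app registration.
const msalConfig = {
  auth: {
    clientId: "11111111-2222-3333-4444-555555555555", // Application (client) ID from the Overview page
    authority: "https://login.microsoftonline.com/contoso.onmicrosoft.com", // cloud instance + tenant
    redirectUri: "http://localhost:3000/", // must match the registered redirect URI
  },
  cache: {
    cacheLocation: "sessionStorage",
    storeAuthStateInCookie: false,
  }
};
```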
-## Use the Microsoft Authentication Library (MSAL) to sign in the user
+## Use the MSAL to sign in the user
-In the *JavaScriptSPA* folder, create a new *.js* file named *authPopup.js*, which contains the authentication and token acquisition logic, and add the following code:
+In the *JavaScriptSPA* folder, create a new *.js* file named *authPopup.js*, which contains the authentication and token acquisition logic. Add the following code:
```JavaScript const myMSALObj = new Msal.UserAgentApplication(msalConfig);
In the *JavaScriptSPA* folder, create a new *.js* file named *authPopup.js*, whi
console.log(error); console.log("silent token acquisition fails. acquiring token using popup");
- // fallback to interaction when silent call fails
+ // fallback to interaction when the silent call fails
return myMSALObj.acquireTokenPopup(request) .then(tokenResponse => { return tokenResponse;
In the *JavaScriptSPA* folder, create a new *.js* file named *authPopup.js*, whi
} ```
-## More information
+## Use tokens for validation
+
+The first time a user selects the **Sign In** button, the `signIn` function that you added to the *authPopup.js* file calls MSAL's `loginPopup` function to start the sign-in process. This function opens a pop-up window that prompts the user to enter their credentials.
-The first time a user selects the **Sign In** button, the `signIn` function you added to the *authPopup.js* file calls MSAL's `loginPopup` function to start the sign-in process. This method opens a pop-up window with the *Microsoft identity platform endpoint* to prompt and validate the user's credentials. After a successful sign-in, the user is redirected back to the original *https://docsupdatetracker.net/index.html* page. A token is received, processed by *msal.js*, and the information contained in the token is cached. This token is known as the *ID token* and contains basic information about the user, such as the user display name. If you plan to use any data provided by this token for any purposes, make sure this token is validated by your backend server to guarantee that the token was issued to a valid user for your application.
+After a successful sign-in, the user is redirected back to the original *https://docsupdatetracker.net/index.html* page. The *msal.js* file receives and processes an *ID token*, and the information in the token is cached. The ID token contains basic information about the user, such as the user's display name. If you plan to use any data in the ID token for any purpose, make sure that your back-end server validates the token to guarantee that the token was issued to a valid user for your application.
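For example, after a successful `loginPopup`, you can inspect the cached account and its claims; this is a quick debugging sketch, not part of the tutorial's files.

```javascript
// Inspection sketch using the myMSALObj instance created in authPopup.js.
const account = myMSALObj.getAccount();
if (account) {
  console.log(account.name);          // display name from the ID token
  console.log(account.idTokenClaims); // raw claims; validate server-side before trusting them
}
```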
-The SPA generated by this tutorial calls `acquireTokenSilent` and/or `acquireTokenPopup` to acquire an *access token* used to query the Microsoft Graph API for the user's profile info. If you need a sample that validates the ID token, refer to the following [sample application](https://github.com/Azure-Samples/ms-identity-javascript-angular-tutorial/blob/main/3-Authorization-II/1-call-api/README.md "GitHub active-directory-javascript-singlepageapp-dotnet-webapi-v2 sample") in GitHub, which uses an ASP.NET web API for token validation.
+The app that you create in this tutorial calls `acquireTokenSilent` and/or `acquireTokenPopup` to acquire an *access token*. The app uses this token to query the Microsoft Graph API for the user's profile info. If you need a sample that validates the ID token, refer to the [sample application](https://github.com/Azure-Samples/ms-identity-javascript-angular-tutorial/blob/main/3-Authorization-II/1-call-api/README.md "GitHub active-directory-javascript-singlepageapp-dotnet-webapi-v2 sample") in GitHub, which uses an ASP.NET web API for token validation.
### Get a user token interactively
-After the initial sign-in, users shouldn't need to reauthenticate every time they need to request a token to access a resource. Therefore, `acquireTokenSilent` should be used most of the time to acquire tokens. There are situations, however, where you force users to interact with Microsoft identity platform. Examples include when:
+After the initial sign-in, users shouldn't need to reauthenticate every time they need to request a token to access a resource. Most of the time, the app will use `acquireTokenSilent` to acquire tokens. But you might force users to interact with the Microsoft identity platform in situations like these:
- Users need to reenter their credentials because the password has expired.
-- An application is requesting access to a resource, and the user's consent is needed.
+- An application is requesting access to a resource and needs the user's consent.
- Two-factor authentication is required.

Calling `acquireTokenPopup` opens a pop-up window (or `acquireTokenRedirect` redirects users to the Microsoft identity platform). In that window, users need to interact by confirming their credentials, giving consent to the required resource, or completing the two-factor authentication.

### Get a user token silently
-The `acquireTokenSilent` method handles token acquisition and renewal without any user interaction. After `loginPopup` (or `loginRedirect`) is executed for the first time, `acquireTokenSilent` is the method commonly used to obtain tokens used to access protected resources for subsequent calls. (Calls to request or renew tokens are made silently.) It's worth noting that `acquireTokenSilent` may fail in some cases, such as when a user's password expires. The application can then handle this exception in two ways:
+The `acquireTokenSilent` method handles token acquisition and renewal without any user interaction. After `loginPopup` (or `loginRedirect`) is executed for the first time, subsequent calls use `acquireTokenSilent` to get tokens for accessing protected resources. (Calls to request or renew tokens are made silently.)
-1. Making a call to `acquireTokenPopup` immediately, which triggers a user sign-in prompt. This pattern is commonly used in online applications where there's no unauthenticated content in the application available to the user. The sample generated by this guided setup uses this pattern.
+The `acquireTokenSilent` method might fail in some cases, such as when a user's password expires. The application can handle this exception in two ways:
-1. Making a visual indication to the user that an interactive sign-in is required. The user can then select the right time to sign in, or the application can retry `acquireTokenSilent` at a later time. This is commonly used when the user can use other functionality of the application without being disrupted. For example, there might be unauthenticated content available in the application. In this situation, the user can decide when they want to sign in to access the protected resource, or to refresh the outdated information.
+- Making a call to `acquireTokenPopup` immediately, which triggers a user sign-in prompt. This pattern is commonly used in online applications where no unauthenticated content is available to the user. The sample that you create in this tutorial uses this pattern.
+
+- Making a visual indication to the user that an interactive sign-in is required. The user can then select the right time to sign in, or the application can retry `acquireTokenSilent` at a later time.
+
+ This pattern is commonly used when the user can use other functionality of the application without being disrupted. For example, unauthenticated content might be available in the application. In this situation, the user can decide when they want to sign in to access the protected resource or refresh the outdated information.
> [!NOTE]
-> This tutorial uses the `loginPopup` and `acquireTokenPopup` methods by default. If you are using Internet Explorer as your browser, it is recommended to use `loginRedirect` and `acquireTokenRedirect` methods, due to a [known issue](https://github.com/AzureAD/microsoft-authentication-library-for-js/wiki/Known-issues-on-IE-and-Edge-Browser#issues) related to the way Internet Explorer handles pop-up windows. If you would like to see how to achieve the same result using *Redirect methods*, please see the [sample code](https://github.com/Azure-Samples/active-directory-javascript-graphapi-v2/blob/quickstart/JavaScriptSPA/authRedirect.js).
+> This tutorial uses the `loginPopup` and `acquireTokenPopup` methods by default. If you're using Internet Explorer as your browser, we recommend that you use the `loginRedirect` and `acquireTokenRedirect` methods because of a [known issue](https://github.com/AzureAD/microsoft-authentication-library-for-js/wiki/Known-issues-on-IE-and-Edge-Browser#issues) with the way Internet Explorer handles pop-up windows.
+>
+> If you want to see how to achieve the same result by using *redirect methods*, see the [sample code](https://github.com/Azure-Samples/active-directory-javascript-graphapi-v2/blob/quickstart/JavaScriptSPA/authRedirect.js).
-## Call the Microsoft Graph API using the acquired token
+## Call the Microsoft Graph API by using the acquired token
-1. In the *JavaScriptSPA* folder create a *.js* file named *graphConfig.js*, which stores the Representational State Transfer ([REST](/rest/api/azure/)) endpoints. Add the following code:
+1. In the *JavaScriptSPA* folder, create a *.js* file named *graphConfig.js*, which stores the [Representational State Transfer (REST)](/rest/api/azure/) endpoints. Add the following code:
```JavaScript const graphConfig = {
The `acquireTokenSilent` method handles token acquisition and renewal without an
}; ```
- where:
- - `Enter_the_Graph_Endpoint_Here` is the instance of Microsoft Graph API. For the global Microsoft Graph API endpoint, this can be replaced with `https://graph.microsoft.com`. For national cloud deployments, refer to [Graph API Documentation](/graph/deployments).
+ `Enter_the_Graph_Endpoint_Here` is the instance of the Microsoft Graph API. For the global Microsoft Graph API endpoint, you can replace this with `https://graph.microsoft.com`. For national cloud deployments, refer to the [Microsoft Graph API documentation](/graph/deployments).
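For example, with the global endpoint substituted, *graphConfig.js* might look like the sketch below; the property names are assumptions based on how such a configuration is typically consumed, not values quoted in this excerpt.

```javascript
// Sketch of graphConfig.js with the global Microsoft Graph endpoint filled in.
const graphConfig = {
  graphMeEndpoint: "https://graph.microsoft.com/v1.0/me",           // the user's profile
  graphMailEndpoint: "https://graph.microsoft.com/v1.0/me/messages" // the user's messages
};
```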
-1. Next, create a *.js* file named *graph.js*, which will make a REST call to the Microsoft Graph API. This is a way of accessing web services in a simple and flexible way without having any processing. Add the following code:
+1. Create a file named *graph.js*, which will make a REST call to the Microsoft Graph API. The SPA can then access web services in a simple and flexible way without any processing. Add the following code:
```javascript function callMSGraph(endpoint, token, callback) {
The `acquireTokenSilent` method handles token acquisition and renewal without an
### More information about REST calls against a protected API
-In the sample application created by this guide, the `callMSGraph()` method is used to make an HTTP `GET` request against a protected resource that requires a token. The request then returns the content to the caller. This method adds the acquired token in the *HTTP Authorization header*. For the sample application created by this guide, the resource is the Microsoft Graph API `me` endpoint, which displays the user's profile information.
+The sample application that you create in this tutorial uses the `callMSGraph()` method to make an HTTP `GET` request against a protected resource that requires a token. The request then returns the content to the caller.
+
+This method adds the acquired token in the *HTTP Authorization header*. For the sample application, the resource is the Microsoft Graph API `me` endpoint, which displays the user's profile information.
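As a rough sketch of the same idea using `fetch` (the tutorial's *graph.js* may use a different HTTP mechanism), the helper could look like this:

```javascript
// Sketch: GET a protected resource with the acquired token in the Authorization header.
function callMSGraph(endpoint, token, callback) {
  fetch(endpoint, {
    method: "GET",
    headers: { Authorization: `Bearer ${token}` }
  })
    .then(response => response.json())
    .then(data => callback(data, endpoint))
    .catch(error => console.log(error));
}
```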
## Test the code
-Now that the code is set up, it needs to be tested.
+Now that you've set up the code, you need to test it:
-1. The server needs to be configured to listen to a TCP port that's based on the location of the *https://docsupdatetracker.net/index.html* file. For Node.js, the web server can be started to listen to the port that is specified in the previous section by running the following commands at a command-line prompt from the *JavaScriptSPA* folder:
+1. Configure the server to listen to a TCP port that's based on the location of the *https://docsupdatetracker.net/index.html* file. For Node.js, you can start the web server to listen to the port that you specified earlier. Run the following commands at a command-line prompt from the *JavaScriptSPA* folder:
   ```bash
   npm install
   npm start
   ```
-1. In the browser, enter `http://localhost:3000` (or `http://localhost:<port>` if a custom port was chosen). You should see the contents of the *https://docsupdatetracker.net/index.html* file and a **Sign In** button on the top right of the screen.
-
+1. In the browser, enter `http://localhost:3000`. You should see the contents of the *https://docsupdatetracker.net/index.html* file and a **Sign In** button on the upper right of the screen.
> [!IMPORTANT]
-> Be sure to enable popups and redirects for your site in your browser settings.
+> Be sure to enable pop-ups and redirects for your site in your browser settings.
-After the browser loads your *https://docsupdatetracker.net/index.html* file, select **Sign In**. You'll now be prompted to sign in with the Microsoft identity platform:
+After the browser loads your *https://docsupdatetracker.net/index.html* file, select **Sign In**. You're prompted to sign in with the Microsoft identity platform.
### Provide consent for application access

The first time that you sign in to your application, you're prompted to grant it access to your profile and sign you in. Select **Accept** to continue.

### View application results
-After you sign in, you can select **Read More under your displayed name, and your user profile information is returned in the Microsoft Graph API response that's displayed:
+After you sign in, you can select **Read More** under your displayed name. Your user profile information is returned in the displayed Microsoft Graph API response.
### More information about scopes and delegated permissions
-The Microsoft Graph API requires the `User.Read` scope to read a user's profile. By default, this scope is automatically added in every application that's registered on the registration portal. Other APIs for Microsoft Graph, and custom APIs for your back-end server, might require more scopes. For example, the Microsoft Graph API requires the `Mail.Read` scope in order to list the user's mails.
+The Microsoft Graph API requires the `User.Read` scope to read a user's profile. By default, this scope is automatically added in every application that's registered on the registration portal. Other APIs for Microsoft Graph, and custom APIs for your back-end server, might require more scopes. For example, the Microsoft Graph API requires the `Mail.Read` scope to list the user's emails.
> [!NOTE]
> The user might be prompted for additional consents as you increase the number of scopes.
The Microsoft Graph API requires the `User.Read` scope to read a user's profile.
## Next steps
-Delve deeper into single-page application (SPA) development on the Microsoft identity platform in our multi-part scenario series.
+Delve deeper into SPA development on the Microsoft identity platform in the first part of a scenario series:
> [!div class="nextstepaction"]
> [Scenario: Single-page application](scenario-spa-overview.md)
active-directory Userinfo https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/userinfo.md
# Microsoft identity platform UserInfo endpoint
-Part of the OpenID Connect (OIDC) standard, the [UserInfo endpoint](https://openid.net/specs/openid-connect-core-1_0.html#UserInfo) is returns information about an authenticated user. In the Microsoft identity platform, the UserInfo endpoint is hosted by Microsoft Graph at https://graph.microsoft.com/oidc/userinfo.
+As part of the OpenID Connect (OIDC) standard, the [UserInfo endpoint](https://openid.net/specs/openid-connect-core-1_0.html#UserInfo) returns information about an authenticated user. In the Microsoft identity platform, the UserInfo endpoint is hosted by Microsoft Graph at https://graph.microsoft.com/oidc/userinfo.
## Find the .well-known configuration endpoint

You can find the UserInfo endpoint programmatically by reading the `userinfo_endpoint` field of the OpenID configuration document at `https://login.microsoftonline.com/common/v2.0/.well-known/openid-configuration`. We don't recommend hard-coding the UserInfo endpoint in your applications. Instead, use the OIDC configuration document to find the endpoint at runtime.
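For example, a small sketch of discovering the endpoint at runtime; the function name is illustrative:

```javascript
// Read userinfo_endpoint from the OpenID configuration document instead of hard-coding it.
async function getUserInfoEndpoint() {
  const response = await fetch(
    "https://login.microsoftonline.com/common/v2.0/.well-known/openid-configuration"
  );
  const config = await response.json();
  return config.userinfo_endpoint; // https://graph.microsoft.com/oidc/userinfo
}
```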
-The UserInfo endpoint is typically called automatically by [OIDC-compliant libraries](https://openid.net/developers/certified/) to get information about the user.From the [list of claims identified in the OIDC standard](https://openid.net/specs/openid-connect-core-1_0.html#StandardClaims), the Microsoft identity platform produces the name claims, subject claim, and email when available and consented to.
+The UserInfo endpoint is typically called automatically by [OIDC-compliant libraries](https://openid.net/developers/certified/) to get information about the user. From the [list of claims identified in the OIDC standard](https://openid.net/specs/openid-connect-core-1_0.html#StandardClaims), the Microsoft identity platform produces the name claims, subject claim, and email when available and consented to.
## Consider using an ID token instead
If you require more details about the user like manager or job title, call the [
## Calling the UserInfo endpoint
-UserInfo is a standard OAuth bearer token API hosted by Microsoft Graph. Call the UserInfo endpoint as you would any Microsoft Graph API by using the access token your application received when it requested access to Microsoft Graph. The UserInfo endpoint returns a JSON response containing claims about the user.
+UserInfo is a standard OAuth bearer token API hosted by Microsoft Graph. Call the UserInfo endpoint as you would call any Microsoft Graph API by using the access token your application received when it requested access to Microsoft Graph. The UserInfo endpoint returns a JSON response containing claims about the user.
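For example, a minimal sketch of calling the endpoint with an access token that was issued for Microsoft Graph; the function name is illustrative:

```javascript
// Call the UserInfo endpoint and return the user's claims as JSON.
async function getUserInfo(accessToken) {
  const response = await fetch("https://graph.microsoft.com/oidc/userinfo", {
    headers: { Authorization: `Bearer ${accessToken}` }
  });
  return response.json(); // contains claims such as sub, name, and email
}
```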
### Permissions
Authorization: Bearer eyJ0eXAiOiJKV1QiLCJub25jZSI6Il…
```jsonc { "sub": "OLu859SGc2Sr9ZsqbkG-QbeLgJlb41KcdiPoLYNpSFA",
- "name": "Mikah Ollenburg", // names all require the "profile" scope.
+ "name": "Mikah Ollenburg", // all names require the "profile" scope.
"family_name": " Ollenburg", "given_name": "Mikah", "picture": "https://graph.microsoft.com/v1.0/me/photo/$value",
- "email": "mikoll@contoso.com" //requires the "email" scope.
+ "email": "mikoll@contoso.com" // requires the "email" scope.
} ```
active-directory Five Steps To Full Application Integration With Azure Ad https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/five-steps-to-full-application-integration-with-azure-ad.md
Title: Five steps for integrating all your apps with Azure AD description: This guide explains how to integrate all your applications with Azure AD. In each step, we explain the value and provide links to resources that will explain the technical details. --++ Last updated 08/05/2020- # Five steps for integrating all your apps with Azure AD
active-directory Resilience App Development Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/resilience-app-development-overview.md
--++ Last updated 11/23/2020
active-directory Resilience Client App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/resilience-client-app.md
--++ Last updated 11/23/2020
active-directory Resilience Daemon App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/resilience-daemon-app.md
--++ Last updated 11/23/2020
When a request times out applications should not retry immediately. Implement an
- [Build resilience into applications that sign-in users](resilience-client-app.md) - [Build resilience in your identity and access management infrastructure](resilience-in-infrastructure.md)-- [Build resilience in your CIAM systems](resilience-b2c.md)
+- [Build resilience in your CIAM systems](resilience-b2c.md)
active-directory Whats New Archive https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/whats-new-archive.md
For more information about how to better secure your organization by using autom
In September 2021, we added the following 44 new applications in our App gallery with Federation support:
-[Studybugs](https://studybugs.com/signin), [Yello](https://yello.co/yello-for-microsoft-teams/), [LawVu](../saas-apps/lawvu-tutorial.md), [Formate eVo Mail](https://www.document-genetics.co.uk/formate-evo-erp-output-management), [Revenue Grid](https://app.revenuegrid.com/login), [Orbit for Office 365](https://azuremarketplace.microsoft.com/marketplace/apps/aad.orbitforoffice365?tab=overview), [Upmarket](https://app.upmarket.ai/), [Alinto Protect](https://protect.alinto.net/), [Cloud Concinnity](https://cloudconcinnity.com/), [Matlantis](https://matlantis.com/), [ModelGen for Visio (MG4V)](https://crecy.com.au/model-gen/), [NetRef: Classroom Management](https://oauth.net-ref.com/microsoft/sso), [VergeSense](../saas-apps/vergesense-tutorial.md), [iAuditor](../saas-apps/iauditor-tutorial.md), [Secutraq](https://secutraq.net/login), [Active and Thriving](../saas-apps/active-and-thriving-tutorial.md), [Inova](https://login.microsoftonline.com/organizations/oauth2/v2.0/authorize?client_id=1bacdba3-7a3b-410b-8753-5cc0b8125f81&response_type=code&redirect_uri=https:%2f%2fbroker.partneringplace.com%2fpartner-companion%2f&code_challenge_method=S256&code_challenge=YZabcdefghijklmanopqrstuvwxyz0123456789._-~&scope=1bacdba3-7a3b-410b-8753-5cc0b8125f81/.default), [TerraTrue](../saas-apps/terratrue-tutorial.md), [Beyond Identity Admin Console](../saas-apps/beyond-identity-admin-console-tutorial.md), [Visult](https://visult.app), [ENGAGE TAG](https://app.engagetag.com/), [Appaegis Isolation Access Cloud](../saas-apps/appaegis-isolation-access-cloud-tutorial.md), [CrowdStrike Falcon Platform](../saas-apps/crowdstrike-falcon-platform-tutorial.md), [MY Emergency Control](https://my-emergency.co.uk/app/auth/login), [AlexisHR](../saas-apps/alexishr-tutorial.md), [Teachme Biz](../saas-apps/teachme-biz-tutorial.md), [Zero Networks](../saas-apps/zero-networks-tutorial.md), [Mavim iMprove](https://improve.mavimcloud.com/), [Azumuta](https://app.azumuta.com/login?microsoft=true), [Frankli](https://beta.frankli.io/login), [Amazon Managed Grafana](../saas-apps/amazon-managed-grafana-tutorial.md), [Productive](../saas-apps/productive-tutorial.md), [Create!Webフロー](../saas-apps/createweb-tutorial.md), [Evercate](https://evercate.com/us/sign-up/), [Ezra Coaching](../saas-apps/ezra-coaching-tutorial.md), [Baldwin Safety and Compliance](../saas-apps/baldwin-safety-&-compliance-tutorial.md), [Nulab Pass (Backlog,Cacoo,Typetalk)](../saas-apps/nulab-pass-tutorial.md), [Metatask](../saas-apps/metatask-tutorial.md), [Contrast Security](../saas-apps/contrast-security-tutorial.md), [Animaker](../saas-apps/animaker-tutorial.md), [Traction Guest](../saas-apps/traction-guest-tutorial.md), [True Office Learning - LIO](../saas-apps/true-office-learning-lio-tutorial.md), [Qiita Team](../saas-apps/qiita-team-tutorial.md)
+[Studybugs](https://studybugs.com/signin), [Yello](https://yello.co/yello-for-microsoft-teams/), [LawVu](../saas-apps/lawvu-tutorial.md), [Formate eVo Mail](https://www.document-genetics.co.uk/formate-evo-erp-output-management), [Revenue Grid](https://app.revenuegrid.com/login), [Orbit for Office 365](https://azuremarketplace.microsoft.com/marketplace/apps/aad.orbitforoffice365?tab=overview), [Upmarket](https://app.upmarket.ai/), [Alinto Protect](https://protect.alinto.net/), [Cloud Concinnity](https://cloudconcinnity.com/), [Matlantis](https://matlantis.com/), [ModelGen for Visio (MG4V)](https://crecy.com.au/model-gen/), [NetRef: Classroom Management](https://oauth.net-ref.com/microsoft/sso), [VergeSense](../saas-apps/vergesense-tutorial.md), [iAuditor](../saas-apps/iauditor-tutorial.md), [Secutraq](https://secutraq.net/login), [Active and Thriving](../saas-apps/active-and-thriving-tutorial.md), [Inova](https://login.microsoftonline.com/organizations/oauth2/v2.0/authorize?client_id=1bacdba3-7a3b-410b-8753-5cc0b8125f81&response_type=code&redirect_uri=https:%2f%2fbroker.partneringplace.com%2fpartner-companion%2f&code_challenge_method=S256&code_challenge=YZabcdefghijklmanopqrstuvwxyz0123456789._-~&scope=1bacdba3-7a3b-410b-8753-5cc0b8125f81/.default), [TerraTrue](../saas-apps/terratrue-tutorial.md), [Beyond Identity Admin Console](../saas-apps/beyond-identity-admin-console-tutorial.md), [Visult](https://visult.app), [ENGAGE TAG](https://app.engagetag.com/), [Appaegis Isolation Access Cloud](../saas-apps/appaegis-isolation-access-cloud-tutorial.md), [CrowdStrike Falcon Platform](../saas-apps/crowdstrike-falcon-platform-tutorial.md), [MY Emergency Control](https://my-emergency.co.uk/app/auth/login), [AlexisHR](../saas-apps/alexishr-tutorial.md), [Teachme Biz](../saas-apps/teachme-biz-tutorial.md), [Zero Networks](../saas-apps/zero-networks-tutorial.md), [Mavim iMprove](https://improve.mavimcloud.com/), [Azumuta](https://app.azumuta.com/login?microsoft=true), [Frankli](https://beta.frankli.io/login), [Amazon Managed Grafana](../saas-apps/amazon-managed-grafana-tutorial.md), [Productive](../saas-apps/productive-tutorial.md), [Create!Webフロー](../saas-apps/createweb-tutorial.md), [Evercate](https://evercate.com/), [Ezra Coaching](../saas-apps/ezra-coaching-tutorial.md), [Baldwin Safety and Compliance](../saas-apps/baldwin-safety-&-compliance-tutorial.md), [Nulab Pass (Backlog,Cacoo,Typetalk)](../saas-apps/nulab-pass-tutorial.md), [Metatask](../saas-apps/metatask-tutorial.md), [Contrast Security](../saas-apps/contrast-security-tutorial.md), [Animaker](../saas-apps/animaker-tutorial.md), [Traction Guest](../saas-apps/traction-guest-tutorial.md), [True Office Learning - LIO](../saas-apps/true-office-learning-lio-tutorial.md), [Qiita Team](../saas-apps/qiita-team-tutorial.md)
You can also find the documentation of all the applications here: https://aka.ms/AppsTutorial
active-directory E2open Cm Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/e2open-cm-tutorial.md
+
+ Title: 'Tutorial: Azure AD SSO integration with e2open CM-Global'
+description: Learn how to configure single sign-on between Azure Active Directory and e2open CM-Global.
+ Last updated : 10/12/2022
+# Tutorial: Azure AD SSO integration with e2open CM-Global
+
+In this tutorial, you'll learn how to integrate e2open CM-Global with Azure Active Directory (Azure AD). When you integrate e2open CM-Global with Azure AD, you can:
+
+* Control in Azure AD who has access to e2open CM-Global.
+* Enable your users to be automatically signed-in to e2open CM-Global with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
+
+## Prerequisites
+
+To get started, you need the following items:
+
+* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+* e2open CM-Global single sign-on (SSO) enabled subscription.
+* Along with Cloud Application Administrator, Application Administrator can also add or manage applications in Azure AD.
+For more information, see [Azure built-in roles](../roles/permissions-reference.md).
+
+## Scenario description
+
+In this tutorial, you configure and test Azure AD SSO in a test environment.
+
+* e2open CM-Global supports **SP** initiated SSO.
+
+## Add e2open CM-Global from the gallery
+
+To configure the integration of e2open CM-Global into Azure AD, you need to add e2open CM-Global from the gallery to your list of managed SaaS apps.
+
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
+1. On the left navigation pane, select the **Azure Active Directory** service.
+1. Navigate to **Enterprise Applications** and then select **All Applications**.
+1. To add a new application, select **New application**.
+1. In the **Add from the gallery** section, type **e2open CM-Global** in the search box.
+1. Select **e2open CM-Global** from the results panel and then add the app. Wait a few seconds while the app is added to your tenant.
+
+ Alternatively, you can also use the [Enterprise App Configuration Wizard](https://portal.office.com/AdminPortal/home?Q=Docs#/azureadappintegration). In this wizard, you can add an application to your tenant, add users/groups to the app, assign roles, and walk through the SSO configuration as well. [Learn more about Microsoft 365 wizards.](/microsoft-365/admin/misc/azure-ad-setup-guides)
+
+## Configure and test Azure AD SSO for e2open CM-Global
+
+Configure and test Azure AD SSO with e2open CM-Global using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in e2open CM-Global.
+
+To configure and test Azure AD SSO with e2open CM-Global, perform the following steps:
+
+1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
+ 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
+ 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
+1. **[Configure e2open CM-Global SSO](#configure-e2open-cm-global-sso)** - to configure the single sign-on settings on the application side.
+    1. **[Create e2open CM-Global test user](#create-e2open-cm-global-test-user)** - to have a counterpart of B.Simon in e2open CM-Global that is linked to the Azure AD representation of the user.
+1. **[Test SSO](#test-sso)** - to verify whether the configuration works.
+
+## Configure Azure AD SSO
+
+Follow these steps to enable Azure AD SSO in the Azure portal.
+
+1. In the Azure portal, on the **e2open CM-Global** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
+
+ ![Screenshot shows how to edit Basic SAML Configuration.](common/edit-urls.png "Basic Configuration")
+
+1. On the **Basic SAML Configuration** section, perform the following steps:
+
+ a. In the **Identifier** textbox, type a URL using the following pattern:
+ `http://pingone.com/<cmglobalCustomGUID>`
+
+ b. In the **Reply URL** textbox, type the URL:
+ `https://sso.connect.pingidentity.com/sso/sp/ACS.saml2`
+
+ c. In the **Sign-on URL** text box, type a URL using the following pattern:
+ `https://sso.connect.pingidentity.com/sso/sp/initsso?saasid=<saasid>&idpid=<idpid>`
+
+    > [!NOTE]
+    > These values are not real. Update these values with the actual Identifier and Sign-on URL. Contact [e2open CM-Global support team](mailto:customersupport@e2open.com) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
+
+1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Federation Metadata XML** and select **Download** to download the certificate and save it on your computer.
+
+ ![Screenshot shows the Certificate download link.](common/metadataxml.png "Certificate")
+
+1. On the **Set up e2open CM-Global** section, copy the appropriate URL(s) based on your requirement.
+
+ ![Screenshot shows how to copy configuration appropriate URL.](common/copy-configuration-urls.png "Attributes")
+
+### Create an Azure AD test user
+
+In this section, you'll create a test user in the Azure portal called B.Simon.
+
+1. From the left pane in the Azure portal, select **Azure Active Directory**, select **Users**, and then select **All users**.
+1. Select **New user** at the top of the screen.
+1. In the **User** properties, follow these steps:
+ 1. In the **Name** field, enter `B.Simon`.
+ 1. In the **User name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`.
+ 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box.
+ 1. Click **Create**.
+
+### Assign the Azure AD test user
+
+In this section, you'll enable B.Simon to use Azure single sign-on by granting access to e2open CM-Global.
+
+1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
+1. In the applications list, select **e2open CM-Global**.
+1. In the app's overview page, find the **Manage** section and select **Users and groups**.
+1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
+1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
+1. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you see "Default Access" role selected.
+1. In the **Add Assignment** dialog, click the **Assign** button.
+
+## Configure e2open CM-Global SSO
+
+To configure single sign-on on the **e2open CM-Global** side, you need to send the downloaded **Federation Metadata XML** and the appropriate copied URLs from the Azure portal to the [e2open CM-Global support team](mailto:customersupport@e2open.com). The support team uses this information to configure the SAML SSO connection properly on both sides.
+
+### Create e2open CM-Global test user
+
+In this section, you create a user called Britta Simon in e2open CM-Global. Work with [e2open CM-Global support team](mailto:customersupport@e2open.com) to add the users in the e2open CM-Global platform. Users must be created and activated before you use single sign-on.
+
+## Test SSO
+
+In this section, you test your Azure AD single sign-on configuration with the following options.
+
+* Click on **Test this application** in Azure portal. This will redirect to e2open CM-Global Sign-on URL where you can initiate the login flow.
+
+* Go to e2open CM-Global Sign-on URL directly and initiate the login flow from there.
+
+* You can use Microsoft My Apps. When you click the e2open CM-Global tile in the My Apps, this will redirect to e2open CM-Global Sign-on URL. For more information about the My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md).
+
+## Next steps
+
+Once you configure e2open CM-Global, you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
active-directory Fuse Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/fuse-tutorial.md
Title: 'Tutorial: Azure Active Directory integration with Fuse | Microsoft Docs'
+ Title: Azure Active Directory integration with Fuse
description: Learn how to configure single sign-on between Azure Active Directory and Fuse.
- Previously updated : 06/03/2021
+ Last updated : 10/19/2022
-# Tutorial: Azure Active Directory integration with Fuse
+# Azure Active Directory integration with Fuse
-In this tutorial, you'll learn how to integrate Fuse with Azure Active Directory (Azure AD). When you integrate Fuse with Azure AD, you can:
+In this article, you'll learn how to integrate Fuse with Azure Active Directory (Azure AD). Fuse is a learning platform that enables learners within an organization to access the necessary knowledge and expertise they need to improve their skills at work. When you integrate Fuse with Azure AD, you can:
-* Control in Azure AD who has access to Fuse.
-* Enable your users to be automatically signed-in to Fuse with their Azure AD accounts.
-* Manage your accounts in one central location - the Azure portal.
+- Control in Azure AD who has access to Fuse.
+- Enable your users to be automatically signed-in to Fuse with their Azure AD accounts.
+- Manage your accounts in one central location - the Azure portal.
-## Prerequisites
-
-To get started, you need the following items:
+You'll configure and test Azure AD single sign-on for Fuse in a test environment. Fuse supports **SP** initiated single sign-on.
-* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
-* Fuse single sign-on (SSO) enabled subscription.
-
-## Scenario description
+> [!NOTE]
+> Identifier of this application is a fixed string value so only one instance can be configured in one tenant.
-In this tutorial, you configure and test Azure AD single sign-on in a test environment.
-* Fuse supports **SP** initiated SSO.
+## Prerequisites
-> [!NOTE]
-> Identifier of this application is a fixed string value so only one instance can be configured in one tenant.
+To integrate Azure Active Directory with Fuse, you need:
-## Add Fuse from the gallery
+- An Azure AD user account. If you don't already have one, you can [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+- One of the following roles: Global Administrator, Cloud Application Administrator, Application Administrator, or owner of the service principal.
+- An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+- Fuse single sign-on (SSO) enabled subscription.
-To configure the integration of Fuse into Azure AD, you need to add Fuse from the gallery to your list of managed SaaS apps.
+## Add application and assign a test user
-1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
-1. On the left navigation pane, select the **Azure Active Directory** service.
-1. Navigate to **Enterprise Applications** and then select **All Applications**.
-1. To add new application, select **New application**.
-1. In the **Add from the gallery** section, type **Fuse** in the search box.
-1. Select **Fuse** from results panel and then add the app. Wait a few seconds while the app is added to your tenant.
+Before you begin the process of configuring single sign-on, you need to add the Fuse application from the Azure AD gallery. You need a test user account to assign to the application and test the single sign-on configuration.
- Alternatively, you can also use the [Enterprise App Configuration Wizard](https://portal.office.com/AdminPortal/home?Q=Docs#/azureadappintegration). In this wizard, you can add an application to your tenant, add users/groups to the app, assign roles, as well as walk through the SSO configuration as well. [Learn more about Microsoft 365 wizards.](/microsoft-365/admin/misc/azure-ad-setup-guides)
+### Add Fuse from the Azure AD gallery
-## Configure and test Azure AD SSO for Fuse
+Add Fuse from the Azure AD application gallery to configure single sign-on with Fuse. For more information on how to add application from the gallery, see the [Quickstart: Add application from the gallery](../manage-apps/add-application-portal.md).
-Configure and test Azure AD SSO with Fuse using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in Fuse.
+### Create and assign Azure AD test user
-To configure and test Azure AD SSO with Fuse, perform the following steps:
+Follow the guidelines in the [create and assign a user account](../manage-apps/add-application-portal-assign-users.md) article to create a test user account in the Azure portal called B.Simon.
-1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
- 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
- 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
-1. **[Configure Fuse SSO](#configure-fuse-sso)** - to configure the single sign-on settings on application side.
- 1. **[Create Fuse test user](#create-fuse-test-user)** - to have a counterpart of B.Simon in Fuse that is linked to the Azure AD representation of user.
-1. **[Test SSO](#test-sso)** - to verify whether the configuration works.
+Alternatively, you can use the [Enterprise App Configuration Wizard](https://portal.office.com/AdminPortal/home?Q=Docs#/azureadappintegration). In this wizard, you can add an application to your tenant, add users/groups to the app, and assign roles. The wizard also provides a link to the single sign-on configuration pane in the Azure portal. [Learn more about Microsoft 365 wizards.](/microsoft-365/admin/misc/azure-ad-setup-guides)
-## Configure Azure AD SSO
+## Configure Azure AD single sign-on
-Follow these steps to enable Azure AD SSO in the Azure portal.
+Complete the following steps to enable Azure AD single sign-on in the Azure portal.
1. In the Azure portal, on the **Fuse** application integration page, find the **Manage** section and select **single sign-on**. 1. On the **Select a single sign-on method** page, select **SAML**.
-1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
+1. On the **Set up single sign-on with SAML** page, select the pencil icon for **Basic SAML Configuration** to edit the settings.
![Edit Basic SAML Configuration](common/edit-urls.png)
-4. On the **Basic SAML Configuration** section, perform the following step:
-
- In the **Sign-on URL** text box, type a URL using the following pattern:
+1. On the **Basic SAML Configuration** section, in the **Sign-on URL** text box, type the appropriate URL using the following pattern:
`https://{tenantname}.fuseuniversal.com/` > [!NOTE] > The value is not real. Update the value with the actual Sign-On URL. Contact [Fuse Client support team](mailto:support@fusion-universal.com) to get the value. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
-5. On the **Set up Single Sign-On with SAML** page, in the **SAML Signing Certificate** section, click **Download** to download the **Certificate (Base64)** from the given options as per your requirement and save it on your computer.
+1. On the **Set up Single Sign-On with SAML** page, in the **SAML Signing Certificate** section, select **Download** to download the **Certificate (Base64)** from the given options as per your requirement and save it on your computer.
![The Certificate download link](common/certificatebase64.png)
-6. On the **Set up Fuse** section, copy the appropriate URL(s) as per your requirement.
+1. On the **Set up Fuse** section, copy the appropriate URL(s) as per your requirement.
![Copy configuration URLs](common/copy-configuration-urls.png)
-### Create an Azure AD test user
-
-In this section, you'll create a test user in the Azure portal called B.Simon.
-
-1. From the left pane in the Azure portal, select **Azure Active Directory**, select **Users**, and then select **All users**.
-1. Select **New user** at the top of the screen.
-1. In the **User** properties, follow these steps:
- 1. In the **Name** field, enter `B.Simon`.
- 1. In the **User name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`.
- 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box.
- 1. Click **Create**.
-
-### Assign the Azure AD test user
-
-In this section, you'll enable B.Simon to use Azure single sign-on by granting access to Fuse.
-
-1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
-1. In the applications list, select **Fuse**.
-1. In the app's overview page, find the **Manage** section and select **Users and groups**.
-1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
-1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
-1. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you see "Default Access" role selected.
-1. In the **Add Assignment** dialog, click the **Assign** button.
-
-## Configure Fuse SSO
+## Configure Fuse single sign-on
-To configure single sign-on on **Fuse** side, you need to send the downloaded **Certificate (Base64)** and appropriate copied URLs from Azure portal to [Fuse support team](mailto:support@fusion-universal.com). They set this setting to have the SAML SSO connection set properly on both sides.
+To configure single sign-on on the **Fuse** side, send the downloaded **Certificate (Base64)** and the copied URLs from the Azure portal to the [Fuse support team](mailto:support@fusion-universal.com). The support team uses the copied URLs to configure single sign-on on the application.
### Create Fuse test user
-In this section, you create a user called Britta Simon in Fuse. Work with [Fuse support team](mailto:support@fusion-universal.com) to add the users in the Fuse platform. Users must be created and activated before you use single sign-on.
+To be able to test and use single sign-on, you have to create and activate users in the Fuse application.
-## Test SSO
+In this section, you create a user called Britta Simon in Fuse that corresponds to the Azure AD user you already created in the previous section. Work with the [Fuse support team](mailto:support@fusion-universal.com) to add the user in the Fuse platform.
-In this section, you test your Azure AD single sign-on configuration with following options.
+## Test single sign-on
-* Click on **Test this application** in Azure portal. This will redirect to Fuse Sign-on URL where you can initiate the login flow.
+In this section, you test your Azure AD single sign-on configuration with the following options.
-* Go to Fuse Sign-on URL directly and initiate the login flow from there.
+- In the **Test single sign-on with Fuse** section on the **SAML-based Sign-on** pane, select **Test this application** in Azure portal. You'll be redirected to Fuse Sign-on URL where you can initiate the sign-in flow.
+- Go to the Fuse Sign-on URL directly and initiate the sign-in flow from the application's side.
+- You can use Microsoft My Apps. When you select the Fuse tile in the My Apps, you'll be redirected to Fuse Sign-on URL. For more information about the My Apps, see [Introduction to the My Apps](https://support.microsoft.com/account-billing/sign-in-and-start-apps-from-the-my-apps-portal-2f3b1bae-0e5a-4a86-a33e-876fbd2a4510).
-* You can use Microsoft My Apps. When you click the Fuse tile in the My Apps, this will redirect to Fuse Sign-on URL. For more information about the My Apps, see [Introduction to the My Apps](https://support.microsoft.com/account-billing/sign-in-and-start-apps-from-the-my-apps-portal-2f3b1bae-0e5a-4a86-a33e-876fbd2a4510).
+## Additional resources
+- [What is single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)
+- [Plan a single sign-on deployment](../manage-apps/plan-sso-deployment.md)
## Next steps

Once you configure Fuse, you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Defender for Cloud Apps](/cloud-app-security/proxy-deployment-aad).
active-directory Optimizely Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/optimizely-tutorial.md
Title: 'Tutorial: Azure Active Directory integration with Optimizely | Microsoft Docs'
+ Title: 'Tutorial: Azure AD SSO integration with Optimizely'
description: Learn how to configure single sign-on between Azure Active Directory and Optimizely.
Previously updated : 05/24/2021
Last updated : 10/19/2022
-# Tutorial: Azure Active Directory integration with Optimizely
+# Tutorial: Azure AD SSO integration with Optimizely
In this tutorial, you'll learn how to integrate Optimizely with Azure Active Directory (Azure AD). When you integrate Optimizely with Azure AD, you can:
Follow these steps to enable Azure AD SSO in the Azure portal.
`urn:auth0:optimizely:contoso` > [!NOTE]
- > These values are not the real. You will update the value with the actual Sign-on URL and Identifier, which is explained later in the tutorial. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
+    > These values are not real. You will update these values with the actual Sign-on URL and Identifier, which are explained later in the tutorial. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
5. Your Optimizely application expects the SAML assertions in a specific format, which requires you to add custom attribute mappings to your SAML token attributes configuration. The following screenshot shows the list of default attributes. Click **Edit** icon to open **User Attributes** dialog.
In this section, you'll enable B.Simon to use Azure single sign-on by granting a
## Configure Optimizely SSO
-1. To configure single sign-on on **Optimizely** side, contact your Optimizely Account Manager and provide the downloaded **Certificate (Base64)** and appropriate copied URLs.
-
-2. In response to your email, Optimizely provides you with the Sign On URL (SP-initiated SSO) and the Identifier (Service Provider Entity ID) values.
-
- a. Copy the **SP-initiated SSO URL** provided by Optimizely, and paste into the **Sign On URL** textbox in **Basic SAML Configuration** section on Azure portal.
-
- b. Copy the **Service Provider Entity ID** provided by Optimizely, and paste into the **Identifier** textbox in **Basic SAML Configuration** section on Azure portal.
-
-3. In a different browser window, sign-on to your Optimizely application.
-
-4. Click you account name in the top right corner and then **Account Settings**.
-
- ![Screenshot that shows the account name selected in the top-right corner, with "Account Settings" selected from the menu.](./media/optimizely-tutorial/settings.png)
-
-5. In the Account tab, check the box **Enable SSO** under Single Sign On in the **Overview** section.
-
- ![Azure AD Single Sign-On](./media/optimizely-tutorial/account.png)
-
-6. Click **Save**.
+To configure single sign-on on the Optimizely side, contact your Optimizely Customer Success Manager or [file an online ticket for Optimizely Experimentation Support](https://support.optimizely.com/hc/articles/4410284179469-File-online-tickets-for-support) directly.
### Create Optimizely test user
-In this section, you create a user called Britta Simon in Optimizely.
-
-1. On the home page, select **Collaborators** tab.
-
-2. To add new collaborator to the project, click **New Collaborator**.
-
- ![Screenshot that shows the Optimizely home page with the "Collaborators" tab and "New Collaborator" button selected.](./media/optimizely-tutorial/collaborator.png)
-
-3. Fill in the email address and assign them a role. Click **Invite**.
-
- ![Creating an Azure AD test user](./media/optimizely-tutorial/invite-collaborator.png)
-
-4. They receive an email invite. Using the email address, they have to log in to Optimizely.
+Contact your Optimizely Customer Success Manager or [file an online ticket for Optimizely Experimentation Support](https://support.optimizely.com/hc/articles/4410284179469-File-online-tickets-for-support) directly to add the users in the Optimizely platform. Users must be created and activated before you use single sign-on.
## Test SSO
active-directory Servicenow Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/servicenow-provisioning-tutorial.md
Title: 'Tutorial: Configure ServiceNow for automatic user provisioning with Azure Active Directory | Microsoft Docs'
+ Title: Configure ServiceNow for automatic user provisioning with Azure Active Directory
description: Learn how to automatically provision and deprovision user accounts from Azure AD to ServiceNow.
- Previously updated : 05/10/2021
+ Last updated : 10/19/2022
-# Tutorial: Configure ServiceNow for automatic user provisioning
+# Configure ServiceNow for automatic user provisioning
-This tutorial describes the steps that you perform in both ServiceNow and Azure Active Directory (Azure AD) to configure automatic user provisioning. When Azure AD is configured, it automatically provisions and deprovisions users and groups to [ServiceNow](https://www.servicenow.com/) by using the Azure AD provisioning service.
-
-For important details on what this service does, how it works, and frequently asked questions, see [Automate user provisioning and deprovisioning to SaaS applications with Azure Active Directory](../app-provisioning/user-provisioning.md).
+This article describes the steps that you'll take in both ServiceNow and Azure Active Directory (Azure AD) to configure automatic user provisioning. When Azure AD is configured, it automatically provisions and deprovisions users and groups to [ServiceNow](https://www.servicenow.com/) by using the Azure AD provisioning service.
+For more information on the Azure AD automatic user provisioning service, see [Automate user provisioning and deprovisioning to SaaS applications with Azure Active Directory](../app-provisioning/user-provisioning.md).
## Capabilities supported
+
> [!div class="checklist"]
-> * Create users in ServiceNow
-> * Remove users in ServiceNow when they don't need access anymore
-> * Keep user attributes synchronized between Azure AD and ServiceNow
-> * Provision groups and group memberships in ServiceNow
-> * Allow [single sign-on](servicenow-tutorial.md) to ServiceNow (recommended)
+> - Create users in ServiceNow
+> - Remove users in ServiceNow when they don't need access anymore
+> - Keep user attributes synchronized between Azure AD and ServiceNow
+> - Provision groups and group memberships in ServiceNow
+> - Allow [single sign-on](servicenow-tutorial.md) to ServiceNow (recommended)
## Prerequisites
-The scenario outlined in this tutorial assumes that you already have the following prerequisites:
-
-* [An Azure AD tenant](../develop/quickstart-create-new-tenant.md)
-* A user account in Azure AD with [permission](../roles/permissions-reference.md) to configure provisioning (Application Administrator, Cloud Application Administrator, Application Owner, or Global Administrator)
-* A [ServiceNow instance](https://www.servicenow.com/) of Calgary or higher
-* A [ServiceNow Express instance](https://www.servicenow.com/) of Helsinki or higher
-* A user account in ServiceNow with the admin role
+- An Azure AD user account with an active subscription. If you don't already have one, you can [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+- One of the following roles: Global Administrator, Cloud Application Administrator, Application Administrator, or owner of the service principal.
+- A [ServiceNow instance](https://www.servicenow.com/) of Calgary or higher
+- A [ServiceNow Express instance](https://www.servicenow.com/) of Helsinki or higher
+- A user account in ServiceNow with the admin role
## Step 1: Plan your provisioning deployment
-1. Learn about [how the provisioning service works](../app-provisioning/user-provisioning.md).
-2. Determine who will be in [scope for provisioning](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-3. Determine what data to [map between Azure AD and ServiceNow](../app-provisioning/customize-application-attributes.md).
+
+- Learn about [how the provisioning service works](../app-provisioning/user-provisioning.md).
+- Determine who will be in [scope for provisioning](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+- Determine what data to [map between Azure AD and ServiceNow](../app-provisioning/customize-application-attributes.md).
## Step 2: Configure ServiceNow to support provisioning with Azure AD
The scenario outlined in this tutorial assumes that you already have the followi
![Screenshot that shows a ServiceNow instance.](media/servicenow-provisioning-tutorial/servicenow-instance.png)
-2. Obtain credentials for an admin in ServiceNow. Go to the user profile in ServiceNow and verify that the user has the admin role.
+1. Obtain credentials for an admin in ServiceNow. Go to the user profile in ServiceNow and verify that the user has the admin role.
![Screenshot that shows a ServiceNow admin role.](media/servicenow-provisioning-tutorial/servicenow-admin-role.png)
Add ServiceNow from the Azure AD application gallery to start managing provision
The Azure AD provisioning service allows you to scope who will be provisioned based on assignment to the application, or based on attributes of the user or group. If you choose to scope who will be provisioned to your app based on assignment, you can use the [steps to assign users and groups to the application](../manage-apps/assign-user-or-group-access-portal.md). If you choose to scope who will be provisioned based solely on attributes of the user or group, you can [use a scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-Keep these tips in mind:
+Keep the following tips in mind:
-* When you're assigning users and groups to ServiceNow, you must select a role other than Default Access. Users with the Default Access role are excluded from provisioning and will be marked as not effectively entitled in the provisioning logs. If the only role available on the application is the Default Access role, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add more roles.
+- When you're assigning users and groups to ServiceNow, you must select a role other than Default Access. Users with the Default Access role are excluded from provisioning and will be marked as not effectively entitled in the provisioning logs. If the only role available on the application is the Default Access role, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add more roles.
-* If you need additional roles, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add new roles.
+- If you need additional roles, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add new roles.
## Step 5: Configure automatic user provisioning to ServiceNow
To configure automatic user provisioning for ServiceNow in Azure AD:
1. Sign in to the [Azure portal](https://portal.azure.com). Select **Enterprise applications** > **All applications**.
- ![Screenshot that shows the Enterprise applications pane.](common/enterprise-applications.png)
-
-2. In the list of applications, select **ServiceNow**.
-
- ![Screenshot that shows a list of applications.](common/all-applications.png)
+ ![Screenshot that shows the Enterprise applications pane.](common/enterprise-applications.png)
-3. Select the **Provisioning** tab.
+1. In the list of applications, select **ServiceNow**.
- ![Screenshot of the Manage options with the Provisioning option called out.](common/provisioning.png)
+1. Select the **Provisioning** tab.
-4. Set **Provisioning Mode** to **Automatic**.
+1. Set **Provisioning Mode** to **Automatic**.
- ![Screenshot of the Provisioning Mode drop-down list with the Automatic option called out.](common/provisioning-automatic.png)
-
-5. In the **Admin Credentials** section, enter your ServiceNow admin credentials and username. Select **Test Connection** to ensure that Azure AD can connect to ServiceNow. If the connection fails, ensure that your ServiceNow account has admin permissions and try again.
+1. In the **Admin Credentials** section, enter your ServiceNow admin credentials and username. Select **Test Connection** to ensure that Azure AD can connect to ServiceNow. If the connection fails, ensure that your ServiceNow account has admin permissions and try again.
![Screenshot that shows the Service Provisioning page, where you can enter admin credentials.](./media/servicenow-provisioning-tutorial/servicenow-provisioning.png)
-6. In the **Notification Email** box, enter the email address of a person or group that should receive the provisioning error notifications. Then select the **Send an email notification when a failure occurs** check box.
-
- ![Screenshot that shows boxes for notification email.](common/provisioning-notification-email.png)
+1. In the **Notification Email** box, enter the email address of a person or group that should receive the provisioning error notifications. Then select the **Send an email notification when a failure occurs** check box.
-7. Select **Save**.
+1. Select **Save**.
-8. In the **Mappings** section, select **Synchronize Azure Active Directory Users to ServiceNow**.
+1. In the **Mappings** section, select **Synchronize Azure Active Directory Users to ServiceNow**.
-9. Review the user attributes that are synchronized from Azure AD to ServiceNow in the **Attribute-Mapping** section. The attributes selected as **Matching** properties are used to match the user accounts in ServiceNow for update operations.
+1. Review the user attributes that are synchronized from Azure AD to ServiceNow in the **Attribute-Mapping** section. The attributes selected as **Matching** properties are used to match the user accounts in ServiceNow for update operations.
If you choose to change the [matching target attribute](../app-provisioning/customize-application-attributes.md), you'll need to ensure that the ServiceNow API supports filtering users based on that attribute. Select the **Save** button to commit any changes.
-10. In the **Mappings** section, select **Synchronize Azure Active Directory Groups to ServiceNow**.
-
-11. Review the group attributes that are synchronized from Azure AD to ServiceNow in the **Attribute-Mapping** section. The attributes selected as **Matching** properties are used to match the groups in ServiceNow for update operations. Select the **Save** button to commit any changes.
+1. In the **Mappings** section, select **Synchronize Azure Active Directory Groups to ServiceNow**.
-12. To configure scoping filters, see the instructions in the [Scoping filter tutorial](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+1. Review the group attributes that are synchronized from Azure AD to ServiceNow in the **Attribute-Mapping** section. The attributes selected as **Matching** properties are used to match the groups in ServiceNow for update operations. Select the **Save** button to commit any changes.
-13. To enable the Azure AD provisioning service for ServiceNow, change **Provisioning Status** to **On** in the **Settings** section.
+1. To configure scoping filters, see the instructions in the [Scoping filter tutorial](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
- ![Screenshot that shows Provisioning Status switched on.](common/provisioning-toggle-on.png)
+1. To enable the Azure AD provisioning service for ServiceNow, change **Provisioning Status** to **On** in the **Settings** section.
-14. Define the users and groups that you want to provision to ServiceNow by choosing the desired values in **Scope** in the **Settings** section.
+1. Define the users and groups that you want to provision to ServiceNow by choosing the desired values in **Scope** in the **Settings** section.
![Screenshot that shows choices for provisioning scope.](common/provisioning-scope.png)
-15. When you're ready to provision, select **Save**.
-
- ![Screenshot of the button for saving a provisioning configuration.](common/provisioning-configuration-save.png)
+1. When you're ready to provision, select **Save**.
This operation starts the initial synchronization cycle of all users and groups defined in **Scope** in the **Settings** section. The initial cycle takes longer to perform than subsequent cycles. Subsequent cycles occur about every 40 minutes, as long as the Azure AD provisioning service is running.

## Step 6: Monitor your deployment
+
After you've configured provisioning, use the following resources to monitor your deployment:

- Use the [provisioning logs](../reports-monitoring/concept-provisioning-logs.md) to determine which users have been provisioned successfully or unsuccessfully.
After you've configured provisioning, use the following resources to monitor you
- If the provisioning configuration seems to be in an unhealthy state, the application will go into quarantine. [Learn more about quarantine states](../app-provisioning/application-provisioning-quarantine-status.md).

## Troubleshooting tips
-* When you're provisioning certain attributes (such as **Department** and **Location**) in ServiceNow, the values must already exist in a reference table in ServiceNow. If they don't, you'll get an **InvalidLookupReference** error.
+
+- When you're provisioning certain attributes (such as **Department** and **Location**) in ServiceNow, the values must already exist in a reference table in ServiceNow. If they don't, you'll get an **InvalidLookupReference** error.
For example, you might have two locations (Seattle, Los Angeles) and three departments (Sales, Finance, Marketing) in a certain table in ServiceNow. If you try to provision a user whose department is "Sales" and whose location is "Seattle," that user will be provisioned successfully. If you try to provision a user whose department is "Sales" and whose location is "LA," the user won't be provisioned. The location "LA" must be added to the reference table in ServiceNow, or the user attribute in Azure AD must be updated to match the format in ServiceNow.
-* If you get an **EntryJoiningPropertyValueIsMissing** error, review your [attribute mappings](../app-provisioning/customize-application-attributes.md) to identify the matching attribute. This value must be present on the user or group you're trying to provision.
-* To understand any requirements or limitations (for example, the format to specify a country code for a user), review the [ServiceNow SOAP API](https://docs.servicenow.com/bundle/rome-application-development/page/integrate/web-services-apis/reference/r_DirectWebServiceAPIFunctions.html).
-* Provisioning requests are sent by default to https://{your-instance-name}.service-now.com/{table-name}. If you need a custom tenant URL, you can provide the entire URL as the instance name.
-* The **ServiceNowInstanceInvalid** error indicates a problem communicating with the ServiceNow instance. Here's the text of the error:
+- If you get an **EntryJoiningPropertyValueIsMissing** error, review your [attribute mappings](../app-provisioning/customize-application-attributes.md) to identify the matching attribute. This value must be present on the user or group you're trying to provision.
+- To understand any requirements or limitations (for example, the format to specify a country code for a user), review the [ServiceNow SOAP API](https://docs.servicenow.com/bundle/rome-application-development/page/integrate/web-services-apis/reference/r_DirectWebServiceAPIFunctions.html).
+- Provisioning requests are sent by default to https://{your-instance-name}.service-now.com/{table-name}. If you need a custom tenant URL, you can provide the entire URL as the instance name.
+- The **ServiceNowInstanceInvalid** error indicates a problem communicating with the ServiceNow instance. Here's the text of the error:
`Details: Your ServiceNow instance name appears to be invalid. Please provide a current ServiceNow administrative user name and password along with the name of a valid ServiceNow instance.`
After you've configured provisioning, use the following resources to monitor you
![Screenshot that shows the option for authorizing SOAP requests.](media/servicenow-provisioning-tutorial/servicenow-webservice.png)
- If you still can't resolve your problem, contact ServiceNow support and ask them to turn on SOAP debugging to help troubleshoot.
+    If you still can't resolve your problem, contact ServiceNow support and ask them to turn on SOAP debugging to help troubleshoot. A quick way to confirm your instance name and admin credentials first is sketched after this list.
-* The Azure AD provisioning service currently operates under particular [IP ranges](../app-provisioning/use-scim-to-provision-users-and-groups.md#ip-ranges). If necessary, you can restrict other IP ranges and add these particular IP ranges to the allow list of your application. That technique will allow traffic flow from the Azure AD provisioning service to your application.
+- The Azure AD provisioning service currently operates under particular [IP ranges](../app-provisioning/use-scim-to-provision-users-and-groups.md#ip-ranges). If necessary, you can restrict other IP ranges and add these particular IP ranges to the allowlist of your application. That technique will allow traffic flow from the Azure AD provisioning service to your application.
-* Self-hosted ServiceNow instances are not supported.
+- Self-hosted ServiceNow instances aren't supported.
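
If the errors above point at connectivity or credentials (for example, **ServiceNowInstanceInvalid**), it can help to confirm the instance name and admin account independently of the provisioning service. The following Python sketch is illustrative only: it calls the ServiceNow REST Table API with basic authentication, which is a different interface from the SOAP web service that the provisioning service uses, and the instance name and credentials are placeholders.

```python
import requests

# Placeholders: your ServiceNow instance name and an account that has the admin role.
instance = "your-instance-name"
username = "admin-user"
password = "admin-password"

# Query one record from the sys_user table. A 200 response confirms the instance name
# resolves and the credentials are valid; 401/403 points at a credential or role problem.
response = requests.get(
    f"https://{instance}.service-now.com/api/now/table/sys_user",
    auth=(username, password),
    params={"sysparm_limit": 1},
    headers={"Accept": "application/json"},
    timeout=30,
)

print(response.status_code)
print(response.json() if response.ok else response.text)
```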
## Additional resources
-* [Managing user account provisioning for enterprise apps](../app-provisioning/configure-automatic-user-provisioning-portal.md)
-* [What are application access and single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)
+- [Managing user account provisioning for enterprise apps](../app-provisioning/configure-automatic-user-provisioning-portal.md)
+- [What is single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)
## Next steps
-* [Learn how to review logs and get reports on provisioning activity](../app-provisioning/check-status-user-account-provisioning.md)
+- [Learn how to review logs and get reports on provisioning activity](../app-provisioning/check-status-user-account-provisioning.md)
active-directory Snowflake Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/snowflake-provisioning-tutorial.md
The scenario outlined in this tutorial assumes that you already have the followi
* [An Azure AD tenant](../develop/quickstart-create-new-tenant.md) * A user account in Azure AD with [permission](../roles/permissions-reference.md) to configure provisioning (Application Administrator, Cloud Application Administrator, Application Owner, or Global Administrator) * [A Snowflake tenant](https://www.Snowflake.com/pricing/)
-* A user account in Snowflake with admin permissions
+* At least one user in Snowflake with the **ACCOUNTADMIN** role.
## Step 1: Plan your provisioning deployment
Before you configure Snowflake for automatic user provisioning with Azure AD, yo
select system$generate_scim_access_token('AAD_PROVISIONING'); ```
-2. Use the ACCOUNTADMIN role.
+1. Use the ACCOUNTADMIN role.
![Screenshot of a worksheet in the Snowflake UI with the SCIM access token called out.](media/Snowflake-provisioning-tutorial/step-2.png)
-3. Create the custom role AAD_PROVISIONER. All users and roles in Snowflake created by Azure AD will be owned by the scoped down AAD_PROVISIONER role.
+1. Create the custom role AAD_PROVISIONER. All users and roles in Snowflake created by Azure AD will be owned by the scoped down AAD_PROVISIONER role.
![Screenshot showing the custom role.](media/Snowflake-provisioning-tutorial/step-3.png)
-4. Let the ACCOUNTADMIN role create the security integration using the AAD_PROVISIONER custom role.
+1. Let the ACCOUNTADMIN role create the security integration using the AAD_PROVISIONER custom role.
![Screenshot showing the security integrations.](media/Snowflake-provisioning-tutorial/step-4.png)
-5. Create and copy the authorization token to the clipboard and store securely for later use. Use this token for each SCIM REST API request and place it in the request header. The access token expires after six months and a new access token can be generated with this statement.
+1. Create and copy the authorization token to the clipboard and store securely for later use. Use this token for each SCIM REST API request and place it in the request header. The access token expires after six months and a new access token can be generated with this statement.
![Screenshot showing the token generation.](media/Snowflake-provisioning-tutorial/step-5.png)
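
If you'd rather script the preceding steps than run them in a Snowflake worksheet, here's a minimal sketch that uses the `snowflake-connector-python` package. The role name `AAD_PROVISIONER` and the integration name `AAD_PROVISIONING` come from this walkthrough; the specific grants and the connection details are assumptions, so align them with your own security policies before running anything.

```python
import snowflake.connector

# Placeholders: replace with your account locator and an ACCOUNTADMIN user.
conn = snowflake.connector.connect(
    account="your_account_locator",
    user="your_admin_user",
    password="your_password",
    role="ACCOUNTADMIN",
)

# Statements mirroring the walkthrough: create the scoped-down AAD_PROVISIONER role,
# the Azure SCIM security integration, and a SCIM access token.
# The grants below are assumptions for a typical setup; adjust them to your policies.
statements = [
    "use role accountadmin",
    "create role if not exists aad_provisioner",
    "grant create user on account to role aad_provisioner",
    "grant create role on account to role aad_provisioner",
    "grant role aad_provisioner to role accountadmin",
    (
        "create or replace security integration aad_provisioning "
        "type = scim scim_client = 'azure' run_as_role = 'AAD_PROVISIONER'"
    ),
    "select system$generate_scim_access_token('AAD_PROVISIONING')",
]

cur = conn.cursor()
try:
    for statement in statements:
        cur.execute(statement)
    token = cur.fetchone()[0]  # the final SELECT returns the SCIM access token
    print("SCIM access token (store it securely):", token)
finally:
    cur.close()
    conn.close()
```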
To configure automatic user provisioning for Snowflake in Azure AD:
1. Sign in to the [Azure portal](https://portal.azure.com). Select **Enterprise applications** > **All applications**.
- ![Screenshot that shows the Enterprise applications pane.](common/enterprise-applications.png)
+ ![Screenshot that shows the Enterprise applications pane.](common/enterprise-applications.png)
-2. In the list of applications, select **Snowflake**.
+1. In the list of applications, select **Snowflake**.
- ![Screenshot that shows a list of applications.](common/all-applications.png)
+ ![Screenshot that shows a list of applications.](common/all-applications.png)
-3. Select the **Provisioning** tab.
+1. Select the **Provisioning** tab.
- ![Screenshot of the Manage options with the Provisioning option called out.](common/provisioning.png)
+ ![Screenshot of the Manage options with the Provisioning option called out.](common/provisioning.png)
-4. Set **Provisioning Mode** to **Automatic**.
+1. Set **Provisioning Mode** to **Automatic**.
- ![Screenshot of the Provisioning Mode drop-down list with the Automatic option called out.](common/provisioning-automatic.png)
+ ![Screenshot of the Provisioning Mode drop-down list with the Automatic option called out.](common/provisioning-automatic.png)
-5. In the **Admin Credentials** section, enter the SCIM 2.0 base URL and authentication token that you retrieved earlier in the **Tenant URL** and **Secret Token** boxes, respectively.
+1. In the **Admin Credentials** section, enter the SCIM 2.0 base URL and authentication token that you retrieved earlier in the **Tenant URL** and **Secret Token** boxes, respectively.
+ >[!NOTE]
+ >The Snowflake SCIM endpoint consists of the Snowflake account URL appended with `/scim/v2/`. For example, if your Snowflake account name is `acme` and your Snowflake account is in the `east-us-2` Azure region, the **Tenant URL** value is `https://acme.east-us-2.azure.snowflakecomputing.com/scim/v2`.
   Select **Test Connection** to ensure that Azure AD can connect to Snowflake. If the connection fails, ensure that your Snowflake account has admin permissions and try again. If you want to check the SCIM endpoint and token outside the portal first, see the sketch at the end of this procedure.
- ![Screenshot that shows boxes for tenant URL and secret token, along with the Test Connection button.](common/provisioning-testconnection-tenanturltoken.png)
+ ![Screenshot that shows boxes for tenant URL and secret token, along with the Test Connection button.](common/provisioning-testconnection-tenanturltoken.png)
-6. In the **Notification Email** box, enter the email address of a person or group who should receive the provisioning error notifications. Then select the **Send an email notification when a failure occurs** check box.
+1. In the **Notification Email** box, enter the email address of a person or group who should receive the provisioning error notifications. Then select the **Send an email notification when a failure occurs** check box.
- ![Screenshot that shows boxes for notification email.](common/provisioning-notification-email.png)
+ ![Screenshot that shows boxes for notification email.](common/provisioning-notification-email.png)
-7. Select **Save**.
+1. Select **Save**.
-8. In the **Mappings** section, select **Synchronize Azure Active Directory Users to Snowflake**.
+1. In the **Mappings** section, select **Synchronize Azure Active Directory Users to Snowflake**.
-9. Review the user attributes that are synchronized from Azure AD to Snowflake in the **Attribute Mapping** section. The attributes selected as **Matching** properties are used to match the user accounts in Snowflake for update operations. Select the **Save** button to commit any changes.
+1. Review the user attributes that are synchronized from Azure AD to Snowflake in the **Attribute Mapping** section. The attributes selected as **Matching** properties are used to match the user accounts in Snowflake for update operations. Select the **Save** button to commit any changes.
|Attribute|Type|
|---|---|
To configure automatic user provisioning for Snowflake in Azure AD:
|userName|String|
|name.givenName|String|
|name.familyName|String|
- |urn:ietf:params:scim:schemas:extension:enterprise:2.0:User:defaultRole|String|
- |urn:ietf:params:scim:schemas:extension:enterprise:2.0:User:defaultWarehouse|String|
+ |externalId|String|
-10. In the **Mappings** section, select **Synchronize Azure Active Directory Groups to Snowflake**.
+ >[!NOTE]
+    >Snowflake supports these custom extension user attributes during SCIM provisioning:
+ >* DEFAULT_ROLE
+ >* DEFAULT_WAREHOUSE
+ >* DEFAULT_SECONDARY_ROLES
+    >* Setting the Snowflake NAME and LOGIN_NAME fields to different values
-11. Review the group attributes that are synchronized from Azure AD to Snowflake in the **Attribute Mapping** section. The attributes selected as **Matching** properties are used to match the groups in Snowflake for update operations. Select the **Save** button to commit any changes.
+    > For the steps to set up Snowflake custom extension attributes in Azure AD SCIM user provisioning, see [this Snowflake article](https://community.snowflake.com/s/article/HowTo-How-to-Set-up-Snowflake-Custom-Attributes-in-Azure-AD-SCIM-for-Default-Roles-and-Default-Warehouses).
+
+1. In the **Mappings** section, select **Synchronize Azure Active Directory Groups to Snowflake**.
+
+1. Review the group attributes that are synchronized from Azure AD to Snowflake in the **Attribute Mapping** section. The attributes selected as **Matching** properties are used to match the groups in Snowflake for update operations. Select the **Save** button to commit any changes.
|Attribute|Type|
|---|---|
|displayName|String|
|members|Reference|
-12. To configure scoping filters, see the instructions in the [Scoping filter tutorial](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+1. To configure scoping filters, see the instructions in the [Scoping filter tutorial](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-13. To enable the Azure AD provisioning service for Snowflake, change **Provisioning Status** to **On** in the **Settings** section.
+1. To enable the Azure AD provisioning service for Snowflake, change **Provisioning Status** to **On** in the **Settings** section.
- ![Screenshot that shows Provisioning Status switched on.](common/provisioning-toggle-on.png)
+ ![Screenshot that shows Provisioning Status switched on.](common/provisioning-toggle-on.png)
-14. Define the users and groups that you want to provision to Snowflake by choosing the desired values in **Scope** in the **Settings** section.
+1. Define the users and groups that you want to provision to Snowflake by choosing the desired values in **Scope** in the **Settings** section.
If this option is not available, configure the required fields under **Admin Credentials**, select **Save**, and refresh the page.
- ![Screenshot that shows choices for provisioning scope.](common/provisioning-scope.png)
+ ![Screenshot that shows choices for provisioning scope.](common/provisioning-scope.png)
-15. When you're ready to provision, select **Save**.
+1. When you're ready to provision, select **Save**.
- ![Screenshot of the button for saving a provisioning configuration.](common/provisioning-configuration-save.png)
+ ![Screenshot of the button for saving a provisioning configuration.](common/provisioning-configuration-save.png)
This operation starts the initial synchronization of all users and groups defined in **Scope** in the **Settings** section. The initial sync takes longer to perform than subsequent syncs. Subsequent syncs occur about every 40 minutes, as long as the Azure AD provisioning service is running.
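
As referenced in the **Admin Credentials** step, you can optionally sanity-check the Tenant URL and secret token outside the portal before or after selecting **Test Connection**. This sketch assumes the `/scim/v2/Users` endpoint exposed by Snowflake's SCIM integration; the account URL and token below are placeholders.

```python
import requests

# Placeholders: the Tenant URL you built earlier and the output of
# system$generate_scim_access_token('AAD_PROVISIONING').
scim_base_url = "https://acme.east-us-2.azure.snowflakecomputing.com/scim/v2"
scim_token = "<SCIM access token>"

# A 200 response with a SCIM ListResponse means the endpoint and token are usable;
# 401 or 403 usually points at an expired token or a role problem.
response = requests.get(
    f"{scim_base_url}/Users",
    headers={"Authorization": f"Bearer {scim_token}"},
    params={"count": 1},
    timeout=30,
)

print(response.status_code)
print(response.json() if response.ok else response.text)
```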
active-directory Decentralized Identifier Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/decentralized-identifier-overview.md
editor:
Previously updated : 06/02/2022
Last updated : 08/20/2022
active-directory How To Dnsbind https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/how-to-dnsbind.md
[!INCLUDE [Verifiable Credentials announcement](../../../includes/verifiable-credentials-brand.md)]
+
## Prerequisites

To link your DID to your domain, you need to have completed the following.
active-directory How To Opt Out https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/how-to-opt-out.md
In this article:
- What happens to your data? - Effect on existing verifiable credentials. + ## Prerequisites - Complete verifiable credentials onboarding.
active-directory How To Register Didwebsite https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/how-to-register-didwebsite.md
[!INCLUDE [Verifiable Credentials announcement](../../../includes/verifiable-credentials-brand.md)]
+
## Prerequisites

- Complete verifiable credentials onboarding with Web as the selected trust system.
active-directory How To Use Quickstart Idtoken https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/how-to-use-quickstart-idtoken.md
[!INCLUDE [Verifiable Credentials announcement](../../../includes/verifiable-credentials-brand.md)]
+
A [rules definition](rules-and-display-definitions-model.md#rulesmodel-type) that uses the [idTokens attestation](rules-and-display-definitions-model.md#idtokenattestation-type) produces an issuance flow where you're required to do an interactive sign-in to an OpenID Connect (OIDC) identity provider in Microsoft Authenticator. Claims in the ID token that the identity provider returns can be used to populate the issued verifiable credential. The claims mapping section in the rules definition specifies which claims are used.

## Create a custom credential with the idTokens attestation type
active-directory How To Use Quickstart Selfissued https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/how-to-use-quickstart-selfissued.md
[!INCLUDE [Verifiable Credentials announcement](../../../includes/verifiable-credentials-brand.md)]
+
A [rules definition](rules-and-display-definitions-model.md#rulesmodel-type) that uses the [selfIssued attestation](rules-and-display-definitions-model.md#selfissuedattestation-type) type produces an issuance flow where you're required to manually enter values for the claims in Microsoft Authenticator.

## Create a custom credential with the selfIssued attestation type
active-directory How Use Vcnetwork https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/how-use-vcnetwork.md
[!INCLUDE [Verifiable Credentials announcement](../../../includes/verifiable-credentials-brand.md)] + ## Prerequisites To use the Entra Verified ID Network, you need to have completed the following.
To use the Entra Verified ID Network, you need to have completed the following.
## What is the Entra Verified ID Network?
-In our scenario, Proseware is a verifier. Woodgrove is the issuer. The verifier needs to know Woodgrove's issuer DID and the verifiable credential (VC) type that represents Woodgrove employees before it can create a presentation request for a verified credential for Woodgrove employees. The necessary information may come from some kind of manual exchange between the companies, this approach would be both a manual and a complex. The Entra Verified ID Network makes this process much easier. Woodgrove, as an issuer, can publish credential types to the Entra Verified ID Network and Proseware, as the verifier, can search for published credential types and schemas in the Entra Verified ID Network. Using this information, Woodgrove can create a [presentation request](presentation-request-api.md#presentation-request-payload) and easily invoke the Request Service API.
+In our scenario, Proseware is a verifier. Woodgrove is the issuer. The verifier needs to know Woodgrove's issuer DID and the verifiable credential (VC) type that represents Woodgrove employees before it can create a presentation request for a verified credential for Woodgrove employees. The necessary information may come from some kind of manual exchange between the companies, but this approach would be both manual and complex. The Entra Verified ID Network makes this process much easier. Woodgrove, as an issuer, can publish credential types to the Entra Verified ID Network, and Proseware, as the verifier, can search for published credential types and schemas in the Entra Verified ID Network. Using this information, Proseware can create a [presentation request](presentation-request-api.md#presentation-request-payload) and easily invoke the Request Service API.
-![Diagram of Microsoft DID implementation overview](media/decentralized-identifier-overview/did-overview.png)
## How do I use the Entra Verified ID Network? 1. On the start page of Microsoft Entra Verified ID in the Azure portal, there's a Quickstart named **Verification request**. Selecting **Start** takes you to a page where you can browse the Verifiable Credentials Network.
- ![Screenshot of the Verified ID Network Quickstart](media/how-use-vcnetwork/vcnetwork-quickstart.png)
+ :::image type="content" source="media/how-use-vcnetwork/vcnetwork-quickstart.png" alt-text="Screenshot of the Verified ID Network Quickstart.":::
1. When you select **Select first issuer**, a panel opens on the right side of the screen where you can search for issuers by their linked domains. For example, if you're looking for something from Woodgrove, just type `woodgrove` in the search box. When you select an issuer in the list, the available credential types appear in the lower part, labeled Step 2. Check the type you want to use, and select the **Add** button to get back to the first screen. If the expected linked domain isn't in the list, the linked domain isn't verified yet. If the list of credentials is empty, the issuer has verified the linked domain but hasn't published any credential types yet.
- ![Screenshot of Verified ID Network Search and select](media/how-use-vcnetwork/vcnetwork-search-select.png)
+ :::image type="content" source="media/how-use-vcnetwork/vcnetwork-search-select.png" alt-text="Screenshot of Verified ID Network Search and select.":::
1. On the first screen, Woodgrove now appears in the issuer list. The next step is to select the **Review** button.
- ![Verified ID Network list of isuers](media/how-use-vcnetwork/vcnetwork-issuer-list.png)
+ :::image type="content" source="media/how-use-vcnetwork/vcnetwork-issuer-list.png" alt-text="Screenshot of verified ID Network list of issuers.":::
1. The Review screen displays a skeleton presentation request JSON payload for the Request Service API. The important pieces of information are the DID inside the **acceptedIssuers** collection and the **type** value. This information is needed to create a presentation request. The request prompts the user for a credential of a certain type issued by a trusted organization.
- ![Verified ID Network issuers details](media/how-use-vcnetwork/vcnetwork-issuer-details.png)
+ :::image type="content" source="media/how-use-vcnetwork/vcnetwork-issuer-details.png" alt-text="Screenshot of Verified ID Network issuer details.":::
## How do I make my linked domain searchable?
active-directory Introduction To Verifiable Credentials Architecture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/introduction-to-verifiable-credentials-architecture.md
[!INCLUDE [Verifiable Credentials announcement](../../../includes/verifiable-credentials-brand.md)] + It's important to plan your verifiable credential solution so that in addition to issuing and/or validating credentials, you have a complete view of the architectural and business impacts of your solution. If you haven't reviewed them already, we recommend you review [Introduction to Microsoft Entra Verified ID](decentralized-identifier-overview.md) and the [FAQs](verifiable-credentials-faq.md), and then complete the [Getting Started](get-started-verifiable-credentials.md) tutorial. This architectural overview introduces the capabilities and components of the Microsoft Entra Verified ID service. For more detailed information on issuance and validation, see
active-directory Plan Issuance Solution https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/plan-issuance-solution.md
[!INCLUDE [Verifiable Credentials announcement](../../../includes/verifiable-credentials-brand.md)] + It's important to plan your issuance solution so that in addition to issuing credentials, you have a complete view of the architectural and business impacts of your solution. If you haven't done so, we recommend you view the [Microsoft Entra Verified ID architecture overview](introduction-to-verifiable-credentials-architecture.md) for foundational information. ## Scope of guidance
active-directory Plan Verification Solution https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/plan-verification-solution.md
[!INCLUDE [Verifiable Credentials announcement](../../../includes/verifiable-credentials-brand.md)] + Microsoft's Microsoft Entra Verified ID (Azure AD VC) service enables you to trust proofs of user identity without expanding your trust boundary. With Azure AD VC, you create accounts or federate with another identity provider. When a solution implements a verification exchange using verifiable credentials, it enables applications to request credentials that aren't bound to a specific domain. This approach makes it easier to request and verify credentials at scale. If you haven't already, we suggest you review the [Microsoft Entra Verified ID architecture overview](introduction-to-verifiable-credentials-architecture.md). You may also want to review [Plan your Microsoft Entra Verified ID issuance solution](plan-issuance-solution.md).
active-directory Verifiable Credentials Configure Issuer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/verifiable-credentials-configure-issuer.md
In this article, you learn how to:
The following diagram illustrates the Microsoft Entra Verified ID architecture and the component you configure.
-![Diagram that illustrates the Azure A D Verifiable Credentials architecture.](media/verifiable-credentials-configure-issuer/verifiable-credentials-architecture.png)
## Prerequisites
The following diagram illustrates the Microsoft Entra Verified ID architecture a
- To clone the repository that hosts the sample app, install [GIT](https://git-scm.com/downloads). - [Visual Studio Code](https://code.visualstudio.com/Download), or similar code editor. - [.NET 5.0](https://dotnet.microsoft.com/download/dotnet/5.0).-- Download [ngrok](https://ngrok.com/) and sign up for a free account. If you can't use `ngrok` in your organization,read this [FAQ](verifiable-credentials-faq.md#i-cannot-use-ngrok-what-do-i-do).
+- Download [ngrok](https://ngrok.com/) and sign up for a free account. If you can't use `ngrok` in your organization, read this [FAQ](verifiable-credentials-faq.md#i-cannot-use-ngrok-what-do-i-do).
- A mobile device with Microsoft Authenticator: - Android version 6.2206.3973 or later installed. - iOS version 6.6.2 or later installed.
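To sanity check these prerequisites end to end, the following terminal sketch can help. It's illustrative only: `<sample-repo-url>` is a placeholder for the sample app repository, and the local port passed to `ngrok` is an assumption, not a value from this article.

```bash
# Hypothetical local setup for the prerequisites listed above.
# <sample-repo-url> is a placeholder; replace it with the sample repository you clone.
git clone <sample-repo-url> verified-id-sample
cd verified-id-sample

# Run the sample locally (requires the .NET 5.0 SDK from the prerequisites).
dotnet run

# In a second terminal, expose the locally running app over HTTPS.
# Port 5000 is an assumed default for the sample; adjust to whatever port it actually listens on.
ngrok http 5000
```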
In this step, you create the verified credential expert card by using Microsoft
The following screenshot demonstrates how to create a new credential:
- ![Screenshot that shows how to create a new credential.](media/verifiable-credentials-configure-issuer/how-create-new-credential.png)
+ :::image type="content" source="media/verifiable-credentials-configure-issuer/how-create-new-credential.png" alt-text="Screenshot that shows how to create a new credential.":::
## Gather credentials and environment details
Now that you have a new credential, you're going to gather some information abou
1. In Verifiable Credentials, select **Issue credential**.
- ![Screenshot that shows how to select the newly created verified credential.](media/verifiable-credentials-configure-issuer/issue-credential-custom-view.png)
+ :::image type="content" source="media/verifiable-credentials-configure-issuer/issue-credential-custom-view.png" alt-text="Screenshot that shows how to select the newly created verified credential.":::
1. Copy the **authority**, which is the Decentralized Identifier, and record it for later.
Create a client secret for the registered application that you created. The samp
1. Copy the **Application (client) ID**, and store it for later.
- ![Screenshot that shows how to copy the app registration ID.](media/verifiable-credentials-configure-issuer/copy-app-id.png)
+ :::image type="content" source="media/verifiable-credentials-configure-issuer/copy-app-id.png" alt-text="Screenshot that shows how to copy the app registration ID.":::
1. From the main menu, under **Manage**, select **Certificates & secrets**.
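If you prefer scripting over the portal, a hedged Azure CLI alternative for creating the client secret is sketched below; `<application-client-id>` stands for the Application (client) ID copied in the previous step, and the secret value the command returns should be stored as securely as a portal-created secret.

```azurecli-interactive
# Sketch only: create an additional client secret for the registered application.
# <application-client-id> is the Application (client) ID recorded earlier.
az ad app credential reset --id <application-client-id> --append
```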
Now you're ready to issue your first verified credential expert card by running
1. Open the HTTPS URL generated by ngrok.
- ![Screenshot that shows how to get the ngrok public URL.](media/verifiable-credentials-configure-issuer/ngrok-url.png)
+ :::image type="content" source="media/verifiable-credentials-configure-issuer/ngrok-url.png" alt-text="Screenshot that shows how to get the ngrok public URL.":::
1. From a web browser, select **Get Credential**.
- ![Screenshot that shows how to choose get the credential from the sample app.](media/verifiable-credentials-configure-issuer/get-credentials.png)
+ :::image type="content" source="media/verifiable-credentials-configure-issuer/get-credentials.png" alt-text="Screenshot that shows how to choose to get the credential from the sample app.":::
1. Using your mobile device, scan the QR code with the Authenticator app. You can also scan the QR code directly from your camera, which will open the Authenticator app for you.
- ![Screenshot that shows how to scan the Q R code.](media/verifiable-credentials-configure-issuer/scan-issuer-qr-code.png)
+ :::image type="content" source="media/verifiable-credentials-configure-issuer/scan-issuer-qr-code.png" alt-text="Screenshot that shows how to scan the QR code.":::
1. At this time, you'll see a message warning that this app or website might be risky. Select **Advanced**.
- ![Screenshot that shows how to respond to the warning message.](media/verifiable-credentials-configure-issuer/at-risk.png)
+ :::image type="content" source="media/verifiable-credentials-configure-issuer/at-risk.png" alt-text="Screenshot that shows how to respond to the warning message.":::
1. At the risky website warning, select **Proceed anyways (unsafe)**. You're seeing this warning because your domain isn't linked to your decentralized identifier (DID). To verify your domain, follow [Link your domain to your decentralized identifier (DID)](how-to-dnsbind.md). For this tutorial, you can skip the domain registration, and select **Proceed anyways (unsafe).**
- ![Screenshot that shows how to proceed with the risky warning.](media/verifiable-credentials-configure-issuer/proceed-anyway.png)
+ :::image type="content" source="media/verifiable-credentials-configure-issuer/proceed-anyway.png" alt-text="Screenshot that shows how to proceed with the risky warning.":::
1. You'll be prompted to enter a PIN code that is displayed on the screen where you scanned the QR code. The PIN adds an extra layer of protection to the issuance. The PIN code is randomly generated every time an issuance QR code is displayed.
- ![Screenshot that shows how to type the pin code.](media/verifiable-credentials-configure-issuer/enter-verification-code.png)
+ :::image type="content" source="media/verifiable-credentials-configure-issuer/enter-verification-code.png" alt-text="Screenshot that shows how to type the pin code.":::
1. After you enter the PIN number, the **Add a credential** screen appears. At the top of the screen, you see a **Not verified** message (in red). This warning is related to the domain validation warning mentioned earlier. 1. Select **Add** to accept your new verifiable credential.
- ![Screenshot that shows how to add your new credential.](media/verifiable-credentials-configure-issuer/new-verifiable-credential.png)
+ :::image type="content" source="media/verifiable-credentials-configure-issuer/new-verifiable-credential.png" alt-text="Screenshot that shows how to add your new credential.":::
Congratulations! You now have a verified credential expert verifiable credential.
- ![Screenshot that shows a newly added verifiable credential.](media/verifiable-credentials-configure-issuer/verifiable-credential-has-been-added.png)
+ :::image type="content" source="media/verifiable-credentials-configure-issuer/verifiable-credential-has-been-added.png" alt-text="Screenshot that shows a newly added verifiable credential.":::
Go back to the sample app. It shows you that a credential was successfully issued.
- ![Screenshot that shows a successfully issued verifiable credential.](media/verifiable-credentials-configure-issuer/credentials-issued.png)
-
-## Verify the verified credential expert card
-
-Now you are ready to verify your verified credential expert card by running the sample application again.
-
-1. Hit the back button in your browser to return to the sample app home page.
-
-1. Select **Verify credentials**.
-
- ![Screenshot that shows how to select the verify credential button.](media/verifiable-credentials-configure-issuer/verify-credential.png)
-
-1. Using the authenticator app, scan the QR code, or scan it directly from your mobile camera.
-
-1. When you see the warning message, select **Advanced**. Then select **Proceed anyways (unsafe)**.
-
-1. Approve the presentation request by selecting **Allow**.
-
- ![Screenshot that shows how to approve the verifiable credentials new presentation request.](media/verifiable-credentials-configure-issuer/approve-presentation-request.jpg)
-
-1. After you approve the presentation request, you can see that the request has been approved. You can also check the log. To see the log, select the verifiable credential.
-
- ![Screenshot that shows a verified credential expert card.](media/verifiable-credentials-configure-issuer/verifable-credential-info.png)
-
-1. Then select **Recent Activity**.
-
- ![Screenshot that shows the recent activity button that takes you to the credential history.](media/verifiable-credentials-configure-issuer/verifable-credential-history.jpg)
-
-1. You can now see the recent activities of your verifiable credential.
-
- ![Screenshot that shows the history of the verifiable credential.](media/verifiable-credentials-configure-issuer/verify-credential-history.jpg)
-
-1. Go back to the sample app. It shows you that the presentation of the verifiable credentials was received.
- ![Screenshot that shows that a presentation was received.](media/verifiable-credentials-configure-issuer/verifiable-credential-expert-verification.png)
+ :::image type="content" source="media/verifiable-credentials-configure-issuer/credentials-issued.png" alt-text="Screenshot that shows a successfully issued verifiable credential.":::
## Verifiable credential names
In real scenarios, your application pulls the user details from an identity prov
public async Task<ActionResult> issuanceRequest() { ...- // Here you could change the payload manifest and change the first name and last name. payload["claims"]["given_name"] = "Megan"; payload["claims"]["family_name"] = "Bowen";
active-directory Verifiable Credentials Configure Tenant https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/verifiable-credentials-configure-tenant.md
After you create your key vault, Verifiable Credentials generates a set of keys
1. To save the changes, select **Save**.
-### Set access policies for the Verifiable credentials service request service principal
-
-The Verifiable credentials service request is the Request Service API, and it needs access to Key Vault in order to sign issuance and presentation requests.
-
-1. Select **+ Add Access Policy** and select the service principal **Verifiable Credentials Service Request** with AppId **3db474b9-6a0c-4840-96ac-1fceb342124f**.
-
-1. For **Key permissions**, select permissions **Get** and **Sign**.
-
- :::image type="content" source="media/verifiable-credentials-configure-tenant/set-key-vault-sp-access-policy.png" alt-text="screenshot of key vault granting access to a security principal":::
-
-1. To save the changes, select **Add**.
-- ## Set up Verified ID To set up Verified ID, follow these steps:
To set up Verified ID, follow these steps:
>[!IMPORTANT] > The only way to change the trust system is to opt-out of the Verified ID service and redo the onboarding. - 1. Select **Save and get started**. :::image type="content" source="media/verifiable-credentials-configure-tenant/verifiable-credentials-getting-started.png" alt-text="Screenshot that shows how to set up Verifiable Credentials.":::
+### Set access policies for the Verified ID service principals
+
+When you set up Verified ID in the previous step, the access policies in Azure Key Vault are automatically updated to give the Verified ID service principals the required permissions.
+If you ever need to reset the permissions manually, the access policies should look like the following table. A CLI sketch for restoring them follows the table.
+
+| Service Principal | AppId | Key Permissions |
+| -- | -- | -- |
+| Verifiable Credentials Service | bb2a64ee-5d29-4b07-a491-25806dc854d3 | Get, Sign |
+| Verifiable Credentials Service Request | 3db474b9-6a0c-4840-96ac-1fceb342124f | Sign |
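If you do need to restore these policies by hand, the following Azure CLI sketch applies the permissions from the table above; `<key-vault-name>` is a placeholder, and the AppIds and key permissions are taken directly from the table.

```azurecli-interactive
# Restore the Key Vault access policies listed in the table (replace <key-vault-name>).
az keyvault set-policy --name <key-vault-name> \
    --spn bb2a64ee-5d29-4b07-a491-25806dc854d3 --key-permissions get sign

az keyvault set-policy --name <key-vault-name> \
    --spn 3db474b9-6a0c-4840-96ac-1fceb342124f --key-permissions sign
```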
++ ## Register an application in Azure AD Your application needs to get access tokens when it wants to call into Microsoft Entra Verified ID so it can issue or verify credentials. To get access tokens, you have to register an application and grant API permission for the Verified ID Request Service. For example, use the following steps for a web application:
active-directory Verifiable Credentials Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/verifiable-credentials-faq.md
This page contains commonly asked questions about Verifiable Credentials and Dec
- [Conceptual questions about decentralized identity](#conceptual-questions) - [Questions about using Verifiable Credentials preview](#using-the-preview) + ## The basics ### What is a DID?
aks Aks Resource Health https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/aks-resource-health.md
- Title: Check for Resource Health events impacting your AKS cluster (Preview)
-description: Check the health status of your AKS cluster using Azure Resource Health.
--- Previously updated : 08/18/2020---
-# Check for Resource Health events impacting your AKS cluster (Preview)
--
-When running your container workloads on AKS, you want to ensure you can troubleshoot and fix problems as soon as they arise to minimize the impact on the availability of your workloads. [Azure Resource Health](../service-health/resource-health-overview.md) gives you visibility into various health events that may cause your AKS cluster to be unavailable.
--
-## Open Resource Health
-
-### To access Resource Health for your AKS cluster:
--- Navigate to your AKS cluster in the [Azure portal](https://portal.azure.com).-- Select **Resource Health** in the left navigation.-
-### To access Resource Health for all clusters on your subscription:
--- Search for **Service Health** in the [Azure portal](https://portal.azure.com) and navigate to it.-- Select **Resource health** in the left navigation.-- Select your subscription and set the resource type to Azure Kubernetes Service (AKS).-
-![Screenshot shows the Resource health for your A K S clusters.](./media/aks-resource-health/resource-health-check.png)
-
-## Check the health status
-
-Azure Resource Health helps you diagnose and get support for service problems that affect your Azure resources. Resource Health reports on the current and past health of your resources and helps you determine if the problem is caused by a user-initiated action or a platform event.
-
-Resource Health receives signals for your managed cluster to determine the cluster's health state. It examines the health state of your AKS cluster and reports actions required for each health signal. These signals range from auto-resolving issues, planned updates, unplanned health events, and unavailability caused by user-initiated actions. These signals are classified using the Azure Resource Health's health status: *Available*, *Unavailable*, *Unknown*, and *Degraded*.
-- **Available**: When there are no known issues affecting your cluster's health, Resource Health reports your cluster as *Available*.--- **Unavailable**: When there is a platform or non-platform event affecting your cluster's health, Resource Health reports your cluster as *Unavailable*.--- **Unknown**: When there is a temporary connection loss to your cluster's health metrics, Resource Health reports your cluster as *Unknown*.--- **Degraded**: When there is a health issue requiring your action, Resource Health reports your cluster as *Degraded*.-
-Note that the Resource Health for an AKS cluster is different than the Resource Health of its individual resources (*Virtual Machines, ScaleSet Instances, Load Balancer, etc...*).
-For additional details on what each health status indicates, visit [Resource Health overview](../service-health/resource-health-overview.md#health-status).
-
-### View historical data
-
-You can also view the past 30 days of historical Resource Health information in the **Health history** section.
-
-## Next steps
-
-Run checks on your cluster to further troubleshoot cluster issues by using [AKS Diagnostics](./concepts-diagnostics.md).
aks Concepts Diagnostics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/concepts-diagnostics.md
- Title: Azure Kubernetes Service (AKS) Diagnostics Overview
-description: Learn about self-diagnosing clusters in Azure Kubernetes Service.
--- Previously updated : 03/29/2021---
-# Azure Kubernetes Service Diagnostics (preview) overview
-
-Troubleshooting Azure Kubernetes Service (AKS) cluster issues plays an important role in maintaining your cluster, especially if your cluster is running mission-critical workloads. AKS Diagnostics is an intelligent, self-diagnostic experience that:
-* Helps you identify and resolve problems in your cluster.
-* Is cloud-native.
-* Requires no extra configuration or billing cost.
-
-This feature is now in public preview.
-
-## Open AKS Diagnostics
-
-To access AKS Diagnostics:
-
-1. Navigate to your Kubernetes cluster in the [Azure portal](https://portal.azure.com).
-1. Click on **Diagnose and solve problems** in the left navigation, which opens AKS Diagnostics.
-1. Choose a category that best describes the issue of your cluster, like _Cluster Node Issues_, by:
- * Using the keywords in the homepage tile.
- * Typing a keyword that best describes your issue in the search bar.
-
-![Homepage](./media/concepts-diagnostics/aks-diagnostics-homepage.png)
-
-## View a diagnostic report
-
-After you click on a category, you can view a diagnostic report specific to your cluster. Diagnostic reports intelligently call out any issues in your cluster with status icons. You can drill down on each topic by clicking **More Info** to see a detailed description of:
-* Issues
-* Recommended actions
-* Links to helpful docs
-* Related-metrics
-* Logging data
-
-Diagnostic reports generate based on the current state of your cluster after running various checks. They can be useful for pinpointing the problem of your cluster and understanding next steps to resolve the issue.
-
-![Diagnostic Report](./media/concepts-diagnostics/diagnostic-report.png)
-
-![Expanded Diagnostic Report](./media/concepts-diagnostics/node-issues.png)
-
-## Cluster Insights
-
-The following diagnostic checks are available in **Cluster Insights**.
-
-### Cluster Node Issues
-
-Cluster Node Issues checks for node-related issues that cause your cluster to behave unexpectedly.
--- Node readiness issues-- Node failures-- Insufficient resources-- Node missing IP configuration-- Node CNI failures-- Node not found-- Node power off-- Node authentication failure-- Node kube-proxy stale-
-### Create, read, update & delete (CRUD) operations
-
-CRUD Operations checks for any CRUD operations that cause issues in your cluster.
--- In-use subnet delete operation error-- Network security group delete operation error-- In-use route table delete operation error-- Referenced resource provisioning error-- Public IP address delete operation error-- Deployment failure due to deployment quota-- Operation error due to organization policy-- Missing subscription registration-- VM extension provisioning error-- Subnet capacity-- Quota exceeded error-
-### Identity and security management
-
-Identity and Security Management detects authentication and authorization errors that prevent communication to your cluster.
--- Node authorization failures-- 401 errors-- 403 errors-
-## Next steps
-
-* Collect logs to help you further troubleshoot your cluster issues by using [AKS Periscope](https://aka.ms/aksperiscope).
-
-* Read the [triage practices section](/azure/architecture/operator-guides/aks/aks-triage-practices) of the AKS day-2 operations guide.
-
-* Post your questions or feedback at [UserVoice](https://feedback.azure.com/d365community/forum/aabe212a-f724-ec11-b6e6-000d3a4f0da0) by adding "[Diag]" in the title.
aks Integrations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/integrations.md
Azure Kubernetes Service (AKS) provides additional, supported functionality for
## Add-ons
-Add-ons are a fully supported way to provide extra capabilities for your AKS cluster. Add-ons' installation, configuration, and lifecycle is managed by AKS. Use `az aks addon` to install an add-on or manage the add-ons for your cluster.
+Add-ons are a fully supported way to provide extra capabilities for your AKS cluster. Add-ons' installation, configuration, and lifecycle are managed by AKS. Use `az aks enable-addons` to install an add-on or manage the add-ons for your cluster.
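For example, a minimal invocation that enables the monitoring add-on on an existing cluster might look like the following; the resource group and cluster names are placeholders.

```azurecli-interactive
# Enable the monitoring add-on; myResourceGroup and myAKSCluster are placeholder names.
az aks enable-addons --resource-group myResourceGroup --name myAKSCluster --addons monitoring
```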
The following rules are used by AKS for applying updates to installed add-ons:
The below table shows a few examples of open-source and third-party integrations
[azure/k8s-artifact-substitute]: https://github.com/Azure/k8s-artifact-substitute [azure/aks-create-action]: https://github.com/Azure/aks-create-action [azure/aks-github-runner]: https://github.com/Azure/aks-github-runner
-[github-actions-aks]: kubernetes-action.md
+[github-actions-aks]: kubernetes-action.md
aks Monitor Aks Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/monitor-aks-reference.md
The following table lists the platform metrics collected for AKS. Follow each l
|Metric Type | Resource Provider / Type Namespace<br/> and link to individual metrics | |-|--| | Managed clusters | [Microsoft.ContainerService/managedClusters](../azure-monitor/essentials/metrics-supported.md#microsoftcontainerservicemanagedclusters)
-| Connected clusters | [microsoft.kubernetes/connectedClusters](../azure-monitor/essentials/metrics-supported.md#microsoftkubernetesconnectedclusters)
+| Connected clusters | [microsoft.kubernetes/connectedClusters](../azure-monitor/essentials/metrics-supported.md)
| Virtual machines| [Microsoft.Compute/virtualMachines](../azure-monitor/essentials/metrics-supported.md#microsoftcomputevirtualmachines) | | Virtual machine scale sets | [Microsoft.Compute/virtualMachineScaleSets](../azure-monitor/essentials/metrics-supported.md#microsoftcomputevirtualmachinescalesets)| | Virtual machine scale sets virtual machines | [Microsoft.Compute/virtualMachineScaleSets/virtualMachines](../azure-monitor/essentials/metrics-supported.md#microsoftcomputevirtualmachinescalesetsvirtualmachines)|
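As a rough illustration of consuming these platform metrics, the following Azure CLI sketch queries a single metric from the managed cluster namespace. The resource ID is a placeholder and the metric name is an assumption; check the linked metric lists for the exact names available.

```azurecli-interactive
# Query one managed-cluster platform metric. The resource ID is a placeholder, and
# node_cpu_usage_percentage is assumed to be among the metrics linked above.
az monitor metrics list \
    --resource "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.ContainerService/managedClusters/<cluster-name>" \
    --metric node_cpu_usage_percentage
```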
aks Node Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/node-access.md
- Title: Connect to Azure Kubernetes Service (AKS) cluster nodes
-description: Learn how to connect to Azure Kubernetes Service (AKS) cluster nodes for troubleshooting and maintenance tasks.
-- Previously updated : 02/25/2022---
-#Customer intent: As a cluster operator, I want to learn how to connect to virtual machines in an AKS cluster to perform maintenance or troubleshoot a problem.
--
-# Connect to Azure Kubernetes Service (AKS) cluster nodes for maintenance or troubleshooting
-
-Throughout the lifecycle of your Azure Kubernetes Service (AKS) cluster, you may need to access an AKS node. This access could be for maintenance, log collection, or other troubleshooting operations. You can access AKS nodes using SSH, including Windows Server nodes. You can also [connect to Windows Server nodes using remote desktop protocol (RDP) connections][aks-windows-rdp]. For security purposes, the AKS nodes aren't exposed to the internet. To connect to the AKS nodes, you use `kubectl debug` or the private IP address.
-
-This article shows you how to create a connection to an AKS node.
-
-## Before you begin
-
-This article assumes that you have an existing AKS cluster. If you need an AKS cluster, see the AKS quickstart [using the Azure CLI][aks-quickstart-cli], [using Azure PowerShell][aks-quickstart-powershell], or [using the Azure portal][aks-quickstart-portal].
-
-This article also assumes you have an SSH key. You can create an SSH key using [macOS or Linux][ssh-nix] or [Windows][ssh-windows]. If you use PuTTY Gen to create the key pair, save the key pair in an OpenSSH format rather than the default PuTTY private key format (.ppk file).
-
-You also need the Azure CLI version 2.0.64 or later installed and configured. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI][install-azure-cli].
-
-## Create an interactive shell connection to a Linux node
-
-To create an interactive shell connection to a Linux node, use `kubectl debug` to run a privileged container on your node. To list your nodes, use `kubectl get nodes`:
-
-```output
-$ kubectl get nodes -o wide
-
-NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
-aks-nodepool1-12345678-vmss000000 Ready agent 13m v1.19.9 10.240.0.4 <none> Ubuntu 18.04.5 LTS 5.4.0-1046-azure containerd://1.4.4+azure
-aks-nodepool1-12345678-vmss000001 Ready agent 13m v1.19.9 10.240.0.35 <none> Ubuntu 18.04.5 LTS 5.4.0-1046-azure containerd://1.4.4+azure
-aksnpwin000000 Ready agent 87s v1.19.9 10.240.0.67 <none> Windows Server 2019 Datacenter 10.0.17763.1935 docker://19.3.1
-```
-
-Use `kubectl debug` to run a container image on the node to connect to it.
-
-```azurecli-interactive
-kubectl debug node/aks-nodepool1-12345678-vmss000000 -it --image=mcr.microsoft.com/dotnet/runtime-deps:6.0
-```
-
-This command starts a privileged container on your node and connects to it.
-
-```output
-$ kubectl debug node/aks-nodepool1-12345678-vmss000000 -it --image=mcr.microsoft.com/dotnet/runtime-deps:6.0
-Creating debugging pod node-debugger-aks-nodepool1-12345678-vmss000000-bkmmx with container debugger on node aks-nodepool1-12345678-vmss000000.
-If you don't see a command prompt, try pressing enter.
-root@aks-nodepool1-12345678-vmss000000:/#
-```
-
-This privileged container gives access to the node.
-
-> [!NOTE]
-> You can interact with the node session by running `chroot /host` from the privileged container.
-
-### Remove Linux node access
-
-When done, `exit` the interactive shell session. After the interactive container session closes, delete the pod used for access with `kubectl delete pod`.
-
-```output
-kubectl delete pod node-debugger-aks-nodepool1-12345678-vmss000000-bkmmx
-```
-
-## Create the SSH connection to a Windows node
-
-At this time, you can't connect to a Windows Server node directly by using `kubectl debug`. Instead, you need to first connect to another node in the cluster, then connect to the Windows Server node from that node using SSH. Alternatively, you can [connect to Windows Server nodes using remote desktop protocol (RDP) connections][aks-windows-rdp] instead of using SSH.
-
-To connect to another node in the cluster, use `kubectl debug`. For more information, see [Create an interactive shell connection to a Linux node][ssh-linux-kubectl-debug].
-
-To create the SSH connection to the Windows Server node from another node, use the SSH keys provided when you created the AKS cluster and the internal IP address of the Windows Server node.
-
-Open a new terminal window and use `kubectl get pods` to get the name of the pod started by `kubectl debug`.
-
-```output
-$ kubectl get pods
-
-NAME READY STATUS RESTARTS AGE
-node-debugger-aks-nodepool1-12345678-vmss000000-bkmmx 1/1 Running 0 21s
-```
-
-In the above example, *node-debugger-aks-nodepool1-12345678-vmss000000-bkmmx* is the name of the pod started by `kubectl debug`.
-
-Using `kubectl port-forward`, you can open a connection to the deployed pod:
-
-```
-$ kubectl port-forward node-debugger-aks-nodepool1-12345678-vmss000000-bkmmx 2022:22
-Forwarding from 127.0.0.1:2022 -> 22
-Forwarding from [::1]:2022 -> 22
-```
-
-The above example begins forwarding network traffic from port 2022 on your development computer to port 22 on the deployed pod. When using `kubectl port-forward` to open a connection and forward network traffic, the connection remains open until you stop the `kubectl port-forward` command.
-
-Open a new terminal and use `kubectl get nodes` to show the internal IP address of the Windows Server node:
-
-```output
-$ kubectl get nodes -o wide
-
-NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
-aks-nodepool1-12345678-vmss000000 Ready agent 13m v1.19.9 10.240.0.4 <none> Ubuntu 18.04.5 LTS 5.4.0-1046-azure containerd://1.4.4+azure
-aks-nodepool1-12345678-vmss000001 Ready agent 13m v1.19.9 10.240.0.35 <none> Ubuntu 18.04.5 LTS 5.4.0-1046-azure containerd://1.4.4+azure
-aksnpwin000000 Ready agent 87s v1.19.9 10.240.0.67 <none> Windows Server 2019 Datacenter 10.0.17763.1935 docker://19.3.1
-```
-
-In the above example, *10.240.0.67* is the internal IP address of the Windows Server node.
-
-Create an SSH connection to the Windows Server node using the internal IP address. The default username for AKS nodes is *azureuser*. Accept the prompt to continue with the connection. You are then provided with the bash prompt of your Windows Server node:
-
-```output
-$ ssh -o 'ProxyCommand ssh -p 2022 -W %h:%p azureuser@127.0.0.1' azureuser@10.240.0.67
-
-The authenticity of host '10.240.0.67 (10.240.0.67)' can't be established.
-ECDSA key fingerprint is SHA256:1234567890abcdefghijklmnopqrstuvwxyzABCDEFG.
-Are you sure you want to continue connecting (yes/no)? yes
-
-[...]
-
-Microsoft Windows [Version 10.0.17763.1935]
-(c) 2018 Microsoft Corporation. All rights reserved.
-
-azureuser@aksnpwin000000 C:\Users\azureuser>
-```
-
-The above example connects to port 22 on the Windows Server node through port 2022 on your development computer.
-
-> [!NOTE]
-> If you prefer to use password authentication, use `-o PreferredAuthentications=password`. For example:
->
-> ```console
-> ssh -o 'ProxyCommand ssh -p 2022 -W %h:%p azureuser@127.0.0.1' -o PreferredAuthentications=password azureuser@10.240.0.67
-> ```
-
-### Remove SSH access
-
-When done, `exit` the SSH session, stop any port forwarding, and then `exit` the interactive container session. After the interactive container session closes, delete the pod used for SSH access with `kubectl delete pod`.
-
-```output
-kubectl delete pod node-debugger-aks-nodepool1-12345678-vmss000000-bkmmx
-```
-
-## Next steps
-
-If you need more troubleshooting data, you can [view the kubelet logs][view-kubelet-logs] or [view the Kubernetes master node logs][view-master-logs].
-
-<!-- INTERNAL LINKS -->
-[view-kubelet-logs]: kubelet-logs.md
-[view-master-logs]: monitor-aks-reference.md#resource-logs
-[aks-quickstart-cli]: ./learn/quick-kubernetes-deploy-cli.md
-[aks-quickstart-portal]: ./learn/quick-kubernetes-deploy-portal.md
-[aks-quickstart-powershell]: ./learn/quick-kubernetes-deploy-powershell.md
-[install-azure-cli]: /cli/azure/install-azure-cli
-[aks-windows-rdp]: rdp.md
-[ssh-nix]: ../virtual-machines/linux/mac-create-ssh-keys.md
-[ssh-windows]: ../virtual-machines/linux/ssh-from-windows.md
-[ssh-linux-kubectl-debug]: #create-an-interactive-shell-connection-to-a-linux-node
aks Operator Best Practices Run At Scale https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/operator-best-practices-run-at-scale.md
To increase the node limit beyond 1000, you must have the following pre-requisit
> [!NOTE] > You can't use NPM with clusters greater than 500 Nodes - ## Node pool scaling considerations and best practices
-* For system node pools, use the *Standard_D16ds_v5* SKU or equivalent core/memory VM SKUs to provide sufficient compute resources for *kube-system* pods.
+* For system node pools, use the *Standard_D16ds_v5* SKU or equivalent core/memory VM SKUs with ephemeral OS disks to provide sufficient compute resources for *kube-system* pods.
* Create at least five user node pools to scale up to 5,000 nodes, since there's a limit of 1,000 nodes per node pool. * Use the cluster autoscaler wherever possible when running at-scale AKS clusters to ensure dynamic scaling of node pools based on the demand for compute resources. * When scaling beyond 1,000 nodes without the cluster autoscaler, it's recommended to scale in batches of a maximum of 500 to 700 nodes at a time, with a 2- to 5-minute sleep time between consecutive scale-ups to prevent Azure API throttling (see the CLI sketch after this list).
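The batch-scaling guidance above can be approximated with the Azure CLI as follows. This is a sketch only: the resource group, cluster, and node pool names are placeholders, and the sleep duration simply reflects the 2- to 5-minute pause recommended between scale-ups.

```azurecli-interactive
# Scale a user node pool in a batch of at most 500-700 nodes, then pause before the next scale-up.
az aks nodepool scale --resource-group myResourceGroup --cluster-name myAKSCluster \
    --name userpool1 --node-count 500

sleep 300   # wait roughly 2-5 minutes between consecutive scale-ups to avoid Azure API throttling
```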
+> [!NOTE]
+> You can't use the [Stop and Start feature][Stop and Start feature] on clusters that have the greater than 1,000 node limit enabled.
+ ## Cluster upgrade best practices * AKS clusters have a hard limit of 5,000 nodes. This limit prevents clusters that are running at the limit from upgrading, because there's no more capacity to do a rolling update with the max surge property. We recommend scaling the cluster down below 3,000 nodes before doing cluster upgrades to provide extra capacity for node churn and to minimize control plane load.
To increase the node limit beyond 1000, you must have the following pre-requisit
<!-- LINKS - Internal --> [quotas-skus-regions]: quotas-skus-regions.md [cluster upgrades]: upgrade-cluster.md
+[Stop and Start feature]: start-stop-cluster.md
aks Troubleshoot Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/troubleshoot-linux.md
- Title: Linux performance tools-
-description: Learn how to use Linux performance tools to troubleshoot and resolve common problems when using Azure Kubernetes Service (AKS).
----- Previously updated : 02/10/2020---
-# Linux Performance Troubleshooting
-
-Resource exhaustion on Linux machines is a common issue and can manifest through a wide variety of symptoms. This document provides a high-level overview of the tools available to help diagnose such issues.
-
-Many of these tools accept an interval on which to produce rolling output. This output format typically makes spotting patterns much easier. Where accepted, the example invocation will include `[interval]`.
-
-Many of these tools have an extensive history and wide set of configuration options. This page provides only a simple subset of invocations to highlight common problems. The canonical source of information is always the reference documentation for each particular tool. That documentation will be much more thorough than what is provided here.
-
-## Guidance
-
-Be systematic in your approach to investigating performance issues. Two common approaches are USE (utilization, saturation, errors) and RED (rate, errors, duration). RED is typically used in the context of services for request-based monitoring. USE is typically used for monitoring resources: for each resource in a machine, monitor utilization, saturation, and errors. The four main kinds of resources on any machine are cpu, memory, disk, and network. High utilization, saturation, or error rates for any of these resources indicates a possible problem with the system. When a problem exists, investigate the root cause: why is disk IO latency high? Are the disks or virtual machine SKU throttled? What processes are writing to the devices, and to what files?
-
-Some examples of common issues and indicators to diagnose them:
-- IOPS throttling: use iostat to measure per-device IOPS. Ensure no individual disk is above its limit, and the sum for all disks is less than the limit for the virtual machine.-- Bandwidth throttling: use iostat as for IOPS, but measuring read/write throughput. Ensure both per-device and aggregate throughput are below the bandwidth limits.-- SNAT exhaustion: this can manifest as high active (outbound) connections in SAR. -- Packet loss: this can be measured by proxy via TCP retransmit count relative to sent/received count. Both `sar` and `netstat` can show this information.-
-## General
-
-These tools are general purpose and cover basic system information. They are a good starting point for further investigation.
-
-### uptime
-
-```
-$ uptime
- 19:32:33 up 17 days, 12:36, 0 users, load average: 0.21, 0.77, 0.69
-```
-
-uptime provides system uptime and 1, 5, and 15-minute load averages. These load averages roughly correspond to threads doing work or waiting for uninterruptible work to complete. In absolute terms, these numbers can be difficult to interpret, but measured over time they can tell us useful information:
--- 1-minute average > 5-minute average means load is increasing.-- 1-minute average < 5-minute average means load is decreasing.-
-uptime can also illuminate why information is not available: the issue may have resolved on its own or by a restart before the user could access the machine.
-
-Load averages higher than the number of CPU threads available may indicate a performance issue with a given workload.
-
-### dmesg
-
-```
-$ dmesg | tail
-$ dmesg --level=err | tail
-```
-
-dmesg dumps the kernel buffer. Events like OOMKill add an entry to the kernel buffer. Finding an OOMKill or other resource exhaustion messages in dmesg logs is a strong indicator of a problem.
-
-### top
-
-```
-$ top
-Tasks: 249 total, 1 running, 158 sleeping, 0 stopped, 0 zombie
-%Cpu(s): 2.2 us, 1.3 sy, 0.0 ni, 95.4 id, 1.0 wa, 0.0 hi, 0.2 si, 0.0 st
-KiB Mem : 65949064 total, 43415136 free, 2349328 used, 20184600 buff/cache
-KiB Swap: 0 total, 0 free, 0 used. 62739060 avail Mem
-
- PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
-116004 root 20 0 144400 41124 27028 S 11.8 0.1 248:45.45 coredns
- 4503 root 20 0 1677980 167964 89464 S 5.9 0.3 1326:25 kubelet
- 1 root 20 0 120212 6404 4044 S 0.0 0.0 48:20.38 systemd
- ...
-```
-
-`top` provides a broad overview of current system state. The headers provide some useful aggregate information:
--- state of tasks: running, sleeping, stopped.-- CPU utilization, in this case mostly showing idle time.-- total, free, and used system memory.-
-`top` may miss short-lived processes; alternatives like `htop` and `atop` provide similar interfaces while fixing some of these shortcomings.
-
-## CPU
-
-These tools provide CPU utilization information. This is especially useful with rolling output, where patterns become easy to spot.
-
-### mpstat
-
-```
-$ mpstat -P ALL [interval]
-Linux 4.15.0-1064-azure (aks-main-10212767-vmss000001) 02/10/20 _x86_64_ (8 CPU)
-
-19:49:03 CPU %usr %nice %sys %iowait %irq %soft %steal %guest %gnice %idle
-19:49:04 all 1.01 0.00 0.63 2.14 0.00 0.13 0.00 0.00 0.00 96.11
-19:49:04 0 1.01 0.00 1.01 17.17 0.00 0.00 0.00 0.00 0.00 80.81
-19:49:04 1 1.98 0.00 0.99 0.00 0.00 0.00 0.00 0.00 0.00 97.03
-19:49:04 2 1.01 0.00 0.00 0.00 0.00 1.01 0.00 0.00 0.00 97.98
-19:49:04 3 0.00 0.00 0.99 0.00 0.00 0.99 0.00 0.00 0.00 98.02
-19:49:04 4 1.98 0.00 1.98 0.00 0.00 0.00 0.00 0.00 0.00 96.04
-19:49:04 5 1.00 0.00 1.00 0.00 0.00 0.00 0.00 0.00 0.00 98.00
-19:49:04 6 1.00 0.00 1.00 0.00 0.00 0.00 0.00 0.00 0.00 98.00
-19:49:04 7 1.98 0.00 0.99 0.00 0.00 0.00 0.00 0.00 0.00 97.03
-```
-
-`mpstat` prints similar CPU information to top, but broken down by CPU thread. Seeing all cores at once can be useful for detecting highly imbalanced CPU usage, for example when a single threaded application uses one core at 100% utilization. This problem may be more difficult to spot when aggregated over all CPUs in the system.
-
-### vmstat
-
-```
-$ vmstat [interval]
-procs --memory- swap-- --io- -system-- cpu--
- r b swpd free buff cache si so bi bo in cs us sy id wa st
- 2 0 0 43300372 545716 19691456 0 0 3 50 3 3 2 1 95 1 0
-```
-
-`vmstat` provides similar information to `mpstat` and `top`, enumerating the number of processes waiting on CPU (r column), memory statistics, and percent of CPU time spent in each work state.
-
-## Memory
-
-Memory is a very important, and thankfully easy, resource to track. Some tools can report both CPU and memory, like `vmstat`. But tools like `free` may still be useful for quick debugging.
-
-### free
-
-```
-$ free -m
- total used free shared buff/cache available
-Mem: 64403 2338 42485 1 19579 61223
-Swap: 0 0 0
-```
-
-`free` presents basic information about total memory as well as used and free memory. `vmstat` may be more useful even for basic memory analysis due to its ability to provide rolling output.
-
-## Disk
-
-These tools measure disk IOPS, wait queues, and total throughput.
-
-### iostat
-
-```
-$ iostat -xy [interval] [count]
-$ iostat -xy 1 1
-Linux 4.15.0-1064-azure (aks-main-10212767-vmss000001) 02/10/20 _x86_64_ (8 CPU)
-
-avg-cpu: %user %nice %system %iowait %steal %idle
- 3.42 0.00 2.92 1.90 0.00 91.76
-
-Device: rrqm/s wrqm/s r/s w/s rkB/s wkB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
-loop0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
-sdb 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
-sda 0.00 56.00 0.00 65.00 0.00 504.00 15.51 0.01 3.02 0.00 3.02 0.12 0.80
-scd0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
-```
-
-`iostat` provides deep insights into disk utilization. This invocation passes `-x` for extended statistics, `-y` to skip the initial output printing system averages since boot, and `1 1` to specify we want 1-second interval, ending after one block of output.
-
-`iostat` exposes many useful statistics:
--- `r/s` and `w/s` are reads per second and writes per second. The sum of these values is IOPS.-- `rkB/s` and `wkB/s` are kilobytes read/written per second. The sum of these values is throughput.-- `await` is the average iowait time in milliseconds for queued requests.-- `avgqu-sz` is the average queue size over the provided interval.-
-On an Azure VM:
--- the sum of `r/s` and `w/s` for an individual block device may not exceed that disk's SKU limits.-- the sum of `rkB/s` and `wkB/s` for an individual block device may not exceed that disk's SKU limits.-- the sum of `r/s` and `w/s` for all block devices may not exceed the limits for the VM SKU.-- the sum of `rkB/s` and `wkB/s` for all block devices may not exceed the limits for the VM SKU.-
-Note that the OS disk counts as a managed disk of the smallest SKU corresponding to its capacity. For example, a 1024GB OS Disk corresponds to a P30 disk. Ephemeral OS disks and temporary disks do not have individual disk limits; they are only limited by the full VM limits.
-
-Non-zero values of await or avgqu-sz are also good indicators of IO contention.
-
-## Network
-
-These tools measure network statistics like throughput, transmission failures, and utilization. Deeper analysis can expose fine-grained TCP statistics about congestion and dropped packets.
-
-### sar
-
-```
-$ sar -n DEV [interval]
-22:36:57 IFACE rxpck/s txpck/s rxkB/s txkB/s rxcmp/s txcmp/s rxmcst/s %ifutil
-22:36:58 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
-22:36:58 azv604be75d832 1.00 9.00 0.06 1.04 0.00 0.00 0.00 0.00
-22:36:58 azure0 68.00 79.00 27.79 52.79 0.00 0.00 0.00 0.00
-22:36:58 azv4a8e7704a5b 202.00 207.00 37.51 21.86 0.00 0.00 0.00 0.00
-22:36:58 azve83c28f6d1c 21.00 30.00 24.12 4.11 0.00 0.00 0.00 0.00
-22:36:58 eth0 314.00 321.00 70.87 163.28 0.00 0.00 0.00 0.00
-22:36:58 azva3128390bff 12.00 20.00 1.14 2.29 0.00 0.00 0.00 0.00
-22:36:58 azvf46c95ddea3 10.00 18.00 31.47 1.36 0.00 0.00 0.00 0.00
-22:36:58 enP1s1 74.00 374.00 29.36 166.94 0.00 0.00 0.00 0.00
-22:36:58 lo 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
-22:36:58 azvdbf16b0b2fc 9.00 19.00 3.36 1.18 0.00 0.00 0.00 0.00
-```
-
-`sar` is a powerful tool for a wide range of analysis. While this example uses its ability to measure network stats, it is equally powerful for measuring CPU and memory consumption. This example invokes `sar` with `-n` flag to specify the `DEV` (network device) keyword, displaying network throughput by device.
--- The sum of `rxKb/s` and `txKb/s` is total throughput for a given device. When this value exceeds the limit for the provisioned Azure NIC, workloads on the machine will experience increased network latency.-- `%ifutil` measures utilization for a given device. As this value approaches 100%, workloads will experience increased network latency.-
-```
-$ sar -n TCP,ETCP [interval]
-Linux 4.15.0-1064-azure (aks-main-10212767-vmss000001) 02/10/20 _x86_64_ (8 CPU)
-
-22:50:08 active/s passive/s iseg/s oseg/s
-22:50:09 2.00 0.00 19.00 24.00
-
-22:50:08 atmptf/s estres/s retrans/s isegerr/s orsts/s
-22:50:09 0.00 0.00 0.00 0.00 0.00
-
-Average: active/s passive/s iseg/s oseg/s
-Average: 2.00 0.00 19.00 24.00
-
-Average: atmptf/s estres/s retrans/s isegerr/s orsts/s
-Average: 0.00 0.00 0.00 0.00 0.00
-```
-
-This invocation of `sar` uses the `TCP,ETCP` keywords to examine TCP connections. The third column of the last row, "retrans", is the number of TCP retransmits per second. High values for this field indicate an unreliable network connection. In the first and third rows, "active" means a connection originated from the local device, while "remote" indicates an incoming connection. A common issue on Azure is SNAT port exhaustion, which `sar` can help detect. SNAT port exhaustion would manifest as high "active" values, since the problem is due to a high rate of outbound, locally-initiated TCP connections.
-
-As `sar` takes an interval, it prints rolling output and then prints final rows of output containing the average results from the invocation.
-
-### netstat
-
-```
-$ netstat -s
-Ip:
- 71046295 total packets received
- 78 forwarded
- 0 incoming packets discarded
- 71046066 incoming packets delivered
- 83774622 requests sent out
- 40 outgoing packets dropped
-Icmp:
- 103 ICMP messages received
- 0 input ICMP message failed.
- ICMP input histogram:
- destination unreachable: 103
- 412802 ICMP messages sent
- 0 ICMP messages failed
- ICMP output histogram:
- destination unreachable: 412802
-IcmpMsg:
- InType3: 103
- OutType3: 412802
-Tcp:
- 11487089 active connections openings
- 592 passive connection openings
- 1137 failed connection attempts
- 404 connection resets received
- 17 connections established
- 70880911 segments received
- 95242567 segments send out
- 176658 segments retransmited
- 3 bad segments received.
- 163295 resets sent
-Udp:
- 164968 packets received
- 84 packets to unknown port received.
- 0 packet receive errors
- 165082 packets sent
-UdpLite:
-TcpExt:
- 5 resets received for embryonic SYN_RECV sockets
- 1670559 TCP sockets finished time wait in fast timer
- 95 packets rejects in established connections because of timestamp
- 756870 delayed acks sent
- 2236 delayed acks further delayed because of locked socket
- Quick ack mode was activated 479 times
- 11983969 packet headers predicted
- 25061447 acknowledgments not containing data payload received
- 5596263 predicted acknowledgments
- 19 times recovered from packet loss by selective acknowledgements
- Detected reordering 114 times using SACK
- Detected reordering 4 times using time stamp
- 5 congestion windows fully recovered without slow start
- 1 congestion windows partially recovered using Hoe heuristic
- 5 congestion windows recovered without slow start by DSACK
- 111 congestion windows recovered without slow start after partial ack
- 73 fast retransmits
- 26 retransmits in slow start
- 311 other TCP timeouts
- TCPLossProbes: 198845
- TCPLossProbeRecovery: 147
- 480 DSACKs sent for old packets
- 175310 DSACKs received
- 316 connections reset due to unexpected data
- 272 connections reset due to early user close
- 5 connections aborted due to timeout
- TCPDSACKIgnoredNoUndo: 8498
- TCPSpuriousRTOs: 1
- TCPSackShifted: 3
- TCPSackMerged: 9
- TCPSackShiftFallback: 177
- IPReversePathFilter: 4
- TCPRcvCoalesce: 1501457
- TCPOFOQueue: 9898
- TCPChallengeACK: 342
- TCPSYNChallenge: 3
- TCPSpuriousRtxHostQueues: 17
- TCPAutoCorking: 2315642
- TCPFromZeroWindowAdv: 483
- TCPToZeroWindowAdv: 483
- TCPWantZeroWindowAdv: 115
- TCPSynRetrans: 885
- TCPOrigDataSent: 51140171
- TCPHystartTrainDetect: 349
- TCPHystartTrainCwnd: 7045
- TCPHystartDelayDetect: 26
- TCPHystartDelayCwnd: 862
- TCPACKSkippedPAWS: 3
- TCPACKSkippedSeq: 4
- TCPKeepAlive: 62517
-IpExt:
- InOctets: 36416951539
- OutOctets: 41520580596
- InNoECTPkts: 86631440
- InECT0Pkts: 14
-```
-
-`netstat` can introspect a wide variety of network stats, here invoked with summary output. There are many useful fields here depending on the issue. One useful field in the TCP section is "failed connection attempts". This may be an indication of SNAT port exhaustion or other issues making outbound connections. A high rate of retransmitted segments (also under the TCP section) may indicate issues with packet delivery.
aks Troubleshooting https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/troubleshooting.md
- Title: Troubleshoot common Azure Kubernetes Service problems
-description: Learn how to troubleshoot and resolve common problems when using Azure Kubernetes Service (AKS)
-- Previously updated : 09/24/2021--
-# AKS troubleshooting
-
-When you create or manage Azure Kubernetes Service (AKS) clusters, you might occasionally come across problems. This article details some common problems and troubleshooting steps.
-
-## In general, where do I find information about debugging Kubernetes problems?
-
-Try the [official guide to troubleshooting Kubernetes clusters](https://kubernetes.io/docs/tasks/debug-application-cluster/troubleshooting/).
-There's also a [troubleshooting guide](https://github.com/feiskyer/kubernetes-handbook/blob/master/en/troubleshooting/index.md), published by a Microsoft engineer for troubleshooting pods, nodes, clusters, and other features.
-
-## I'm getting a `quota exceeded` error during creation or upgrade. What should I do?
-
- [Request more cores](../azure-portal/supportability/regional-quota-requests.md).
-
-## I'm getting an `insufficientSubnetSize` error while deploying an AKS cluster with advanced networking. What should I do?
-
-This error indicates a subnet in use for a cluster no longer has available IPs within its CIDR for successful resource assignment. For Kubenet clusters, the requirement is sufficient IP space for each node in the cluster. For Azure CNI clusters, the requirement is sufficient IP space for each node and pod in the cluster.
-Read more about the [design of Azure CNI to assign IPs to pods](configure-azure-cni.md#plan-ip-addressing-for-your-cluster).
-
-These errors are also surfaced in [AKS Diagnostics](concepts-diagnostics.md), which proactively surfaces issues such as an insufficient subnet size.
-
-The following three (3) cases cause an insufficient subnet size error:
-
-1. AKS Scale or AKS Node pool scale
- 1. If using Kubenet, when the `number of free IPs in the subnet` is **less than** the `number of new nodes requested`.
- 1. If using Azure CNI, when the `number of free IPs in the subnet` is **less than** the `number of nodes requested times (*) the node pool's --max-pod value`.
-
-1. AKS Upgrade or AKS Node pool upgrade
- 1. If using Kubenet, when the `number of free IPs in the subnet` is **less than** the `number of buffer nodes needed to upgrade`.
- 1. If using Azure CNI, when the `number of free IPs in the subnet` is **less than** the `number of buffer nodes needed to upgrade times (*) the node pool's --max-pod value`.
-
- By default AKS clusters set a max surge (upgrade buffer) value of one (1), but this upgrade behavior can be customized by setting the max surge value of a node pool, which will increase the number of available IPs needed to complete an upgrade.
-
-1. AKS create or AKS Node pool add
- 1. If using Kubenet, when the `number of free IPs in the subnet` is **less than** the `number of nodes requested for the node pool`.
- 1. If using Azure CNI, when the `number of free IPs in the subnet` is **less than** the `number of nodes requested times (*) the node pool's --max-pod value`.
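As an illustrative calculation of the Azure CNI case above: adding a node pool of 3 nodes with a `--max-pods` value of 30 requires at least 3 × 30 = 90 free IP addresses in the subnet; if fewer are available, the operation fails with the insufficient subnet size error.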
-
-To mitigate this issue, create a new subnet. You need permission to create a new subnet because an existing subnet's CIDR range can't be updated.
-
-1. Rebuild using a new subnet with a larger CIDR range that's sufficient for your operation goals:
- 1. Create a new subnet with a new desired non-overlapping range.
- 1. Create a new node pool on the new subnet.
- 1. Drain pods from the old node pool residing in the old subnet to be replaced.
- 1. Delete the old subnet and old node pool.
-
-## My pod is stuck in CrashLoopBackOff mode. What should I do?
-
-There might be various reasons for the pod being stuck in that mode. You might look into:
-
-* The pod itself, by using `kubectl describe pod <pod-name>`.
-* The logs, by using `kubectl logs <pod-name>`.
-
-For more information about how to troubleshoot pod problems, see [Debugging Pods](https://kubernetes.io/docs/tasks/debug-application-cluster/debug-application/#debugging-pods) in the Kubernetes documentation.
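For example, a quick inspection sequence might look like the following; the `--previous` flag returns logs from the container's previous run, which is often what you need for a crash loop (the pod name is a placeholder):

```console
# Describe the pod to see events, restart counts, and exit reasons
kubectl describe pod <pod-name>

# View logs from the current and the previously crashed container instance
kubectl logs <pod-name>
kubectl logs <pod-name> --previous
```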
-
-## I'm receiving `TCP timeouts` when using `kubectl` or other third-party tools connecting to the API server
-AKS has HA control planes that scale vertically and horizontally according to the number of cores to ensure its Service Level Objectives (SLOs) and Service Level Agreements (SLAs). If you're experiencing connection timeouts, check the following:
-
-- **Are all your API commands timing out consistently or only a few?** If it's only a few, your `konnectivity-agent` pod, `tunnelfront` pod, or `aks-link` pod, responsible for node -> control plane communication, might not be in a running state. Make sure the nodes hosting this pod aren't over-utilized or under stress. Consider moving them to their own [`system` node pool](use-system-pools.md).
-- **Have you opened all required ports, FQDNs, and IPs noted on the [AKS restrict egress traffic docs](limit-egress-traffic.md)?** Otherwise, several command calls can fail. The AKS secure, tunneled communication between the api-server and the kubelet (through the *konnectivity-agent*) requires some of these to work.
-- **Have you blocked the Application-Layer Protocol Negotiation TLS extension?** *konnectivity-agent* requires this extension to establish a connection between the control plane and nodes.
-- **Is your current IP covered by [API IP Authorized Ranges](api-server-authorized-ip-ranges.md)?** If you're using this feature and your IP isn't included in the ranges, your calls will be blocked.
-- **Do you have a client or application leaking calls to the API server?** Make sure to use watches instead of frequent get calls and that your third-party applications aren't leaking such calls. For example, a bug in the Istio mixer causes a new API Server watch connection to be created every time a secret is read internally. Because this behavior happens at a regular interval, watch connections quickly accumulate and eventually cause the API Server to become overloaded no matter the scaling pattern. https://github.com/istio/istio/issues/19481
-- **Do you have many releases in your helm deployments?** This scenario can cause tiller to use too much memory on the nodes, as well as create a large number of `configmaps`, which can cause unnecessary spikes on the API server. Consider configuring `--history-max` at `helm init` and upgrading to Helm 3. More details on the following issues:
- - https://github.com/helm/helm/issues/4821
- - https://github.com/helm/helm/issues/3500
- - https://github.com/helm/helm/issues/4543
-- **[Is internal traffic between nodes being blocked?](#im-receiving-tcp-timeouts-such-as-dial-tcp-node_ip10250-io-timeout)**
-
-## I'm receiving `TCP timeouts`, such as `dial tcp <Node_IP>:10250: i/o timeout`
-
-These timeouts may be related to internal traffic between nodes being blocked. Verify that this traffic is not being blocked, such as by [network security groups](concepts-security.md#azure-network-security-groups) on the subnet for your cluster's nodes.
-
-## I'm trying to enable Kubernetes role-based access control (Kubernetes RBAC) on an existing cluster. How can I do that?
-
-Enabling Kubernetes role-based access control (Kubernetes RBAC) on existing clusters isn't supported at this time; it must be set when creating new clusters. Kubernetes RBAC is enabled by default when using the CLI, the portal, or an API version later than `2020-03-01`.
-
-## I can't get logs by using kubectl logs or I can't connect to the API server. I'm getting "Error from server: error dialing backend: dial tcp…". What should I do?
-
-Ensure ports 22, 9000, and 1194 are open to connect to the API server. Check whether the `tunnelfront` or `aks-link` pod is running in the *kube-system* namespace using the `kubectl get pods --namespace kube-system` command. If it isn't, force delete the pod; it will restart.
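A minimal sketch of that check and forced restart; the pod name is a placeholder, so substitute the actual `tunnelfront` or `aks-link` pod name from the first command's output:

```console
# List system pods and find the tunnelfront or aks-link pod
kubectl get pods --namespace kube-system

# Delete the pod; its controller recreates it automatically
kubectl delete pod <tunnelfront-or-aks-link-pod-name> --namespace kube-system
```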
-
-## I'm getting `"tls: client offered only unsupported versions"` from my client when connecting to AKS API. What should I do?
-
-The minimum supported TLS version in AKS is TLS 1.2.
-
-## I'm using an alias minor version, but I can't seem to upgrade within the same minor version? Why?
-
-When upgrading by alias minor version, only a higher minor version is supported. For example, upgrading from 1.14.x to 1.14 will not trigger an upgrade to the latest GA 1.14 patch, but upgrading to 1.15 will trigger an upgrade to the latest GA 1.15 patch.
-
-## My application is failing with `argument list too long`
-
-You may receive an error message similar to:
-
-```
-standard_init_linux.go:228: exec user process caused: argument list too long
-```
-
-There are two potential causes:
-- The argument list provided to the executable is too long
-- The set of environment variables provided to the executable is too big
-
-If you have many services deployed in one namespace, it can cause the environment variable list to become too large, and will produce the above error message when Kubelet tries to run the executable. The error is caused by Kubelet injecting environment variables recording the host and port for each active service, so that services can use this information to locate one another (read more about this [in the Kubernetes documentation](https://kubernetes.io/docs/concepts/services-networking/connect-applications-service/#accessing-the-service)).
-
-As a workaround, you can disable this Kubelet behavior by setting `enableServiceLinks: false` inside your [Pod spec](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.21/#podspec-v1-core). **However**, if your service relies on these environment variables to locate other services, this will cause it to fail. One fix is to use DNS for service resolution rather than environment variables (using [CoreDNS](https://kubernetes.io/docs/tasks/administer-cluster/coredns/)). Another option is to reduce the number of active services.
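A minimal sketch of a pod spec with service links disabled; the pod name and image below are placeholders:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: my-app
spec:
  enableServiceLinks: false   # stop kubelet from injecting per-service environment variables
  containers:
  - name: my-app
    image: mcr.microsoft.com/dotnet/samples:aspnetapp
```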
-
-## I'm trying to upgrade or scale and am getting a `"Changing property 'imageReference' is not allowed"` error. How do I fix this problem?
-
-You might be getting this error because you've modified the tags on the agent nodes inside the AKS cluster. Modifying or deleting tags and other properties of resources in the MC_* resource group can lead to unexpected results. Altering the resources under the MC_* group in the AKS cluster breaks the service-level objective (SLO).
-
-## I'm receiving errors that my cluster is in failed state and upgrading or scaling will not work until it is fixed
-
-*This troubleshooting assistance is directed from https://aka.ms/aks-cluster-failed*
-
-Clusters can enter a failed state for multiple reasons. Follow the steps below to resolve your cluster's failed state before retrying the previously failed operation:
-
-1. Until the cluster is out of `failed` state, `upgrade` and `scale` operations won't succeed. Common root issues and resolutions include:
- * Scaling with **insufficient compute (CRP) quota**. To resolve, first scale your cluster back to a stable goal state within quota. Then follow these [steps to request a compute quota increase](../azure-portal/supportability/regional-quota-requests.md) before trying to scale up again beyond initial quota limits.
- * Scaling a cluster with advanced networking and **insufficient subnet (networking) resources**. To resolve, first scale your cluster back to a stable goal state within quota. Then follow [these steps to request a resource quota increase](../azure-resource-manager/templates/error-resource-quota.md#solution) before trying to scale up again beyond initial quota limits.
-2. Once the underlying cause of upgrade failure is addressed, retry the original operation. This retry operation should bring your cluster to the succeeded state.
-
-## I'm receiving errors when trying to upgrade or scale that state my cluster is being upgraded or has failed upgrade
-
-*This troubleshooting assistance is directed from https://aka.ms/aks-pending-upgrade*
-
- You can't have a cluster or node pool simultaneously upgrade and scale. Instead, each operation type must complete on the target resource before the next request on that same resource. As a result, operations are limited when active upgrade or scale operations are occurring or attempted.
-
-To help diagnose the issue run `az aks show -g myResourceGroup -n myAKSCluster -o table` to retrieve detailed status on your cluster. Based on the result:
-
-* If the cluster is actively upgrading, wait until the operation finishes. If it succeeded, retry the previously failed operation.
-* If the cluster has a failed upgrade, follow the steps outlined in the previous section.
-
-## I'm receiving an error due to "PodDrainFailure"
-
-This error occurs because the requested operation is blocked by a PodDisruptionBudget (PDB) that has been set on the deployments within the cluster. To learn more about how PodDisruptionBudgets work, see [the official Kubernetes example](https://kubernetes.io/docs/concepts/workloads/pods/disruptions/#pdb-example).
-
-You may use this command to find the PDBs applied on your cluster:
-
-```
-kubectl get poddisruptionbudgets --all-namespaces
-```
-or
-```
-kubectl get poddisruptionbudgets -n {namespace of failed pod}
-```
-Check the label selector to identify the exact pods that are causing this failure.
-
-There are a few ways this error can occur:
-1. Your PDB may be too restrictive, such as having a high minAvailable pod count or a low maxUnavailable pod count. You can change it by updating the PDB with less restrictive values (see the example after this list).
-2. During an upgrade, the replacement pods may not be ready fast enough. You can investigate your pod readiness times to attempt to fix this situation.
-3. The deployed pods may not work on the new upgraded node version, causing pods to fail and availability to fall below the PDB.
-
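As an illustration of the first point, a less restrictive PDB could allow at least one pod to be evicted during a node drain; the name and label selector below are placeholders:

```yaml
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: myapp-pdb
spec:
  maxUnavailable: 1        # permit one pod to be unavailable during voluntary disruptions
  selector:
    matchLabels:
      app: myapp
```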
->[!NOTE]
- > If the pod is failing from the namespace 'kube-system', please contact support. This is a namespace managed by AKS.
-
-For more information about PodDisruptionBudgets, please check out the [official Kubernetes guide on configuring a PDB](https://kubernetes.io/docs/tasks/run-application/configure-pdb/).
-
-## Can I move my cluster to a different subscription or my subscription with my cluster to a new tenant?
-
-If you've moved your AKS cluster to a different subscription or the cluster's subscription to a new tenant, the cluster won't function because of missing cluster identity permissions. **AKS doesn't support moving clusters across subscriptions or tenants** because of this constraint.
-
-## I'm receiving errors trying to use features that require virtual machine scale sets
-
-*This troubleshooting assistance is directed from aka.ms/aks-vmss-enablement*
-
-You may receive errors that indicate your AKS cluster isn't on a virtual machine scale set, such as the following example:
-
-**AgentPool `<agentpoolname>` has set auto scaling as enabled but isn't on Virtual Machine Scale Sets**
-
-Features such as the cluster autoscaler or multiple node pools require virtual machine scale sets as the `vm-set-type`.
-
-Follow the *Before you begin* steps in the appropriate doc to correctly create an AKS cluster:
-
-* [Use the cluster autoscaler](cluster-autoscaler.md)
-* [Create and use multiple node pools](use-multiple-node-pools.md)
-
-## What naming restrictions are enforced for AKS resources and parameters?
-
-*This troubleshooting assistance is directed from aka.ms/aks-naming-rules*
-
-Naming restrictions are implemented by both the Azure platform and AKS. If a resource name or parameter breaks one of these restrictions, an error is returned that asks you to provide a different input. The following common naming guidelines apply:
-
-* Cluster names must be 1-63 characters. The only allowed characters are letters, numbers, dashes, and underscores. The first and last character must be a letter or a number.
-* The AKS Node/*MC_* resource group name combines resource group name and resource name. The autogenerated syntax of `MC_resourceGroupName_resourceName_AzureRegion` must be no greater than 80 chars. If needed, reduce the length of your resource group name or AKS cluster name. You may also [customize your node resource group name](cluster-configuration.md#custom-resource-group-name)
-* The *dnsPrefix* must start and end with alphanumeric values and must be between 1-54 characters. Valid characters include alphanumeric values and hyphens (-). The *dnsPrefix* can't include special characters such as a period (.).
-* AKS Node Pool names must be all lowercase and be 1-11 characters for Linux node pools and 1-6 characters for Windows node pools. The name must start with a letter and the only allowed characters are letters and numbers.
-* The *admin-username*, which sets the administrator username for Linux nodes, must start with a letter, may only contain letters, numbers, hyphens, and underscores, and has a maximum length of 64 characters.
-
-## I'm receiving errors when trying to create, update, scale, delete, or upgrade a cluster, stating that the operation isn't allowed because another operation is in progress.
-
-*This troubleshooting assistance is directed from aka.ms/aks-pending-operation*
-
-Cluster operations are limited when a previous operation is still in progress. To retrieve a detailed status of your cluster, use the `az aks show -g myResourceGroup -n myAKSCluster -o table` command. Use your own resource group and AKS cluster name as needed.
-
-Based on the output of the cluster status:
-
-* If the cluster is in any provisioning state other than *Succeeded* or *Failed*, wait until the operation (*Upgrading / Updating / Creating / Scaling / Deleting / Migrating*) finishes. When the previous operation has completed, retry your latest cluster operation.
-
-* If the cluster has a failed upgrade, follow the steps outlined in [I'm receiving errors that my cluster is in failed state and upgrading or scaling will not work until it is fixed](#im-receiving-errors-that-my-cluster-is-in-failed-state-and-upgrading-or-scaling-will-not-work-until-it-is-fixed).
-
-## Received an error saying my service principal wasn't found or is invalid when I try to create a new cluster.
-
-An AKS cluster requires a service principal or managed identity to create resources on your behalf. AKS can automatically create a new service principal at cluster creation time or use an existing one that you provide. With an automatically created service principal, Azure Active Directory needs to propagate it to every region so the creation succeeds. If the propagation takes too long, the cluster fails creation validation because it can't find an available service principal.
-
-Use the following workarounds for this issue:
-* Use an existing service principal that has already propagated across regions, and pass it to AKS at cluster creation time.
-* If using automation scripts, add time delays between service principal creation and AKS cluster creation (see the sketch after this list).
-* If using the Azure portal, return to the cluster settings during creation and retry the validation page after a few minutes.
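For instance, a rough automation sketch that inserts a delay between the two steps; the resource names, the delay length, and how the credentials are passed are all placeholder assumptions:

```azurecli
# Create the service principal first
az ad sp create-for-rbac --name myAKSClusterSP

# Wait for Azure AD to propagate the new service principal across regions
sleep 120

# Then create the cluster, passing in the service principal credentials
az aks create \
    --resource-group myResourceGroup \
    --name myAKSCluster \
    --service-principal <appId> \
    --client-secret <password>
```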
-
-## I'm getting `"AADSTS7000215: Invalid client secret is provided."` when using AKS API. What should I do?
-
-This issue is due to the expiration of service principal credentials. [Update the credentials for an AKS cluster.](update-credentials.md)
-
-## I'm getting `"The credentials in ServicePrincipalProfile were invalid."` or `"error:invalid_client AADSTS7000215: Invalid client secret is provided."`
-This is caused by special characters in the value of the client secret that have not been escaped properly. Refer to [escape special characters when updating AKS Service Principal credentials.](update-credentials.md#update-aks-cluster-with-new-service-principal-credentials)
-
-## I can't access my cluster API from my automation/dev machine/tooling when using API server authorized IP ranges. How do I fix this problem?
-
-To resolve this issue, ensure `--api-server-authorized-ip-ranges` includes the IP(s) or IP range(s) of the automation/dev/tooling systems being used. Refer to the section 'How to find my IP' in [Secure access to the API server using authorized IP address ranges](api-server-authorized-ip-ranges.md).
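For example, a sketch of adding a tooling system's address range to the authorized ranges; the resource names and CIDR values below are placeholders:

```azurecli
az aks update \
    --resource-group myResourceGroup \
    --name myAKSCluster \
    --api-server-authorized-ip-ranges 73.140.245.0/24,20.30.40.50/32
```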
-
-## I'm unable to view resources in Kubernetes resource viewer in Azure portal for my cluster configured with API server authorized IP ranges. How do I fix this problem?
-
-The [Kubernetes resource viewer](kubernetes-portal.md) requires `--api-server-authorized-ip-ranges` to include access for the local client computer or IP address range (from which the portal is being browsed). Refer to the section 'How to find my IP' in [Secure access to the API server using authorized IP address ranges](api-server-authorized-ip-ranges.md).
-
-## I'm receiving errors after restricting egress traffic
-
-When restricting egress traffic from an AKS cluster, there are [required and optional recommended](limit-egress-traffic.md) outbound ports / network rules and FQDN / application rules for AKS. If your settings are in conflict with any of these rules, certain `kubectl` commands won't work correctly. You may also see errors when creating an AKS cluster.
-
-Verify that your settings aren't conflicting with any of the required or optional recommended outbound ports / network rules and FQDN / application rules.
-
-## I'm receiving "429 - Too Many Requests" errors
-
-When a Kubernetes cluster on Azure (AKS or not) does frequent scale up/down operations or uses the cluster autoscaler (CA), those operations can result in a large number of HTTP calls that in turn exceed the assigned subscription quota, leading to failure. The errors will look like the following:
-
-```
-Service returned an error. Status=429 Code=\"OperationNotAllowed\" Message=\"The server rejected the request because too many requests have been received for this subscription.\" Details=[{\"code\":\"TooManyRequests\",\"message\":\"{\\\"operationGroup\\\":\\\"HighCostGetVMScaleSet30Min\\\",\\\"startTime\\\":\\\"2020-09-20T07:13:55.2177346+00:00\\\",\\\"endTime\\\":\\\"2020-09-20T07:28:55.2177346+00:00\\\",\\\"allowedRequestCount\\\":1800,\\\"measuredRequestCount\\\":2208}\",\"target\":\"HighCostGetVMScaleSet30Min\"}] InnerError={\"internalErrorCode\":\"TooManyRequestsReceived\"}"}
-```
-
-These throttling errors are described in detail [here](../azure-resource-manager/management/request-limits-and-throttling.md) and [here](/troubleshoot/azure/virtual-machines/troubleshooting-throttling-errors)
-
-The recommendation from the AKS engineering team is to ensure you're running at least version 1.18.x, which contains many improvements. More details on these improvements can be found [here](https://github.com/Azure/AKS/issues/1413) and [here](https://github.com/kubernetes-sigs/cloud-provider-azure/issues/247).
-
-Given these throttling errors are measured at the subscription level, they might still happen if:
-- There are 3rd party applications making GET requests (for example, monitoring applications, and so on). The recommendation is to reduce the frequency of these calls.
-- There are numerous AKS clusters / node pools using virtual machine scale sets. Try to split your number of clusters into different subscriptions, in particular if you expect them to be very active (for example, an active cluster autoscaler) or have multiple clients (for example, rancher, terraform, and so on).
-
-## My cluster's provisioning status changed from Ready to Failed with or without me performing an operation. What should I do?
-
-If your cluster's provisioning status changes from *Ready* to *Failed* with or without you performing any operations, but the applications on your cluster are continuing to run, this issue may be resolved automatically by the service and your applications should not be affected.
-
-If your cluster's provisioning status remains as *Failed* or the applications on your cluster stop working, [submit a support request](https://azure.microsoft.com/support/options/#submit).
-
-## My watch is stale or Azure AD Pod Identity NMI is returning status 500
-
-If you're using Azure Firewall, as in this [example](limit-egress-traffic.md#restrict-egress-traffic-using-azure-firewall), you may encounter this issue because long-lived TCP connections through the firewall that use application rules currently have a bug (to be resolved in Q1CY21) that causes Go `keepalives` to be terminated on the firewall. Until this issue is resolved, you can mitigate it by adding a network rule (instead of an application rule) for the AKS API server IP.
-
-## When resuming my cluster after a stop operation, why is my node count not in the autoscaler min and max range?
-
-If you are using cluster autoscaler, when you start your cluster back up your current node count may not be between the min and max range values you set. This behavior is expected. The cluster starts with the number of nodes it needs to run its workloads, which isn't impacted by your autoscaler settings. When your cluster performs scaling operations, the min and max values will impact your current node count and your cluster will eventually enter and remain in that desired range until you stop your cluster.
-
-## Windows containers have connectivity issues after a cluster upgrade operation
-
-For older clusters that had Calico network policies applied before Windows Calico support, Windows Calico is enabled by default after a cluster upgrade. After Windows Calico is enabled, you may have connectivity issues if the Calico network policies deny ingress/egress traffic. You can mitigate this issue by creating a new Calico policy on the cluster that allows all ingress/egress for Windows pods using either PodSelector or IPBlock.
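A hedged sketch of such an allow-all policy, expressed as a standard Kubernetes NetworkPolicy (which Calico enforces); the namespace and the pod label used to select your Windows pods are placeholder assumptions for your environment:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-all-windows
  namespace: default
spec:
  podSelector:
    matchLabels:
      os: windows          # placeholder label identifying your Windows pods
  policyTypes:
  - Ingress
  - Egress
  ingress:
  - {}                     # empty rule allows all ingress traffic
  egress:
  - {}                     # empty rule allows all egress traffic
```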
-
-## Azure Storage and AKS Troubleshooting
-
-### Failure when setting uid and `GID` in mountOptions for Azure Disk
-
-Azure Disk uses the ext4 or xfs filesystem by default, and mountOptions such as uid=x,gid=x can't be set at mount time. For example, if you tried to set mountOptions uid=999,gid=999, you would see an error like:
-
-```console
-Warning FailedMount 63s kubelet, aks-nodepool1-29460110-0 MountVolume.MountDevice failed for volume "pvc-d783d0e4-85a1-11e9-8a90-369885447933" : azureDisk - mountDevice:FormatAndMount failed with mount failed: exit status 32
-Mounting command: systemd-run
-Mounting arguments: --description=Kubernetes transient mount for /var/lib/kubelet/plugins/kubernetes.io/azure-disk/mounts/m436970985 --scope -- mount -t xfs -o dir_mode=0777,file_mode=0777,uid=1000,gid=1000,defaults /dev/disk/azure/scsi1/lun2 /var/lib/kubelet/plugins/kubernetes.io/azure-disk/mounts/m436970985
-Output: Running scope as unit run-rb21966413ab449b3a242ae9b0fbc9398.scope.
-mount: wrong fs type, bad option, bad superblock on /dev/sde,
- missing codepage or helper program, or other error
-```
-
-You can mitigate the issue by using one of the following options:
-
-* [Configure the security context for a pod](https://kubernetes.io/docs/tasks/configure-pod-container/security-context/) by setting uid in runAsUser and gid in fsGroup. For example, the following setting runs the pod as root, making every file accessible to it:
-
-```yaml
-apiVersion: v1
-kind: Pod
-metadata:
- name: security-context-demo
-spec:
- securityContext:
- runAsUser: 0
- fsGroup: 0
-```
-
- >[!NOTE]
- > By default, gid and uid are mounted as root (0). If gid or uid is set as non-root, for example 1000, Kubernetes will use `chown` to change all directories and files under that disk. This operation can be time consuming and may make mounting the disk very slow.
-
-* Use `chown` in initContainers to set `GID` and `UID`. For example:
-
-```yaml
-initContainers:
-- name: volume-mount
- image: mcr.microsoft.com/dotnet/runtime-deps:6.0
- command: ["sh", "-c", "chown -R 100:100 /data"]
- volumeMounts:
- - name: <your data volume>
- mountPath: /data
-```
-
-### Large number of Azure Disks causes slow attach/detach
-
-When the number of Azure Disk attach/detach operations targeting a single node VM is larger than 10, or larger than 3 when targeting a single virtual machine scale set pool, the operations may be slower than expected because they're done sequentially. This issue is a known limitation of the in-tree Azure Disk driver. The [Azure Disk CSI driver](https://github.com/kubernetes-sigs/azuredisk-csi-driver) solves this issue by performing attach/detach operations in batches.
-
-### Azure Disk detach failure leading to potential node VM in failed state
-
-In some edge cases, an Azure Disk detach may partially fail and leave the node VM in a failed state.
-
-If your node is in a failed state, you can mitigate the issue by manually updating the VM status using one of the following commands:
-
-* For an availability set-based cluster:
- ```azurecli
- az vm update -n <VM_NAME> -g <RESOURCE_GROUP_NAME>
- ```
-
-* For a VMSS-based cluster:
- ```azurecli
- az vmss update-instances -g <RESOURCE_GROUP_NAME> --name <VMSS_NAME> --instance-id <ID>
- ```
-
-## Azure Files and AKS Troubleshooting
-
-### Azure Files CSI storage driver fails to mount a volume with a secret not in default namespace
-
-If you have configured an Azure Files CSI driver persistent volume or storage class with a storage
-access secret in a namespace other than *default*, the pod does not search in its own namespace
-and returns an error when trying to mount the volume.
-
-This issue has been fixed in the 2022041 release. To mitigate this issue, you have two options:
-
-1. Upgrade the agent node image to the latest release.
-1. Specify the *secretNamespace* setting when configuring the persistent volume or storage class (see the sketch after this list).
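As a sketch of the second option, assuming the Azure Files CSI storage class accepts a `secretNamespace` parameter; the class name and namespace below are placeholders:

```yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: azurefile-csi-custom
provisioner: file.csi.azure.com
parameters:
  skuName: Standard_LRS
  secretNamespace: kube-system   # namespace that holds the storage account access secret
```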
-
-### What are the default mountOptions when using Azure Files?
-
-Recommended settings:
-
-| Kubernetes version | fileMode and dirMode value|
-|--|:--:|
-| 1.12.2 and later | 0777 |
-
-Mount options can be specified on the storage class object. The following example sets *0777*:
-
-```yaml
-kind: StorageClass
-apiVersion: storage.k8s.io/v1
-metadata:
- name: azurefile
-provisioner: kubernetes.io/azure-file
-mountOptions:
- - dir_mode=0777
- - file_mode=0777
- - uid=1000
- - gid=1000
- - mfsymlinks
- - nobrl
- - cache=none
-parameters:
- skuName: Standard_LRS
-```
-
-Some additional useful *mountOptions* settings:
-
-* `mfsymlinks` will make Azure Files mount (cifs) support symbolic links
-* `nobrl` will prevent sending byte range lock requests to the server. This setting is necessary for certain applications that break with cifs style mandatory byte range locks. Most cifs servers don't yet support requesting advisory byte range locks. If not using *nobrl*, applications that break with cifs style mandatory byte range locks may cause error messages similar to:
- ```console
- Error: SQLITE_BUSY: database is locked
- ```
-
-### Error "could not change permissions" when using Azure Files
-
-When running PostgreSQL on the Azure Files plugin, you may see an error similar to:
-
-```console
-initdb: could not change permissions of directory "/var/lib/postgresql/data": Operation not permitted
-fixing permissions on existing directory /var/lib/postgresql/data
-```
-
-This error is caused by the Azure Files plugin using the cifs/SMB protocol. When using the cifs/SMB protocol, the file and directory permissions can't be changed after mounting.
-
-To resolve this issue, use `subPath` together with the Azure Disk plugin.
-
-> [!NOTE]
-> For ext3/4 disk type, there is a lost+found directory after the disk is formatted.
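A minimal sketch of that workaround, mounting a subdirectory of an Azure Disk-backed volume so PostgreSQL gets an empty data directory; the pod, image, and PVC names below are placeholders:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: postgres-demo
spec:
  containers:
  - name: postgres
    image: postgres:14
    volumeMounts:
    - name: pgdata
      mountPath: /var/lib/postgresql/data
      subPath: pgdata               # mount a subdirectory so the lost+found directory doesn't interfere
  volumes:
  - name: pgdata
    persistentVolumeClaim:
      claimName: azure-disk-pvc     # placeholder PVC provisioned from an Azure Disk storage class
```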
-
-### Azure Files has high latency compared to Azure Disk when handling many small files
-
-In some cases, such as handling many small files, you may experience higher latency when using Azure Files compared to Azure Disk.
-
-### Error when enabling "Allow access from selected network" setting on storage account
-
-If you enable *allow access from selected network* on a storage account that's used for dynamic provisioning in AKS, you'll get an error when AKS creates a file share:
-
-```console
-persistentvolume-controller (combined from similar events): Failed to provision volume with StorageClass "azurefile": failed to create share kubernetes-dynamic-pvc-xxx in account xxx: failed to create file share, err: storage: service returned error: StatusCode=403, ErrorCode=AuthorizationFailure, ErrorMessage=This request is not authorized to perform this operation.
-```
-
-This error is because of the Kubernetes *persistentvolume-controller* not being on the network chosen when setting *allow access from selected network*.
-
-You can mitigate the issue by using [static provisioning with Azure Files](azure-files-volume.md).
-
-### Azure Files mount fails because of storage account key changed
-
-If your storage account key has changed, you may see Azure Files mount failures.
-
-You can mitigate this by manually updating the `azurestorageaccountkey` field in an Azure file secret with your base64-encoded storage account key.
-
-To encode your storage account key in base64, you can use `base64`. For example:
-
-```console
-echo -n 'X+ALAAUgMhWHL7QmQ87E1kSfIqLKfgC03Guy7/xk9MyIg2w4Jzqeu60CVw2r/dm6v6E0DWHTnJUEJGVQAoPaBc==' | base64
-```
-
-To update your Azure secret file, use `kubectl edit secret`. For example:
-
-```console
-kubectl edit secret azure-storage-account-{storage-account-name}-secret
-```
-
-After a few minutes, the agent node will retry the Azure File mount with the updated storage key.
--
-### Cluster autoscaler fails to scale with error failed to fix node group sizes
-
-If your cluster autoscaler isn't scaling up or down, you may see an error like the following in the [cluster autoscaler logs][view-master-logs]:
-
-```console
-E1114 09:58:55.367731 1 static_autoscaler.go:239] Failed to fix node group sizes: failed to decrease aks-default-35246781-vmss: attempt to delete existing nodes
-```
-
-This error is caused by an upstream cluster autoscaler race condition. In such a case, the cluster autoscaler ends up with a different node count than the one that's actually in the cluster. To get out of this state, disable and re-enable the [cluster autoscaler][cluster-autoscaler].
--
-### Why do upgrades to Kubernetes 1.16 fail when using node labels with a kubernetes.io prefix?
-
-As of Kubernetes 1.16 [only a defined subset of labels with the kubernetes.io prefix](https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/) can be applied by the kubelet to nodes. AKS cannot remove active labels on your behalf without consent, as it may cause downtime to impacted workloads.
-
-As a result, to mitigate this issue you can:
-
-1. Upgrade your cluster control plane to 1.16 or higher
-2. Add a new node pool on 1.16 or higher without the unsupported kubernetes.io labels
-3. Delete the older node pool
-
-AKS is investigating the capability to mutate active labels on a node pool to improve this mitigation.
--
-<!-- LINKS - internal -->
-[view-master-logs]: monitor-aks-reference.md#resource-logs
-[cluster-autoscaler]: cluster-autoscaler.md
app-service Migration Alternatives https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/migration-alternatives.md
Title: Migrate to App Service Environment v3
description: How to migrate your applications to App Service Environment v3 Previously updated : 9/15/2022 Last updated : 10/19/2022 # Migrate to App Service Environment v3
Scenario: An existing app running on an App Service Environment v1 or App Servic
For any migration method that doesn't use the [migration feature](migrate.md), you'll need to [create the App Service Environment v3](creation.md) and a new subnet using the method of your choice. There are [feature differences](overview.md#feature-differences) between App Service Environment v1/v2 and App Service Environment v3 as well as [networking changes](networking.md) that will involve new (and for internet-facing environments, additional) IP addresses. You'll need to update any infrastructure that relies on these IPs.
-Note that multiple App Service Environments can't exist in a single subnet. If you need to use your existing subnet for your new App Service Environment v3, you'll need to delete the existing App Service Environment before you create a new one. For this scenario, the recommended migration method is to [back up your apps and then restore them](#back-up-and-restore) in the new environment after it gets created and configured. There will be application downtime during this process because of the time it takes to delete the old environment, create the new App Service Environment v3, configure any infrastructure and connected resources to work with the new environment, and deploy your apps onto the new environment.
+Note that multiple App Service Environments can't exist in a single subnet. If you need to use your existing subnet for your new App Service Environment v3, you'll need to delete the existing App Service Environment before you create a new one. There will be application downtime during this process because of the time it takes to delete the old environment, create the new App Service Environment v3, configure any infrastructure and connected resources to work with the new environment, and deploy your apps onto the new environment.
### Checklist before migrating apps
Note that multiple App Service Environments can't exist in a single subnet. If y
App Service Environment v3 uses Isolated v2 App Service plans that are priced and sized differently than those from Isolated plans. Review the [SKU details](https://azure.microsoft.com/pricing/details/app-service/windows/) to understand how your new environment will need to be sized and scaled to ensure appropriate capacity. There's no difference in how you create App Service plans for App Service Environment v3 compared to previous versions.
-## Back up and restore
-
-The [back up and restore](../manage-backup.md) feature allows you to keep your app configuration, file content, and database connected to your app when migrating to your new environment. Make sure you review the [details](../manage-backup.md#automatic-vs-custom-backups) of this feature.
-
-The step-by-step instructions in the current documentation for [backup and restore](../manage-backup.md) should be sufficient to allow you to use this feature. You can select a backup and use that to restore the app to an App Service in your App Service Environment v3.
--
-|Benefits |Limitations |
-|||
-|Quick - should only take 5-10 minutes per app |Support is limited to [certain database types](../manage-backup.md#automatic-vs-custom-backups) |
-|Multiple apps can be restored at the same time (restoration needs to be configured for each app individually) |Old and new environments as well as supporting resources (for example apps, databases, storage accounts, and containers) must all be in the same subscription |
-|In-app MySQL databases are automatically backed up without any configuration |Backups can be up to 10 GB of app and database content, up to 4 GB of which can be the database backup. If the backup size exceeds this limit, you get an error. |
-|Can restore the app to a snapshot of a previous state |Using a [firewall enabled storage account](../../storage/common/storage-network-security.md) as the destination for your backups isn't supported |
-|Can integrate with [Azure Traffic Manager](../../traffic-manager/traffic-manager-overview.md) and [Azure Application Gateway](../../application-gateway/overview.md) to distribute traffic across old and new environments |Using a [private endpoint enabled storage account](../../storage/common/storage-private-endpoints.md) for backup and restore isn't supported |
-|Can create empty web apps to restore to in your new environment before you start restoring to speed up the process | |
## Clone your app to an App Service Environment v3

[Cloning your apps](../app-service-web-app-cloning.md) is another feature that can be used to get your **Windows** apps onto your App Service Environment v3. There are limitations with cloning apps. These limitations are the same as those for the App Service Backup feature; see [Back up an app in Azure App Service](../manage-backup.md#whats-included-in-an-automatic-backup).
To clone an app using the [Azure portal](https://www.portal.azure.com), navigate
## Manually create your apps on an App Service Environment v3
-If the above features don't support your apps or you're looking to take a more manual route, you have the option of deploying your apps following the same process you used for your existing App Service Environment. You don't need to make updates when you deploy your apps to your new environment.
+If the above feature doesn't support your apps or you're looking to take a more manual route, you have the option of deploying your apps following the same process you used for your existing App Service Environment. You don't need to make updates when you deploy your apps to your new environment.
You can export [Azure Resource Manager (ARM) templates](../../azure-resource-manager/templates/overview.md) of your existing apps, App Service plans, and any other supported resources and deploy them in or with your new environment. To export a template for just your app, navigate to your App Service and go to **Export template** under **Automation**.
Once your migration and any testing with your new environment is complete, delet
  Zone pinning isn't a supported feature on App Service Environment v3.
- **What properties of my App Service Environment will change?**
  You'll now be on App Service Environment v3 so be sure to review the [features and feature differences](overview.md#feature-differences) compared to previous versions. For ILB App Service Environment, you'll keep the same ILB IP address. For internet facing App Service Environment, the public IP address and the outbound IP address will change. Note that for internet facing App Service Environment, previously there was a single IP for both inbound and outbound. For App Service Environment v3, they're separate. For more information, see [App Service Environment v3 networking](networking.md#addresses).
+- **Is backup and restore supported for moving apps from App Service Environment v2 to v3?**
+ The [back up and restore](../manage-backup.md) feature doesn't support restoring apps between App Service Environment versions (an app running on App Service Environment v2 can't be restored on an App Service Environment v3).
- **What will happen to my App Service Environment v1/v2 resources after 31 August 2024?** After 31 August 2024, if you haven't migrated to App Service Environment v3, your App Service Environment v1/v2s and the apps deployed in them will no longer be available. App Service Environment v1/v2 is hosted on App Service scale units running on [Cloud Services (classic)](../../cloud-services/cloud-services-choose-me.md) architecture that will be [retired on 31 August 2024](https://azure.microsoft.com/updates/cloud-services-retirement-announcement/). Because of this, [App Service Environment v1/v2 will no longer be available after that date](https://azure.microsoft.com/updates/app-service-environment-v1-and-v2-retirement-announcement/). Migrate to App Service Environment v3 to keep your apps running or save or back up any resources or data that you need to maintain.
applied-ai-services Concept Business Card https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/concept-business-card.md
The following tools are supported by Form Recognizer v2.1:
| Feature | Resources | |-|-|
-|**Business card model**| <ul><li>[**Form Recognizer labeling tool**](https://fott-2-1.azurewebsites.net/prebuilts-analyze)</li><li>[**REST API**](quickstarts/try-sdk-rest-api.md?pivots=programming-language-rest-api#analyze-business-cards)</li><li>[**Client-library SDK**](quickstarts/try-sdk-rest-api.md)</li><li>[**Form Recognizer Docker container**](containers/form-recognizer-container-install-run.md?tabs=business-card#run-the-container-with-the-docker-compose-up-command)</li></ul>|
+|**Business card model**| <ul><li>[**Form Recognizer labeling tool**](https://fott-2-1.azurewebsites.net/prebuilts-analyze)</li><li>[**REST API**](/azure/applied-ai-services/form-recognizer/how-to-guides/use-sdk-rest-api?view=form-recog-2.1.0&preserve-view=true&tabs=windows&pivots=programming-language-rest-api#analyze-business-cards)</li><li>[**Client-library SDK**](/azure/applied-ai-services/form-recognizer/how-to-guides/v2-1-sdk-rest-api)</li><li>[**Form Recognizer Docker container**](containers/form-recognizer-container-install-run.md?tabs=business-card#run-the-container-with-the-docker-compose-up-command)</li></ul>|
### Try Form Recognizer
See how data, including name, job title, address, email, and company name, is ex
* Complete a Form Recognizer quickstart: > [!div class="nextstepaction"]
- > [Form Recognizer quickstart](quickstarts/try-sdk-rest-api.md)
+ > [Form Recognizer quickstart](/azure/applied-ai-services/form-recognizer/how-to-guides/v2-1-sdk-rest-api)
* Explore our REST API:
applied-ai-services Concept Composed Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/concept-composed-models.md
The following resources are supported by Form Recognizer v2.1:
| Feature | Resources | |-|-|
-|_**Custom model**_| <ul><li>[Form Recognizer labeling tool](https://fott-2-1.azurewebsites.net)</li><li>[REST API](quickstarts/try-sdk-rest-api.md?pivots=programming-language-rest-api#analyze-forms-with-a-custom-model)</li><li>[Client library SDK](quickstarts/try-sdk-rest-api.md)</li><li>[Form Recognizer Docker container](containers/form-recognizer-container-install-run.md?tabs=custom#run-the-container-with-the-docker-compose-up-command)</li></ul>|
+|_**Custom model**_| <ul><li>[Form Recognizer labeling tool](https://fott-2-1.azurewebsites.net)</li><li>[REST API](/azure/applied-ai-services/form-recognizer/how-to-guides/use-sdk-rest-api?view=form-recog-2.1.0&preserve-view=true&tabs=windows&pivots=programming-language-rest-api#analyze-forms-with-a-custom-model)</li><li>[Client library SDK](/azure/applied-ai-services/form-recognizer/how-to-guides/v2-1-sdk-rest-api)</li><li>[Form Recognizer Docker container](containers/form-recognizer-container-install-run.md?tabs=custom#run-the-container-with-the-docker-compose-up-command)</li></ul>|
| _**Composed model**_ |<ul><li>[Form Recognizer labeling tool](https://fott-2-1.azurewebsites.net/)</li><li>[REST API](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-1/operations/Compose)</li><li>[C# SDK](/dotnet/api/azure.ai.formrecognizer.training.createcomposedmodeloperation?view=azure-dotnet&preserve-view=true)</li><li>[Java SDK](/java/api/com.azure.ai.formrecognizer.models.createcomposedmodeloptions?view=azure-java-stable&preserve-view=true)</li><li>JavaScript SDK</li><li>[Python SDK](/python/api/azure-ai-formrecognizer/azure.ai.formrecognizer.formtrainingclient?view=azure-python#azure-ai-formrecognizer-formtrainingclient-begin-create-composed-model&preserve-view=true)</li></ul>| ::: moniker-end
applied-ai-services Concept Custom https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/concept-custom.md
The following tools are supported by Form Recognizer v2.1:
| Feature | Resources | Model ID| |||:|
-|Custom model| <ul><li>[Form Recognizer labeling tool](https://fott-2-1.azurewebsites.net)</li><li>[REST API](quickstarts/try-sdk-rest-api.md?pivots=programming-language-rest-api#analyze-forms-with-a-custom-model)</li><li>[Client library SDK](quickstarts/try-sdk-rest-api.md)</li><li>[Form Recognizer Docker container](containers/form-recognizer-container-install-run.md?tabs=custom#run-the-container-with-the-docker-compose-up-command)</li></ul>|***custom-model-id***|
+|Custom model| <ul><li>[Form Recognizer labeling tool](https://fott-2-1.azurewebsites.net)</li><li>[REST API](/azure/applied-ai-services/form-recognizer/how-to-guides/use-sdk-rest-api?view=form-recog-2.1.0&preserve-view=true&tabs=windows&pivots=programming-language-rest-api#analyze-forms-with-a-custom-model)</li><li>[Client library SDK](/azure/applied-ai-services/form-recognizer/how-to-guides/v2-1-sdk-rest-api)</li><li>[Form Recognizer Docker container](containers/form-recognizer-container-install-run.md?tabs=custom#run-the-container-with-the-docker-compose-up-command)</li></ul>|***custom-model-id***|
### Try Form Recognizer
applied-ai-services Concept Id Document https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/concept-id-document.md
The following tools are supported by Form Recognizer v2.1:
| Feature | Resources | |-|-|
-|**ID document model**| <ul><li>[**Form Recognizer labeling tool**](https://fott-2-1.azurewebsites.net/prebuilts-analyze)</li><li>[**REST API**](quickstarts/try-sdk-rest-api.md?pivots=programming-language-rest-api#analyze-identity-id-documents)</li><li>[**Client-library SDK**](quickstarts/try-sdk-rest-api.md)</li><li>[**Form Recognizer Docker container**](containers/form-recognizer-container-install-run.md?tabs=id-document#run-the-container-with-the-docker-compose-up-command)</li></ul>|
+|**ID document model**| <ul><li>[**Form Recognizer labeling tool**](https://fott-2-1.azurewebsites.net/prebuilts-analyze)</li><li>[**REST API**](/azure/applied-ai-services/form-recognizer/how-to-guides/use-sdk-rest-api?view=form-recog-2.1.0&preserve-view=true&tabs=windows&pivots=programming-language-rest-api#analyze-identity-id-documents)</li><li>[**Client-library SDK**](/azure/applied-ai-services/form-recognizer/how-to-guides/v2-1-sdk-rest-api)</li><li>[**Form Recognizer Docker container**](containers/form-recognizer-container-install-run.md?tabs=id-document#run-the-container-with-the-docker-compose-up-command)</li></ul>|
### Try Form Recognizer
Extract data, including name, birth date, machine-readable zone, and expiration
* Complete a Form Recognizer quickstart: > [!div class="nextstepaction"]
- > [Form Recognizer quickstart](quickstarts/try-sdk-rest-api.md)
+ > [Form Recognizer quickstart](/azure/applied-ai-services/form-recognizer/how-to-guides/v2-1-sdk-rest-api)
* Explore our REST API:
applied-ai-services Concept Invoice https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/concept-invoice.md
The following tools are supported by Form Recognizer v2.1:
| Feature | Resources | |-|-|
-|**Invoice model**| <ul><li>[**Form Recognizer labeling tool**](https://fott-2-1.azurewebsites.net/prebuilts-analyze)</li><li>[**REST API**](quickstarts/try-sdk-rest-api.md?pivots=programming-language-rest-api#analyze-invoices)</li><li>[**Client-library SDK**](quickstarts/try-sdk-rest-api.md)</li><li>[**Form Recognizer Docker container**](containers/form-recognizer-container-install-run.md?tabs=invoice#run-the-container-with-the-docker-compose-up-command)</li></ul>|
+|**Invoice model**| <ul><li>[**Form Recognizer labeling tool**](https://fott-2-1.azurewebsites.net/prebuilts-analyze)</li><li>[**REST API**](/azure/applied-ai-services/form-recognizer/how-to-guides/use-sdk-rest-api?view=form-recog-2.1.0&preserve-view=true&tabs=windows&pivots=programming-language-rest-api#analyze-invoices)</li><li>[**Client-library SDK**](/azure/applied-ai-services/form-recognizer/how-to-guides/v2-1-sdk-rest-api)</li><li>[**Form Recognizer Docker container**](containers/form-recognizer-container-install-run.md?tabs=invoice#run-the-container-with-the-docker-compose-up-command)</li></ul>|
### Try Form Recognizer
Keys can also exist in isolation when the model detects that a key exists, with
* Complete a Form Recognizer quickstart: > [!div class="nextstepaction"]
- > [Form Recognizer quickstart](quickstarts/try-sdk-rest-api.md)
+ > [Form Recognizer quickstart](/azure/applied-ai-services/form-recognizer/how-to-guides/v2-1-sdk-rest-api)
* Explore our REST API: > [!div class="nextstepaction"]
applied-ai-services Concept Layout https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/concept-layout.md
The following tools are supported by Form Recognizer v2.1:
| Feature | Resources | |-|-|
-|**Layout API**| <ul><li>[**Form Recognizer labeling tool**](https://fott-2-1.azurewebsites.net/layout-analyze)</li><li>[**REST API**](quickstarts/try-sdk-rest-api.md?pivots=programming-language-rest-api#analyze-layout)</li><li>[**Client-library SDK**](quickstarts/try-sdk-rest-api.md)</li><li>[**Form Recognizer Docker container**](containers/form-recognizer-container-install-run.md?branch=main&tabs=layout#run-the-container-with-the-docker-compose-up-command)</li></ul>|
+|**Layout API**| <ul><li>[**Form Recognizer labeling tool**](https://fott-2-1.azurewebsites.net/layout-analyze)</li><li>[**REST API**](/azure/applied-ai-services/form-recognizer/how-to-guides/use-sdk-rest-api?view=form-recog-2.1.0&preserve-view=true&tabs=windows&pivots=programming-language-rest-api#analyze-layout)</li><li>[**Client-library SDK**](/azure/applied-ai-services/form-recognizer/how-to-guides/v2-1-sdk-rest-api)</li><li>[**Form Recognizer Docker container**](containers/form-recognizer-container-install-run.md?branch=main&tabs=layout#run-the-container-with-the-docker-compose-up-command)</li></ul>|
## Try Form Recognizer
For large multi-page documents, use the `pages` query parameter to indicate spec
* Complete a Form Recognizer quickstart: > [!div class="nextstepaction"]
- > [Form Recognizer quickstart](quickstarts/try-sdk-rest-api.md)
+ > [Form Recognizer quickstart](/azure/applied-ai-services/form-recognizer/how-to-guides/v2-1-sdk-rest-api)
* Explore our REST API:
applied-ai-services Concept Model Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/concept-model-overview.md
Learn how to use Form Recognizer v3.0 in your applications by following our [**F
* [Learn how to process your own forms and documents](quickstarts/try-sample-label-tool.md) with our [Form Recognizer sample tool](https://fott-2-1.azurewebsites.net/)
-* Complete a [Form Recognizer quickstart](quickstarts/try-sdk-rest-api.md) and get started creating a document processing app in the development language of your choice.
+* Complete a [Form Recognizer quickstart](/azure/applied-ai-services/form-recognizer/how-to-guides/v2-1-sdk-rest-api) and get started creating a document processing app in the development language of your choice.
applied-ai-services Concept Receipt https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/concept-receipt.md
The following tools are supported by Form Recognizer v2.1:
| Feature | Resources | |-|-|
-|**Receipt model**| <ul><li>[**Form Recognizer labeling tool**](https://fott-2-1.azurewebsites.net/prebuilts-analyze)</li><li>[**REST API**](quickstarts/try-sdk-rest-api.md?pivots=programming-language-rest-api#analyze-receipts)</li><li>[**Client-library SDK**](quickstarts/try-sdk-rest-api.md)</li><li>[**Form Recognizer Docker container**](containers/form-recognizer-container-install-run.md?tabs=receipt#run-the-container-with-the-docker-compose-up-command)</li></ul>|
+|**Receipt model**| <ul><li>[**Form Recognizer labeling tool**](https://fott-2-1.azurewebsites.net/prebuilts-analyze)</li><li>[**REST API**](/azure/applied-ai-services/form-recognizer/how-to-guides/use-sdk-rest-api?view=form-recog-2.1.0&preserve-view=true&tabs=windows&pivots=programming-language-rest-api#analyze-receipts)</li><li>[**Client-library SDK**](/azure/applied-ai-services/form-recognizer/how-to-guides/v2-1-sdk-rest-api)</li><li>[**Form Recognizer Docker container**](containers/form-recognizer-container-install-run.md?tabs=receipt#run-the-container-with-the-docker-compose-up-command)</li></ul>|
### Try Form Recognizer
See how data, including time and date of transactions, merchant information, and
* Complete a Form Recognizer quickstart: > [!div class="nextstepaction"]
- > [Form Recognizer quickstart](quickstarts/try-sdk-rest-api.md)
+ > [Form Recognizer quickstart](/azure/applied-ai-services/form-recognizer/how-to-guides/v2-1-sdk-rest-api)
* Explore our REST API:
applied-ai-services Label Tool https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/label-tool.md
Once you've defined your table tag, tag the cell values.
Choose the Train icon on the left pane to open the Training page. Then select the **Train** button to begin training the model. Once the training process completes, you'll see the following information:
-* **Model ID** - The ID of the model that was created and trained. Each training call creates a new model with its own ID. Copy this string to a secure location; you'll need it if you want to do prediction calls through the [REST API](./quickstarts/try-sdk-rest-api.md?pivots=programming-language-rest-api&tabs=preview%2cv2-1) or [client library guide](./quickstarts/try-sdk-rest-api.md).
+* **Model ID** - The ID of the model that was created and trained. Each training call creates a new model with its own ID. Copy this string to a secure location; you'll need it if you want to do prediction calls through the [REST API](/azure/applied-ai-services/form-recognizer/how-to-guides/v2-1-sdk-rest-api?pivots=programming-language-rest-api&tabs=preview%2cv2-1) or [client library guide](/azure/applied-ai-services/form-recognizer/how-to-guides/v2-1-sdk-rest-api).
* **Average Accuracy** - The model's average accuracy. You can improve model accuracy by adding and labeling more forms, then retraining to create a new model. We recommend starting by labeling five forms and adding more forms as needed. * The list of tags, and the estimated accuracy per tag.
In this quickstart, you've learned how to use the Form Recognizer Sample Labelin
> [Train with labels using Python](https://github.com/Azure-Samples/cognitive-services-quickstart-code/blob/master/python/FormRecognizer/rest/python-labeled-data.md) * [What is Form Recognizer?](overview.md)
-* [Form Recognizer quickstart](./quickstarts/try-sdk-rest-api.md)
+* [Form Recognizer quickstart](/azure/applied-ai-services/form-recognizer/how-to-guides/v2-1-sdk-rest-api)
applied-ai-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/overview.md
Use the links in the table to learn more about each model and browse the API ref
| Model| Description | Development options | |-|--|-|
-|[**Layout API**](concept-layout.md) | Extraction and analysis of text, selection marks, tables, and bounding box coordinates, from forms and documents. | <ul><li>[**Form Recognizer labeling tool**](quickstarts/try-sample-label-tool.md#analyze-layout)</li><li>[**REST API**](quickstarts/get-started-v2-1-sdk-rest-api.md#try-it-layout-model)</li><li>[**Client-library SDK**](quickstarts/try-sdk-rest-api.md)</li><li>[**Form Recognizer Docker container**](containers/form-recognizer-container-install-run.md?branch=main&tabs=layout#run-the-container-with-the-docker-compose-up-command)</li></ul>|
-|[**Custom model**](concept-custom.md) | Extraction and analysis of data from forms and documents specific to distinct business data and use cases.| <ul><li>[**Form Recognizer labeling tool**](quickstarts/try-sample-label-tool.md#train-a-custom-form-model)</li><li>[**REST API**](quickstarts/try-sdk-rest-api.md)</li><li>[**Client-library SDK**](how-to-guides/try-sdk-rest-api.md)</li><li>[**Form Recognizer Docker container**](containers/form-recognizer-container-install-run.md?tabs=custom#run-the-container-with-the-docker-compose-up-command)</li></ul>|
-|[**Invoice model**](concept-invoice.md) | Automated data processing and extraction of key information from sales invoices. | <ul><li>[**Form Recognizer labeling tool**](quickstarts/try-sample-label-tool.md#analyze-using-a-prebuilt-model)</li><li>[**REST API**](quickstarts/get-started-v2-1-sdk-rest-api.md#try-it-prebuilt-model)</li><li>[**Client-library SDK**](quickstarts/try-sdk-rest-api.md)</li><li>[**Form Recognizer Docker container**](containers/form-recognizer-container-install-run.md?tabs=invoice#run-the-container-with-the-docker-compose-up-command)</li></ul>|
-|[**Receipt model**](concept-receipt.md) | Automated data processing and extraction of key information from sales receipts.| <ul><li>[**Form Recognizer labeling tool**](quickstarts/try-sample-label-tool.md#analyze-using-a-prebuilt-model)</li><li>[**REST API**](quickstarts/get-started-v2-1-sdk-rest-api.md#try-it-prebuilt-model)</li><li>[**Client-library SDK**](how-to-guides/try-sdk-rest-api.md)</li><li>[**Form Recognizer Docker container**](containers/form-recognizer-container-install-run.md?tabs=receipt#run-the-container-with-the-docker-compose-up-command)</li></ul>|
-|[**ID document model**](concept-id-document.md) | Automated data processing and extraction of key information from US driver's licenses and international passports.| <ul><li>[**Form Recognizer labeling tool**](quickstarts/try-sample-label-tool.md#analyze-using-a-prebuilt-model)</li><li>[**REST API**](quickstarts/get-started-v2-1-sdk-rest-api.md#try-it-prebuilt-model)</li><li>[**Client-library SDK**](how-to-guides/try-sdk-rest-api.md)</li><li>[**Form Recognizer Docker container**](containers/form-recognizer-container-install-run.md?tabs=id-document#run-the-container-with-the-docker-compose-up-command)</li></ul>|
-|[**Business card model**](concept-business-card.md) | Automated data processing and extraction of key information from business cards.| <ul><li>[**Form Recognizer labeling tool**](quickstarts/try-sample-label-tool.md#analyze-using-a-prebuilt-model)</li><li>[**REST API**](quickstarts/get-started-v2-1-sdk-rest-api.md#try-it-prebuilt-model)</li><li>[**Client-library SDK**](how-to-guides/try-sdk-rest-api.md)</li><li>[**Form Recognizer Docker container**](containers/form-recognizer-container-install-run.md?tabs=business-card#run-the-container-with-the-docker-compose-up-command)</li></ul>|
+|[**Layout API**](concept-layout.md) | Extraction and analysis of text, selection marks, tables, and bounding box coordinates, from forms and documents. | <ul><li>[**Form Recognizer labeling tool**](quickstarts/try-sample-label-tool.md#analyze-layout)</li><li>[**REST API**](quickstarts/get-started-v2-1-sdk-rest-api.md#try-it-layout-model)</li><li>[**Client-library SDK**](/azure/applied-ai-services/form-recognizer/how-to-guides/v2-1-sdk-rest-api)</li><li>[**Form Recognizer Docker container**](containers/form-recognizer-container-install-run.md?branch=main&tabs=layout#run-the-container-with-the-docker-compose-up-command)</li></ul>|
+|[**Custom model**](concept-custom.md) | Extraction and analysis of data from forms and documents specific to distinct business data and use cases.| <ul><li>[**Form Recognizer labeling tool**](quickstarts/try-sample-label-tool.md#train-a-custom-form-model)</li><li>[**REST API**](/azure/applied-ai-services/form-recognizer/how-to-guides/v2-1-sdk-rest-api)</li><li>[**Client-library SDK**](/azure/applied-ai-services/form-recognizer/how-to-guides/v2-1-sdk-rest-api)</li><li>[**Form Recognizer Docker container**](containers/form-recognizer-container-install-run.md?tabs=custom#run-the-container-with-the-docker-compose-up-command)</li></ul>|
+|[**Invoice model**](concept-invoice.md) | Automated data processing and extraction of key information from sales invoices. | <ul><li>[**Form Recognizer labeling tool**](quickstarts/try-sample-label-tool.md#analyze-using-a-prebuilt-model)</li><li>[**REST API**](quickstarts/get-started-v2-1-sdk-rest-api.md#try-it-prebuilt-model)</li><li>[**Client-library SDK**](/azure/applied-ai-services/form-recognizer/how-to-guides/v2-1-sdk-rest-api)</li><li>[**Form Recognizer Docker container**](containers/form-recognizer-container-install-run.md?tabs=invoice#run-the-container-with-the-docker-compose-up-command)</li></ul>|
+|[**Receipt model**](concept-receipt.md) | Automated data processing and extraction of key information from sales receipts.| <ul><li>[**Form Recognizer labeling tool**](quickstarts/try-sample-label-tool.md#analyze-using-a-prebuilt-model)</li><li>[**REST API**](quickstarts/get-started-v2-1-sdk-rest-api.md#try-it-prebuilt-model)</li><li>[**Client-library SDK**](/azure/applied-ai-services/form-recognizer/how-to-guides/v2-1-sdk-rest-api)</li><li>[**Form Recognizer Docker container**](containers/form-recognizer-container-install-run.md?tabs=receipt#run-the-container-with-the-docker-compose-up-command)</li></ul>|
+|[**ID document model**](concept-id-document.md) | Automated data processing and extraction of key information from US driver's licenses and international passports.| <ul><li>[**Form Recognizer labeling tool**](quickstarts/try-sample-label-tool.md#analyze-using-a-prebuilt-model)</li><li>[**REST API**](quickstarts/get-started-v2-1-sdk-rest-api.md#try-it-prebuilt-model)</li><li>[**Client-library SDK**](/azure/applied-ai-services/form-recognizer/how-to-guides/v2-1-sdk-rest-api)</li><li>[**Form Recognizer Docker container**](containers/form-recognizer-container-install-run.md?tabs=id-document#run-the-container-with-the-docker-compose-up-command)</li></ul>|
+|[**Business card model**](concept-business-card.md) | Automated data processing and extraction of key information from business cards.| <ul><li>[**Form Recognizer labeling tool**](quickstarts/try-sample-label-tool.md#analyze-using-a-prebuilt-model)</li><li>[**REST API**](quickstarts/get-started-v2-1-sdk-rest-api.md#try-it-prebuilt-model)</li><li>[**Client-library SDK**](/azure/applied-ai-services/form-recognizer/how-to-guides/v2-1-sdk-rest-api)</li><li>[**Form Recognizer Docker container**](containers/form-recognizer-container-install-run.md?tabs=business-card#run-the-container-with-the-docker-compose-up-command)</li></ul>|
::: moniker-end
applied-ai-services V3 Migration Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/v3-migration-guide.md
In this migration guide, you've learned how to upgrade your existing Form Recogn
* [Review the new REST API](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/AnalyzeDocument) * [What is Form Recognizer?](overview.md)
-* [Form Recognizer quickstart](./quickstarts/try-sdk-rest-api.md)
+* [Form Recognizer quickstart](/azure/applied-ai-services/form-recognizer/how-to-guides/v2-1-sdk-rest-api)
applied-ai-services Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/whats-new.md
pip package version 3.1.0b4
> [Learn more about Layout extraction](concept-layout.md)
-* **Client library update** - The latest versions of the [client libraries](./quickstarts/try-sdk-rest-api.md) for .NET, Python, Java, and JavaScript support the Form Recognizer 2.1 API.
+* **Client library update** - The latest versions of the [client libraries](/azure/applied-ai-services/form-recognizer/how-to-guides/v2-1-sdk-rest-api) for .NET, Python, Java, and JavaScript support the Form Recognizer 2.1 API.
* **New language supported: Japanese** - The following new language is now supported for `AnalyzeLayout` and `AnalyzeCustomForm`: Japanese (`ja`). [Language support](language-support.md) * **Text line style indication (handwritten/other) (Latin languages only)** - Form Recognizer now outputs an `appearance` object classifying whether each text line is handwritten style or not, along with a confidence score. This feature is supported only for Latin languages. * **Quality improvements** - Extraction improvements, including better single-digit extraction.
pip package version 3.1.0b4
**v2.0** includes the following update:
-* The [client libraries](./quickstarts/try-sdk-rest-api.md) for NET, Python, Java, and JavaScript have entered General Availability.
+* The [client libraries](/azure/applied-ai-services/form-recognizer/how-to-guides/v2-1-sdk-rest-api) for .NET, Python, Java, and JavaScript have entered General Availability.
**New samples** are available on GitHub.
The JSON responses for all API calls have new formats. Some keys and values have
## Next steps
-Complete a [quickstart](./quickstarts/try-sdk-rest-api.md) to get started writing a forms processing app with Form Recognizer in the development language of your choice.
+Complete a [quickstart](/azure/applied-ai-services/form-recognizer/how-to-guides/v2-1-sdk-rest-api) to get started writing a forms processing app with Form Recognizer in the development language of your choice.
## See also
automation Manage Runbooks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/manage-runbooks.md
When you test a runbook, the [Draft version](#publish-a-runbook) is executed and
Even though the Draft version is being run, the runbook still executes normally and performs any actions against resources in the environment. For this reason, you should only test runbooks on non-production resources.
+> [!NOTE]
+> All runbook executions are logged in the **Activity Log** of the Automation account with the operation name **Create an Azure Automation job**. However, when a runbook is run in the test pane (which executes the Draft version), the execution is logged with the operation name **Write an Azure Automation runbook draft**. Select the **Operation** and **JSON** tabs to see the scope ending with *../runbooks/(runbook name)/draft/testjob*.
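For example, a minimal sketch of checking those entries with the Az PowerShell module (the resource group name is a placeholder, and output property shapes can vary slightly by Az.Monitor version) might be:

```powershell
# Pull the last day of activity log entries for the Automation account's resource group
# and inspect the operation names to spot draft (test pane) executions.
Get-AzActivityLog -ResourceGroupName "myAutomationRG" -StartTime (Get-Date).AddDays(-1) |
    Format-List EventTimestamp, OperationName, ResourceId
```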
+ The procedure to test each [type of runbook](automation-runbook-types.md) is the same. There's no difference in testing between the textual editor and the graphical editor in the Azure portal. 1. Open the Draft version of the runbook in either the [textual editor](automation-edit-textual-runbook.md) or the [graphical editor](automation-graphical-authoring-intro.md).
automation Migrate Run As Accounts Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/migrate-run-as-accounts-managed-identity.md
The following examples of runbook scripts fetch the Resource Manager resources b
# [Run As account](#tab/run-as-account)
-```powershell
+```powershell-interactive
$connectionName = "AzureRunAsConnection" try {
The following examples of runbook scripts fetch the Resource Manager resources b
>[!NOTE] > Enable appropriate RBAC permissions for the system identity of this Automation account. Otherwise, the runbook might fail.
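For example, granting such a permission to the system-assigned identity could look like the following sketch (the object ID, role, and scope are placeholders; pick the least-privileged role your runbook actually needs):

```powershell
# Grant the Automation account's system-assigned managed identity access
# over the target scope so the runbook's resource queries succeed.
New-AzRoleAssignment `
    -ObjectId "<principal-id-of-the-system-assigned-identity>" `
    -RoleDefinitionName "Reader" `
    -Scope "/subscriptions/<subscription-id>"
```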
- ```powershell
+ ```powershell-interactive
try { "Logging in to Azure..."
The following examples of runbook scripts fetch the Resource Manager resources b
``` # [User-assigned managed identity](#tab/ua-managed-identity)
-```powershell
+```powershell-interactive
try {
availability-zones Migrate App Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/availability-zones/migrate-app-service.md
description: Learn how to migrate Azure App Service to availability zone support
Previously updated : 08/03/2022 Last updated : 10/19/2022
Azure App Service can be deployed into [Availability Zones (AZ)](../availability
An App Service lives in an App Service plan (ASP), and the App Service plan exists in a single scale unit. App Services are zonal services, which means that App Services can be deployed using one of the following methods: - For App Services that aren't configured to be zone redundant, the VM instances are placed in a single zone that is selected by the platform in the selected region.-- For App Services that are configured to be zone redundant, the platform automatically spreads the VM instances in the App Service plan across all three zones in the selected region. If a VM instance capacity larger than three is specified and the number of instances is divisible by three, the instances will be spread evenly. Otherwise, instance counts beyond 3*N will get spread across the remaining one or two zones.+
+- For App Services that are configured to be zone redundant, the platform automatically spreads the VM instances in the App Service plan across all three zones in the selected region. If a VM instance capacity larger than three is specified and the number of instances is a multiple of three (3 * N), the instances are spread evenly across the zones. However, if the number of instances is not a multiple of three, the remaining one or two instances are spread across a subset of the zones.
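As an illustration of that arithmetic only (the platform decides actual placement), the following PowerShell sketch shows how an example capacity of eight instances would divide across three zones:

```powershell
# Illustrative arithmetic: how 8 instances divide across 3 availability zones.
$instanceCount = 8
$zoneCount     = 3
$perZone   = [math]::Floor($instanceCount / $zoneCount)  # every zone gets at least this many
$remainder = $instanceCount % $zoneCount                  # leftover instances land in some zones
1..$zoneCount | ForEach-Object {
    $extra = if ($_ -le $remainder) { 1 } else { 0 }
    "Zone ${_}: $($perZone + $extra) instance(s)"
}
```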
## Prerequisites
azure-arc Validation Program https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/validation-program.md
To see how all Azure Arc-enabled components are validated, see [Validation progr
## Partners
-### Cisco
-
-|Solution and version | Kubernetes version | Azure Arc-enabled data services version | SQL engine version | PostgreSQL server version
-|--|--|--|--|--|
-|Cisco Hyperflex on VMware <br/> Cisco IKS ESXi 6.7 U3 |1.21.13|v1.9.0_2022-07-12|16.0.312.4243| Not validated |
- ### Dell |Solution and version | Kubernetes version | Azure Arc-enabled data services version | SQL engine version | PostgreSQL server version
azure-arc Manage Automatic Vm Extension Upgrade https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/manage-automatic-vm-extension-upgrade.md
If you continue to have trouble upgrading an extension, you can [disable automat
## Supported extensions
-Automatic extension upgrade supports the following extensions (and more are added periodically):
+Automatic extension upgrade supports the following extensions:
-- Azure Monitor Agent - Linux and Windows-- Azure Security agent - Linux and Windows
+- Azure Monitor agent - Linux and Windows
+- Log Analytics agent (OMS agent) - Linux only
- Dependency agent - Linux and Windows
+- Azure Security agent - Linux and Windows
- Key Vault Extension - Linux only-- Log Analytics agent (OMS agent) - Linux only
+- Azure Update Management Center - Linux and Windows
+- Azure Automation Hybrid Runbook Worker - Linux and Windows
+- Azure Arc-enabled SQL Server agent - Windows only
+
+More extensions will be added over time. Extensions that do not support automatic extension upgrade today are still configured to enable automatic upgrades by default. This setting will have no effect until the extension publisher chooses to support automatic upgrades.
## Manage automatic extension upgrade
azure-arc Onboard Group Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/onboard-group-policy.md
Title: Connect machines at scale using group policy description: In this article, you learn how to connect machines to Azure using Azure Arc-enabled servers using group policy. Previously updated : 05/25/2022 Last updated : 10/18/2022
try
"Installation Complete" >> $logpath }
- & "$env:ProgramW6432\AzureConnectedMachineAgent\azcmagent.exe" connect --config "$InstallationFolder\$ConfigFilename" >> $logpath
+ & "$env:ProgramW6432\AzureConnectedMachineAgent\azcmagent.exe" connect --config "$InstallationFolder\$ConfigFilename" --correlation-id "478b97c2-9310-465a-87df-f21e66c2b248" >> $logpath
if ($LASTEXITCODE -ne 0) { throw "Failed during azcmagent connect: $LASTEXITCODE" }
azure-arc Quickstart Connect System Center Virtual Machine Manager To Arc https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/system-center-virtual-machine-manager/quickstart-connect-system-center-virtual-machine-manager-to-arc.md
description: In this QuickStart, you will learn how to use the helper script to
Previously updated : 09/14/2022 Last updated : 10/19/2022
azure-functions Durable Functions Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/durable/durable-functions-overview.md
The primary use case for Durable Functions is simplifying complex, stateful coor
### <a name="chaining"></a>Pattern #1: Function chaining
-In the function chaining pattern, a sequence of functions executes in a specific order. In this pattern, the output of one function is applied to the input of another function.
+In the function chaining pattern, a sequence of functions executes in a specific order. In this pattern, the output of one function is applied to the input of another function. The use of queues between each function ensures that the system stays durable and scalable, even though there is a flow of control from one function to the next.
+ ![A diagram of the function chaining pattern](./media/durable-functions-concepts/function-chaining.png)
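For example, a minimal PowerShell orchestrator sketch of this pattern (activity function names F1 through F3 are placeholders, and the Durable Functions PowerShell support is assumed to be installed) could look like this:

```powershell
# run.ps1 for an orchestrator function: each activity's output feeds the next activity.
param($Context)

$x = Invoke-DurableActivity -FunctionName 'F1' -Input $Context.Input
$y = Invoke-DurableActivity -FunctionName 'F2' -Input $x
$z = Invoke-DurableActivity -FunctionName 'F3' -Input $y

return $z
```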
azure-functions Functions Bindings Event Grid Trigger https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-event-grid-trigger.md
For explanations of the common and event-specific properties, see [Event propert
## Next steps
+* If you have questions, submit an issue to the team [here](https://github.com/Azure/azure-functions-eventgrid-extension/issues)
* [Dispatch an Event Grid event](./functions-bindings-event-grid-output.md) [EventGridEvent]: /dotnet/api/microsoft.azure.eventgrid.models.eventgridevent
azure-functions Functions Bindings Event Grid https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-event-grid.md
The Event Grid output binding is only available for Functions 2.x and higher. Ev
## Next steps
+* If you have questions, submit an issue to the team [here](https://github.com/Azure/azure-functions-eventgrid-extension/issues)
* [Event Grid trigger][trigger] * [Event Grid output binding][binding] * [Run a function when an Event Grid event is dispatched](./functions-bindings-event-grid-trigger.md)
azure-functions Functions Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-get-started.md
zone_pivot_groups: programming-languages-set-functions-lang-workers
## Introduction
-[Azure Functions](./functions-overview.md) allows you to implement your system's logic into readily-available blocks of code. These code blocks are called "functions".
+[Azure Functions](./functions-overview.md) allows you to implement your system's logic as event-driven, readily-available blocks of code. These code blocks are called "functions".
Use the following resources to get started.
azure-functions Functions Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-overview.md
Azure Functions is a serverless solution that allows you to write less code, maintain less infrastructure, and save on costs. Instead of worrying about deploying and maintaining servers, the cloud infrastructure provides all the up-to-date resources needed to keep your applications running.
-You focus on the pieces of code that matter most to you, and Azure Functions handles the rest.<br /><br />
+You focus on the code that matters most to you, in the most productive language for you, and Azure Functions handles the rest.<br /><br />
> [!VIDEO https://www.youtube.com/embed/8-jz5f_JyEQ]
The following are a common, _but by no means exhaustive_, set of scenarios for A
| | | | **Build a web API** | Implement an endpoint for your web applications using the [HTTP trigger](./functions-bindings-http-webhook.md) | | **Process file uploads** | Run code when a file is uploaded or changed in [blob storage](./functions-bindings-storage-blob.md) |
-| **Build a serverless workflow** | Chain a series of functions together using [durable functions](./durable/durable-functions-overview.md) |
+| **Build a serverless workflow** | Create an event-driven workflow from a series of functions using [durable functions](./durable/durable-functions-overview.md) |
| **Respond to database changes** | Run custom logic when a document is created or updated in [Azure Cosmos DB](./functions-bindings-cosmosdb-v2.md) | | **Run scheduled tasks** | Execute code on [pre-defined timed intervals](./functions-bindings-timer.md) | | **Create reliable message queue systems** | Process message queues using [Queue Storage](./functions-bindings-storage-queue.md), [Service Bus](./functions-bindings-service-bus.md), or [Event Hubs](./functions-bindings-event-hubs.md) | | **Analyze IoT data streams** | Collect and process [data from IoT devices](./functions-bindings-event-iot.md) | | **Process data in real time** | Use [Functions and SignalR](./functions-bindings-signalr-service.md) to respond to data in the moment |
+These scenarios allow you to build event-driven systems using modern architectural patterns.
+ As you build your functions, you have the following options and resources available: - **Use your preferred language**: Write functions in [C#, Java, JavaScript, PowerShell, or Python](./supported-languages.md), or use a [custom handler](./functions-custom-handlers.md) to use virtually any other language.
azure-monitor Alerts Metric Near Real Time https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-metric-near-real-time.md
Here's the full list of Azure Monitor metric sources supported by the newer aler
|Microsoft.AppConfiguration/configurationStores |Yes | No | [Azure App Configuration](../essentials/metrics-supported.md#microsoftappconfigurationconfigurationstores) | |Microsoft.AppPlatform/spring | Yes | No | [Azure Spring Cloud](../essentials/metrics-supported.md#microsoftappplatformspring) | |Microsoft.Automation/automationAccounts | Yes| No | [Azure Automation accounts](../essentials/metrics-supported.md#microsoftautomationautomationaccounts) |
-|Microsoft.AVS/privateClouds | No | No | [Azure VMware Solution](../essentials/metrics-supported.md#microsoftavsprivateclouds) |
+|Microsoft.AVS/privateClouds | No | No | [Azure VMware Solution](../essentials/metrics-supported.md) |
|Microsoft.Batch/batchAccounts | Yes | No | [Azure Batch accounts](../essentials/metrics-supported.md#microsoftbatchbatchaccounts) |
-|Microsoft.Bing/accounts | Yes | No | [Bing accounts](../essentials/metrics-supported.md#microsoftbingaccounts) |
+|Microsoft.Bing/accounts | Yes | No | [Bing accounts](../essentials/metrics-supported.md#microsoftmapsaccounts) |
|Microsoft.BotService/botServices | Yes | No | [Azure Bot Service](../essentials/metrics-supported.md#microsoftbotservicebotservices) | |Microsoft.Cache/redis | Yes | Yes | [Azure Cache for Redis](../essentials/metrics-supported.md#microsoftcacheredis) | |Microsoft.Cache/redisEnterprise | Yes | No | [Azure Cache for Redis Enterprise](../essentials/metrics-supported.md#microsoftcacheredisenterprise) |
Here's the full list of Azure Monitor metric sources supported by the newer aler
|Microsoft.Compute/cloudServices/roles | Yes | No | [Azure Cloud Services roles](../essentials/metrics-supported.md#microsoftcomputecloudservicesroles) | |Microsoft.Compute/virtualMachines | Yes | Yes<sup>1</sup> | [Azure Virtual Machines](../essentials/metrics-supported.md#microsoftcomputevirtualmachines) | |Microsoft.Compute/virtualMachineScaleSets | Yes | No |[Azure Virtual Machine Scale Sets](../essentials/metrics-supported.md#microsoftcomputevirtualmachinescalesets) |
-|Microsoft.ConnectedVehicle/platformAccounts | Yes | No |[Connected Vehicle Platform Accounts](../essentials/metrics-supported.md#microsoftconnectedvehicleplatformaccounts) |
+|Microsoft.ConnectedVehicle/platformAccounts | Yes | No |[Connected Vehicle Platform Accounts](../essentials/metrics-supported.md) |
|Microsoft.ContainerInstance/containerGroups | Yes| No | [Container groups](../essentials/metrics-supported.md#microsoftcontainerinstancecontainergroups) | |Microsoft.ContainerRegistry/registries | No | No | [Azure Container Registry](../essentials/metrics-supported.md#microsoftcontainerregistryregistries) | |Microsoft.ContainerService/managedClusters | Yes | No | [Managed clusters](../essentials/metrics-supported.md#microsoftcontainerservicemanagedclusters) |
Here's the full list of Azure Monitor metric sources supported by the newer aler
|Microsoft.Peering/peeringServices | Yes | No | [Azure Peering Service](../essentials/metrics-supported.md#microsoftpeeringpeeringservices) | |Microsoft.PowerBIDedicated/capacities | No | No | [Power BI dedicated capacities](../essentials/metrics-supported.md#microsoftpowerbidedicatedcapacities) | |Microsoft.Purview/accounts | Yes | No | [Azure Purview accounts](../essentials/metrics-supported.md#microsoftpurviewaccounts) |
-|Microsoft.RecoveryServices/vaults | Yes | Yes | [Recovery Services vaults](../essentials/metrics-supported.md#microsoftrecoveryservicesvaults) |
+|Microsoft.RecoveryServices/vaults | Yes | Yes | [Recovery Services vaults](../essentials/metrics-supported.md) |
|Microsoft.Relay/namespaces | Yes | No | [Relays](../essentials/metrics-supported.md#microsoftrelaynamespaces) | |Microsoft.Search/searchServices | No | No | [Search services](../essentials/metrics-supported.md#microsoftsearchsearchservices) | |Microsoft.ServiceBus/namespaces | Yes | No | [Azure Service Bus](../essentials/metrics-supported.md#microsoftservicebusnamespaces) |
Here's the full list of Azure Monitor metric sources supported by the newer aler
|Microsoft.Synapse/workspaces | Yes | No | [Azure Synapse Analytics](../essentials/metrics-supported.md#microsoftsynapseworkspaces) | |Microsoft.Synapse/workspaces/bigDataPools | Yes | No | [Azure Synapse Analytics Apache Spark pools](../essentials/metrics-supported.md#microsoftsynapseworkspacesbigdatapools) | |Microsoft.Synapse/workspaces/sqlPools | Yes | No | [Azure Synapse Analytics SQL pools](../essentials/metrics-supported.md#microsoftsynapseworkspacessqlpools) |
-|Microsoft.VMWareCloudSimple/virtualMachines | Yes | No | [CloudSimple virtual machines](../essentials/metrics-supported.md#microsoftvmwarecloudsimplevirtualmachines) |
+|Microsoft.VMWareCloudSimple/virtualMachines | Yes | No | [CloudSimple virtual machines](../essentials/metrics-supported.md) |
|Microsoft.Web/containerApps | Yes | No | Azure Container Apps | |Microsoft.Web/hostingEnvironments/multiRolePools | Yes | No | [Azure App Service environment multi-role pools](../essentials/metrics-supported.md#microsoftwebhostingenvironmentsmultirolepools)| |Microsoft.Web/hostingEnvironments/workerPools | Yes | No | [Azure App Service environment worker pools](../essentials/metrics-supported.md#microsoftwebhostingenvironmentsworkerpools)|
azure-monitor Java Standalone Arguments https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/java-standalone-arguments.md
# Tips for updating your JVM args - Azure Monitor Application Insights for Java
-## Azure environments
+## Azure App Services
-Configure [App Services](../../app-service/configure-language-java.md#set-java-runtime-options).
+See [Application Monitoring for Azure App Service and Java](./azure-web-apps-java.md).
+
+## Azure Functions
+
+See [Monitoring Azure Functions with Azure Monitor Application Insights](./monitor-functions.md#distributed-tracing-for-java-applications-public-preview).
## Spring Boot
azure-monitor Java Standalone Config https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/java-standalone-config.md
Starting from version 3.2.0, the following preview instrumentations can be enabl
{ "preview": { "instrumentation": {
+ "akka": {
+ "enabled": true
+ },
"apacheCamel": { "enabled": true }, "grizzly": { "enabled": true },
- "springIntegration": {
+ "play": {
"enabled": true },
- "akka": {
+ "springIntegration": {
"enabled": true }, "vertx": {
azure-monitor Autoscale Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/autoscale/autoscale-overview.md
When the conditions in the rules are met, one or more autoscale actions are trig
## Scaling out and scaling up
-Autoscale scales in and out, which is an increase, or decrease of the number of resource instances. Scaling in and out is also called horizontal scaling. For example, for a virtual machine scale set, scaling out means adding more virtual machines. Scaling in means removing virtual machines. Horizontal scaling is flexible in a cloud situation as it allows you to run a large number of VMs to handle load.
+Autoscale scales in and out, which is an increase or decrease of the number of resource instances. Scaling in and out is also called horizontal scaling. For example, for a Virtual Machine Scale Set, scaling out means adding more virtual machines, and scaling in means removing virtual machines. Horizontal scaling is flexible in a cloud situation as it allows you to run a large number of VMs to handle load.
In contrast, scaling up and down, or vertical scaling, keeps the number of resources constant, but gives those resources more capacity in terms of memory, CPU speed, disk space and network. Vertical scaling is limited by the availability of larger hardware, which eventually reaches an upper limit. Hardware size availability varies in Azure by region. Vertical scaling may also require a restart of the virtual machine during the scaling process.
When the conditions in the rules are met, one or more autoscale actions are trig
### Predictive autoscale (preview)
-[Predictive autoscale](./autoscale-predictive.md) uses machine learning to help manage and scale Azure virtual machine scale sets with cyclical workload patterns. It forecasts the overall CPU load on your virtual machine scale set, based on historical CPU usage patterns. The scale set can then be scaled out in time to meet the predicted demand.
+[Predictive autoscale](./autoscale-predictive.md) uses machine learning to help manage and scale Azure Virtual Machine Scale Sets with cyclical workload patterns. It forecasts the overall CPU load on your Virtual Machine Scale Set, based on historical CPU usage patterns. The scale set can then be scaled out in time to meet the predicted demand.
## Autoscale setup
The following diagram shows the autoscale architecture.
### Resource metrics
-Resources generate metrics that are used in autoscale rules to trigger scale events. Virtual machine scale sets use telemetry data from Azure diagnostics agents to generate metrics. Telemetry for Web apps and Cloud services comes directly from the Azure Infrastructure. Some commonly used metrics include CPU usage, memory usage, thread counts, queue length, and disk usage. See [Autoscale Common Metrics](autoscale-common-metrics.md) for a list of available metrics.
+Resources generate metrics that are used in autoscale rules to trigger scale events. Virtual Machine Scale Sets use telemetry data from Azure diagnostics agents to generate metrics. Telemetry for Web apps and Cloud services comes directly from the Azure Infrastructure. Some commonly used metrics include CPU usage, memory usage, thread counts, queue length, and disk usage. See [Autoscale Common Metrics](autoscale-common-metrics.md) for a list of available metrics.
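For instance, a hedged sketch of a single CPU-based scale-out rule with the classic Az.Monitor cmdlets (resource IDs are placeholders, and parameter names follow the Virtual Machine Scale Set PowerShell tutorial linked below, so they may differ in newer module versions):

```powershell
# Scale out by one instance when average CPU over 5 minutes exceeds 70%.
$scaleOutRule = New-AzAutoscaleRule `
    -MetricName "Percentage CPU" `
    -MetricResourceId "/subscriptions/<sub-id>/resourceGroups/myRG/providers/Microsoft.Compute/virtualMachineScaleSets/myScaleSet" `
    -TimeGrain 00:01:00 `
    -MetricStatistic Average `
    -TimeWindow 00:05:00 `
    -Operator GreaterThan `
    -Threshold 70 `
    -ScaleActionDirection Increase `
    -ScaleActionScaleType ChangeCount `
    -ScaleActionValue 1 `
    -ScaleActionCooldown 00:05:00
```

The rule object is then attached to an autoscale profile and an autoscale setting on the target resource.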
### Custom metrics
The full list of configurable fields and descriptions is available in the [Autos
For code examples, see
-* [Tutorial: Automatically scale a virtual machine scale set with an Azure template](https://learn.microsoft.com/azure/virtual-machine-scale-sets/tutorial-autoscale-template)
-* [Tutorial: Automatically scale a virtual machine scale set with the Azure CLI](https://learn.microsoft.com/azure/virtual-machine-scale-sets/tutorial-autoscale-cli)
-* [Tutorial: Automatically scale a virtual machine scale set with an Azure template](https://learn.microsoft.com/azure/virtual-machine-scale-sets/tutorial-autoscale-powershell)
+* [Tutorial: Automatically scale a Virtual Machine Scale Set with an Azure template](https://learn.microsoft.com/azure/virtual-machine-scale-sets/tutorial-autoscale-template)
+* [Tutorial: Automatically scale a Virtual Machine Scale Set with the Azure CLI](https://learn.microsoft.com/azure/virtual-machine-scale-sets/tutorial-autoscale-cli)
+* [Tutorial: Automatically scale a Virtual Machine Scale Set with an Azure template](https://learn.microsoft.com/azure/virtual-machine-scale-sets/tutorial-autoscale-powershell)
## Horizontal vs vertical scaling
-Autoscale scales horizontally, which is an increase, or decrease of the number of resource instances. For example, in a virtual machine scale set, scaling out means adding more virtual machines Scaling in means removing virtual machines. Horizontal scaling is flexible in a cloud situation as it allows you to run a large number of VMs to handle load.
+Autoscale scales horizontally, which is an increase or decrease of the number of resource instances. For example, in a Virtual Machine Scale Set, scaling out means adding more virtual machines, and scaling in means removing virtual machines. Horizontal scaling is flexible in a cloud situation as it allows you to run a large number of VMs to handle load.
In contrast, vertical scaling keeps the number of resources constant, but gives them more capacity in terms of memory, CPU speed, disk space, and network. Adding or removing capacity in vertical scaling is known as scaling up or down. Vertical scaling is limited by the availability of larger hardware, which eventually reaches an upper limit. Hardware size availability varies in Azure by region. Vertical scaling may also require a restart of the virtual machine during the scaling process.
The following services are supported by autoscale:
| Service | Schema & Documentation | | | |
-| Azure Virtual machines scale sets |[Overview of autoscale with Azure virtual machine scale sets](../../virtual-machine-scale-sets/virtual-machine-scale-sets-autoscale-overview.md) |
+| Azure Virtual machines scale sets |[Overview of autoscale with Azure Virtual Machine Scale Sets](../../virtual-machine-scale-sets/virtual-machine-scale-sets-autoscale-overview.md) |
| Web apps |[Scaling Web Apps](autoscale-get-started.md) | | Azure API Management service|[Automatically scale an Azure API Management instance](../../api-management/api-management-howto-autoscale.md) | Azure Data Explorer Clusters|[Manage Azure Data Explorer clusters scaling to accommodate changing demand](/azure/data-explorer/manage-cluster-horizontal-scaling)| | Azure Stream Analytics | [Autoscale streaming units (Preview)](../../stream-analytics/stream-analytics-autoscale.md) | | Azure Machine Learning Workspace | [Autoscale an online endpoint](../../machine-learning/how-to-autoscale-endpoints.md) |
+| Spring Cloud |[Set up autoscale for microservice applications](../../spring-apps/how-to-setup-autoscale.md)|
+| Media Services | [Autoscaling in Media Services](/azure/media-services/latest/release-notes#autoscaling) |
+| Service Bus |[Automatically update messaging units of an Azure Service Bus namespace](../../service-bus-messaging/automate-update-messaging-units.md)|
+| Logic Apps - Integration Service Environment(ISE) | [Add ISE capacity](../../logic-apps/ise-manage-integration-service-environment.md#add-ise-capacity) |
## Next steps
To learn more about autoscale, see the following resources:
* [Azure Monitor autoscale common metrics](autoscale-common-metrics.md) * [Use autoscale actions to send email and webhook alert notifications](autoscale-webhook-email.md)
-* [Tutorial: Automatically scale a virtual machine scale set with an Azure template](https://learn.microsoft.com/azure/virtual-machine-scale-sets/tutorial-autoscale-template)
-* [Tutorial: Automatically scale a virtual machine scale set with the Azure CLI](https://learn.microsoft.com/azure/virtual-machine-scale-sets/tutorial-autoscale-cli)
-* [Tutorial: Automatically scale a virtual machine scale set with an Azure template](https://learn.microsoft.com/azure/virtual-machine-scale-sets/tutorial-autoscale-powershell)
+* [Tutorial: Automatically scale a Virtual Machine Scale Set with an Azure template](https://learn.microsoft.com/azure/virtual-machine-scale-sets/tutorial-autoscale-template)
+* [Tutorial: Automatically scale a Virtual Machine Scale Set with the Azure CLI](https://learn.microsoft.com/azure/virtual-machine-scale-sets/tutorial-autoscale-cli)
+* [Tutorial: Automatically scale a Virtual Machine Scale Set with Azure PowerShell](https://learn.microsoft.com/azure/virtual-machine-scale-sets/tutorial-autoscale-powershell)
* [Autoscale CLI reference](https://learn.microsoft.com/cli/azure/monitor/autoscale?view=azure-cli-latest) * [ARM template resource definition](https://learn.microsoft.com/azure/templates/microsoft.insights/autoscalesettings) * [PowerShell Az.Monitor Reference](https://learn.microsoft.com/powershell/module/az.monitor/#monitor)
azure-monitor Container Insights Metric Alerts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-metric-alerts.md
Title: Metric alerts from Container insights
-description: This article reviews the recommended metric alerts available from Container insights in public preview.
+ Title: Create metric alert rules in Container insights (preview)
+description: Describes how to create recommended metric alerts rules for a Kubernetes cluster in Container insights.
Previously updated : 05/24/2022 Last updated : 09/28/2022
-# Recommended metric alerts (preview) from Container insights
+# Metric alert rules in Container insights (preview)
-To alert on system resource issues when they are experiencing peak demand and running near capacity, with Container insights you would create a log alert based on performance data stored in Azure Monitor Logs. Container insights now includes pre-configured metric alert rules for your AKS and Azure Arc-enabled Kubernetes cluster, which is in public preview.
+Metric alerts in Azure Monitor proactively identify issues related to system resources of your Azure resources, including monitored Kubernetes clusters. Container insights provides pre-configured alert rules so that you don't have to create your own. This article describes the different types of alert rules you can create and how to enable and configure them.
-This article reviews the experience and provides guidance on configuring and managing these alert rules.
+> [!IMPORTANT]
+> Container insights in Azure Monitor now supports alerts based on Prometheus metrics. If you already use alerts based on custom metrics, you should migrate to Prometheus alerts and disable the equivalent custom metric alerts.
+## Types of metric alert rules
+There are two types of metric rules used by Container insights based on either Prometheus metrics or custom metrics. See a list of the specific alert rules for each at [Alert rule details](#alert-rule-details).
-If you're not familiar with Azure Monitor alerts, see [Overview of alerts in Microsoft Azure](../alerts/alerts-overview.md) before you start. To learn more about metric alerts, see [Metric alerts in Azure Monitor](../alerts/alerts-metric-overview.md).
+| Alert rule type | Description |
+|:|:|
+| [Prometheus rules](#prometheus-alert-rules) | Alert rules that use metrics stored in [Azure Monitor managed service for Prometheus (preview)](../essentials/prometheus-metrics-overview.md). There are two sets of Prometheus alert rules that you can choose to enable.<br><br>- *Community alerts* are hand-picked alert rules from the Prometheus community. Use this set of alert rules if you don't have any other alert rules enabled.<br>- *Recommended alerts* are the equivalent of the custom metric alert rules. Use this set if you're migrating from custom metrics to Prometheus metrics and want to retain identical functionality. |
+| [Metric rules](#metrics-alert-rules) | Alert rules that use [custom metrics collected for your Kubernetes cluster](container-insights-custom-metrics.md). Use these alert rules if you're not ready to move to Prometheus metrics yet or if you want to manage your alert rules in the Azure portal. |
-> [!NOTE]
-> Beginning October 8, 2021, three alerts have been updated to correctly calculate the alert condition: **Container CPU %**, **Container working set memory %**, and **Persistent Volume Usage %**. These new alerts have the same names as their corresponding previously available alerts, but they use new, updated metrics. We recommend that you disable the alerts that use the "Old" metrics, described in this article, and enable the "New" metrics. The "Old" metrics will no longer be available in recommended alerts after they are disabled, but you can manually re-enable them.
-## Prerequisites
+## Prometheus alert rules
+[Prometheus alert rules](../alerts/alerts-types.md#prometheus-alerts-preview) use metric data from your Kubernetes cluster sent to [Azure Monitor managed service for Prometheus](../essentials/prometheus-metrics-overview.md).
-Before you start, confirm the following:
+### Prerequisites
+- Your cluster must be configured to send metrics to [Azure Monitor managed service for Prometheus](../essentials/prometheus-metrics-overview.md). See [Collect Prometheus metrics from Kubernetes cluster with Container insights](container-insights-prometheus-metrics-addon.md).
-* Custom metrics are only available in a subset of Azure regions. A list of supported regions is documented in [Supported regions](../essentials/metrics-custom-overview.md#supported-regions).
+### Enable alert rules
-* To support metric alerts and the introduction of additional metrics, the minimum agent version required is **mcr.microsoft.com/azuremonitor/containerinsights/ciprod:ciprod05262020** for AKS and **mcr.microsoft.com/azuremonitor/containerinsights/ciprod:ciprod09252020** for Azure Arc-enabled Kubernetes cluster.
+The only method currently available for creating Prometheus alert rules is a Resource Manager template.
- To verify your cluster is running the newer version of the agent, you can either:
+1. Download the template that includes the set of alert rules that you want to enable. See [Alert rule details](#alert-rule-details) for a listing of the rules for each.
- * Run the command: `kubectl describe pod <azure-monitor-agent-pod-name> --namespace=kube-system`. In the status returned, note the value under **Image** for Azure Monitor agent in the *Containers* section of the output.
- * On the **Nodes** tab, select the cluster node and on the **Properties** pane to the right, note the value under **Agent Image Tag**.
+ - [Community alerts](https://aka.ms/azureprometheus-communityalerts)
+ - [Recommended alerts](https://aka.ms/azureprometheus-recommendedalerts)
- The value shown for AKS should be version **ciprod05262020** or later. The value shown for Azure Arc-enabled Kubernetes cluster should be version **ciprod09252020** or later. If your cluster has an older version, see [How to upgrade the Container insights agent](container-insights-manage-agent.md#upgrade-agent-on-aks-cluster) for steps to get the latest version.
+2. Deploy the template using any standard methods for installing Resource Manager templates. See [Resource Manager template samples for Azure Monitor](../resource-manager-samples.md#deploy-the-sample-templates) for guidance.
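   For example, a deployment of the downloaded template with Azure PowerShell could look like the following sketch (the resource group and file names are placeholders, and the required parameters depend on the template you downloaded):

```powershell
# Deploy the downloaded Prometheus alert rule template to the cluster's resource group.
New-AzResourceGroupDeployment `
    -ResourceGroupName "myClusterRG" `
    -TemplateFile ".\recommended-alerts.json" `
    -TemplateParameterFile ".\recommended-alerts.parameters.json"
```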
- For more information related to the agent release, see [agent release history](https://github.com/microsoft/docker-provider/tree/ci_feature_prod). To verify metrics are being collected, you can use Azure Monitor metrics explorer and verify from the **Metric namespace** that **insights** is listed. If it is, you can go ahead and start setting up the alerts. If you don't see any metrics collected, the cluster Service Principal or MSI is missing the necessary permissions. To verify the SPN or MSI is a member of the **Monitoring Metrics Publisher** role, follow the steps described in the section [Upgrade per cluster using Azure CLI](container-insights-update-metrics.md#update-one-cluster-by-using-the-azure-cli) to confirm and set role assignment.
-
-> [!TIP]
-> Download the new ConfigMap from [here](https://raw.githubusercontent.com/microsoft/Docker-Provider/ci_prod/kubernetes/container-azm-ms-agentconfig.yaml).
+> [!NOTE]
+> While the Prometheus alert rule could be created in a different resource group from the target resource, you should use the same resource group as your target resource.
-## Alert rules overview
+### Edit alert rules
-To alert on what matters, Container insights includes the following metric alerts for your AKS and Azure Arc-enabled Kubernetes clusters:
+ To edit the query and threshold or configure an action group for your alert rules, edit the appropriate values in the ARM template and redeploy it using any deployment method.
-|Name| Description |Default threshold |
-|-|-||
-|**(New)Average container CPU %** |Calculates average CPU used per container.|When average CPU usage per container is greater than 95%.|
-|**(New)Average container working set memory %** |Calculates average working set memory used per container.|When average working set memory usage per container is greater than 95%. |
-|Average CPU % |Calculates average CPU used per node. |When average node CPU utilization is greater than 80% |
-| Daily Data Cap Breach | When data cap is breached| When the total data ingestion to your Log Analytics workspace exceeds the [designated quota](../logs/daily-cap.md) |
-|Average Disk Usage % |Calculates average disk usage for a node.|When disk usage for a node is greater than 80%. |
-|**(New)Average Persistent Volume Usage %** |Calculates average PV usage per pod. |When average PV usage per pod is greater than 80%.|
-|Average Working set memory % |Calculates average Working set memory for a node. |When average Working set memory for a node is greater than 80%. |
-|Restarting container count |Calculates number of restarting containers. | When container restarts are greater than 0. |
-|Failed Pod Counts |Calculates if any pod in failed state.|When a number of pods in failed state are greater than 0. |
-|Node NotReady status |Calculates if any node is in NotReady state.|When a number of nodes in NotReady state are greater than 0. |
-|OOM Killed Containers |Calculates number of OOM killed containers. |When a number of OOM killed containers is greater than 0. |
-|Pods ready % |Calculates the average ready state of pods. |When ready state of pods is less than 80%.|
-|Completed job count |Calculates number of jobs completed more than six hours ago. |When number of stale jobs older than six hours is greater than 0.|
+### Configure alertable metrics in ConfigMaps
-There are common properties across all of these alert rules:
+Perform the following steps to configure your ConfigMap configuration file to override the default utilization thresholds. These steps are applicable only for the following alertable metrics:
-* All alert rules are metric based.
+- *cpuExceededPercentage*
+- *cpuThresholdViolated*
+- *memoryRssExceededPercentage*
+- *memoryRssThresholdViolated*
+- *memoryWorkingSetExceededPercentage*
+- *memoryWorkingSetThresholdViolated*
+- *pvUsageExceededPercentage*
+- *pvUsageThresholdViolated*
-* All alert rules are disabled by default.
+> [!TIP]
+> Download the new ConfigMap from [here](https://raw.githubusercontent.com/microsoft/Docker-Provider/ci_prod/kubernetes/container-azm-ms-agentconfig.yaml).
-* All alert rules are evaluated once per minute and they look back at last 5 minutes of data.
-* Alerts rules do not have an action group assigned to them by default. You can add an [action group](../alerts/action-groups.md) to the alert either by selecting an existing action group or creating a new action group while editing the alert rule.
+1. Edit the ConfigMap YAML file under the section `[alertable_metrics_configuration_settings.container_resource_utilization_thresholds]` or `[alertable_metrics_configuration_settings.pv_utilization_thresholds]`.
-* You can modify the threshold for alert rules by directly editing them. However, refer to the guidance provided in each alert rule before modifying its threshold.
+ - **Example**. Use the following ConfigMap configuration to modify the *cpuExceededPercentage* threshold to 90%:
-The following alert-based metrics have unique behavior characteristics compared to the other metrics:
+ ```
+ [alertable_metrics_configuration_settings.container_resource_utilization_thresholds]
+ # Threshold for container cpu, metric will be sent only when cpu utilization exceeds or becomes equal to the following percentage
+ container_cpu_threshold_percentage = 90.0
+ # Threshold for container memoryRss, metric will be sent only when memory rss exceeds or becomes equal to the following percentage
+ container_memory_rss_threshold_percentage = 95.0
+ # Threshold for container memoryWorkingSet, metric will be sent only when memory working set exceeds or becomes equal to the following percentage
+ container_memory_working_set_threshold_percentage = 95.0
+ ```
-* *completedJobsCount* metric is only sent when there are jobs that are completed greater than six hours ago.
+ - **Example**. Use the following ConfigMap configuration to modify the *pvUsageExceededPercentage* threshold to 80%:
-* *containerRestartCount* metric is only sent when there are containers restarting.
+ ```
+ [alertable_metrics_configuration_settings.pv_utilization_thresholds]
+ # Threshold for persistent volume usage bytes, metric will be sent only when persistent volume utilization exceeds or becomes equal to the following percentage
+ pv_usage_threshold_percentage = 80.0
+ ```
-* *oomKilledContainerCount* metric is only sent when there are OOM killed containers.
+2. Run the following kubectl command: `kubectl apply -f <configmap_yaml_file.yaml>`.
-* *cpuExceededPercentage*, *memoryRssExceededPercentage*, and *memoryWorkingSetExceededPercentage* metrics are sent when the CPU, memory Rss, and Memory Working set values exceed the configured threshold (the default threshold is 95%). *cpuThresholdViolated*, *memoryRssThresholdViolated*, and *memoryWorkingSetThresholdViolated* metrics are equal to 0 is the usage percentage is below the threshold and are equal to 1 if the usage percentage is above the threshold. These thresholds are exclusive of the alert condition threshold specified for the corresponding alert rule. Meaning, if you want to collect these metrics and analyze them from [Metrics explorer](../essentials/metrics-getting-started.md), we recommend you configure the threshold to a value lower than your alerting threshold. The configuration related to the collection settings for their container resource utilization thresholds can be overridden in the ConfigMaps file under the section `[alertable_metrics_configuration_settings.container_resource_utilization_thresholds]`. See the section [Configure alertable metrics ConfigMaps](#configure-alertable-metrics-in-configmaps) for details related to configuring your ConfigMap configuration file.
+ Example: `kubectl apply -f container-azm-ms-agentconfig.yaml`.
-* *pvUsageExceededPercentage* metric is sent when the persistent volume usage percentage exceeds the configured threshold (the default threshold is 60%). *pvUsageThresholdViolated* metric is equal to 0 when the PV usage percentage is below the threshold and is equal 1 if the usage is above the threshold. This threshold is exclusive of the alert condition threshold specified for the corresponding alert rule. Meaning, if you want to collect these metrics and analyze them from [Metrics explorer](../essentials/metrics-getting-started.md), we recommend you configure the threshold to a value lower than your alerting threshold. The configuration related to the collection settings for persistent volume utilization thresholds can be overridden in the ConfigMaps file under the section `[alertable_metrics_configuration_settings.pv_utilization_thresholds]`. See the section [Configure alertable metrics ConfigMaps](#configure-alertable-metrics-in-configmaps) for details related to configuring your ConfigMap configuration file. Collection of persistent volume metrics with claims in the *kube-system* namespace are excluded by default. To enable collection in this namespace, use the section `[metric_collection_settings.collect_kube_system_pv_metrics]` in the ConfigMap file. See [Metric collection settings](./container-insights-agent-config.md#metric-collection-settings) for details.
+The configuration change can take a few minutes to finish before taking effect, and all omsagent pods in the cluster will restart. The restart is a rolling restart for all omsagent pods; they don't all restart at the same time. When the restarts are finished, a message is displayed that's similar to the following example and includes the result: `configmap "container-azm-ms-agentconfig" created`.
-## Metrics collected
+## Metrics alert rules
+[Metric alert rules](../alerts/alerts-types.md#metric-alerts) use [custom metric data from your Kubernetes cluster](container-insights-custom-metrics.md).
-The following metrics are enabled and collected, unless otherwise specified, as part of this feature. The metrics in **bold** with label "Old" are the ones replaced by "New" metrics collected for correct alert evaluation.
-|Metric namespace |Metric |Description |
-||-||
-|Insights.container/nodes |cpuUsageMillicores |CPU utilization in millicores by host.|
-|Insights.container/nodes |cpuUsagePercentage, cpuUsageAllocatablePercentage (preview) |CPU usage percentage by node and allocatable respectively.|
-|Insights.container/nodes |memoryRssBytes |Memory RSS utilization in bytes by host.|
-|Insights.container/nodes |memoryRssPercentage, memoryRssAllocatablePercentage (preview) |Memory RSS usage percentage by host and allocatable respectively.|
-|Insights.container/nodes |memoryWorkingSetBytes |Memory Working Set utilization in bytes by host.|
-|Insights.container/nodes |memoryWorkingSetPercentage, memoryRssAllocatablePercentage (preview) |Memory Working Set usage percentage by host and allocatable respectively.|
-|Insights.container/nodes |nodesCount |Count of nodes by status.|
-|Insights.container/nodes |diskUsedPercentage |Percentage of disk used on the node by device.|
-|Insights.container/pods |podCount |Count of pods by controller, namespace, node, and phase.|
-|Insights.container/pods |completedJobsCount |Completed jobs count older user configurable threshold (default is six hours) by controller, Kubernetes namespace. |
-|Insights.container/pods |restartingContainerCount |Count of container restarts by controller, Kubernetes namespace.|
-|Insights.container/pods |oomKilledContainerCount |Count of OOMkilled containers by controller, Kubernetes namespace.|
-|Insights.container/pods |podReadyPercentage |Percentage of pods in ready state by controller, Kubernetes namespace.|
-|Insights.container/containers |**(Old)cpuExceededPercentage** |CPU utilization percentage for containers exceeding user configurable threshold (default is 95.0) by container name, controller name, Kubernetes namespace, pod name.<br> Collected |
-|Insights.container/containers |**(New)cpuThresholdViolated** |Metric triggered when CPU utilization percentage for containers exceeding user configurable threshold (default is 95.0) by container name, controller name, Kubernetes namespace, pod name.<br> Collected |
-|Insights.container/containers |**(Old)memoryRssExceededPercentage** |Memory RSS percentage for containers exceeding user configurable threshold (default is 95.0) by container name, controller name, Kubernetes namespace, pod name.|
-|Insights.container/containers |**(New)memoryRssThresholdViolated** |Metric triggered when Memory RSS percentage for containers exceeding user configurable threshold (default is 95.0) by container name, controller name, Kubernetes namespace, pod name.|
-|Insights.container/containers |**(Old)memoryWorkingSetExceededPercentage** |Memory Working Set percentage for containers exceeding user configurable threshold (default is 95.0) by container name, controller name, Kubernetes namespace, pod name.|
-|Insights.container/containers |**(New)memoryWorkingSetThresholdViolated** |Metric triggered when Memory Working Set percentage for containers exceeding user configurable threshold (default is 95.0) by container name, controller name, Kubernetes namespace, pod name.|
-|Insights.container/persistentvolumes |**(Old)pvUsageExceededPercentage** |PV utilization percentage for persistent volumes exceeding user configurable threshold (default is 60.0) by claim name, Kubernetes namespace, volume name, pod name, and node name.|
-|Insights.container/persistentvolumes |**(New)pvUsageThresholdViolated** |Metric triggered when PV utilization percentage for persistent volumes exceeding user configurable threshold (default is 60.0) by claim name, Kubernetes namespace, volume name, pod name, and node name.
+### Prerequisites
+ - You may need to enable collection of custom metrics for your cluster. See [Metrics collected by Container insights](container-insights-custom-metrics.md).
+ - See the supported regions for custom metrics at [Supported regions](../essentials/metrics-custom-overview.md#supported-regions).
-## Enable alert rules
-Follow these steps to enable the metric alerts in Azure Monitor from the Azure portal. To enable using a Resource Manager template, see [Enable with a Resource Manager template](#enable-with-a-resource-manager-template).
+### Enable and configure alert rules
-### From the Azure portal
+#### [Azure portal](#tab/azure-portal)
-This section walks through enabling Container insights metric alert (preview) from the Azure portal.
+#### Enable alert rules
-1. Sign in to the [Azure portal](https://portal.azure.com/).
+1. From the **Insights** menu for your cluster, select **Recommended alerts**.
-2. Access to the Container insights metrics alert (preview) feature is available directly from an AKS cluster by selecting **Insights** from the left pane in the Azure portal.
+ :::image type="content" source="media/container-insights-metric-alerts/command-bar-recommended-alerts.png" lightbox="media/container-insights-metric-alerts/command-bar-recommended-alerts.png" alt-text="Screenshot showing recommended alerts option in Container insights.":::
-3. From the command bar, select **Recommended alerts**.
- ![Screenshot showing the Recommended alerts option in Container insights.](./media/container-insights-metric-alerts/command-bar-recommended-alerts.png)
+2. Toggle the **Status** for each alert rule to enable it. The alert rule is created and the rule name updates to include a link to the new alert resource.
-4. The **Recommended alerts** property pane automatically displays on the right side of the page. By default, all alert rules in the list are disabled. After selecting **Enable**, the alert rule is created and the rule name updates to include a link to the alert resource.
+ :::image type="content" source="media/container-insights-metric-alerts/recommended-alerts-pane-enable.png" lightbox="media/container-insights-metric-alerts/recommended-alerts-pane-enable.png" alt-text="Screenshot showing list of recommended alerts and option for enabling each.":::
- ![Screenshot showing the Recommended alerts properties pane.](./media/container-insights-metric-alerts/recommended-alerts-pane.png)
+3. Alert rules aren't associated with an [action group](../alerts/action-groups.md) to notify users that an alert has been triggered. Select **No action group assigned** to open the **Action Groups** page, and then either select an existing action group or create one by selecting **Create action group**. A CLI sketch for creating an action group follows these steps.
- After selecting the **Enable/Disable** toggle to enable the alert, an alert rule is created and the rule name updates to include a link to the actual alert resource.
+ :::image type="content" source="media/container-insights-metric-alerts/select-action-group.png" lightbox="media/container-insights-metric-alerts/select-action-group.png" alt-text="Screenshot showing selection of an action group.":::
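If you prefer to create the action group outside the portal, the following is a minimal Azure CLI sketch. The action group name, short name, and email receiver are placeholders rather than values from this article.

```azurecli
# Create an action group with a single email receiver (placeholder names and address),
# then assign it to the alert rule from the Recommended alerts pane.
az monitor action-group create \
  --name ContainerInsightsAlerts \
  --resource-group <yourResourceGroup> \
  --short-name ciAlerts \
  --action email oncall oncall@contoso.com
```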
- ![Screenshot showing the option to enable an alert rule.](./media/container-insights-metric-alerts/recommended-alerts-pane-enable.png)
+#### Edit alert rules
-5. Alert rules are not associated with an [action group](../alerts/action-groups.md) to notify users that an alert has been triggered. Select **No action group assigned** and on the **Action Groups** page, specify an existing or create an action group by selecting **Add** or **Create**.
+To edit the threshold for a rule or configure an [action group](../alerts/action-groups.md) for your AKS cluster:
- ![Screenshot showing the option to select an action group.](./media/container-insights-metric-alerts/select-action-group.png)
+1. From Container insights for your cluster, select **Recommended alerts**.
+2. Click the **Rule Name** to open the alert rule.
+3. See [Create an alert rule](../alerts/alerts-create-new-alert-rule.md?tabs=metric) for details on the alert rule settings.
-### Enable with a Resource Manager template
+#### Disable alert rules
+1. From Container insights for your cluster, select **Recommended alerts**.
+2. Change the status for the alert rule to **Disabled**.
-You can use an Azure Resource Manager template and parameters file to create the included metric alerts in Azure Monitor.
+### [Resource Manager](#tab/resource-manager)
+For custom metrics, a separate Resource Manager template is provided for each alert rule.
-The basic steps are as follows:
+#### Enable alert rules
1. Download one or all of the available templates that describe how to create the alert from [GitHub](https://github.com/microsoft/Docker-Provider/tree/ci_dev/alerts/recommended_alerts_ARM).
2. Create and use a [parameters file](../../azure-resource-manager/templates/parameter-files.md) as a JSON to set the values required to create the alert rule.
+3. Deploy the template using any standard methods for installing Resource Manager templates. See [Resource Manager template samples for Azure Monitor](../resource-manager-samples.md) for guidance.
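For example, a minimal Azure CLI sketch for step 3 might look like the following; the subscription, resource group, and file names are placeholders for the template and parameters file you downloaded and edited.

```azurecli
# Sign in and select the subscription that contains your AKS cluster.
az login
az account set --subscription "<yourSubscriptionName>"

# Deploy one downloaded alert rule template together with its edited parameters file (placeholder file names).
az deployment group create \
  --name ContainerInsightsAlertDeployment \
  --resource-group <yourResourceGroup> \
  --template-file <templateFileName>.json \
  --parameters @<templateFileName>.parameters.json
```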
-3. Deploy the template from the Azure portal, PowerShell, or Azure CLI.
-
-#### Deploy through Azure portal
-
-1. Download and save to a local folder, the Azure Resource Manager template and parameter file, to create the alert rule using the following commands:
-
-2. To deploy a customized template through the portal, select **Create a resource** from the [Azure portal](https://portal.azure.com).
-
-3. Search for **template**, and then select **Template deployment**.
-
-4. Select **Create**.
-
-5. You see several options for creating a template, select **Build your own template in editor**.
-
-6. On the **Edit template page**, select **Load file** and then select the template file.
-
-7. On the **Edit template** page, select **Save**.
-
-8. On the **Custom deployment** page, specify the following and then when complete select **Purchase** to deploy the template and create the alert rule.
-
- * Resource group
- * Location
- * Alert Name
- * Cluster Resource ID
+#### Disable alert rules
+To disable custom alert rules, use the same Resource Manager template to create the rule, but change the `isEnabled` value in the parameters file to `false`.
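As a sketch, and assuming the template exposes `isEnabled` as a parameter as described above, you can either edit the parameters file or override the value at deployment time:

```azurecli
# Redeploy the same alert rule template, overriding isEnabled from the command line
# (placeholder resource group and file names); inline values take precedence over the parameters file.
az deployment group create \
  --name ContainerInsightsAlertDisable \
  --resource-group <yourResourceGroup> \
  --template-file <templateFileName>.json \
  --parameters @<templateFileName>.parameters.json \
  --parameters isEnabled=false
```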
-#### Deploy with Azure PowerShell or CLI
-
-1. Download and save to a local folder, the Azure Resource Manager template and parameter file, to create the alert rule using the following commands:
-
-2. You can create the metric alert using the template and parameters file using PowerShell or Azure CLI.
-
- Using Azure PowerShell
-
- ```powershell
- Connect-AzAccount
-
- Select-AzSubscription -SubscriptionName <yourSubscriptionName>
- New-AzResourceGroupDeployment -Name CIMetricAlertDeployment -ResourceGroupName ResourceGroupofTargetResource `
- -TemplateFile templateFilename.json -TemplateParameterFile templateParameterFilename.parameters.json
- ```
-
- Using Azure CLI
-
- ```azurecli
- az login
-
- az deployment group create \
- --name AlertDeployment \
- --resource-group ResourceGroupofTargetResource \
- --template-file templateFileName.json \
- --parameters @templateParameterFilename.parameters.json
- ```
-
- >[!NOTE]
- >While the metric alert could be created in a different resource group to the target resource, we recommend using the same resource group as your target resource.
-
-## Edit alert rules
-
-You can view and manage Container insights alert rules, to edit its threshold or configure an [action group](../alerts/action-groups.md) for your AKS cluster. While you can perform these actions from the Azure portal and Azure CLI, it can also be done directly from your AKS cluster in Container insights.
-
-1. From the command bar, select **Recommended alerts**.
+
-2. To modify the threshold, on the **Recommended alerts** pane, select the enabled alert. In the **Edit rule**, select the **Alert criteria** you want to edit.
- * To modify the alert rule threshold, select the **Condition**.
- * To specify an existing or create an action group, select **Add** or **Create** under **Action group**
+## Alert rule details
+The following sections provide details on the alert rules provided by Container insights.
+
+### Community alert rules
+These are hand-picked alerts from the Prometheus community. Source code for these mixin alerts can be found on [GitHub](https://aka.ms/azureprometheus-mixins).
+
+- KubeJobNotCompleted
+- KubeJobFailed
+- KubePodCrashLooping
+- KubePodNotReady
+- KubeDeploymentReplicasMismatch
+- KubeStatefulSetReplicasMismatch
+- KubeHpaReplicasMismatch
+- KubeHpaMaxedOut
+- KubeQuotaAlmostFull
+- KubeMemoryQuotaOvercommit
+- KubeCPUQuotaOvercommit
+- KubeVersionMismatch
+- KubeNodeNotReady
+- KubeNodeReadinessFlapping
+- KubeletTooManyPods
+- KubeNodeUnreachable
+### Recommended alert rules
+The following table lists the recommended alert rules that you can enable for either Prometheus metrics or custom metrics.
+
+| Prometheus alert name | Custom metric alert name | Description | Default threshold |
+|:|:|:|:|
+| Average container CPU % | Average container CPU % | Calculates average CPU used per container. | 95% |
+| Average container working set memory % | Average container working set memory % | Calculates average working set memory used per container. | 95% |
+| Average CPU % | Average CPU % | Calculates average CPU used per node. | 80% |
+| Average Disk Usage % | Average Disk Usage % | Calculates average disk usage for a node. | 80% |
+| Average Persistent Volume Usage % | Average Persistent Volume Usage % | Calculates average PV usage per pod. | 80% |
+| Average Working set memory % | Average Working set memory % | Calculates average Working set memory for a node. | 80% |
+| Restarting container count | Restarting container count | Calculates number of restarting containers. | 0 |
+| Failed Pod Counts | Failed Pod Counts | Calculates number of pods in a failed state. | 0 |
+| Node NotReady status | Node NotReady status | Calculates if any node is in NotReady state. | 0 |
+| OOM Killed Containers | OOM Killed Containers | Calculates number of OOM killed containers. | 0 |
+| Pods ready % | Pods ready % | Calculates the average ready state of pods. | 80% |
+| Completed job count | Completed job count | Calculates number of jobs completed more than six hours ago. | 0 |
-To view alerts created for the enabled rules, in the **Recommended alerts** pane select **View in alerts**. You are redirected to the alert menu for the AKS cluster, where you can see all the alerts currently created for your cluster.
+> [!NOTE]
+> The recommended alert rules in the Azure portal also include a log alert rule called *Daily Data Cap Breach*. This rule alerts when the total data ingestion to your Log Analytics workspace exceeds the [designated quota](../logs/daily-cap.md). This alert rule is not included with the Prometheus alert rules.
+>
+> You can create this rule on your own by creating a [log alert rule](../alerts/alerts-types.md#log-alerts) using the query `_LogOperation | where Operation == "Data collection Status" | where Detail contains "OverQuota"`.
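If you'd rather script the rule than create it in the portal, the following Azure CLI sketch uses the `scheduled-query` extension. The rule name, resource group, and workspace resource ID are placeholders, and you should check `az monitor scheduled-query create --help` for the exact condition grammar supported by your CLI version.

```azurecli
# Create a log alert rule on the Log Analytics workspace that fires when the daily cap query returns results
# (placeholder names and IDs; requires the scheduled-query CLI extension).
az monitor scheduled-query create \
  --name "Daily Data Cap Breach" \
  --resource-group <yourResourceGroup> \
  --scopes <logAnalyticsWorkspaceResourceId> \
  --condition "count 'Placeholder_1' > 0" \
  --condition-query Placeholder_1="_LogOperation | where Operation == 'Data collection Status' | where Detail contains 'OverQuota'" \
  --description "Total data ingestion exceeded the Log Analytics daily data cap"
```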
-## Configure alertable metrics in ConfigMaps
-Perform the following steps to configure your ConfigMap configuration file to override the default utilization thresholds. These steps are applicable only for the following alertable metrics:
+Common properties across all of these alert rules include:
-* *cpuExceededPercentage*
-* *cpuThresholdViolated*
-* *memoryRssExceededPercentage*
-* *memoryRssThresholdViolated*
-* *memoryWorkingSetExceededPercentage*
-* *memoryWorkingSetThresholdViolated*
-* *pvUsageExceededPercentage*
-* *pvUsageThresholdViolated*
+- All alert rules are evaluated once per minute, and they look back at the last 5 minutes of data.
+- All alert rules are disabled by default.
+- Alert rules don't have an action group assigned to them by default. You can add an [action group](../alerts/action-groups.md) to the alert either by selecting an existing action group or creating a new action group while editing the alert rule.
+- You can modify the threshold for alert rules by directly editing the template and redeploying it. Refer to the guidance provided in each alert rule before modifying its threshold.
-1. Edit the ConfigMap YAML file under the section `[alertable_metrics_configuration_settings.container_resource_utilization_thresholds]` or `[alertable_metrics_configuration_settings.pv_utilization_thresholds]`.
+The following metrics have unique behavior characteristics:
- - To modify the *cpuExceededPercentage* threshold to 90% and begin collection of this metric when that threshold is met and exceeded, configure the ConfigMap file using the following example:
+**Prometheus and custom metrics**
+- `completedJobsCount` metric is only sent when there are jobs that are completed greater than six hours ago.
+- `containerRestartCount` metric is only sent when there are containers restarting.
+- `oomKilledContainerCount` metric is only sent when there are OOM killed containers.
+- `cpuExceededPercentage`, `memoryRssExceededPercentage`, and `memoryWorkingSetExceededPercentage` metrics are sent when the CPU, memory RSS, and memory working set values exceed the configured threshold (the default threshold is 95%). `cpuThresholdViolated`, `memoryRssThresholdViolated`, and `memoryWorkingSetThresholdViolated` metrics are equal to 0 if the usage percentage is below the threshold and are equal to 1 if the usage percentage is above the threshold. These thresholds are exclusive of the alert condition threshold specified for the corresponding alert rule.
+- `pvUsageExceededPercentage` metric is sent when the persistent volume usage percentage exceeds the configured threshold (the default threshold is 60%). `pvUsageThresholdViolated` metric is equal to 0 when the PV usage percentage is below the threshold and is equal to 1 if the usage is above the threshold. This threshold is exclusive of the alert condition threshold specified for the corresponding alert rule.
- ```
- [alertable_metrics_configuration_settings.container_resource_utilization_thresholds]
- # Threshold for container cpu, metric will be sent only when cpu utilization exceeds or becomes equal to the following percentage
- container_cpu_threshold_percentage = 90.0
- # Threshold for container memoryRss, metric will be sent only when memory rss exceeds or becomes equal to the following percentage
- container_memory_rss_threshold_percentage = 95.0
- # Threshold for container memoryWorkingSet, metric will be sent only when memory working set exceeds or becomes equal to the following percentage
- container_memory_working_set_threshold_percentage = 95.0
- ```
+
+**Prometheus only**
+- If you want to collect `pvUsageExceededPercentage` and analyze it from [metrics explorer](../essentials/metrics-getting-started.md), you should configure the threshold to a value lower than your alerting threshold. The configuration related to the collection settings for persistent volume utilization thresholds can be overridden in the ConfigMaps file under the section `alertable_metrics_configuration_settings.pv_utilization_thresholds`. See [Configure alertable metrics ConfigMaps](#configure-alertable-metrics-in-configmaps) for details related to configuring your ConfigMap configuration file. Collection of persistent volume metrics with claims in the *kube-system* namespace is excluded by default. To enable collection in this namespace, use the section `[metric_collection_settings.collect_kube_system_pv_metrics]` in the ConfigMap file. See [Metric collection settings](./container-insights-agent-config.md#metric-collection-settings) for details.
+- `cpuExceededPercentage`, `memoryRssExceededPercentage`, and `memoryWorkingSetExceededPercentage` metrics are sent when the CPU, memory RSS, and memory working set values exceed the configured threshold (the default threshold is 95%). *cpuThresholdViolated*, *memoryRssThresholdViolated*, and *memoryWorkingSetThresholdViolated* metrics are equal to 0 if the usage percentage is below the threshold and are equal to 1 if the usage percentage is above the threshold. These thresholds are exclusive of the alert condition threshold specified for the corresponding alert rule. This means that if you want to collect these metrics and analyze them from [Metrics explorer](../essentials/metrics-getting-started.md), we recommend you configure the threshold to a value lower than your alerting threshold. The configuration related to the collection settings for their container resource utilization thresholds can be overridden in the ConfigMaps file under the section `[alertable_metrics_configuration_settings.container_resource_utilization_thresholds]`. See the section [Configure alertable metrics ConfigMaps](#configure-alertable-metrics-in-configmaps) for details related to configuring your ConfigMap configuration file.
- - To modify the *pvUsageExceededPercentage* threshold to 80% and begin collection of this metric when that threshold is met and exceeded, configure the ConfigMap file using the following example:
- ```
- [alertable_metrics_configuration_settings.pv_utilization_thresholds]
- # Threshold for persistent volume usage bytes, metric will be sent only when persistent volume utilization exceeds or becomes equal to the following percentage
- pv_usage_threshold_percentage = 80.0
- ```
-2. Run the following kubectl command: `kubectl apply -f <configmap_yaml_file.yaml>`.
+## View alerts
+View fired alerts for your cluster from **Alerts** in the **Monitor** menu in the Azure portal, together with other fired alerts in your subscription. You can also select **View in alerts** from the **Recommended alerts** pane to view alerts from custom metrics.
- Example: `kubectl apply -f container-azm-ms-agentconfig.yaml`.
+> [!NOTE]
+> Prometheus alerts will not currently be displayed when you select **Alerts** from your AKS cluster because the alert rule doesn't use the cluster as its target.
-The configuration change can take a few minutes to finish before taking effect, and all Azure Monitor agent pods in the cluster will restart. The restart is a rolling restart for all Azure Monitor agent pods; they don't all restart at the same time. When the restarts are finished, a message is displayed that's similar to the following example and includes the result: `configmap "container-azm-ms-agentconfig" created`.
## Next steps

-- View [log query examples](container-insights-log-query.md) to see pre-defined queries and examples to evaluate or customize for alerting, visualizing, or analyzing your clusters.
-
-- To learn more about Azure Monitor and how to monitor other aspects of your Kubernetes cluster, see [View Kubernetes cluster performance](container-insights-analyze.md).
+- [Read about the different alert rule types in Azure Monitor](../alerts/alerts-types.md).
+- [Read about alerting rule groups in Azure Monitor managed service for Prometheus](../essentials/prometheus-rule-groups.md).
azure-monitor Diagnostic Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/diagnostic-settings.md
The activity log uses a diagnostic setting but has its own user interface becaus
This section discusses requirements and limitations.
+### Time before telemetry gets to destination
+
+Once you have set up a diagnostic setting, data should start flowing to your selected destination(s) within 90 minutes. If you get no information within 24 hours, then either:
+- no logs are being generated, or
+- something is wrong in the underlying routing mechanism. Try disabling the configuration and then reenabling it, as shown in the sketch after this list. Contact Azure support through the Azure portal if you continue to have issues.
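A minimal Azure CLI sketch for that reset, assuming a Log Analytics workspace destination; the resource IDs, setting name, and log category are placeholders and should match your original configuration.

```azurecli
# Confirm the diagnostic setting exists on the resource (placeholder resource ID).
az monitor diagnostic-settings list --resource <resourceId>

# Delete and then recreate the setting to reset the underlying routing (placeholder names and category).
az monitor diagnostic-settings delete --resource <resourceId> --name <settingName>
az monitor diagnostic-settings create --resource <resourceId> --name <settingName> \
  --workspace <logAnalyticsWorkspaceResourceId> \
  --logs '[{"category":"<categoryName>","enabled":true}]'
```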
+
### Metrics as a source

There are certain limitations with exporting metrics:
azure-monitor Metrics Supported https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/metrics-supported.md
Previously updated : 09/12/2022 Last updated : 09/13/2022
This latest update adds a new column and reorders the metrics to be alphabetical
|Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions|
||||||||
-|CACompliantDeviceSuccessCount|Yes|CACompliantDeviceSuccessCount|Count|Count|CA compliant device success count for Azure AD|No Dimensions|
-|CAManagedDeviceSuccessCount|No|CAManagedDeviceSuccessCount|Count|Count|CA domain join device success count for Azure AD|No Dimensions|
-|MFAAttemptCount|No|MFAAttemptCount|Count|Count|MFA attempt count for Azure AD|No Dimensions|
-|MFAFailureCount|No|MFAFailureCount|Count|Count|MFA failure count for Azure AD|No Dimensions|
-|MFASuccessCount|No|MFASuccessCount|Count|Count|MFA success count for Azure AD|No Dimensions|
-|SamlFailureCount|Yes|SamlFailureCount|Count|Count|Saml token failure count for relying party scenario|No Dimensions|
-|SamlSuccessCount|Yes|SamlSuccessCount|Count|Count|Saml token success count for relying party scenario|No Dimensions|
+|ThrottledRequests|No|ThrottledRequests|Count|Average|azureADMetrics type metric|No Dimensions|
## Microsoft.AnalysisServices/servers
This latest update adds a new column and reorders the metrics to be alphabetical
|qpu_metric|Yes|QPU|Count|Average|QPU. Range 0-100 for S1, 0-200 for S2 and 0-400 for S4|ServerResourceType|
|QueryPoolBusyThreads|Yes|Query Pool Busy Threads|Count|Average|Number of busy threads in the query thread pool.|ServerResourceType|
|QueryPoolIdleThreads|Yes|Threads: Query pool idle threads|Count|Average|Number of idle threads for I/O jobs in the processing thread pool.|ServerResourceType|
-|QueryPoolJobQueueLength|Yes|Threads: Query pool job queue length|Count|Average|Number of jobs in the queue of the query thread pool.|ServerResourceType|
+|QueryPoolJobQueueLength|Yes|Threads: Query pool job queue lengt|Count|Average|Number of jobs in the queue of the query thread pool.|ServerResourceType|
|Quota|Yes|Memory: Quota|Bytes|Average|Current memory quota, in bytes. Memory quota is also known as a memory grant or memory reservation.|ServerResourceType|
|QuotaBlocked|Yes|Memory: Quota Blocked|Count|Average|Current number of quota requests that are blocked until other memory quotas are freed.|ServerResourceType|
|RowsConvertedPerSec|Yes|Processing: Rows converted per sec|CountPerSecond|Average|Rate of rows converted during processing.|ServerResourceType|
This latest update adds a new column and reorders the metrics to be alphabetical
|total-requests|Yes|total-requests|Count|Average|Total number of requests in the lifetime of the process|Deployment, AppName, Pod|
|working-set|Yes|working-set|Count|Average|Amount of working set used by the process (MB)|Deployment, AppName, Pod|

## Microsoft.Automation/automationAccounts

|Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions|
||||||||
-|TotalJob|Yes|Total Jobs|Count|Total|The total number of jobs|Runbook Name, Status|
+|TotalJob|Yes|Total Jobs|Count|Total|The total number of jobs|Runbook, Status|
|TotalUpdateDeploymentMachineRuns|Yes|Total Update Deployment Machine Runs|Count|Total|Total software update deployment machine runs in a software update deployment run|SoftwareUpdateConfigurationName, Status, TargetComputer, SoftwareUpdateConfigurationRunId|
|TotalUpdateDeploymentRuns|Yes|Total Update Deployment Runs|Count|Total|Total software update deployment runs|SoftwareUpdateConfigurationName, Status|
This latest update adds a new column and reorders the metrics to be alphabetical
|WaitingForStartTaskNodeCount|No|Waiting For Start Task Node Count|Count|Total|Number of nodes waiting for the Start Task to complete|No Dimensions|
-## Microsoft.BatchAI/workspaces
+## Microsoft.BatchAI/workspaces
+|Category|Category Display Name|Costs To Export|
+||||
+|BaiClusterEvent|BaiClusterEvent|No|
+|BaiClusterNodeEvent|BaiClusterNodeEvent|No|
+|BaiJobEvent|BaiJobEvent|No|
-|Category|Category Display Name|Costs To Export|
-||||
-|BaiClusterEvent|BaiClusterEvent|No|
-|BaiClusterNodeEvent|BaiClusterNodeEvent|No|
-|BaiJobEvent|BaiJobEvent|No|
## microsoft.bing/accounts
This latest update adds a new column and reorders the metrics to be alphabetical
|Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions|
||||||||
-|allcachehits|Yes|Cache Hits (Instance Based)|Count|Total|The number of successful key lookups. For more details, see https://aka.ms/redis/metrics.|ShardId, Port, Primary|
-|allcachemisses|Yes|Cache Misses (Instance Based)|Count|Total|The number of failed key lookups. For more details, see https://aka.ms/redis/metrics.|ShardId, Port, Primary|
-|allcacheRead|Yes|Cache Read (Instance Based)|BytesPerSecond|Maximum|The amount of data read from the cache in Megabytes per second (MB/s). For more details, see https://aka.ms/redis/metrics.|ShardId, Port, Primary|
-|allcacheWrite|Yes|Cache Write (Instance Based)|BytesPerSecond|Maximum|The amount of data written to the cache in Megabytes per second (MB/s). For more details, see https://aka.ms/redis/metrics.|ShardId, Port, Primary|
-|allconnectedclients|Yes|Connected Clients (Instance Based)|Count|Maximum|The number of client connections to the cache. For more details, see https://aka.ms/redis/metrics.|ShardId, Port, Primary|
-|allConnectionsClosedPerSecond|Yes|Connections Closed Per Second (Instance Based)|CountPerSecond|Maximum|The number of instantaneous connections closed per second on the cache via port 6379 or 6380 (SSL). For more details, see https://aka.ms/redis/metrics.|ShardId, Primary, Ssl|
-|allConnectionsCreatedPerSecond|Yes|Connections Created Per Second (Instance Based)|CountPerSecond|Maximum|The number of instantaneous connections created per second on the cache via port 6379 or 6380 (SSL). For more details, see https://aka.ms/redis/metrics.|ShardId, Primary, Ssl|
-|allevictedkeys|Yes|Evicted Keys (Instance Based)|Count|Total|The number of items evicted from the cache. For more details, see https://aka.ms/redis/metrics.|ShardId, Port, Primary|
-|allexpiredkeys|Yes|Expired Keys (Instance Based)|Count|Total|The number of items expired from the cache. For more details, see https://aka.ms/redis/metrics.|ShardId, Port, Primary|
-|allgetcommands|Yes|Gets (Instance Based)|Count|Total|The number of get operations from the cache. For more details, see https://aka.ms/redis/metrics.|ShardId, Port, Primary|
-|alloperationsPerSecond|Yes|Operations Per Second (Instance Based)|Count|Maximum|The number of instantaneous operations per second executed on the cache. For more details, see https://aka.ms/redis/metrics.|ShardId, Port, Primary|
-|allpercentprocessortime|Yes|CPU (Instance Based)|Percent|Maximum|The CPU utilization of the Azure Redis Cache server as a percentage. For more details, see https://aka.ms/redis/metrics.|ShardId, Port, Primary|
-|allserverLoad|Yes|Server Load (Instance Based)|Percent|Maximum|The percentage of cycles in which the Redis server is busy processing and not waiting idle for messages. For more details, see https://aka.ms/redis/metrics.|ShardId, Port, Primary|
-|allsetcommands|Yes|Sets (Instance Based)|Count|Total|The number of set operations to the cache. For more details, see https://aka.ms/redis/metrics.|ShardId, Port, Primary|
-|alltotalcommandsprocessed|Yes|Total Operations (Instance Based)|Count|Total|The total number of commands processed by the cache server. For more details, see https://aka.ms/redis/metrics.|ShardId, Port, Primary|
-|alltotalkeys|Yes|Total Keys (Instance Based)|Count|Maximum|The total number of items in the cache. For more details, see https://aka.ms/redis/metrics.|ShardId, Port, Primary|
-|allusedmemory|Yes|Used Memory (Instance Based)|Bytes|Maximum|The amount of cache memory used for key/value pairs in the cache in MB. For more details, see https://aka.ms/redis/metrics.|ShardId, Port, Primary|
-|allusedmemorypercentage|Yes|Used Memory Percentage (Instance Based)|Percent|Maximum|The percentage of cache memory used for key/value pairs. For more details, see https://aka.ms/redis/metrics.|ShardId, Port, Primary|
-|allusedmemoryRss|Yes|Used Memory RSS (Instance Based)|Bytes|Maximum|The amount of cache memory used in MB, including fragmentation and metadata. For more details, see https://aka.ms/redis/metrics.|ShardId, Port, Primary|
-|cachehits|Yes|Cache Hits|Count|Total|The number of successful key lookups. For more details, see https://aka.ms/redis/metrics.|ShardId|
-|cachehits0|Yes|Cache Hits (Shard 0)|Count|Total|The number of successful key lookups. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
-|cachehits1|Yes|Cache Hits (Shard 1)|Count|Total|The number of successful key lookups. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
-|cachehits2|Yes|Cache Hits (Shard 2)|Count|Total|The number of successful key lookups. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
-|cachehits3|Yes|Cache Hits (Shard 3)|Count|Total|The number of successful key lookups. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
-|cachehits4|Yes|Cache Hits (Shard 4)|Count|Total|The number of successful key lookups. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
-|cachehits5|Yes|Cache Hits (Shard 5)|Count|Total|The number of successful key lookups. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
-|cachehits6|Yes|Cache Hits (Shard 6)|Count|Total|The number of successful key lookups. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
-|cachehits7|Yes|Cache Hits (Shard 7)|Count|Total|The number of successful key lookups. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
-|cachehits8|Yes|Cache Hits (Shard 8)|Count|Total|The number of successful key lookups. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
-|cachehits9|Yes|Cache Hits (Shard 9)|Count|Total|The number of successful key lookups. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
-|cacheLatency|Yes|Cache Latency Microseconds (Preview)|Count|Average|The latency to the cache in microseconds. For more details, see https://aka.ms/redis/metrics.|ShardId|
-|cachemisses|Yes|Cache Misses|Count|Total|The number of failed key lookups. For more details, see https://aka.ms/redis/metrics.|ShardId|
-|cachemisses0|Yes|Cache Misses (Shard 0)|Count|Total|The number of failed key lookups. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
-|cachemisses1|Yes|Cache Misses (Shard 1)|Count|Total|The number of failed key lookups. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
-|cachemisses2|Yes|Cache Misses (Shard 2)|Count|Total|The number of failed key lookups. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
-|cachemisses3|Yes|Cache Misses (Shard 3)|Count|Total|The number of failed key lookups. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
-|cachemisses4|Yes|Cache Misses (Shard 4)|Count|Total|The number of failed key lookups. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
-|cachemisses5|Yes|Cache Misses (Shard 5)|Count|Total|The number of failed key lookups. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
-|cachemisses6|Yes|Cache Misses (Shard 6)|Count|Total|The number of failed key lookups. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
-|cachemisses7|Yes|Cache Misses (Shard 7)|Count|Total|The number of failed key lookups. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
-|cachemisses8|Yes|Cache Misses (Shard 8)|Count|Total|The number of failed key lookups. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
-|cachemisses9|Yes|Cache Misses (Shard 9)|Count|Total|The number of failed key lookups. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
-|cachemissrate|Yes|Cache Miss Rate|Percent|Total|The % of get requests that miss. For more details, see https://aka.ms/redis/metrics.|ShardId|
-|cacheRead|Yes|Cache Read|BytesPerSecond|Maximum|The amount of data read from the cache in Megabytes per second (MB/s). For more details, see https://aka.ms/redis/metrics.|ShardId|
-|cacheRead0|Yes|Cache Read (Shard 0)|BytesPerSecond|Maximum|The amount of data read from the cache in Megabytes per second (MB/s). For more details, see https://aka.ms/redis/metrics.|No Dimensions|
-|cacheRead1|Yes|Cache Read (Shard 1)|BytesPerSecond|Maximum|The amount of data read from the cache in Megabytes per second (MB/s). For more details, see https://aka.ms/redis/metrics.|No Dimensions|
-|cacheRead2|Yes|Cache Read (Shard 2)|BytesPerSecond|Maximum|The amount of data read from the cache in Megabytes per second (MB/s). For more details, see https://aka.ms/redis/metrics.|No Dimensions|
-|cacheRead3|Yes|Cache Read (Shard 3)|BytesPerSecond|Maximum|The amount of data read from the cache in Megabytes per second (MB/s). For more details, see https://aka.ms/redis/metrics.|No Dimensions|
-|cacheRead4|Yes|Cache Read (Shard 4)|BytesPerSecond|Maximum|The amount of data read from the cache in Megabytes per second (MB/s). For more details, see https://aka.ms/redis/metrics.|No Dimensions|
-|cacheRead5|Yes|Cache Read (Shard 5)|BytesPerSecond|Maximum|The amount of data read from the cache in Megabytes per second (MB/s). For more details, see https://aka.ms/redis/metrics.|No Dimensions|
-|cacheRead6|Yes|Cache Read (Shard 6)|BytesPerSecond|Maximum|The amount of data read from the cache in Megabytes per second (MB/s). For more details, see https://aka.ms/redis/metrics.|No Dimensions|
-|cacheRead7|Yes|Cache Read (Shard 7)|BytesPerSecond|Maximum|The amount of data read from the cache in Megabytes per second (MB/s). For more details, see https://aka.ms/redis/metrics.|No Dimensions|
-|cacheRead8|Yes|Cache Read (Shard 8)|BytesPerSecond|Maximum|The amount of data read from the cache in Megabytes per second (MB/s). For more details, see https://aka.ms/redis/metrics.|No Dimensions|
-|cacheRead9|Yes|Cache Read (Shard 9)|BytesPerSecond|Maximum|The amount of data read from the cache in Megabytes per second (MB/s). For more details, see https://aka.ms/redis/metrics.|No Dimensions|
-|cacheWrite|Yes|Cache Write|BytesPerSecond|Maximum|The amount of data written to the cache in Megabytes per second (MB/s). For more details, see https://aka.ms/redis/metrics.|ShardId|
-|cacheWrite0|Yes|Cache Write (Shard 0)|BytesPerSecond|Maximum|The amount of data written to the cache in Megabytes per second (MB/s). For more details, see https://aka.ms/redis/metrics.|No Dimensions|
-|cacheWrite1|Yes|Cache Write (Shard 1)|BytesPerSecond|Maximum|The amount of data written to the cache in Megabytes per second (MB/s). For more details, see https://aka.ms/redis/metrics.|No Dimensions|
-|cacheWrite2|Yes|Cache Write (Shard 2)|BytesPerSecond|Maximum|The amount of data written to the cache in Megabytes per second (MB/s). For more details, see https://aka.ms/redis/metrics.|No Dimensions|
-|cacheWrite3|Yes|Cache Write (Shard 3)|BytesPerSecond|Maximum|The amount of data written to the cache in Megabytes per second (MB/s). For more details, see https://aka.ms/redis/metrics.|No Dimensions|
-|cacheWrite4|Yes|Cache Write (Shard 4)|BytesPerSecond|Maximum|The amount of data written to the cache in Megabytes per second (MB/s). For more details, see https://aka.ms/redis/metrics.|No Dimensions|
-|cacheWrite5|Yes|Cache Write (Shard 5)|BytesPerSecond|Maximum|The amount of data written to the cache in Megabytes per second (MB/s). For more details, see https://aka.ms/redis/metrics.|No Dimensions|
-|cacheWrite6|Yes|Cache Write (Shard 6)|BytesPerSecond|Maximum|The amount of data written to the cache in Megabytes per second (MB/s). For more details, see https://aka.ms/redis/metrics.|No Dimensions|
-|cacheWrite7|Yes|Cache Write (Shard 7)|BytesPerSecond|Maximum|The amount of data written to the cache in Megabytes per second (MB/s). For more details, see https://aka.ms/redis/metrics.|No Dimensions|
-|cacheWrite8|Yes|Cache Write (Shard 8)|BytesPerSecond|Maximum|The amount of data written to the cache in Megabytes per second (MB/s). For more details, see https://aka.ms/redis/metrics.|No Dimensions|
-|cacheWrite9|Yes|Cache Write (Shard 9)|BytesPerSecond|Maximum|The amount of data written to the cache in Megabytes per second (MB/s). For more details, see https://aka.ms/redis/metrics.|No Dimensions|
-|connectedclients|Yes|Connected Clients|Count|Maximum|The number of client connections to the cache. For more details, see https://aka.ms/redis/metrics.|ShardId|
-|connectedclients0|Yes|Connected Clients (Shard 0)|Count|Maximum|The number of client connections to the cache. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
-|connectedclients1|Yes|Connected Clients (Shard 1)|Count|Maximum|The number of client connections to the cache. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
-|connectedclients2|Yes|Connected Clients (Shard 2)|Count|Maximum|The number of client connections to the cache. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
-|connectedclients3|Yes|Connected Clients (Shard 3)|Count|Maximum|The number of client connections to the cache. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
-|connectedclients4|Yes|Connected Clients (Shard 4)|Count|Maximum|The number of client connections to the cache. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
-|connectedclients5|Yes|Connected Clients (Shard 5)|Count|Maximum|The number of client connections to the cache. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
-|connectedclients6|Yes|Connected Clients (Shard 6)|Count|Maximum|The number of client connections to the cache. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
-|connectedclients7|Yes|Connected Clients (Shard 7)|Count|Maximum|The number of client connections to the cache. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
-|connectedclients8|Yes|Connected Clients (Shard 8)|Count|Maximum|The number of client connections to the cache. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
-|connectedclients9|Yes|Connected Clients (Shard 9)|Count|Maximum|The number of client connections to the cache. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
-|errors|Yes|Errors|Count|Maximum|The number errors that occurred on the cache. For more details, see https://aka.ms/redis/metrics.|ShardId, ErrorType|
-|evictedkeys|Yes|Evicted Keys|Count|Total|The number of items evicted from the cache. For more details, see https://aka.ms/redis/metrics.|ShardId|
-|evictedkeys0|Yes|Evicted Keys (Shard 0)|Count|Total|The number of items evicted from the cache. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
-|evictedkeys1|Yes|Evicted Keys (Shard 1)|Count|Total|The number of items evicted from the cache. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
-|evictedkeys2|Yes|Evicted Keys (Shard 2)|Count|Total|The number of items evicted from the cache. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
-|evictedkeys3|Yes|Evicted Keys (Shard 3)|Count|Total|The number of items evicted from the cache. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
-|evictedkeys4|Yes|Evicted Keys (Shard 4)|Count|Total|The number of items evicted from the cache. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
-|evictedkeys5|Yes|Evicted Keys (Shard 5)|Count|Total|The number of items evicted from the cache. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
-|evictedkeys6|Yes|Evicted Keys (Shard 6)|Count|Total|The number of items evicted from the cache. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
-|evictedkeys7|Yes|Evicted Keys (Shard 7)|Count|Total|The number of items evicted from the cache. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
-|evictedkeys8|Yes|Evicted Keys (Shard 8)|Count|Total|The number of items evicted from the cache. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
-|evictedkeys9|Yes|Evicted Keys (Shard 9)|Count|Total|The number of items evicted from the cache. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
-|expiredkeys|Yes|Expired Keys|Count|Total|The number of items expired from the cache. For more details, see https://aka.ms/redis/metrics.|ShardId|
-|expiredkeys0|Yes|Expired Keys (Shard 0)|Count|Total|The number of items expired from the cache. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
-|expiredkeys1|Yes|Expired Keys (Shard 1)|Count|Total|The number of items expired from the cache. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
-|expiredkeys2|Yes|Expired Keys (Shard 2)|Count|Total|The number of items expired from the cache. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
-|expiredkeys3|Yes|Expired Keys (Shard 3)|Count|Total|The number of items expired from the cache. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
-|expiredkeys4|Yes|Expired Keys (Shard 4)|Count|Total|The number of items expired from the cache. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
-|expiredkeys5|Yes|Expired Keys (Shard 5)|Count|Total|The number of items expired from the cache. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
-|expiredkeys6|Yes|Expired Keys (Shard 6)|Count|Total|The number of items expired from the cache. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
-|expiredkeys7|Yes|Expired Keys (Shard 7)|Count|Total|The number of items expired from the cache. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
-|expiredkeys8|Yes|Expired Keys (Shard 8)|Count|Total|The number of items expired from the cache. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
-|expiredkeys9|Yes|Expired Keys (Shard 9)|Count|Total|The number of items expired from the cache. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
-|getcommands|Yes|Gets|Count|Total|The number of get operations from the cache. For more details, see https://aka.ms/redis/metrics.|ShardId|
-|getcommands0|Yes|Gets (Shard 0)|Count|Total|The number of get operations from the cache. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
-|getcommands1|Yes|Gets (Shard 1)|Count|Total|The number of get operations from the cache. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
-|getcommands2|Yes|Gets (Shard 2)|Count|Total|The number of get operations from the cache. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
-|getcommands3|Yes|Gets (Shard 3)|Count|Total|The number of get operations from the cache. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
-|getcommands4|Yes|Gets (Shard 4)|Count|Total|The number of get operations from the cache. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
-|getcommands5|Yes|Gets (Shard 5)|Count|Total|The number of get operations from the cache. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
-|getcommands6|Yes|Gets (Shard 6)|Count|Total|The number of get operations from the cache. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
-|getcommands7|Yes|Gets (Shard 7)|Count|Total|The number of get operations from the cache. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
-|getcommands8|Yes|Gets (Shard 8)|Count|Total|The number of get operations from the cache. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
-|getcommands9|Yes|Gets (Shard 9)|Count|Total|The number of get operations from the cache. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
-|operationsPerSecond|Yes|Operations Per Second|Count|Maximum|The number of instantaneous operations per second executed on the cache. For more details, see https://aka.ms/redis/metrics.|ShardId|
-|operationsPerSecond0|Yes|Operations Per Second (Shard 0)|Count|Maximum|The number of instantaneous operations per second executed on the cache. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
-|operationsPerSecond1|Yes|Operations Per Second (Shard 1)|Count|Maximum|The number of instantaneous operations per second executed on the cache. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
-|operationsPerSecond2|Yes|Operations Per Second (Shard 2)|Count|Maximum|The number of instantaneous operations per second executed on the cache. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
-|operationsPerSecond3|Yes|Operations Per Second (Shard 3)|Count|Maximum|The number of instantaneous operations per second executed on the cache. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
-|operationsPerSecond4|Yes|Operations Per Second (Shard 4)|Count|Maximum|The number of instantaneous operations per second executed on the cache. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
-|operationsPerSecond5|Yes|Operations Per Second (Shard 5)|Count|Maximum|The number of instantaneous operations per second executed on the cache. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
-|operationsPerSecond6|Yes|Operations Per Second (Shard 6)|Count|Maximum|The number of instantaneous operations per second executed on the cache. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
-|operationsPerSecond7|Yes|Operations Per Second (Shard 7)|Count|Maximum|The number of instantaneous operations per second executed on the cache. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
-|operationsPerSecond8|Yes|Operations Per Second (Shard 8)|Count|Maximum|The number of instantaneous operations per second executed on the cache. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
-|operationsPerSecond9|Yes|Operations Per Second (Shard 9)|Count|Maximum|The number of instantaneous operations per second executed on the cache. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
-|percentProcessorTime|Yes|CPU|Percent|Maximum|The CPU utilization of the Azure Redis Cache server as a percentage. For more details, see https://aka.ms/redis/metrics.|ShardId|
-|percentProcessorTime0|Yes|CPU (Shard 0)|Percent|Maximum|The CPU utilization of the Azure Redis Cache server as a percentage. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
-|percentProcessorTime1|Yes|CPU (Shard 1)|Percent|Maximum|The CPU utilization of the Azure Redis Cache server as a percentage. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
-|percentProcessorTime2|Yes|CPU (Shard 2)|Percent|Maximum|The CPU utilization of the Azure Redis Cache server as a percentage. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
-|percentProcessorTime3|Yes|CPU (Shard 3)|Percent|Maximum|The CPU utilization of the Azure Redis Cache server as a percentage. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
-|percentProcessorTime4|Yes|CPU (Shard 4)|Percent|Maximum|The CPU utilization of the Azure Redis Cache server as a percentage. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
-|percentProcessorTime5|Yes|CPU (Shard 5)|Percent|Maximum|The CPU utilization of the Azure Redis Cache server as a percentage. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
-|percentProcessorTime6|Yes|CPU (Shard 6)|Percent|Maximum|The CPU utilization of the Azure Redis Cache server as a percentage. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
-|percentProcessorTime7|Yes|CPU (Shard 7)|Percent|Maximum|The CPU utilization of the Azure Redis Cache server as a percentage. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
-|percentProcessorTime8|Yes|CPU (Shard 8)|Percent|Maximum|The CPU utilization of the Azure Redis Cache server as a percentage. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
-|percentProcessorTime9|Yes|CPU (Shard 9)|Percent|Maximum|The CPU utilization of the Azure Redis Cache server as a percentage. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
-|serverLoad|Yes|Server Load|Percent|Maximum|The percentage of cycles in which the Redis server is busy processing and not waiting idle for messages. For more details, see https://aka.ms/redis/metrics.|ShardId|
-|serverLoad0|Yes|Server Load (Shard 0)|Percent|Maximum|The percentage of cycles in which the Redis server is busy processing and not waiting idle for messages. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
-|serverLoad1|Yes|Server Load (Shard 1)|Percent|Maximum|The percentage of cycles in which the Redis server is busy processing and not waiting idle for messages. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
-|serverLoad2|Yes|Server Load (Shard 2)|Percent|Maximum|The percentage of cycles in which the Redis server is busy processing and not waiting idle for messages. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
-|serverLoad3|Yes|Server Load (Shard 3)|Percent|Maximum|The percentage of cycles in which the Redis server is busy processing and not waiting idle for messages. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
-|serverLoad4|Yes|Server Load (Shard 4)|Percent|Maximum|The percentage of cycles in which the Redis server is busy processing and not waiting idle for messages. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
-|serverLoad5|Yes|Server Load (Shard 5)|Percent|Maximum|The percentage of cycles in which the Redis server is busy processing and not waiting idle for messages. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
-|serverLoad6|Yes|Server Load (Shard 6)|Percent|Maximum|The percentage of cycles in which the Redis server is busy processing and not waiting idle for messages. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
-|serverLoad7|Yes|Server Load (Shard 7)|Percent|Maximum|The percentage of cycles in which the Redis server is busy processing and not waiting idle for messages. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
-|serverLoad8|Yes|Server Load (Shard 8)|Percent|Maximum|The percentage of cycles in which the Redis server is busy processing and not waiting idle for messages. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
-|serverLoad9|Yes|Server Load (Shard 9)|Percent|Maximum|The percentage of cycles in which the Redis server is busy processing and not waiting idle for messages. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
-|setcommands|Yes|Sets|Count|Total|The number of set operations to the cache. For more details, see https://aka.ms/redis/metrics.|ShardId|
-|setcommands0|Yes|Sets (Shard 0)|Count|Total|The number of set operations to the cache. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
-|setcommands1|Yes|Sets (Shard 1)|Count|Total|The number of set operations to the cache. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
-|setcommands2|Yes|Sets (Shard 2)|Count|Total|The number of set operations to the cache. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
-|setcommands3|Yes|Sets (Shard 3)|Count|Total|The number of set operations to the cache. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
-|setcommands4|Yes|Sets (Shard 4)|Count|Total|The number of set operations to the cache. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
-|setcommands5|Yes|Sets (Shard 5)|Count|Total|The number of set operations to the cache. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
-|setcommands6|Yes|Sets (Shard 6)|Count|Total|The number of set operations to the cache. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
-|setcommands7|Yes|Sets (Shard 7)|Count|Total|The number of set operations to the cache. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
-|setcommands8|Yes|Sets (Shard 8)|Count|Total|The number of set operations to the cache. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
-|setcommands9|Yes|Sets (Shard 9)|Count|Total|The number of set operations to the cache. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
-|totalcommandsprocessed|Yes|Total Operations|Count|Total|The total number of commands processed by the cache server. For more details, see https://aka.ms/redis/metrics.|ShardId|
-|totalcommandsprocessed0|Yes|Total Operations (Shard 0)|Count|Total|The total number of commands processed by the cache server. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
-|totalcommandsprocessed1|Yes|Total Operations (Shard 1)|Count|Total|The total number of commands processed by the cache server. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
-|totalcommandsprocessed2|Yes|Total Operations (Shard 2)|Count|Total|The total number of commands processed by the cache server. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
-|totalcommandsprocessed3|Yes|Total Operations (Shard 3)|Count|Total|The total number of commands processed by the cache server. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
-|totalcommandsprocessed4|Yes|Total Operations (Shard 4)|Count|Total|The total number of commands processed by the cache server. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
-|totalcommandsprocessed5|Yes|Total Operations (Shard 5)|Count|Total|The total number of commands processed by the cache server. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
-|totalcommandsprocessed6|Yes|Total Operations (Shard 6)|Count|Total|The total number of commands processed by the cache server. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
-|totalcommandsprocessed7|Yes|Total Operations (Shard 7)|Count|Total|The total number of commands processed by the cache server. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
-|totalcommandsprocessed8|Yes|Total Operations (Shard 8)|Count|Total|The total number of commands processed by the cache server. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
-|totalcommandsprocessed9|Yes|Total Operations (Shard 9)|Count|Total|The total number of commands processed by the cache server. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
-|totalkeys|Yes|Total Keys|Count|Maximum|The total number of items in the cache. For more details, see https://aka.ms/redis/metrics.|ShardId|
-|totalkeys0|Yes|Total Keys (Shard 0)|Count|Maximum|The total number of items in the cache. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
-|totalkeys1|Yes|Total Keys (Shard 1)|Count|Maximum|The total number of items in the cache. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
-|totalkeys2|Yes|Total Keys (Shard 2)|Count|Maximum|The total number of items in the cache. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
-|totalkeys3|Yes|Total Keys (Shard 3)|Count|Maximum|The total number of items in the cache. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
-|totalkeys4|Yes|Total Keys (Shard 4)|Count|Maximum|The total number of items in the cache. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
-|totalkeys5|Yes|Total Keys (Shard 5)|Count|Maximum|The total number of items in the cache. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
-|totalkeys6|Yes|Total Keys (Shard 6)|Count|Maximum|The total number of items in the cache. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
-|totalkeys7|Yes|Total Keys (Shard 7)|Count|Maximum|The total number of items in the cache. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
-|totalkeys8|Yes|Total Keys (Shard 8)|Count|Maximum|The total number of items in the cache. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
-|totalkeys9|Yes|Total Keys (Shard 9)|Count|Maximum|The total number of items in the cache. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
-|usedmemory|Yes|Used Memory|Bytes|Maximum|The amount of cache memory used for key/value pairs in the cache in MB. For more details, see https://aka.ms/redis/metrics.|ShardId|
-|usedmemory0|Yes|Used Memory (Shard 0)|Bytes|Maximum|The amount of cache memory used for key/value pairs in the cache in MB. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
-|usedmemory1|Yes|Used Memory (Shard 1)|Bytes|Maximum|The amount of cache memory used for key/value pairs in the cache in MB. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
-|usedmemory2|Yes|Used Memory (Shard 2)|Bytes|Maximum|The amount of cache memory used for key/value pairs in the cache in MB. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
-|usedmemory3|Yes|Used Memory (Shard 3)|Bytes|Maximum|The amount of cache memory used for key/value pairs in the cache in MB. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
-|usedmemory4|Yes|Used Memory (Shard 4)|Bytes|Maximum|The amount of cache memory used for key/value pairs in the cache in MB. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
-|usedmemory5|Yes|Used Memory (Shard 5)|Bytes|Maximum|The amount of cache memory used for key/value pairs in the cache in MB. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
-|usedmemory6|Yes|Used Memory (Shard 6)|Bytes|Maximum|The amount of cache memory used for key/value pairs in the cache in MB. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
-|usedmemory7|Yes|Used Memory (Shard 7)|Bytes|Maximum|The amount of cache memory used for key/value pairs in the cache in MB. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
-|usedmemory8|Yes|Used Memory (Shard 8)|Bytes|Maximum|The amount of cache memory used for key/value pairs in the cache in MB. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
-|usedmemory9|Yes|Used Memory (Shard 9)|Bytes|Maximum|The amount of cache memory used for key/value pairs in the cache in MB. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
-|usedmemorypercentage|Yes|Used Memory Percentage|Percent|Maximum|The percentage of cache memory used for key/value pairs. For more details, see https://aka.ms/redis/metrics.|ShardId|
-|usedmemoryRss|Yes|Used Memory RSS|Bytes|Maximum|The amount of cache memory used in MB, including fragmentation and metadata. For more details, see https://aka.ms/redis/metrics.|ShardId|
-|usedmemoryRss0|Yes|Used Memory RSS (Shard 0)|Bytes|Maximum|The amount of cache memory used in MB, including fragmentation and metadata. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
-|usedmemoryRss1|Yes|Used Memory RSS (Shard 1)|Bytes|Maximum|The amount of cache memory used in MB, including fragmentation and metadata. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
-|usedmemoryRss2|Yes|Used Memory RSS (Shard 2)|Bytes|Maximum|The amount of cache memory used in MB, including fragmentation and metadata. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
-|usedmemoryRss3|Yes|Used Memory RSS (Shard 3)|Bytes|Maximum|The amount of cache memory used in MB, including fragmentation and metadata. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
-|usedmemoryRss4|Yes|Used Memory RSS (Shard 4)|Bytes|Maximum|The amount of cache memory used in MB, including fragmentation and metadata. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
-|usedmemoryRss5|Yes|Used Memory RSS (Shard 5)|Bytes|Maximum|The amount of cache memory used in MB, including fragmentation and metadata. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
-|usedmemoryRss6|Yes|Used Memory RSS (Shard 6)|Bytes|Maximum|The amount of cache memory used in MB, including fragmentation and metadata. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
-|usedmemoryRss7|Yes|Used Memory RSS (Shard 7)|Bytes|Maximum|The amount of cache memory used in MB, including fragmentation and metadata. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
-|usedmemoryRss8|Yes|Used Memory RSS (Shard 8)|Bytes|Maximum|The amount of cache memory used in MB, including fragmentation and metadata. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
-|usedmemoryRss9|Yes|Used Memory RSS (Shard 9)|Bytes|Maximum|The amount of cache memory used in MB, including fragmentation and metadata. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
+|allcachehits|Yes|Cache Hits (Instance Based)|Count|Total||ShardId, Port, Primary|
+|allcachemisses|Yes|Cache Misses (Instance Based)|Count|Total||ShardId, Port, Primary|
+|allcacheRead|Yes|Cache Read (Instance Based)|BytesPerSecond|Maximum||ShardId, Port, Primary|
+|allcacheWrite|Yes|Cache Write (Instance Based)|BytesPerSecond|Maximum||ShardId, Port, Primary|
+|allconnectedclients|Yes|Connected Clients (Instance Based)|Count|Maximum||ShardId, Port, Primary|
+|allevictedkeys|Yes|Evicted Keys (Instance Based)|Count|Total||ShardId, Port, Primary|
+|allexpiredkeys|Yes|Expired Keys (Instance Based)|Count|Total||ShardId, Port, Primary|
+|allgetcommands|Yes|Gets (Instance Based)|Count|Total||ShardId, Port, Primary|
+|alloperationsPerSecond|Yes|Operations Per Second (Instance Based)|Count|Maximum||ShardId, Port, Primary|
+|allpercentprocessortime|Yes|CPU (Instance Based)|Percent|Maximum||ShardId, Port, Primary|
+|allserverLoad|Yes|Server Load (Instance Based)|Percent|Maximum||ShardId, Port, Primary|
+|allsetcommands|Yes|Sets (Instance Based)|Count|Total||ShardId, Port, Primary|
+|alltotalcommandsprocessed|Yes|Total Operations (Instance Based)|Count|Total||ShardId, Port, Primary|
+|alltotalkeys|Yes|Total Keys (Instance Based)|Count|Maximum||ShardId, Port, Primary|
+|allusedmemory|Yes|Used Memory (Instance Based)|Bytes|Maximum||ShardId, Port, Primary|
+|allusedmemorypercentage|Yes|Used Memory Percentage (Instance Based)|Percent|Maximum||ShardId, Port, Primary|
+|allusedmemoryRss|Yes|Used Memory RSS (Instance Based)|Bytes|Maximum||ShardId, Port, Primary|
+|cachehits|Yes|Cache Hits|Count|Total||ShardId|
+|cachehits0|Yes|Cache Hits (Shard 0)|Count|Total||No Dimensions|
+|cachehits1|Yes|Cache Hits (Shard 1)|Count|Total||No Dimensions|
+|cachehits2|Yes|Cache Hits (Shard 2)|Count|Total||No Dimensions|
+|cachehits3|Yes|Cache Hits (Shard 3)|Count|Total||No Dimensions|
+|cachehits4|Yes|Cache Hits (Shard 4)|Count|Total||No Dimensions|
+|cachehits5|Yes|Cache Hits (Shard 5)|Count|Total||No Dimensions|
+|cachehits6|Yes|Cache Hits (Shard 6)|Count|Total||No Dimensions|
+|cachehits7|Yes|Cache Hits (Shard 7)|Count|Total||No Dimensions|
+|cachehits8|Yes|Cache Hits (Shard 8)|Count|Total||No Dimensions|
+|cachehits9|Yes|Cache Hits (Shard 9)|Count|Total||No Dimensions|
+|cacheLatency|Yes|Cache Latency Microseconds (Preview)|Count|Average||ShardId|
+|cachemisses|Yes|Cache Misses|Count|Total||ShardId|
+|cachemisses0|Yes|Cache Misses (Shard 0)|Count|Total||No Dimensions|
+|cachemisses1|Yes|Cache Misses (Shard 1)|Count|Total||No Dimensions|
+|cachemisses2|Yes|Cache Misses (Shard 2)|Count|Total||No Dimensions|
+|cachemisses3|Yes|Cache Misses (Shard 3)|Count|Total||No Dimensions|
+|cachemisses4|Yes|Cache Misses (Shard 4)|Count|Total||No Dimensions|
+|cachemisses5|Yes|Cache Misses (Shard 5)|Count|Total||No Dimensions|
+|cachemisses6|Yes|Cache Misses (Shard 6)|Count|Total||No Dimensions|
+|cachemisses7|Yes|Cache Misses (Shard 7)|Count|Total||No Dimensions|
+|cachemisses8|Yes|Cache Misses (Shard 8)|Count|Total||No Dimensions|
+|cachemisses9|Yes|Cache Misses (Shard 9)|Count|Total||No Dimensions|
+|cachemissrate|Yes|Cache Miss Rate|Percent|cachemissrate||ShardId|
+|cacheRead|Yes|Cache Read|BytesPerSecond|Maximum||ShardId|
+|cacheRead0|Yes|Cache Read (Shard 0)|BytesPerSecond|Maximum||No Dimensions|
+|cacheRead1|Yes|Cache Read (Shard 1)|BytesPerSecond|Maximum||No Dimensions|
+|cacheRead2|Yes|Cache Read (Shard 2)|BytesPerSecond|Maximum||No Dimensions|
+|cacheRead3|Yes|Cache Read (Shard 3)|BytesPerSecond|Maximum||No Dimensions|
+|cacheRead4|Yes|Cache Read (Shard 4)|BytesPerSecond|Maximum||No Dimensions|
+|cacheRead5|Yes|Cache Read (Shard 5)|BytesPerSecond|Maximum||No Dimensions|
+|cacheRead6|Yes|Cache Read (Shard 6)|BytesPerSecond|Maximum||No Dimensions|
+|cacheRead7|Yes|Cache Read (Shard 7)|BytesPerSecond|Maximum||No Dimensions|
+|cacheRead8|Yes|Cache Read (Shard 8)|BytesPerSecond|Maximum||No Dimensions|
+|cacheRead9|Yes|Cache Read (Shard 9)|BytesPerSecond|Maximum||No Dimensions|
+|cacheWrite|Yes|Cache Write|BytesPerSecond|Maximum||ShardId|
+|cacheWrite0|Yes|Cache Write (Shard 0)|BytesPerSecond|Maximum||No Dimensions|
+|cacheWrite1|Yes|Cache Write (Shard 1)|BytesPerSecond|Maximum||No Dimensions|
+|cacheWrite2|Yes|Cache Write (Shard 2)|BytesPerSecond|Maximum||No Dimensions|
+|cacheWrite3|Yes|Cache Write (Shard 3)|BytesPerSecond|Maximum||No Dimensions|
+|cacheWrite4|Yes|Cache Write (Shard 4)|BytesPerSecond|Maximum||No Dimensions|
+|cacheWrite5|Yes|Cache Write (Shard 5)|BytesPerSecond|Maximum||No Dimensions|
+|cacheWrite6|Yes|Cache Write (Shard 6)|BytesPerSecond|Maximum||No Dimensions|
+|cacheWrite7|Yes|Cache Write (Shard 7)|BytesPerSecond|Maximum||No Dimensions|
+|cacheWrite8|Yes|Cache Write (Shard 8)|BytesPerSecond|Maximum||No Dimensions|
+|cacheWrite9|Yes|Cache Write (Shard 9)|BytesPerSecond|Maximum||No Dimensions|
+|connectedclients|Yes|Connected Clients|Count|Maximum||ShardId|
+|connectedclients0|Yes|Connected Clients (Shard 0)|Count|Maximum||No Dimensions|
+|connectedclients1|Yes|Connected Clients (Shard 1)|Count|Maximum||No Dimensions|
+|connectedclients2|Yes|Connected Clients (Shard 2)|Count|Maximum||No Dimensions|
+|connectedclients3|Yes|Connected Clients (Shard 3)|Count|Maximum||No Dimensions|
+|connectedclients4|Yes|Connected Clients (Shard 4)|Count|Maximum||No Dimensions|
+|connectedclients5|Yes|Connected Clients (Shard 5)|Count|Maximum||No Dimensions|
+|connectedclients6|Yes|Connected Clients (Shard 6)|Count|Maximum||No Dimensions|
+|connectedclients7|Yes|Connected Clients (Shard 7)|Count|Maximum||No Dimensions|
+|connectedclients8|Yes|Connected Clients (Shard 8)|Count|Maximum||No Dimensions|
+|connectedclients9|Yes|Connected Clients (Shard 9)|Count|Maximum||No Dimensions|
+|errors|Yes|Errors|Count|Maximum||ShardId, ErrorType|
+|evictedkeys|Yes|Evicted Keys|Count|Total||ShardId|
+|evictedkeys0|Yes|Evicted Keys (Shard 0)|Count|Total||No Dimensions|
+|evictedkeys1|Yes|Evicted Keys (Shard 1)|Count|Total||No Dimensions|
+|evictedkeys2|Yes|Evicted Keys (Shard 2)|Count|Total||No Dimensions|
+|evictedkeys3|Yes|Evicted Keys (Shard 3)|Count|Total||No Dimensions|
+|evictedkeys4|Yes|Evicted Keys (Shard 4)|Count|Total||No Dimensions|
+|evictedkeys5|Yes|Evicted Keys (Shard 5)|Count|Total||No Dimensions|
+|evictedkeys6|Yes|Evicted Keys (Shard 6)|Count|Total||No Dimensions|
+|evictedkeys7|Yes|Evicted Keys (Shard 7)|Count|Total||No Dimensions|
+|evictedkeys8|Yes|Evicted Keys (Shard 8)|Count|Total||No Dimensions|
+|evictedkeys9|Yes|Evicted Keys (Shard 9)|Count|Total||No Dimensions|
+|expiredkeys|Yes|Expired Keys|Count|Total||ShardId|
+|expiredkeys0|Yes|Expired Keys (Shard 0)|Count|Total||No Dimensions|
+|expiredkeys1|Yes|Expired Keys (Shard 1)|Count|Total||No Dimensions|
+|expiredkeys2|Yes|Expired Keys (Shard 2)|Count|Total||No Dimensions|
+|expiredkeys3|Yes|Expired Keys (Shard 3)|Count|Total||No Dimensions|
+|expiredkeys4|Yes|Expired Keys (Shard 4)|Count|Total||No Dimensions|
+|expiredkeys5|Yes|Expired Keys (Shard 5)|Count|Total||No Dimensions|
+|expiredkeys6|Yes|Expired Keys (Shard 6)|Count|Total||No Dimensions|
+|expiredkeys7|Yes|Expired Keys (Shard 7)|Count|Total||No Dimensions|
+|expiredkeys8|Yes|Expired Keys (Shard 8)|Count|Total||No Dimensions|
+|expiredkeys9|Yes|Expired Keys (Shard 9)|Count|Total||No Dimensions|
+|getcommands|Yes|Gets|Count|Total||ShardId|
+|getcommands0|Yes|Gets (Shard 0)|Count|Total||No Dimensions|
+|getcommands1|Yes|Gets (Shard 1)|Count|Total||No Dimensions|
+|getcommands2|Yes|Gets (Shard 2)|Count|Total||No Dimensions|
+|getcommands3|Yes|Gets (Shard 3)|Count|Total||No Dimensions|
+|getcommands4|Yes|Gets (Shard 4)|Count|Total||No Dimensions|
+|getcommands5|Yes|Gets (Shard 5)|Count|Total||No Dimensions|
+|getcommands6|Yes|Gets (Shard 6)|Count|Total||No Dimensions|
+|getcommands7|Yes|Gets (Shard 7)|Count|Total||No Dimensions|
+|getcommands8|Yes|Gets (Shard 8)|Count|Total||No Dimensions|
+|getcommands9|Yes|Gets (Shard 9)|Count|Total||No Dimensions|
+|operationsPerSecond|Yes|Operations Per Second|Count|Maximum||ShardId|
+|operationsPerSecond0|Yes|Operations Per Second (Shard 0)|Count|Maximum||No Dimensions|
+|operationsPerSecond1|Yes|Operations Per Second (Shard 1)|Count|Maximum||No Dimensions|
+|operationsPerSecond2|Yes|Operations Per Second (Shard 2)|Count|Maximum||No Dimensions|
+|operationsPerSecond3|Yes|Operations Per Second (Shard 3)|Count|Maximum||No Dimensions|
+|operationsPerSecond4|Yes|Operations Per Second (Shard 4)|Count|Maximum||No Dimensions|
+|operationsPerSecond5|Yes|Operations Per Second (Shard 5)|Count|Maximum||No Dimensions|
+|operationsPerSecond6|Yes|Operations Per Second (Shard 6)|Count|Maximum||No Dimensions|
+|operationsPerSecond7|Yes|Operations Per Second (Shard 7)|Count|Maximum||No Dimensions|
+|operationsPerSecond8|Yes|Operations Per Second (Shard 8)|Count|Maximum||No Dimensions|
+|operationsPerSecond9|Yes|Operations Per Second (Shard 9)|Count|Maximum||No Dimensions|
+|percentProcessorTime|Yes|CPU|Percent|Maximum||ShardId|
+|percentProcessorTime0|Yes|CPU (Shard 0)|Percent|Maximum||No Dimensions|
+|percentProcessorTime1|Yes|CPU (Shard 1)|Percent|Maximum||No Dimensions|
+|percentProcessorTime2|Yes|CPU (Shard 2)|Percent|Maximum||No Dimensions|
+|percentProcessorTime3|Yes|CPU (Shard 3)|Percent|Maximum||No Dimensions|
+|percentProcessorTime4|Yes|CPU (Shard 4)|Percent|Maximum||No Dimensions|
+|percentProcessorTime5|Yes|CPU (Shard 5)|Percent|Maximum||No Dimensions|
+|percentProcessorTime6|Yes|CPU (Shard 6)|Percent|Maximum||No Dimensions|
+|percentProcessorTime7|Yes|CPU (Shard 7)|Percent|Maximum||No Dimensions|
+|percentProcessorTime8|Yes|CPU (Shard 8)|Percent|Maximum||No Dimensions|
+|percentProcessorTime9|Yes|CPU (Shard 9)|Percent|Maximum||No Dimensions|
+|serverLoad|Yes|Server Load|Percent|Maximum||ShardId|
+|serverLoad0|Yes|Server Load (Shard 0)|Percent|Maximum||No Dimensions|
+|serverLoad1|Yes|Server Load (Shard 1)|Percent|Maximum||No Dimensions|
+|serverLoad2|Yes|Server Load (Shard 2)|Percent|Maximum||No Dimensions|
+|serverLoad3|Yes|Server Load (Shard 3)|Percent|Maximum||No Dimensions|
+|serverLoad4|Yes|Server Load (Shard 4)|Percent|Maximum||No Dimensions|
+|serverLoad5|Yes|Server Load (Shard 5)|Percent|Maximum||No Dimensions|
+|serverLoad6|Yes|Server Load (Shard 6)|Percent|Maximum||No Dimensions|
+|serverLoad7|Yes|Server Load (Shard 7)|Percent|Maximum||No Dimensions|
+|serverLoad8|Yes|Server Load (Shard 8)|Percent|Maximum||No Dimensions|
+|serverLoad9|Yes|Server Load (Shard 9)|Percent|Maximum||No Dimensions|
+|setcommands|Yes|Sets|Count|Total||ShardId|
+|setcommands0|Yes|Sets (Shard 0)|Count|Total||No Dimensions|
+|setcommands1|Yes|Sets (Shard 1)|Count|Total||No Dimensions|
+|setcommands2|Yes|Sets (Shard 2)|Count|Total||No Dimensions|
+|setcommands3|Yes|Sets (Shard 3)|Count|Total||No Dimensions|
+|setcommands4|Yes|Sets (Shard 4)|Count|Total||No Dimensions|
+|setcommands5|Yes|Sets (Shard 5)|Count|Total||No Dimensions|
+|setcommands6|Yes|Sets (Shard 6)|Count|Total||No Dimensions|
+|setcommands7|Yes|Sets (Shard 7)|Count|Total||No Dimensions|
+|setcommands8|Yes|Sets (Shard 8)|Count|Total||No Dimensions|
+|setcommands9|Yes|Sets (Shard 9)|Count|Total||No Dimensions|
+|totalcommandsprocessed|Yes|Total Operations|Count|Total||ShardId|
+|totalcommandsprocessed0|Yes|Total Operations (Shard 0)|Count|Total||No Dimensions|
+|totalcommandsprocessed1|Yes|Total Operations (Shard 1)|Count|Total||No Dimensions|
+|totalcommandsprocessed2|Yes|Total Operations (Shard 2)|Count|Total||No Dimensions|
+|totalcommandsprocessed3|Yes|Total Operations (Shard 3)|Count|Total||No Dimensions|
+|totalcommandsprocessed4|Yes|Total Operations (Shard 4)|Count|Total||No Dimensions|
+|totalcommandsprocessed5|Yes|Total Operations (Shard 5)|Count|Total||No Dimensions|
+|totalcommandsprocessed6|Yes|Total Operations (Shard 6)|Count|Total||No Dimensions|
+|totalcommandsprocessed7|Yes|Total Operations (Shard 7)|Count|Total||No Dimensions|
+|totalcommandsprocessed8|Yes|Total Operations (Shard 8)|Count|Total||No Dimensions|
+|totalcommandsprocessed9|Yes|Total Operations (Shard 9)|Count|Total||No Dimensions|
+|totalkeys|Yes|Total Keys|Count|Maximum||ShardId|
+|totalkeys0|Yes|Total Keys (Shard 0)|Count|Maximum||No Dimensions|
+|totalkeys1|Yes|Total Keys (Shard 1)|Count|Maximum||No Dimensions|
+|totalkeys2|Yes|Total Keys (Shard 2)|Count|Maximum||No Dimensions|
+|totalkeys3|Yes|Total Keys (Shard 3)|Count|Maximum||No Dimensions|
+|totalkeys4|Yes|Total Keys (Shard 4)|Count|Maximum||No Dimensions|
+|totalkeys5|Yes|Total Keys (Shard 5)|Count|Maximum||No Dimensions|
+|totalkeys6|Yes|Total Keys (Shard 6)|Count|Maximum||No Dimensions|
+|totalkeys7|Yes|Total Keys (Shard 7)|Count|Maximum||No Dimensions|
+|totalkeys8|Yes|Total Keys (Shard 8)|Count|Maximum||No Dimensions|
+|totalkeys9|Yes|Total Keys (Shard 9)|Count|Maximum||No Dimensions|
+|usedmemory|Yes|Used Memory|Bytes|Maximum||ShardId|
+|usedmemory0|Yes|Used Memory (Shard 0)|Bytes|Maximum||No Dimensions|
+|usedmemory1|Yes|Used Memory (Shard 1)|Bytes|Maximum||No Dimensions|
+|usedmemory2|Yes|Used Memory (Shard 2)|Bytes|Maximum||No Dimensions|
+|usedmemory3|Yes|Used Memory (Shard 3)|Bytes|Maximum||No Dimensions|
+|usedmemory4|Yes|Used Memory (Shard 4)|Bytes|Maximum||No Dimensions|
+|usedmemory5|Yes|Used Memory (Shard 5)|Bytes|Maximum||No Dimensions|
+|usedmemory6|Yes|Used Memory (Shard 6)|Bytes|Maximum||No Dimensions|
+|usedmemory7|Yes|Used Memory (Shard 7)|Bytes|Maximum||No Dimensions|
+|usedmemory8|Yes|Used Memory (Shard 8)|Bytes|Maximum||No Dimensions|
+|usedmemory9|Yes|Used Memory (Shard 9)|Bytes|Maximum||No Dimensions|
+|usedmemorypercentage|Yes|Used Memory Percentage|Percent|Maximum||ShardId|
+|usedmemoryRss|Yes|Used Memory RSS|Bytes|Maximum||ShardId|
+|usedmemoryRss0|Yes|Used Memory RSS (Shard 0)|Bytes|Maximum||No Dimensions|
+|usedmemoryRss1|Yes|Used Memory RSS (Shard 1)|Bytes|Maximum||No Dimensions|
+|usedmemoryRss2|Yes|Used Memory RSS (Shard 2)|Bytes|Maximum||No Dimensions|
+|usedmemoryRss3|Yes|Used Memory RSS (Shard 3)|Bytes|Maximum||No Dimensions|
+|usedmemoryRss4|Yes|Used Memory RSS (Shard 4)|Bytes|Maximum||No Dimensions|
+|usedmemoryRss5|Yes|Used Memory RSS (Shard 5)|Bytes|Maximum||No Dimensions|
+|usedmemoryRss6|Yes|Used Memory RSS (Shard 6)|Bytes|Maximum||No Dimensions|
+|usedmemoryRss7|Yes|Used Memory RSS (Shard 7)|Bytes|Maximum||No Dimensions|
+|usedmemoryRss8|Yes|Used Memory RSS (Shard 8)|Bytes|Maximum||No Dimensions|
+|usedmemoryRss9|Yes|Used Memory RSS (Shard 9)|Bytes|Maximum||No Dimensions|
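The per-shard and instance-based metrics above can be pulled with any Azure Monitor metrics client. A minimal Azure CLI sketch, assuming a hypothetical cache named `myCache` in resource group `myRG` (both placeholders, substitute your own resource):

```azurecli
# Look up the resource ID of the cache (the cache and group names are placeholders).
REDIS_ID=$(az redis show --name myCache --resource-group myRG --query id --output tsv)

# Query an instance-based metric and split it by the ShardId dimension.
az monitor metrics list \
  --resource "$REDIS_ID" \
  --metric allusedmemory \
  --aggregation Maximum \
  --interval PT5M \
  --filter "ShardId eq '*'"
```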
## Microsoft.Cache/redisEnterprise
This latest update adds a new column and reorders the metrics to be alphabetical
|Ingress|Yes|Ingress|Bytes|Total|The amount of ingress data, in bytes. This number includes ingress from an external client into Azure Storage as well as ingress within Azure.|GeoType, ApiName, Authentication| |SuccessE2ELatency|Yes|Success E2E Latency|Milliseconds|Average|The end-to-end latency of successful requests made to a storage service or the specified API operation, in milliseconds. This value includes the required processing time within Azure Storage to read the request, send the response, and receive acknowledgment of the response.|GeoType, ApiName, Authentication| |SuccessServerLatency|Yes|Success Server Latency|Milliseconds|Average|The latency used by Azure Storage to process a successful request, in milliseconds. This value does not include the network latency specified in SuccessE2ELatency.|GeoType, ApiName, Authentication|
-|Transactions|Yes|Transactions|Count|Total|The number of requests made to a storage service or the specified API operation. This number includes successful and failed requests, as well as requests which produced errors. Use ResponseType dimension for the number of different types of response.|ResponseType, GeoType, ApiName, Authentication|
+|Transactions|Yes|Transactions|Count|Total|The number of requests made to a storage service or the specified API operation. This number includes successful and failed requests, as well as requests which produced errors. Use ResponseType dimension for the number of different types of response.|ResponseType, GeoType, ApiName, Authentication|
|UsedCapacity|Yes|Used capacity|Bytes|Average|Account used capacity|No Dimensions|
This latest update adds a new column and reorders the metrics to be alphabetical
|Ingress|Yes|Ingress|Bytes|Total|The amount of ingress data, in bytes. This number includes ingress from an external client into Azure Storage as well as ingress within Azure.|GeoType, ApiName, Authentication| |SuccessE2ELatency|Yes|Success E2E Latency|Milliseconds|Average|The end-to-end latency of successful requests made to a storage service or the specified API operation, in milliseconds. This value includes the required processing time within Azure Storage to read the request, send the response, and receive acknowledgment of the response.|GeoType, ApiName, Authentication| |SuccessServerLatency|Yes|Success Server Latency|Milliseconds|Average|The latency used by Azure Storage to process a successful request, in milliseconds. This value does not include the network latency specified in SuccessE2ELatency.|GeoType, ApiName, Authentication|
-|Transactions|Yes|Transactions|Count|Total|The number of requests made to a storage service or the specified API operation. This number includes successful and failed requests, as well as requests which produced errors. Use ResponseType dimension for the number of different types of response.|ResponseType, GeoType, ApiName, Authentication|
+|Transactions|Yes|Transactions|Count|Total|The number of requests made to a storage service or the specified API operation. This number includes successful and failed requests, as well as requests which produced errors. Use ResponseType dimension for the number of different types of response.|ResponseType, GeoType, ApiName, Authentication|
## Microsoft.ClassicStorage/storageAccounts/fileServices
This latest update adds a new column and reorders the metrics to be alphabetical
|Availability|Yes|Availability|Percent|Average|The percentage of availability for the storage service or the specified API operation. Availability is calculated by taking the TotalBillableRequests value and dividing it by the number of applicable requests, including those that produced unexpected errors. All unexpected errors result in reduced availability for the storage service or the specified API operation.|GeoType, ApiName, Authentication, FileShare| |Egress|Yes|Egress|Bytes|Total|The amount of egress data, in bytes. This number includes egress from an external client into Azure Storage as well as egress within Azure. As a result, this number does not reflect billable egress.|GeoType, ApiName, Authentication, FileShare| |FileCapacity|No|File Capacity|Bytes|Average|The amount of storage used by the storage account's File service in bytes.|FileShare|
-|FileCount|No|File Count|Count|Average|The number of files in the storage account's File service.|FileShare|
+|FileCount|No|File Count|Count|Average|The number of files in the storage account's File service.|FileShare|
|FileShareCount|No|File Share Count|Count|Average|The number of file shares in the storage account's File service.|No Dimensions| |FileShareQuota|No|File share quota size|Bytes|Average|The upper limit on the amount of storage that can be used by Azure Files Service in bytes.|FileShare| |FileShareSnapshotCount|No|File Share Snapshot Count|Count|Average|The number of snapshots present on the share in storage account's Files Service.|FileShare|
This latest update adds a new column and reorders the metrics to be alphabetical
|Ingress|Yes|Ingress|Bytes|Total|The amount of ingress data, in bytes. This number includes ingress from an external client into Azure Storage as well as ingress within Azure.|GeoType, ApiName, Authentication, FileShare| |SuccessE2ELatency|Yes|Success E2E Latency|Milliseconds|Average|The end-to-end latency of successful requests made to a storage service or the specified API operation, in milliseconds. This value includes the required processing time within Azure Storage to read the request, send the response, and receive acknowledgment of the response.|GeoType, ApiName, Authentication, FileShare| |SuccessServerLatency|Yes|Success Server Latency|Milliseconds|Average|The latency used by Azure Storage to process a successful request, in milliseconds. This value does not include the network latency specified in SuccessE2ELatency.|GeoType, ApiName, Authentication, FileShare|
-|Transactions|Yes|Transactions|Count|Total|The number of requests made to a storage service or the specified API operation. This number includes successful and failed requests, as well as requests which produced errors. Use ResponseType dimension for the number of different types of response.|ResponseType, GeoType, ApiName, Authentication, FileShare|
+|Transactions|Yes|Transactions|Count|Total|The number of requests made to a storage service or the specified API operation. This number includes successful and failed requests, as well as requests which produced errors. Use ResponseType dimension for the number of different types of response.|ResponseType, GeoType, ApiName, Authentication, FileShare|
## Microsoft.ClassicStorage/storageAccounts/queueServices
This latest update adds a new column and reorders the metrics to be alphabetical
|QueueMessageCount|Yes|Queue Message Count|Count|Average|The approximate number of queue messages in the storage account's Queue service.|No Dimensions| |SuccessE2ELatency|Yes|Success E2E Latency|Milliseconds|Average|The end-to-end latency of successful requests made to a storage service or the specified API operation, in milliseconds. This value includes the required processing time within Azure Storage to read the request, send the response, and receive acknowledgment of the response.|GeoType, ApiName, Authentication| |SuccessServerLatency|Yes|Success Server Latency|Milliseconds|Average|The latency used by Azure Storage to process a successful request, in milliseconds. This value does not include the network latency specified in SuccessE2ELatency.|GeoType, ApiName, Authentication|
-|Transactions|Yes|Transactions|Count|Total|The number of requests made to a storage service or the specified API operation. This number includes successful and failed requests, as well as requests which produced errors. Use ResponseType dimension for the number of different types of response.|ResponseType, GeoType, ApiName, Authentication|
+|Transactions|Yes|Transactions|Count|Total|The number of requests made to a storage service or the specified API operation. This number includes successful and failed requests, as well as requests which produced errors. Use ResponseType dimension for the number of different types of response.|ResponseType, GeoType, ApiName, Authentication|
## Microsoft.ClassicStorage/storageAccounts/tableServices
This latest update adds a new column and reorders the metrics to be alphabetical
|SuccessE2ELatency|Yes|Success E2E Latency|Milliseconds|Average|The end-to-end latency of successful requests made to a storage service or the specified API operation, in milliseconds. This value includes the required processing time within Azure Storage to read the request, send the response, and receive acknowledgment of the response.|GeoType, ApiName, Authentication| |SuccessServerLatency|Yes|Success Server Latency|Milliseconds|Average|The latency used by Azure Storage to process a successful request, in milliseconds. This value does not include the network latency specified in SuccessE2ELatency.|GeoType, ApiName, Authentication| |TableCapacity|Yes|Table Capacity|Bytes|Average|The amount of storage used by the storage account's Table service in bytes.|No Dimensions|
-|TableCount|Yes|Table Count|Count|Average|The number of tables in the storage account's Table service.|No Dimensions|
+|TableCount|Yes|Table Count|Count|Average|The number of tables in the storage account's Table service.|No Dimensions|
|TableEntityCount|Yes|Table Entity Count|Count|Average|The number of table entities in the storage account's Table service.|No Dimensions|
-|Transactions|Yes|Transactions|Count|Total|The number of requests made to a storage service or the specified API operation. This number includes successful and failed requests, as well as requests which produced errors. Use ResponseType dimension for the number of different types of response.|ResponseType, GeoType, ApiName, Authentication|
+|Transactions|Yes|Transactions|Count|Total|The number of requests made to a storage service or the specified API operation. This number includes successful and failed requests, as well as requests which produced errors. Use ResponseType dimension for the number of different types of response.|ResponseType, GeoType, ApiName, Authentication|
## Microsoft.Cloudtest/hostedpools
This latest update adds a new column and reorders the metrics to be alphabetical
|Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions| ||||||||
+|broker.msgs.delivered|Yes|Broker: Messages Delivered (Preview)|Count|Total|Total number of messages delivered by the broker|Result, FailureReasonCategory, QoS, TopicSpaceName|
+|broker.msgs.delivery.throttlingLatency|Yes|Broker: Message delivery latency from throttling (Preview)|Milliseconds|Average|The average egress message delivery latency due to throttling|No Dimensions|
+|broker.msgs.published|Yes|Broker: Messages Published (Preview)|Count|Total|Total number of messages published to the broker|Result, FailureReasonCategory, QoS|
|c2d.commands.egress.abandon.success|Yes|C2D messages abandoned|Count|Total|Number of cloud-to-device messages abandoned by the device|No Dimensions| |c2d.commands.egress.complete.success|Yes|C2D message deliveries completed|Count|Total|Number of cloud-to-device message deliveries completed successfully by the device|No Dimensions| |c2d.commands.egress.reject.success|Yes|C2D messages rejected|Count|Total|Number of cloud-to-device messages rejected by the device|No Dimensions|
This latest update adds a new column and reorders the metrics to be alphabetical
|jobs.listJobs.success|Yes|Successful calls to list jobs|Count|Total|The count of all successful calls to list jobs.|No Dimensions| |jobs.queryJobs.failure|Yes|Failed job queries|Count|Total|The count of all failed calls to query jobs.|No Dimensions| |jobs.queryJobs.success|Yes|Successful job queries|Count|Total|The count of all successful calls to query jobs.|No Dimensions|
+|mqtt.connections|Yes|MQTT: New Connections (Preview)|Count|Total|The number of new connections per IoT Hub|SessionType, MqttEndpoint|
+|mqtt.sessions|Yes|MQTT: New Sessions (Preview)|Count|Total|The number of new sessions per IoT Hub|SessionType, MqttEndpoint|
+|mqtt.sessions.dropped|Yes|MQTT: Dropped Sessions (Preview)|Percent|Average|The rate of dropped sessions per IoT Hub|DropReason|
+|mqtt.subscriptions|Yes|MQTT: New Subscriptions (Preview)|Count|Total|The number of subscriptions|Result, FailureReasonCategory, OperationType, TopicSpaceName|
|RoutingDataSizeInBytesDelivered|Yes|Routing Delivery Message Size in Bytes (preview)|Bytes|Total|The total size in bytes of messages delivered by IoT hub to an endpoint. You can use the EndpointName and EndpointType dimensions to view the size of the messages in bytes delivered to your different endpoints. The metric value increases for every message delivered, including if the message is delivered to multiple endpoints or if the message is delivered to the same endpoint multiple times.|EndpointType, EndpointName, RoutingSource| |RoutingDeliveries|Yes|Routing Deliveries (preview)|Count|Total|The number of times IoT Hub attempted to deliver messages to all endpoints using routing. To see the number of successful or failed attempts, use the Result dimension. To see the reason of failure, like invalid, dropped, or orphaned, use the FailureReasonCategory dimension. You can also use the EndpointName and EndpointType dimensions to understand how many messages were delivered to your different endpoints. The metric value increases by one for each delivery attempt, including if the message is delivered to multiple endpoints or if the message is delivered to the same endpoint multiple times.|EndpointType, EndpointName, FailureReasonCategory, Result, RoutingSource| |RoutingDeliveryLatency|Yes|Routing Delivery Latency (preview)|Milliseconds|Average|The average latency (milliseconds) between message ingress to IoT Hub and telemetry message ingress into an endpoint. You can use the EndpointName and EndpointType dimensions to understand the latency to your different endpoints.|EndpointType, EndpointName, RoutingSource|
This latest update adds a new column and reorders the metrics to be alphabetical
|IntegratedCacheItemHitRate|No|IntegratedCacheItemHitRate|Percent|Average|Number of point reads that used the integrated cache divided by number of point reads routed through the dedicated gateway with eventual consistency|Region, | |IntegratedCacheQueryExpirationCount|No|IntegratedCacheQueryExpirationCount|Count|Average|Number of queries evicted from the integrated cache due to TTL expiration|Region, | |IntegratedCacheQueryHitRate|No|IntegratedCacheQueryHitRate|Percent|Average|Number of queries that used the integrated cache divided by number of queries routed through the dedicated gateway with eventual consistency|Region, |
-|MetadataRequests|No|Metadata Requests|Count|Count|Count of metadata requests. Azure Cosmos DB maintains system metadata collection for each account, that allows you to enumerate collections, databases, etc, and their configurations, free of charge.|DatabaseName, CollectionName, Region, StatusCode, |
+|MetadataRequests|No|Metadata Requests|Count|Count|Count of metadata requests. Cosmos DB maintains system metadata collection for each account, that allows you to enumerate collections, databases, etc, and their configurations, free of charge.|DatabaseName, CollectionName, Region, StatusCode, |
|MongoCollectionCreate|No|Mongo Collection Created|Count|Count|Mongo Collection Created|ResourceName, ChildResourceName, | |MongoCollectionDelete|No|Mongo Collection Deleted|Count|Count|Mongo Collection Deleted|ResourceName, ChildResourceName, | |MongoCollectionThroughputUpdate|No|Mongo Collection Throughput Updated|Count|Count|Mongo Collection Throughput Updated|ResourceName, ChildResourceName, |
This latest update adds a new column and reorders the metrics to be alphabetical
|Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions| |||||||| |Availability|Yes|Availability|Percent|Average|The availability rate of the service.|No Dimensions|
-|CosmosDbCollectionSize|Yes|Azure Cosmos DB Collection Size|Bytes|Total|The size of the backing Azure Cosmos DB collection, in bytes.|No Dimensions|
-|CosmosDbIndexSize|Yes|Azure Cosmos DB Index Size|Bytes|Total|The size of the backing Azure Cosmos DB collection's index, in bytes.|No Dimensions|
-|CosmosDbRequestCharge|Yes|Azure Cosmos DB RU usage|Count|Total|The RU usage of requests to the service's backing Azure Cosmos DB.|Operation, ResourceType|
-|CosmosDbRequests|Yes|Service Azure Cosmos DB requests|Count|Sum|The total number of requests made to a service's backing Azure Cosmos DB.|Operation, ResourceType|
-|CosmosDbThrottleRate|Yes|Service Azure Cosmos DB throttle rate|Count|Sum|The total number of 429 responses from a service's backing Azure Cosmos DB.|Operation, ResourceType|
+|CosmosDbCollectionSize|Yes|Cosmos DB Collection Size|Bytes|Total|The size of the backing Cosmos DB collection, in bytes.|No Dimensions|
+|CosmosDbIndexSize|Yes|Cosmos DB Index Size|Bytes|Total|The size of the backing Cosmos DB collection's index, in bytes.|No Dimensions|
+|CosmosDbRequestCharge|Yes|Cosmos DB RU usage|Count|Total|The RU usage of requests to the service's backing Cosmos DB.|Operation, ResourceType|
+|CosmosDbRequests|Yes|Service Cosmos DB requests|Count|Sum|The total number of requests made to a service's backing Cosmos DB.|Operation, ResourceType|
+|CosmosDbThrottleRate|Yes|Service Cosmos DB throttle rate|Count|Sum|The total number of 429 responses from a service's backing Cosmos DB.|Operation, ResourceType|
|IoTConnectorDeviceEvent|Yes|Number of Incoming Messages|Count|Sum|The total number of messages received by the Azure IoT Connector for FHIR prior to any normalization.|Operation, ConnectorName| |IoTConnectorDeviceEventProcessingLatencyMs|Yes|Average Normalize Stage Latency|Milliseconds|Average|The average time between an event's ingestion time and the time the event is processed for normalization.|Operation, ConnectorName| |IoTConnectorMeasurement|Yes|Number of Measurements|Count|Sum|The number of normalized value readings received by the FHIR conversion stage of the Azure IoT Connector for FHIR.|Operation, ConnectorName|
This latest update adds a new column and reorders the metrics to be alphabetical
|GpuUtilization|Yes|GpuUtilization|Count|Average|Percentage of utilization on a GPU node. Utilization is reported at one minute intervals.|Scenario, runId, NodeId, DeviceId, ClusterName| |GpuUtilizationMilliGPUs|Yes|GpuUtilizationMilliGPUs|Count|Average|Utilization of a GPU device in milli-GPUs. Utilization is aggregated in one minute intervals.|RunId, InstanceId, DeviceId, ComputeName| |GpuUtilizationPercentage|Yes|GpuUtilizationPercentage|Count|Average|Utilization percentage of a GPU device. Utilization is aggregated in one minute intervals.|RunId, InstanceId, DeviceId, ComputeName|
-|IBReceiveMegabytes|Yes|IBReceiveMegabytes|Count|Average|Network data received over InfiniBand in megabytes. Metrics are aggregated in one minute intervals.|RunId, InstanceId, ComputeName|
-|IBTransmitMegabytes|Yes|IBTransmitMegabytes|Count|Average|Network data sent over InfiniBand in megabytes. Metrics are aggregated in one minute intervals.|RunId, InstanceId, ComputeName|
+|IBReceiveMegabytes|Yes|IBReceiveMegabytes|Count|Average|Network data received over InfiniBand in megabytes. Metrics are aggregated in one minute intervals.|RunId, InstanceId, ComputeName, DeviceId|
+|IBTransmitMegabytes|Yes|IBTransmitMegabytes|Count|Average|Network data sent over InfiniBand in megabytes. Metrics are aggregated in one minute intervals.|RunId, InstanceId, ComputeName, DeviceId|
|Idle Cores|Yes|Idle Cores|Count|Average|Number of idle cores|Scenario, ClusterName| |Idle Nodes|Yes|Idle Nodes|Count|Average|Number of idle nodes. Idle nodes are the nodes which are not running any jobs but can accept new job if available.|Scenario, ClusterName| |Leaving Cores|Yes|Leaving Cores|Count|Average|Number of leaving cores|Scenario, ClusterName|
This latest update adds a new column and reorders the metrics to be alphabetical
|Model Deploy Succeeded|Yes|Model Deploy Succeeded|Count|Total|Number of model deployments that succeeded in this workspace|Scenario| |Model Register Failed|Yes|Model Register Failed|Count|Total|Number of model registrations that failed in this workspace|Scenario, StatusCode| |Model Register Succeeded|Yes|Model Register Succeeded|Count|Total|Number of model registrations that succeeded in this workspace|Scenario|
-|NetworkInputMegabytes|Yes|NetworkInputMegabytes|Count|Average|Network data received in megabytes. Metrics are aggregated in one minute intervals.|RunId, InstanceId, ComputeName|
-|NetworkOutputMegabytes|Yes|NetworkOutputMegabytes|Count|Average|Network data sent in megabytes. Metrics are aggregated in one minute intervals.|RunId, InstanceId, ComputeName|
+|NetworkInputMegabytes|Yes|NetworkInputMegabytes|Count|Average|Network data received in megabytes. Metrics are aggregated in one minute intervals.|RunId, InstanceId, ComputeName, DeviceId|
+|NetworkOutputMegabytes|Yes|NetworkOutputMegabytes|Count|Average|Network data sent in megabytes. Metrics are aggregated in one minute intervals.|RunId, InstanceId, ComputeName, DeviceId|
|Not Responding Runs|Yes|Not Responding Runs|Count|Total|Number of runs not responding for this workspace. Count is updated when a run enters Not Responding state.|Scenario, RunType, PublishedPipelineId, ComputeType, PipelineStepType, ExperimentName| |Not Started Runs|Yes|Not Started Runs|Count|Total|Number of runs in Not Started state for this workspace. Count is updated when a request is received to create a run but run information has not yet been populated. |Scenario, RunType, PublishedPipelineId, ComputeType, PipelineStepType, ExperimentName| |Preempted Cores|Yes|Preempted Cores|Count|Average|Number of preempted cores|Scenario, ClusterName|
This latest update adds a new column and reorders the metrics to be alphabetical
|ScheduledMessages|No|Count of scheduled messages in a Queue/Topic.|Count|Average|Count of scheduled messages in a Queue/Topic.|EntityName| |ServerErrors|No|Server Errors.|Count|Total|Server Errors for Microsoft.ServiceBus.|EntityName, | |ServerSendLatency|Yes|Server Send Latency.|Milliseconds|Average|Server Send Latency.|EntityName|
-|Size|No|Size|Bytes|Average|Size of a Queue/Topic in Bytes.|EntityName|
+|Size|No|Size|Bytes|Average|Size of a Queue/Topic in Bytes.|EntityName|
|SuccessfulRequests|No|Successful Requests|Count|Total|Total successful requests for a namespace|EntityName, | |ThrottledRequests|No|Throttled Requests.|Count|Total|Throttled Requests for Microsoft.ServiceBus.|EntityName, MessagingErrorSubCode| |UserErrors|No|User Errors.|Count|Total|User Errors for Microsoft.ServiceBus.|EntityName, |
This latest update adds a new column and reorders the metrics to be alphabetical
|Availability|Yes|Availability|Percent|Average|The percentage of availability for the storage service or the specified API operation. Availability is calculated by taking the TotalBillableRequests value and dividing it by the number of applicable requests, including those that produced unexpected errors. All unexpected errors result in reduced availability for the storage service or the specified API operation.|GeoType, ApiName, Authentication| |Egress|Yes|Egress|Bytes|Total|The amount of egress data. This number includes egress to external client from Azure Storage as well as egress within Azure. As a result, this number does not reflect billable egress.|GeoType, ApiName, Authentication| |Ingress|Yes|Ingress|Bytes|Total|The amount of ingress data, in bytes. This number includes ingress from an external client into Azure Storage as well as ingress within Azure.|GeoType, ApiName, Authentication|
-|SuccessE2ELatency|Yes|Success E2E Latency|MilliSeconds|Average|The average end-to-end latency of successful requests made to a storage service or the specified API operation, in milliseconds. This value includes the required processing time within Azure Storage to read the request, send the response, and receive acknowledgment of the response.|GeoType, ApiName, Authentication|
-|SuccessServerLatency|Yes|Success Server Latency|MilliSeconds|Average|The average time used to process a successful request by Azure Storage. This value does not include the network latency specified in SuccessE2ELatency.|GeoType, ApiName, Authentication|
-|Transactions|Yes|Transactions|Count|Total|The number of requests made to a storage service or the specified API operation. This number includes successful and failed requests, as well as requests which produced errors. Use ResponseType dimension for the number of different types of response.|ResponseType, GeoType, ApiName, Authentication, TransactionType|
+|SuccessE2ELatency|Yes|Success E2E Latency|Milliseconds|Average|The average end-to-end latency of successful requests made to a storage service or the specified API operation, in milliseconds. This value includes the required processing time within Azure Storage to read the request, send the response, and receive acknowledgment of the response.|GeoType, ApiName, Authentication|
+|SuccessServerLatency|Yes|Success Server Latency|Milliseconds|Average|The average time used to process a successful request by Azure Storage. This value does not include the network latency specified in SuccessE2ELatency.|GeoType, ApiName, Authentication|
+|Transactions|Yes|Transactions|Count|Total|The number of requests made to a storage service or the specified API operation. This number includes successful and failed requests, as well as requests which produced errors. Use ResponseType dimension for the number of different types of response.|ResponseType, GeoType, ApiName, Authentication|
|UsedCapacity|Yes|Used capacity|Bytes|Average|The amount of storage used by the storage account. For standard storage accounts, it's the sum of capacity used by blob, table, file, and queue. For premium storage accounts and Blob storage accounts, it is the same as BlobCapacity or FileCapacity.|No Dimensions|
This latest update adds a new column and reorders the metrics to be alphabetical
|Ingress|Yes|Ingress|Bytes|Total|The amount of ingress data, in bytes. This number includes ingress from an external client into Azure Storage as well as ingress within Azure.|GeoType, ApiName, Authentication| |SuccessE2ELatency|Yes|Success E2E Latency|Milliseconds|Average|The average end-to-end latency of successful requests made to a storage service or the specified API operation, in milliseconds. This value includes the required processing time within Azure Storage to read the request, send the response, and receive acknowledgment of the response.|GeoType, ApiName, Authentication| |SuccessServerLatency|Yes|Success Server Latency|Milliseconds|Average|The average time used to process a successful request by Azure Storage. This value does not include the network latency specified in SuccessE2ELatency.|GeoType, ApiName, Authentication|
-|Transactions|Yes|Transactions|Count|Total|The number of requests made to a storage service or the specified API operation. This number includes successful and failed requests, as well as requests which produced errors. Use ResponseType dimension for the number of different types of response.|ResponseType, GeoType, ApiName, Authentication|
+|Transactions|Yes|Transactions|Count|Total|The number of requests made to a storage service or the specified API operation. This number includes successful and failed requests, as well as requests which produced errors. Use ResponseType dimension for the number of different types of response.|ResponseType, GeoType, ApiName, Authentication|
## Microsoft.Storage/storageAccounts/fileServices
This latest update adds a new column and reorders the metrics to be alphabetical
|Ingress|Yes|Ingress|Bytes|Total|The amount of ingress data, in bytes. This number includes ingress from an external client into Azure Storage as well as ingress within Azure.|GeoType, ApiName, Authentication, FileShare| |SuccessE2ELatency|Yes|Success E2E Latency|Milliseconds|Average|The average end-to-end latency of successful requests made to a storage service or the specified API operation, in milliseconds. This value includes the required processing time within Azure Storage to read the request, send the response, and receive acknowledgment of the response.|GeoType, ApiName, Authentication, FileShare| |SuccessServerLatency|Yes|Success Server Latency|Milliseconds|Average|The average time used to process a successful request by Azure Storage. This value does not include the network latency specified in SuccessE2ELatency.|GeoType, ApiName, Authentication, FileShare|
-|Transactions|Yes|Transactions|Count|Total|The number of requests made to a storage service or the specified API operation. This number includes successful and failed requests, as well as requests which produced errors. Use ResponseType dimension for the number of different types of response.|ResponseType, GeoType, ApiName, Authentication, FileShare|
+|Transactions|Yes|Transactions|Count|Total|The number of requests made to a storage service or the specified API operation. This number includes successful and failed requests, as well as requests which produced errors. Use ResponseType dimension for the number of different types of response.|ResponseType, GeoType, ApiName, Authentication, FileShare|
## Microsoft.Storage/storageAccounts/queueServices
This latest update adds a new column and reorders the metrics to be alphabetical
|QueueMessageCount|Yes|Queue Message Count|Count|Average|The number of unexpired queue messages in the storage account.|No Dimensions| |SuccessE2ELatency|Yes|Success E2E Latency|Milliseconds|Average|The average end-to-end latency of successful requests made to a storage service or the specified API operation, in milliseconds. This value includes the required processing time within Azure Storage to read the request, send the response, and receive acknowledgment of the response.|GeoType, ApiName, Authentication| |SuccessServerLatency|Yes|Success Server Latency|Milliseconds|Average|The average time used to process a successful request by Azure Storage. This value does not include the network latency specified in SuccessE2ELatency.|GeoType, ApiName, Authentication|
-|Transactions|Yes|Transactions|Count|Total|The number of requests made to a storage service or the specified API operation. This number includes successful and failed requests, as well as requests which produced errors. Use ResponseType dimension for the number of different types of response.|ResponseType, GeoType, ApiName, Authentication|
+|Transactions|Yes|Transactions|Count|Total|The number of requests made to a storage service or the specified API operation. This number includes successful and failed requests, as well as requests which produced errors. Use ResponseType dimension for the number of different types of response.|ResponseType, GeoType, ApiName, Authentication|
## Microsoft.Storage/storageAccounts/tableServices
This latest update adds a new column and reorders the metrics to be alphabetical
|TableCapacity|Yes|Table Capacity|Bytes|Average|The amount of Table storage used by the storage account.|No Dimensions| |TableCount|Yes|Table Count|Count|Average|The number of tables in the storage account.|No Dimensions| |TableEntityCount|Yes|Table Entity Count|Count|Average|The number of table entities in the storage account.|No Dimensions|
-|Transactions|Yes|Transactions|Count|Total|The number of requests made to a storage service or the specified API operation. This number includes successful and failed requests, as well as requests which produced errors. Use ResponseType dimension for the number of different types of response.|ResponseType, GeoType, ApiName, Authentication|
+|Transactions|Yes|Transactions|Count|Total|The number of requests made to a storage service or the specified API operation. This number includes successful and failed requests, as well as requests which produced errors. Use ResponseType dimension for the number of different types of response.|ResponseType, GeoType, ApiName, Authentication|
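As the Transactions rows above note, the ResponseType dimension breaks the request count down by response type. A hedged Azure CLI sketch, assuming `$STORAGE_ID` holds the full resource ID of one of your storage accounts (a placeholder, not taken from this table):

```azurecli
# Total transactions over the last aggregation window, split by ResponseType.
# $STORAGE_ID is a placeholder for the full resource ID of a storage account.
az monitor metrics list \
  --resource "$STORAGE_ID" \
  --metric Transactions \
  --aggregation Total \
  --interval PT1H \
  --filter "ResponseType eq '*'"
```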
## Microsoft.StorageCache/caches
This latest update adds a new column and reorders the metrics to be alphabetical
|LateInputEvents|Yes|Late Input Events|Count|Total|Late Input Events|LogicalName, PartitionId, ProcessorInstance, NodeName| |OutputEvents|Yes|Output Events|Count|Total|Output Events|LogicalName, PartitionId, ProcessorInstance, NodeName| |OutputWatermarkDelaySeconds|Yes|Watermark Delay|Seconds|Maximum|Watermark Delay|LogicalName, PartitionId, ProcessorInstance, NodeName|
-|ProcessCPUUsagePercentage|Yes|CPU % Utilization (Preview)|Percent|Maximum|CPU % Utilization (Preview)|LogicalName, PartitionId, ProcessorInstance, NodeName|
+|ProcessCPUUsagePercentage|Yes|CPU % Utilization|Percent|Maximum|CPU % Utilization|LogicalName, PartitionId, ProcessorInstance, NodeName|
|ResourceUtilization|Yes|SU (Memory) % Utilization|Percent|Maximum|SU (Memory) % Utilization|LogicalName, PartitionId, ProcessorInstance, NodeName|
This latest update adds a new column and reorders the metrics to be alphabetical
|Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions| |||||||| |BuiltinSqlPoolDataProcessedBytes|No|Data processed (bytes)|Bytes|Total|Amount of data processed by queries|No Dimensions|
-|BuiltinSqlPoolLoginAttempts|No|Login attempts|Count|Total|Count of login attempts that succeeded or failed|Result|
+|BuiltinSqlPoolLoginAttempts|No|Login attempts|Count|Total|Count of login attempts that succeeded or failed|Result|
|BuiltinSqlPoolRequestsEnded|No|Requests ended|Count|Total|Count of Requests that succeeded, failed, or were cancelled|Result| |IntegrationActivityRunsEnded|No|Activity runs ended|Count|Total|Count of integration activities that succeeded, failed, or were cancelled|Result, FailureType, Activity, ActivityType, Pipeline| |IntegrationPipelineRunsEnded|No|Pipeline runs ended|Count|Total|Count of integration pipeline runs that succeeded, failed, or were cancelled|Result, FailureType, Pipeline|
This latest update adds a new column and reorders the metrics to be alphabetical
|DWULimit|No|DWU limit|Count|Maximum|Service level objective of the SQL pool|No Dimensions| |DWUUsed|No|DWU used|Count|Maximum|Represents a high-level representation of usage across the SQL pool. Measured by DWU limit * DWU percentage|No Dimensions| |DWUUsedPercent|No|DWU used percentage|Percent|Maximum|Represents a high-level representation of usage across the SQL pool. Measured by taking the maximum between CPU percentage and Data IO percentage|No Dimensions|
-|LocalTempDBUsedPercent|No|Local tempdb used percentage|Percent|Maximum|Local tempdb utilization across all compute nodes - values are emitted every five minutes|No Dimensions|
+|LocalTempDBUsedPercent|No|Local tempdb used percentage|Percent|Maximum|Local tempdb utilization across all compute nodes - values are emitted every five minutes|No Dimensions|
|MemoryUsedPercent|No|Memory used percentage|Percent|Maximum|Memory utilization across all nodes in the SQL pool|No Dimensions| |QueuedQueries|No|Queued queries|Count|Total|Cumulative count of requests queued after the max concurrency limit was reached|IsUserDefined| |WLGActiveQueries|No|Workload group active queries|Count|Total|The active queries within the workload group. Using this metric unfiltered and unsplit displays all active queries running on the system|IsUserDefined, WorkloadGroup|
This latest update adds a new column and reorders the metrics to be alphabetical
|AverageResponseTime|Yes|Average Response Time (deprecated)|Seconds|Average|The average time taken for the app to serve requests, in seconds.|Instance| |BytesReceived|Yes|Data In|Bytes|Total|The amount of incoming bandwidth consumed by the app, in MiB.|Instance| |BytesSent|Yes|Data Out|Bytes|Total|The amount of outgoing bandwidth consumed by the app, in MiB.|Instance|
-|CpuTime|Yes|CPU Time|Seconds|Total|The amount of CPU consumed by the app, in seconds. For more information about this metric. Please see - https://aka.ms/website-monitor-cpu-time-vs-cpu-percentage (CPU time vs CPU percentage).|Instance|
+|CpuTime|Yes|CPU Time|Seconds|Total|The amount of CPU consumed by the app, in seconds. For more information about this metric, see https://aka.ms/website-monitor-cpu-time-vs-cpu-percentage (CPU time vs CPU percentage).|Instance|
|CurrentAssemblies|Yes|Current Assemblies|Count|Average|The current number of Assemblies loaded across all AppDomains in this application.|Instance| |FileSystemUsage|Yes|File System Usage|Bytes|Average|Percentage of filesystem quota consumed by the app.|No Dimensions| |FunctionExecutionCount|Yes|Function Execution Count|Count|Total|Function Execution Count|Instance|
This latest update adds a new column and reorders the metrics to be alphabetical
|AverageResponseTime|Yes|Average Response Time (deprecated)|Seconds|Average|The average time taken for the app to serve requests, in seconds.|Instance| |BytesReceived|Yes|Data In|Bytes|Total|The amount of incoming bandwidth consumed by the app, in MiB.|Instance| |BytesSent|Yes|Data Out|Bytes|Total|The amount of outgoing bandwidth consumed by the app, in MiB.|Instance|
-|CpuTime|Yes|CPU Time|Seconds|Total|The amount of CPU consumed by the app, in seconds. For more information about this metric. Please see - https://aka.ms/website-monitor-cpu-time-vs-cpu-percentage (CPU time vs CPU percentage).|Instance|
+|CpuTime|Yes|CPU Time|Seconds|Total|The amount of CPU consumed by the app, in seconds. For more information about this metric, see https://aka.ms/website-monitor-cpu-time-vs-cpu-percentage (CPU time vs CPU percentage).|Instance|
|CurrentAssemblies|Yes|Current Assemblies|Count|Average|The current number of Assemblies loaded across all AppDomains in this application.|Instance| |FileSystemUsage|Yes|File System Usage|Bytes|Average|Percentage of filesystem quota consumed by the app.|No Dimensions| |FunctionExecutionCount|Yes|Function Execution Count|Count|Total|Function Execution Count|Instance|
azure-monitor Resource Logs Categories https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/resource-logs-categories.md
Title: Supported categories for Azure Monitor resource logs description: Understand the supported services and event schemas for Azure Monitor resource logs. Previously updated : 09/07/2022 Last updated : 10/13/2022
If you think something is missing, you can open a GitHub comment at the bottom o
|BlockchainApplication|Blockchain Application|No|
-## microsoft.botservice/botservices
+## Microsoft.BotService/botServices
|Category|Category Display Name|Costs To Export| ||||
-|BotRequest|Requests from the channels to the bot|No|
+|logSpecification.Name.Empty|logSpecification.DisplayName.empty|Yes|
## Microsoft.Cache/redis
If you think something is missing, you can open a GitHub comment at the bottom o
|Category|Category Display Name|Costs To Export| ||||
+|admissionsenforcer|AKS Guardrails/Admissions Enforcer|Yes|
|cloud-controller-manager|Kubernetes Cloud Controller Manager|Yes| |cluster-autoscaler|Kubernetes Cluster Autoscaler|No|
+|csi-azuredisk-controller|Kubernetes CSI Azuredisk Controller|Yes|
+|csi-azuredisk-controller-v2|Kubernetes CSI Azuredisk V2 Controller|Yes|
+|csi-azurefile-controller|Kubernetes CSI Azurefile Controller|Yes|
+|csi-blob-controller|Kubernetes CSI Blob Controller|Yes|
+|csi-snapshot-controller|Kubernetes CSI Snapshot Controller|Yes|
|guard|Kubernetes Guard|No| |kube-apiserver|Kubernetes API Server|No| |kube-audit|Kubernetes Audit|No|
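Once one of these categories is enabled in a diagnostic setting that targets a Log Analytics workspace, its entries can be inspected with a log query. The sketch below assumes the logs land in the `AzureDiagnostics` table (rather than resource-specific tables) and that the message text is carried in the `log_s` column.

```kusto
// Sketch: recent entries from the CSI Azure Disk controller category, assuming the
// AKS cluster's diagnostic setting routes control plane logs to AzureDiagnostics.
AzureDiagnostics
| where Category == "csi-azuredisk-controller"
| project TimeGenerated, Resource, log_s
| take 50
```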
If you think something is missing, you can open a GitHub comment at the bottom o
|deltaPipelines|Databricks Delta Pipelines|Yes| |featureStore|Databricks Feature Store|Yes| |genie|Databricks Genie|Yes|
+|gitCredentials|Databricks Git Credentials|Yes|
|globalInitScripts|Databricks Global Init Scripts|Yes| |iamRole|Databricks IAM Role|Yes| |instancePools|Instance Pools|No|
If you think something is missing, you can open a GitHub comment at the bottom o
|sqlPermissions|Databricks SQLPermissions|No| |ssh|Databricks SSH|No| |unityCatalog|Databricks Unity Catalog|Yes|
+|webTerminal|Databricks Web Terminal|Yes|
|workspace|Databricks Workspace|No|
If you think something is missing, you can open a GitHub comment at the bottom o
|Category|Category Display Name|Costs To Export| ||||
-|ExPCompute|ExPCompute|Yes|
-|Request|Request|No|
+|ExPCompute|ExPComput
## Microsoft.HealthcareApis/services
If you think something is missing, you can open a GitHub comment at the bottom o
|Category|Category Display Name|Costs To Export| |||| |AuditEvent|Audit Logs|No|
+|AzurePolicyEvaluationDetails|Azure Policy Evaluation Details|Yes|
## Microsoft.Kusto/Clusters
If you think something is missing, you can open a GitHub comment at the bottom o
|Category|Category Display Name|Costs To Export| ||||
+|CollectionCrudLogEvent|CollectionCrud|Yes|
|ScanStatusLogEvent|ScanStatus|No|
azure-monitor Data Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/data-security.md
You can use these additional security features to further secure your Azure Moni
## Tamper-proofing and immutability
-Azure Monitor is an append-only data platform that includes provisions to delete data for compliance purposes. [Set a lock on your Log Analytics workspace](../../azure-resource-manager/management/lock-resources.md) to block all activities that could delete data: purge, table delete, and table- or workspace-level data retention changes.
+Azure Monitor is an append-only data platform, but includes provisions to delete data for compliance purposes. You can [set a lock on your Log Analytics workspace](../../azure-resource-manager/management/lock-resources.md) to block all activities that could delete data: purge, table delete, and table- or workspace-level data retention changes. However, this lock can still be removed.
-To tamper-proof your monitoring solution, we recommend you [export data to an immutable storage solution](../../storage/blobs/immutable-storage-overview.md).
+To fully tamper-proof your monitoring solution, we recommend you [export your data to an immutable storage solution](../../storage/blobs/immutable-storage-overview.md).
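As a complementary check, and not part of the guidance above, delete-type operations against a workspace can also be audited from the activity log. The sketch below assumes the subscription's activity log is collected into the `AzureActivity` table.

```kusto
// Sketch: surface recent purge or delete operations against Log Analytics resources.
// Assumes the subscription's activity log is collected into the AzureActivity table.
AzureActivity
| where ResourceProviderValue == "MICROSOFT.OPERATIONALINSIGHTS"
| where OperationNameValue has_any ("purge", "delete")
| project TimeGenerated, OperationNameValue, Caller, ActivityStatusValue, _ResourceId
```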
## Next steps
azure-monitor Search Jobs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/search-jobs.md
Search jobs are asynchronous queries that fetch records into a new search table
## When to use search jobs
-Use a search job when the log query timeout of 10 minutes isn't sufficient to search through large volumes of data or if your running a slow query.
+Use a search job when the log query timeout of 10 minutes isn't sufficient to search through large volumes of data or if you're running a slow query.
Search jobs also let you retrieve records from [Archived Logs](data-retention-archive.md) and [Basic Logs](basic-logs-configure.md) tables into a new log table you can use for queries. In this way, running a search job can be an alternative to querying those Archived Logs or Basic Logs tables directly.
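When a search job completes, the fetched records land in a new Analytics table whose name ends with the `_SRCH` suffix, and that table can be queried like any other. A minimal sketch follows; the table name `OldSecurityEvents_SRCH` is hypothetical.

```kusto
// Sketch: summarize the records a completed search job fetched into its results table.
// "OldSecurityEvents_SRCH" is a hypothetical name; search job result tables end in _SRCH.
OldSecurityEvents_SRCH
| summarize Records = count() by bin(TimeGenerated, 1d)
| order by TimeGenerated asc
```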
azure-monitor Monitor Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/monitor-reference.md
The following table lists Azure services and the data they collect into Azure Mo
| [Azure Spring Apps](../spring-apps/overview.md) | Microsoft.AppPlatform/Spring | [**Yes**](./essentials/metrics-supported.md#microsoftappplatformspring) | [**Yes**](./essentials/resource-logs-categories.md#microsoftappplatformspring) | | | | [Azure Attestation Service](../attestation/overview.md) | Microsoft.Attestation/attestationProviders | No | [**Yes**](./essentials/resource-logs-categories.md#microsoftattestationattestationproviders) | | | | [Azure Automation](../automation/index.yml) | Microsoft.Automation/automationAccounts | [**Yes**](./essentials/metrics-supported.md#microsoftautomationautomationaccounts) | [**Yes**](./essentials/resource-logs-categories.md#microsoftautomationautomationaccounts) | | |
- | [Azure VMware Solution](../azure-vmware/index.yml) | Microsoft.AVS/privateClouds | [**Yes**](./essentials/metrics-supported.md#microsoftavsprivateclouds) | [**Yes**](./essentials/resource-logs-categories.md#microsoftavsprivateclouds) | | |
+ | [Azure VMware Solution](../azure-vmware/index.yml) | Microsoft.AVS/privateClouds | [**Yes**](./essentials/metrics-supported.md) | [**Yes**](./essentials/resource-logs-categories.md) | | |
| [Azure Batch](../batch/index.yml) | Microsoft.Batch/batchAccounts | [**Yes**](./essentials/metrics-supported.md#microsoftbatchbatchaccounts) | [**Yes**](./essentials/resource-logs-categories.md#microsoftbatchbatchaccounts) | | | | [Azure Batch](../batch/index.yml) | Microsoft.BatchAI/workspaces | No | No | | |
- | [Azure Cognitive Services- Bing Search API](../cognitive-services/bing-web-search/index.yml) | Microsoft.Bing/accounts | [**Yes**](./essentials/metrics-supported.md#microsoftbingaccounts) | No | | |
+ | [Azure Cognitive Services- Bing Search API](../cognitive-services/bing-web-search/index.yml) | Microsoft.Bing/accounts | [**Yes**](./essentials/metrics-supported.md#microsoftmapsaccounts) | No | | |
| [Azure Blockchain Service](../blockchain/workbench/index.yml) | Microsoft.Blockchain/blockchainMembers | [**Yes**](./essentials/metrics-supported.md) | [**Yes**](./essentials/resource-logs-categories.md) | | | | [Azure Blockchain Service](../blockchain/workbench/index.yml) | Microsoft.Blockchain/cordaMembers | No | [**Yes**](./essentials/resource-logs-categories.md) | | | | [Azure Bot Service](/azure/bot-service/) | Microsoft.BotService/botServices | [**Yes**](./essentials/metrics-supported.md#microsoftbotservicebotservices) | [**Yes**](./essentials/resource-logs-categories.md#microsoftbotservicebotservices) | | |
- | [Azure Cache for Redis](../azure-cache-for-redis/index.yml) | Microsoft.Cache/Redis | [**Yes**](./essentials/metrics-supported.md#microsoftcacheredis) | [**Yes**](./essentials/resource-logs-categories.md#microsoftcacheredis) | [Azure Monitor for Azure Cache for Redis (preview)](../azure-cache-for-redis/redis-cache-insights-overview.md) | |
+ | [Azure Cache for Redis](../azure-cache-for-redis/index.yml) | Microsoft.Cache/Redis | [**Yes**](./essentials/metrics-supported.md) | [**Yes**](./essentials/resource-logs-categories.md) | [Azure Monitor for Azure Cache for Redis (preview)](../azure-cache-for-redis/redis-cache-insights-overview.md) | |
| [Azure Cache for Redis](../azure-cache-for-redis/index.yml) | Microsoft.Cache/redisEnterprise | [**Yes**](./essentials/metrics-supported.md#microsoftcacheredisenterprise) | No | [Azure Monitor for Azure Cache for Redis (preview)](../azure-cache-for-redis/redis-cache-insights-overview.md) | | | [Azure Content Delivery Network](../cdn/index.yml) | Microsoft.Cdn/CdnWebApplicationFirewallPolicies | [**Yes**](./essentials/metrics-supported.md#microsoftcdncdnwebapplicationfirewallpolicies) | [**Yes**](./essentials/resource-logs-categories.md#microsoftcdncdnwebapplicationfirewallpolicies) | | | | [Azure Content Delivery Network](../cdn/index.yml) | Microsoft.Cdn/profiles | [**Yes**](./essentials/metrics-supported.md#microsoftcdnprofiles) | [**Yes**](./essentials/resource-logs-categories.md#microsoftcdnprofiles) | | |
The following table lists Azure services and the data they collect into Azure Mo
| [Azure Files (Classic)](../storage/files/index.yml) | Microsoft.ClassicStorage/storageAccounts/fileServices | [**Yes**](./essentials/metrics-supported.md#microsoftclassicstoragestorageaccountsfileservices) | No | [Storage Insights](/azure/azure-monitor/insights/storage-insights-overview) | | | [Azure Queue Storage (Classic)](../storage/queues/index.yml) | Microsoft.ClassicStorage/storageAccounts/queueServices | [**Yes**](./essentials/metrics-supported.md#microsoftclassicstoragestorageaccountsqueueservices) | No | [Storage Insights](/azure/azure-monitor/insights/storage-insights-overview) | | | [Azure Table Storage (Classic)](../storage/tables/index.yml) | Microsoft.ClassicStorage/storageAccounts/tableServices | [**Yes**](./essentials/metrics-supported.md#microsoftclassicstoragestorageaccountstableservices) | No | [Storage Insights](/azure/azure-monitor/insights/storage-insights-overview) | |
- | Microsoft Cloud Test Platform | Microsoft.Cloudtest/hostedpools | [**Yes**](./essentials/metrics-supported.md#microsoftcloudtesthostedpools) | No | | |
- | Microsoft Cloud Test Platform | Microsoft.Cloudtest/pools | [**Yes**](./essentials/metrics-supported.md#microsoftcloudtestpools) | No | | |
- | [Cray ClusterStor in Azure](https://azure.microsoft.com/blog/supercomputing-in-the-cloud-announcing-three-new-cray-in-azure-offers/) | Microsoft.ClusterStor/nodes | [**Yes**](./essentials/metrics-supported.md#microsoftclusterstornodes) | No | | |
+ | Microsoft Cloud Test Platform | Microsoft.Cloudtest/hostedpools | [**Yes**](./essentials/metrics-supported.md) | No | | |
+ | Microsoft Cloud Test Platform | Microsoft.Cloudtest/pools | [**Yes**](./essentials/metrics-supported.md) | No | | |
+ | [Cray ClusterStor in Azure](https://azure.microsoft.com/blog/supercomputing-in-the-cloud-announcing-three-new-cray-in-azure-offers/) | Microsoft.ClusterStor/nodes | [**Yes**](./essentials/metrics-supported.md) | No | | |
| [Azure Cognitive Services](../cognitive-services/index.yml) | Microsoft.CognitiveServices/accounts | [**Yes**](./essentials/metrics-supported.md#microsoftcognitiveservicesaccounts) | [**Yes**](./essentials/resource-logs-categories.md#microsoftcognitiveservicesaccounts) | | |
- | [Azure Communication Services](../communication-services/index.yml) | Microsoft.Communication/CommunicationServices | [**Yes**](./essentials/metrics-supported.md#microsoftcommunicationcommunicationservices) | [**Yes**](./essentials/resource-logs-categories.md#microsoftcommunicationcommunicationservices) | | |
+ | [Azure Communication Services](../communication-services/index.yml) | Microsoft.Communication/CommunicationServices | [**Yes**](./essentials/metrics-supported.md) | [**Yes**](./essentials/resource-logs-categories.md) | | |
| [Azure Cloud Services](../cloud-services-extended-support/index.yml) | Microsoft.Compute/cloudServices | [**Yes**](./essentials/metrics-supported.md#microsoftcomputecloudservices) | No | | Agent required to monitor guest operating system and workflows.| | [Azure Cloud Services](../cloud-services-extended-support/index.yml) | Microsoft.Compute/cloudServices/roles | [**Yes**](./essentials/metrics-supported.md#microsoftcomputecloudservicesroles) | No | | Agent required to monitor guest operating system and workflows.|
- | [Azure Virtual Machines](../virtual-machines/index.yml)<br />[Azure Virtual Machine Scale Sets](../virtual-machine-scale-sets/index.yml) | Microsoft.Compute/disks | [**Yes**](./essentials/metrics-supported.md#microsoftcomputedisks) | No | [VM Insights](/azure/azure-monitor/insights/vminsights-overview) | |
+ | [Azure Virtual Machines](../virtual-machines/index.yml)<br />[Azure Virtual Machine Scale Sets](../virtual-machine-scale-sets/index.yml) | Microsoft.Compute/disks | [**Yes**](./essentials/metrics-supported.md) | No | [VM Insights](/azure/azure-monitor/insights/vminsights-overview) | |
| [Azure Virtual Machines](../virtual-machines/index.yml)<br />[Azure Virtual Machine Scale Sets](../virtual-machine-scale-sets/index.yml) | Microsoft.Compute/virtualMachines | [**Yes**](./essentials/metrics-supported.md#microsoftcomputevirtualmachines) | No | [VM Insights](/azure/azure-monitor/insights/vminsights-overview) | Agent required to monitor guest operating system and workflows.| | [Azure Virtual Machines](../virtual-machines/index.yml)<br />[Azure Virtual Machine Scale Sets](../virtual-machine-scale-sets/index.yml) | Microsoft.Compute/virtualMachineScaleSets | [**Yes**](./essentials/metrics-supported.md#microsoftcomputevirtualmachinescalesets) | No | [VM Insights](/azure/azure-monitor/insights/vminsights-overview) | Agent required to monitor guest operating system and workflows.| | [Azure Virtual Machines](../virtual-machines/index.yml)<br />[Azure Virtual Machine Scale Sets](../virtual-machine-scale-sets/index.yml) | Microsoft.Compute/virtualMachineScaleSets/virtualMachines | [**Yes**](./essentials/metrics-supported.md#microsoftcomputevirtualmachinescalesetsvirtualmachines) | No | [VM Insights](/azure/azure-monitor/insights/vminsights-overview) | Agent required to monitor guest operating system and workflows.|
- | [Microsoft Connected Vehicle Platform](https://azure.microsoft.com/blog/microsoft-connected-vehicle-platform-trends-and-investment-areas/) | Microsoft.ConnectedVehicle/platformAccounts | [**Yes**](./essentials/metrics-supported.md#microsoftconnectedvehicleplatformaccounts) | [**Yes**](./essentials/resource-logs-categories.md#microsoftconnectedvehicleplatformaccounts) | | |
+ | [Microsoft Connected Vehicle Platform](https://azure.microsoft.com/blog/microsoft-connected-vehicle-platform-trends-and-investment-areas/) | Microsoft.ConnectedVehicle/platformAccounts | [**Yes**](./essentials/metrics-supported.md) | [**Yes**](./essentials/resource-logs-categories.md) | | |
| [Azure Container Instances](../container-instances/index.yml) | Microsoft.ContainerInstance/containerGroups | [**Yes**](./essentials/metrics-supported.md#microsoftcontainerinstancecontainergroups) | No | [Container Insights](/azure/azure-monitor/insights/container-insights-overview) | | | [Azure Container Registry](../container-registry/index.yml) | Microsoft.ContainerRegistry/registries | [**Yes**](./essentials/metrics-supported.md#microsoftcontainerregistryregistries) | [**Yes**](./essentials/resource-logs-categories.md#microsoftcontainerregistryregistries) | | | | [Azure Kubernetes Service](../aks/index.yml) | Microsoft.ContainerService/managedClusters | [**Yes**](./essentials/metrics-supported.md#microsoftcontainerservicemanagedclusters) | [**Yes**](./essentials/resource-logs-categories.md#microsoftcontainerservicemanagedclusters) | [Container Insights](/azure/azure-monitor/insights/container-insights-overview) | |
The following table lists Azure services and the data they collect into Azure Mo
| [Azure Database for MySQL](../mysql/index.yml) | Microsoft.DBforMySQL/flexibleServers | [**Yes**](./essentials/metrics-supported.md#microsoftdbformysqlflexibleservers) | [**Yes**](./essentials/resource-logs-categories.md#microsoftdbformysqlflexibleservers) | | | | [Azure Database for MySQL](../mysql/index.yml) | Microsoft.DBforMySQL/servers | [**Yes**](./essentials/metrics-supported.md#microsoftdbformysqlservers) | [**Yes**](./essentials/resource-logs-categories.md#microsoftdbformysqlservers) | | | | [Azure Database for PostgreSQL](../postgresql/index.yml) | Microsoft.DBforPostgreSQL/flexibleServers | [**Yes**](./essentials/metrics-supported.md#microsoftdbforpostgresqlflexibleservers) | [**Yes**](./essentials/resource-logs-categories.md#microsoftdbforpostgresqlflexibleservers) | | |
- | [Azure Database for PostgreSQL](../postgresql/index.yml) | Microsoft.DBforPostgreSQL/serverGroupsv2 | [**Yes**](./essentials/metrics-supported.md#microsoftdbforpostgresqlservergroupsv2) | [**Yes**](./essentials/resource-logs-categories.md#microsoftdbforpostgresqlservergroupsv2) | | |
+ | [Azure Database for PostgreSQL](../postgresql/index.yml) | Microsoft.DBforPostgreSQL/serverGroupsv2 | [**Yes**](./essentials/metrics-supported.md#microsoftdbforpostgresqlserversv2) | [**Yes**](./essentials/resource-logs-categories.md#microsoftdbforpostgresqlserversv2) | | |
| [Azure Database for PostgreSQL](../postgresql/index.yml) | Microsoft.DBforPostgreSQL/servers | [**Yes**](./essentials/metrics-supported.md#microsoftdbforpostgresqlservers) | [**Yes**](./essentials/resource-logs-categories.md#microsoftdbforpostgresqlservers) | | | | [Azure Database for PostgreSQL](../postgresql/index.yml) | Microsoft.DBforPostgreSQL/serversv2 | [**Yes**](./essentials/metrics-supported.md#microsoftdbforpostgresqlserversv2) | [**Yes**](./essentials/resource-logs-categories.md#microsoftdbforpostgresqlserversv2) | | | | [Microsoft Azure Virtual Desktop](../virtual-desktop/index.yml) | Microsoft.DesktopVirtualization/applicationgroups | No | [**Yes**](./essentials/resource-logs-categories.md#microsoftdesktopvirtualizationapplicationgroups) | [Azure Virtual Desktop Insights](../virtual-desktop/azure-monitor.md) | |
The following table lists Azure services and the data they collect into Azure Mo
| [Azure Grid](../event-grid/index.yml) | Microsoft.EventGrid/topics | [**Yes**](./essentials/metrics-supported.md#microsofteventgridtopics) | [**Yes**](./essentials/resource-logs-categories.md#microsofteventgridtopics) | | | | [Azure Event Hubs](../event-hubs/index.yml) | Microsoft.EventHub/clusters | [**Yes**](./essentials/metrics-supported.md#microsofteventhubclusters) | No | 0 | | | [Azure Event Hubs](../event-hubs/index.yml) | Microsoft.EventHub/namespaces | [**Yes**](./essentials/metrics-supported.md#microsofteventhubnamespaces) | [**Yes**](./essentials/resource-logs-categories.md#microsofteventhubnamespaces) | 0 | |
- | [Microsoft Experimentation Platform](https://www.microsoft.com/research/group/experimentation-platform-exp/) | microsoft.experimentation/experimentWorkspaces | [**Yes**](./essentials/metrics-supported.md) | [**Yes**](./essentials/resource-logs-categories.md#microsoftexperimentationexperimentworkspaces) | | |
+ | [Microsoft Experimentation Platform](https://www.microsoft.com/research/group/experimentation-platform-exp/) | microsoft.experimentation/experimentWorkspaces | [**Yes**](./essentials/metrics-supported.md) | [**Yes**](./essentials/resource-logs-categories.md) | | |
| [Azure HDInsight](../hdinsight/index.yml) | Microsoft.HDInsight/clusters | [**Yes**](./essentials/metrics-supported.md#microsofthdinsightclusters) | No | [Azure HDInsight (preview)](../hdinsight/log-analytics-migration.md#insights) | | | [Azure API for FHIR](../healthcare-apis/index.yml) | Microsoft.HealthcareApis/services | [**Yes**](./essentials/metrics-supported.md#microsofthealthcareapisservices) | [**Yes**](./essentials/resource-logs-categories.md#microsofthealthcareapisservices) | | | | [Azure API for FHIR](../healthcare-apis/index.yml) | Microsoft.HealthcareApis/workspaces/iotconnectors | [**Yes**](./essentials/metrics-supported.md#microsofthealthcareapisworkspacesiotconnectors) | No | | |
- | [Azure StorSimple](../storsimple/index.yml) | microsoft.hybridnetwork/networkfunctions | [**Yes**](./essentials/metrics-supported.md#microsofthybridnetworknetworkfunctions) | No | | |
- | [Azure StorSimple](../storsimple/index.yml) | microsoft.hybridnetwork/virtualnetworkfunctions | [**Yes**](./essentials/metrics-supported.md#microsofthybridnetworkvirtualnetworkfunctions) | No | | |
+ | [Azure StorSimple](../storsimple/index.yml) | microsoft.hybridnetwork/networkfunctions | [**Yes**](./essentials/metrics-supported.md) | No | | |
+ | [Azure StorSimple](../storsimple/index.yml) | microsoft.hybridnetwork/virtualnetworkfunctions | [**Yes**](./essentials/metrics-supported.md) | No | | |
| [Azure Monitor](./index.yml) | microsoft.insights/autoscalesettings | [**Yes**](./essentials/metrics-supported.md#microsoftinsightsautoscalesettings) | [**Yes**](./essentials/resource-logs-categories.md#microsoftinsightsautoscalesettings) | | | | [Azure Monitor](./index.yml) | microsoft.insights/components | [**Yes**](./essentials/metrics-supported.md#microsoftinsightscomponents) | [**Yes**](./essentials/resource-logs-categories.md#microsoftinsightscomponents) | [Azure Monitor Application Insights](./app/app-insights-overview.md) | | | [Azure IoT Central](../iot-central/index.yml) | Microsoft.IoTCentral/IoTApps | [**Yes**](./essentials/metrics-supported.md#microsoftiotcentraliotapps) | No | | | | [Azure Key Vault](../key-vault/index.yml) | Microsoft.KeyVault/managedHSMs | [**Yes**](./essentials/metrics-supported.md#microsoftkeyvaultmanagedhsms) | [**Yes**](./essentials/resource-logs-categories.md#microsoftkeyvaultmanagedhsms) | [Azure Key Vault Insights (preview)](../key-vault/key-vault-insights-overview.md) | | | [Azure Key Vault](../key-vault/index.yml) | Microsoft.KeyVault/vaults | [**Yes**](./essentials/metrics-supported.md#microsoftkeyvaultvaults) | [**Yes**](./essentials/resource-logs-categories.md#microsoftkeyvaultvaults) | [Azure Key Vault Insights (preview)](../key-vault/key-vault-insights-overview.md) | |
- | [Azure Kubernetes Service](../aks/index.yml) | Microsoft.Kubernetes/connectedClusters | [**Yes**](./essentials/metrics-supported.md#microsoftkubernetesconnectedclusters) | No | | |
+ | [Azure Kubernetes Service](../aks/index.yml) | Microsoft.Kubernetes/connectedClusters | [**Yes**](./essentials/metrics-supported.md) | No | | |
| [Azure Data Explorer](/azure/data-explorer/) | Microsoft.Kusto/clusters | [**Yes**](./essentials/metrics-supported.md#microsoftkustoclusters) | [**Yes**](./essentials/resource-logs-categories.md#microsoftkustoclusters) | | | | [Azure Logic Apps](../logic-apps/index.yml) | Microsoft.Logic/integrationAccounts | No | [**Yes**](./essentials/resource-logs-categories.md#microsoftlogicintegrationaccounts) | | | | [Azure Logic Apps](../logic-apps/index.yml) | Microsoft.Logic/integrationServiceEnvironments | [**Yes**](./essentials/metrics-supported.md#microsoftlogicintegrationserviceenvironments) | No | | |
The following table lists Azure services and the data they collect into Azure Mo
| [Azure Media Services](/azure/media-services/) | Microsoft.Medi#microsoftmediamediaservices) | | | | [Azure Media Services](/azure/media-services/) | Microsoft.Medi#microsoftmediamediaservicesliveevents) | No | | | | [Azure Media Services](/azure/media-services/) | Microsoft.Medi#microsoftmediamediaservicesstreamingendpoints) | No | | |
- | [Azure Media Services](/azure/media-services/) | Microsoft.Medi#microsoftmediavideoanalyzers) | | |
+ | [Azure Media Services](/azure/media-services/) | Microsoft.Medi) | | |
| [Azure Spatial Anchors](../spatial-anchors/index.yml) | Microsoft.MixedReality/remoteRenderingAccounts | [**Yes**](./essentials/metrics-supported.md#microsoftmixedrealityremoterenderingaccounts) | No | | | | [Azure Spatial Anchors](../spatial-anchors/index.yml) | Microsoft.MixedReality/spatialAnchorsAccounts | [**Yes**](./essentials/metrics-supported.md#microsoftmixedrealityspatialanchorsaccounts) | No | | | | [Azure NetApp Files](../azure-netapp-files/index.yml) | Microsoft.NetApp/netAppAccounts/capacityPools | [**Yes**](./essentials/metrics-supported.md#microsoftnetappnetappaccountscapacitypools) | No | | | | [Azure NetApp Files](../azure-netapp-files/index.yml) | Microsoft.NetApp/netAppAccounts/capacityPools/volumes | [**Yes**](./essentials/metrics-supported.md#microsoftnetappnetappaccountscapacitypoolsvolumes) | No | | | | [Azure Application Gateway](../application-gateway/index.yml) | Microsoft.Network/applicationGateways | [**Yes**](./essentials/metrics-supported.md#microsoftnetworkapplicationgateways) | [**Yes**](./essentials/resource-logs-categories.md#microsoftnetworkapplicationgateways) | | | | [Azure Firewall](../firewall/index.yml) | Microsoft.Network/azureFirewalls | [**Yes**](./essentials/metrics-supported.md#microsoftnetworkazurefirewalls) | [**Yes**](./essentials/resource-logs-categories.md#microsoftnetworkazurefirewalls) | | |
- | [Azure Bastion](../bastion/index.yml) | Microsoft.Network/bastionHosts | [**Yes**](./essentials/metrics-supported.md#microsoftnetworkbastionhosts) | [**Yes**](./essentials/resource-logs-categories.md#microsoftnetworkbastionhosts) | | |
+ | [Azure Bastion](../bastion/index.yml) | Microsoft.Network/bastionHosts | [**Yes**](./essentials/metrics-supported.md) | [**Yes**](./essentials/resource-logs-categories.md) | | |
| [Azure VPN Gateway](../vpn-gateway/index.yml) | Microsoft.Network/connections | [**Yes**](./essentials/metrics-supported.md#microsoftnetworkconnections) | No | | | | [Azure DNS](../dns/index.yml) | Microsoft.Network/dnszones | [**Yes**](./essentials/metrics-supported.md#microsoftnetworkdnszones) | No | | | | [Azure ExpressRoute](../expressroute/index.yml) | Microsoft.Network/expressRouteCircuits | [**Yes**](./essentials/metrics-supported.md#microsoftnetworkexpressroutecircuits) | [**Yes**](./essentials/resource-logs-categories.md#microsoftnetworkexpressroutecircuits) | | |
The following table lists Azure services and the data they collect into Azure Mo
| [Azure Private Link](../private-link/private-link-overview.md) | Microsoft.Network/privateLinkServices | [**Yes**](./essentials/metrics-supported.md#microsoftnetworkprivatelinkservices) | No | | | | [Azure Virtual Network](../virtual-network/index.yml) | Microsoft.Network/publicIPAddresses | [**Yes**](./essentials/metrics-supported.md#microsoftnetworkpublicipaddresses) | [**Yes**](./essentials/resource-logs-categories.md#microsoftnetworkpublicipaddresses) | [Azure Network Insights](../network-watcher/network-insights-overview.md) | | | [Azure Traffic Manager](../traffic-manager/traffic-manager-overview.md) | Microsoft.Network/trafficmanagerprofiles | [**Yes**](./essentials/metrics-supported.md#microsoftnetworktrafficmanagerprofiles) | [**Yes**](./essentials/resource-logs-categories.md#microsoftnetworktrafficmanagerprofiles) | | |
- | [Azure Virtual WAN](../virtual-wan/virtual-wan-about.md) | Microsoft.Network/virtualHubs | [**Yes**](./essentials/metrics-supported.md#microsoftnetworkvirtualhubs) | No | | |
+ | [Azure Virtual WAN](../virtual-wan/virtual-wan-about.md) | Microsoft.Network/virtualHubs | [**Yes**](./essentials/metrics-supported.md) | No | | |
| [Azure VPN Gateway](../vpn-gateway/index.yml) | Microsoft.Network/virtualNetworkGateways | [**Yes**](./essentials/metrics-supported.md#microsoftnetworkvirtualnetworkgateways) | [**Yes**](./essentials/resource-logs-categories.md#microsoftnetworkvirtualnetworkgateways) | | | | [Azure Virtual Network](../virtual-network/index.yml) | Microsoft.Network/virtualNetworks | [**Yes**](./essentials/metrics-supported.md#microsoftnetworkvirtualnetworks) | [**Yes**](./essentials/resource-logs-categories.md#microsoftnetworkvirtualnetworks) | [Azure Network Insights](../network-watcher/network-insights-overview.md) | | | [Azure Virtual Network](../virtual-network/index.yml) | Microsoft.Network/virtualRouters | [**Yes**](./essentials/metrics-supported.md#microsoftnetworkvirtualrouters) | No | | |
The following table lists Azure services and the data they collect into Azure Mo
| [Microsoft Power BI](/power-bi/power-bi-overview) | Microsoft.PowerBI/tenants/workspaces | No | [**Yes**](./essentials/resource-logs-categories.md#microsoftpowerbitenantsworkspaces) | | | | [Power BI Embedded](/azure/power-bi-embedded/) | Microsoft.PowerBIDedicated/capacities | [**Yes**](./essentials/metrics-supported.md#microsoftpowerbidedicatedcapacities) | [**Yes**](./essentials/resource-logs-categories.md#microsoftpowerbidedicatedcapacities) | | | | [Microsoft Purview](../purview/index.yml) | Microsoft.Purview/accounts | [**Yes**](./essentials/metrics-supported.md#microsoftpurviewaccounts) | [**Yes**](./essentials/resource-logs-categories.md#microsoftpurviewaccounts) | | |
- | [Azure Site Recovery](../site-recovery/index.yml) | Microsoft.RecoveryServices/vaults | [**Yes**](./essentials/metrics-supported.md#microsoftrecoveryservicesvaults) | [**Yes**](./essentials/resource-logs-categories.md#microsoftrecoveryservicesvaults) | | |
+ | [Azure Site Recovery](../site-recovery/index.yml) | Microsoft.RecoveryServices/vaults | [**Yes**](./essentials/metrics-supported.md) | [**Yes**](./essentials/resource-logs-categories.md) | | |
| [Azure Relay](../azure-relay/relay-what-is-it.md) | Microsoft.Relay/namespaces | [**Yes**](./essentials/metrics-supported.md#microsoftrelaynamespaces) | [**Yes**](./essentials/resource-logs-categories.md#microsoftrelaynamespaces) | | |
- | [Azure Resource Manager](../azure-resource-manager/index.yml) | Microsoft.Resources/subscriptions | [**Yes**](./essentials/metrics-supported.md#microsoftresourcessubscriptions) | No | | |
+ | [Azure Resource Manager](../azure-resource-manager/index.yml) | Microsoft.Resources/subscriptions | [**Yes**](./essentials/metrics-supported.md) | No | | |
| [Azure Cognitive Search](../search/index.yml) | Microsoft.Search/searchServices | [**Yes**](./essentials/metrics-supported.md#microsoftsearchsearchservices) | [**Yes**](./essentials/resource-logs-categories.md#microsoftsearchsearchservices) | | | | [Azure Service Bus](/azure/service-bus/) | Microsoft.ServiceBus/namespaces | [**Yes**](./essentials/metrics-supported.md#microsoftservicebusnamespaces) | [**Yes**](./essentials/resource-logs-categories.md#microsoftservicebusnamespaces) | [Azure Service Bus](/azure/service-bus/) | | | [Azure Service Fabric](../service-fabric/index.yml) | Microsoft.ServiceFabric | No | No | [Service Fabric](../service-fabric/index.yml) | Agent required to monitor guest operating system and workflows.|
The following table lists Azure services and the data they collect into Azure Mo
| [Azure Synapse Analytics](/azure/sql-data-warehouse/) | Microsoft.Synapse/workspaces/sqlPools | [**Yes**](./essentials/metrics-supported.md#microsoftsynapseworkspacessqlpools) | [**Yes**](./essentials/resource-logs-categories.md#microsoftsynapseworkspacessqlpools) | | | | [Azure Time Series Insights](../time-series-insights/index.yml) | Microsoft.TimeSeriesInsights/environments | [**Yes**](./essentials/metrics-supported.md#microsofttimeseriesinsightsenvironments) | [**Yes**](./essentials/resource-logs-categories.md#microsofttimeseriesinsightsenvironments) | | | | [Azure Time Series Insights](../time-series-insights/index.yml) | Microsoft.TimeSeriesInsights/environments/eventsources | [**Yes**](./essentials/metrics-supported.md#microsofttimeseriesinsightsenvironmentseventsources) | [**Yes**](./essentials/resource-logs-categories.md#microsofttimeseriesinsightsenvironmentseventsources) | | |
- | [Azure VMware Solution](../azure-vmware/index.yml) | Microsoft.VMwareCloudSimple/virtualMachines | [**Yes**](./essentials/metrics-supported.md#microsoftvmwarecloudsimplevirtualmachines) | No | | |
+ | [Azure VMware Solution](../azure-vmware/index.yml) | Microsoft.VMwareCloudSimple/virtualMachines | [**Yes**](./essentials/metrics-supported.md) | No | | |
| [Azure App Service](../app-service/index.yml)<br />[Azure Functions](../azure-functions/index.yml) | Microsoft.Web/connections | [**Yes**](./essentials/metrics-supported.md#microsoftwebconnections) | No | | | | [Azure App Service](../app-service/index.yml)<br />[Azure Functions](../azure-functions/index.yml) | Microsoft.Web/hostingEnvironments | [**Yes**](./essentials/metrics-supported.md#microsoftwebhostingenvironments) | [**Yes**](./essentials/resource-logs-categories.md#microsoftwebhostingenvironments) | [Azure Monitor Application Insights](./app/app-insights-overview.md) | | | [Azure App Service](../app-service/index.yml)<br />[Azure Functions](../azure-functions/index.yml) | Microsoft.Web/hostingEnvironments/multiRolePools | [**Yes**](./essentials/metrics-supported.md#microsoftwebhostingenvironmentsmultirolepools) | No | [Azure Monitor Application Insights](./app/app-insights-overview.md) | |
azure-monitor Workbooks Commonly Used Components https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/visualize/workbooks-commonly-used-components.md
You can summarize status by using a simple visual indication instead of presenti
The following example shows how to set up a traffic light icon per computer based on the CPU utilization metric. 1. [Create a new empty workbook](workbooks-create-workbook.md).
-1. [Add a parameter](workbooks-create-workbook.md#add-a-parameter-to-an-azure-workbook), make it a [time range parameter](workbooks-time.md), and name it **TimeRange**.
+1. [Add a parameter](workbooks-create-workbook.md#add-parameters), make it a [time range parameter](workbooks-time.md), and name it **TimeRange**.
1. Select **Add query** to add a log query control to the workbook. 1. For **Query type**, select `Logs`, and for **Resource type**, select `Log Analytics`. Select a Log Analytics workspace in your subscription that has VM performance data as a resource. 1. In the query editor, enter:
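A query along the following lines fits this scenario. It's a sketch only, assuming the workspace collects VM performance counters into the `Perf` table; the 80 and 60 percent thresholds are illustrative, and the **TimeRange** parameter is typically selected as the query step's time range.

```kusto
// Sketch: average CPU per computer, bucketed into traffic-light states.
// Assumes VM performance counters in the Perf table; thresholds are illustrative.
Perf
| where ObjectName == "Processor" and CounterName == "% Processor Time"
| summarize CpuAverage = avg(CounterValue) by Computer
| extend Status = case(CpuAverage > 80, "critical", CpuAverage > 60, "warning", "healthy")
```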
The following example shows how to enable this scenario. Let's say you want the
### Set up parameters
-1. [Create a new empty workbook](workbooks-create-workbook.md) and [add a parameter component](workbooks-create-workbook.md#add-a-parameter-to-an-azure-workbook).
+1. [Create a new empty workbook](workbooks-create-workbook.md) and [add a parameter component](workbooks-create-workbook.md#add-parameters).
1. Select **Add parameter** to create a new parameter. Use the following settings: - **Parameter name**: `OsFilter` - **Display name**: `Operating system`
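Once the parameter is saved, it can be referenced from a query step with the `{ParameterName}` substitution syntax. The sketch below assumes the workspace has a `Heartbeat` table and that `OsFilter` is configured so its selected values expand to quoted `OSType` strings.

```kusto
// Sketch: count heartbeats per computer for the operating systems selected in OsFilter.
// Assumes {OsFilter} expands to one or more quoted OSType values, for example "Linux".
Heartbeat
| where OSType in ({OsFilter})
| summarize HeartbeatCount = count() by Computer, OSType
```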
azure-monitor Workbooks Create Workbook https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/visualize/workbooks-create-workbook.md
This video walks you through creating workbooks.
To create a new Azure workbook: 1. From the Azure Workbooks page, select an empty template or select **New** in the top toolbar. 1. Combine any of these elements to add to your workbook:
- - [Text](#adding-text)
- - [Parameters](#adding-parameters)
- - [Queries](#adding-queries)
- - [Metric charts](#adding-metric-charts)
- - [Links](#adding-links)
- - [Groups](#adding-groups)
+ - [Text](#add-text)
+ - [Parameters](#add-parameters)
+ - [Queries](#add-queries)
+ - [Metric charts](#add-metric-charts)
+ - [Links](#add-links)
+ - [Groups](#add-groups)
- Configuration options
-## Adding text
+## Add text
Workbooks allow authors to include text blocks in their workbooks. The text can be human analysis of the telemetry, information to help users interpret the data, section headings, etc.
Text is added through a markdown control into which an author can add their cont
**Preview mode**: :::image type="content" source="media/workbooks-create-workbook/workbooks-text-control-edit-mode-preview.png" alt-text="Screenshot showing adding text to a workbook in preview mode.":::
-### Add text to an Azure workbook
+To add text to an Azure workbook:
1. Make sure you are in **Edit** mode by selecting the **Edit** in the toolbar. Add text by doing either of these steps: - Select **Add**, and **Add text** below an existing element, or at the bottom of the workbook.
You can also choose a text parameter as the source of the style. The parameter v
**Warning style example**: :::image type="content" source="media/workbooks-create-workbook/workbooks-text-example-warning.png" alt-text="Screenshot of a text visualization in warning style.":::
-## Adding queries
+## Add queries
Azure Workbooks allow you to query any of the supported workbook [data sources](workbooks-data-sources.md). For example, you can query Azure Resource Health to help you view any service problems affecting your resources. You can also query Azure Monitor metrics, which is numeric data collected at regular intervals. Azure Monitor metrics provide information about an aspect of a system at a particular time.
-### Add a query to an Azure Workbook
+To add a query to an Azure Workbook:
1. Make sure you are in **Edit** mode by selecting the **Edit** in the toolbar. Add a query by doing either of these steps: - Select **Add**, and **Add query** below an existing element, or at the bottom of the workbook.
For example, you can query Azure Resource Health to help you view any service pr
``` In this case, the query returns no rows if the **AzureDiagnostics** table is missing, or if the **ResourceId** column is missing from the table.
-## Adding parameters
+## Add parameters
-You can collect input from consumers and reference it in other parts of the workbook using parameters. Often, you would use parameters to scope the result set or to set the right visual. Parameters help you build interactive reports and experiences.
+You can collect input from consumers and reference it in other parts of the workbook using parameters. Use parameters to scope the result set or to set the right visual. Parameters help you build interactive reports and experiences. For more information on how parameters can be used, see [workbook parameters](workbooks-parameters.md).
Workbooks allow you to control how your parameter controls are presented to consumers: text box vs. drop down, single- vs. multi-select, values from text, JSON, KQL, or Azure Resource Graph, etc.
-### Add a parameter to an Azure Workbook
+Watch this video to learn how to use parameters and log data in Azure Workbooks.
+> [!VIDEO https://www.microsoft.com/en-us/videoplayer/embed/RE59Wee]
+
+To add a parameter to an Azure Workbook:
1. Make sure you are in **Edit** mode by selecting the **Edit** in the toolbar. Add a parameter by doing either of these steps: - Select **Add**, and **Add parameter** below an existing element, or at the bottom of the workbook.
Workbooks allow you to control how your parameter controls are presented to cons
:::image type="content" source="media/workbooks-parameters/workbooks-time-settings.png" alt-text="Screenshot showing the creation of a time range parameter.":::
-## Adding metric charts
+## Add metric charts
Most Azure resources emit metric data about state and health such as CPU utilization, storage availability, count of database transactions, failing app requests, etc. Using workbooks, you can create visualizations of the metric data as time-series charts.
The example below shows the number of transactions in a storage account over the
:::image type="content" source="media/workbooks-create-workbook/workbooks-metric-chart-storage-area.png" alt-text="Screenshot showing a metric area chart for storage transactions in a workbook.":::
-### Add a metric chart to an Azure Workbook
+To add a metric chart to an Azure Workbook:
1. Make sure you are in **Edit** mode by selecting the **Edit** in the toolbar. Add a metric chart by doing either of these steps: - Select **Add**, and **Add metric** below an existing element, or at the bottom of the workbook.
This is a metric chart in edit mode:
:::image type="content" source="media/workbooks-create-workbook/workbooks-metric-chart-storage-scatter.png" alt-text="Screenshot showing a metric scatter chart for storage latency.":::
-## Adding links
+## Add links
You can use links to create links to other views, workbooks, other items inside a workbook, or to create tabbed views within a workbook. The links can be styled as hyperlinks, buttons, and tabs. :::image type="content" source="media/workbooks-create-workbook/workbooks-empty-links.png" alt-text="Screenshot of adding a link to a workbook.":::+
+Watch this video to learn how to use tabs, groups, and contextual links in Azure Workbooks:
+> [!VIDEO https://www.microsoft.com/en-us/videoplayer/embed/RE59YTe]
### Link styles You can apply styles to the link element itself and to individual links.
You can apply styles to the link element itself and to individual links.
|List |:::image type="content" source="media/workbooks-create-workbook/workbooks-link-style-list.png" alt-text="Screenshot of list style workbook link."::: | Links appear as a list of links, with no bullets. | |Paragraph | :::image type="content" source="media/workbooks-create-workbook/workbooks-link-style-paragraph.png" alt-text="Screenshot of paragraph style workbook link."::: |Links appear as a paragraph of links, wrapped like a paragraph of text. | |Navigation | :::image type="content" source="media/workbooks-create-workbook/workbooks-link-style-navigation.png" alt-text="Screenshot of navigation style workbook link."::: | Links appear as links, with vertical dividers, or pipes (`|`) between each link. |
-|Tabs | :::image type="content" source="media/workbooks-create-workbook/workbooks-link-style-tabs.png" alt-text="Screenshot of tabs style workbook link."::: |Links appear as tabs. Each link appears as a tab, no link styling options apply to individual links. See the [tabs](#using-tabs) section below for how to configure tabs. |
-|Toolbar | :::image type="content" source="media/workbooks-create-workbook/workbooks-link-style-toolbar.png" alt-text="Screenshot of toolbar style workbook link."::: | Links appear an Azure portal styled toolbar, with icons and text. Each link appears as a toolbar button. See the [toolbar](#using-toolbars) section below for how to configure toolbars. |
+|Tabs | :::image type="content" source="media/workbooks-create-workbook/workbooks-link-style-tabs.png" alt-text="Screenshot of tabs style workbook link."::: |Links appear as tabs. Each link appears as a tab, no link styling options apply to individual links. See the [tabs](#tabs) section below for how to configure tabs. |
+|Toolbar | :::image type="content" source="media/workbooks-create-workbook/workbooks-link-style-toolbar.png" alt-text="Screenshot of toolbar style workbook link."::: | Links appear an Azure portal styled toolbar, with icons and text. Each link appears as a toolbar button. See the [toolbar](#toolbars) section below for how to configure toolbars. |
**Link styles**
Links can use all of the link actions available in [link actions](workbooks-link
|Set a parameter value | A parameter can be set to a value when selecting a link, button, or tab. Tabs are often configured to set a parameter to a value, which hides and shows other parts of the workbook based on that value.| |Scroll to a step| When selecting a link, the workbook will move focus and scroll to make another step visible. This action can be used to create a "table of contents", or a "go back to the top" style experience. |
-### Using tabs
+### Tabs
Most of the time, tab links are combined with the **Set a parameter value** action. Here's an example showing the links step configured to create 2 tabs, where selecting either tab will set a **selectedTab** parameter to a different value (the example shows a third tab being edited to show the parameter name and parameter value placeholders):
- The first tab is selected by default, invoking whatever action that tab has specified. If the first tab's action opens another view, as soon as the tabs are created, a view appears. - You can use tabs to open other views, but this functionality should be used sparingly, since most users won't expect to navigate by selecting a tab. If other tabs are setting a parameter to a specific value, a tab that opens a view wouldn't change that value, so the rest of the workbook content will continue to show the view or data for the previous tab.
-### Using toolbars
+### Toolbars
Use the Toolbar style to have your links appear styled as a toolbar. In toolbar style, the author must fill in fields for:
If any required parameters are used in button text, tooltip text, or value field
A sample workbook with toolbars, globals parameters, and ARM Actions is available in [sample Azure Workbooks with links](workbooks-sample-links.md#sample-workbook-with-toolbar-links).
-## Adding groups
+## Add groups
A group item in a workbook allows you to logically group a set of steps in a workbook.
Groups in workbooks are useful for several things:
- **Visibility**: When you want several items to hide or show together, you can set the visibility of the entire group of items, instead of setting visibility settings on each individual item. This can be useful in templates that use tabs, as you can use a group as the content of the tab, and the entire group can be hidden/shown based on a parameter set by the selected tab. - **Performance**: When you have a large template with many sections or tabs, you can convert each section into its own subtemplate, and use groups to load all the subtemplates within the top-level template. The content of the subtemplates won't load or run until a user makes those groups visible. Learn more about [how to split a large template into many templates](#splitting-a-large-template-into-many-templates).
-### Add a group to your workbook
+To add a group to your workbook:
1. Make sure you are in **Edit** mode by selecting the **Edit** in the toolbar. Add a group by doing either of these steps: - Select **Add**, and **Add group** below an existing element, or at the bottom of the workbook.
azure-monitor Workbooks Graph Visualizations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/visualize/workbooks-graph-visualizations.md
The following graph shows data flowing in and out of a computer via various port
[![Screenshot that shows a tile summary view.](./media/workbooks-graph-visualizations/graph.png)](./media/workbooks-graph-visualizations/graph.png#lightbox)
+Watch this video to learn how to create graphs and use links in Azure Workbooks.
+> [!VIDEO https://www.microsoft.com/en-us/videoplayer/embed/RE5ah5O]
+ ## Add a graph 1. Switch the workbook to edit mode by selecting **Edit**.
azure-resource-manager Bicep Functions Resource https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/bicep-functions-resource.md
The possible uses of `list*` are shown in the following table.
| Microsoft.Logic/workflows/versions/triggers | [listCallbackUrl](/rest/api/logic/workflowversions/listcallbackurl) | | Microsoft.MachineLearning/webServices | [listkeys](/rest/api/machinelearning/webservices/listkeys) | | Microsoft.MachineLearning/Workspaces | listworkspacekeys |
-| Microsoft.MachineLearningServices/workspaces/computes | [listKeys](/rest/api/azureml/2022-10-01/compute/list-keys) |
-| Microsoft.MachineLearningServices/workspaces/computes | [listNodes](/rest/api/azureml/2022-10-01/compute/list-nodes) |
-| Microsoft.MachineLearningServices/workspaces | [listKeys](/rest/api/azureml/2022-10-01/workspaces/list-keys) |
+| Microsoft.MachineLearningServices/workspaces/computes | [listKeys](/rest/api/azureml/compute/list-keys) |
+| Microsoft.MachineLearningServices/workspaces/computes | [listNodes](/rest/api/azureml/compute/list-nodes) |
+| Microsoft.MachineLearningServices/workspaces | [listKeys](/rest/api/azureml/workspaces/list-keys) |
| Microsoft.Maps/accounts | [listKeys](/rest/api/maps-management/accounts/listkeys) | | Microsoft.Media/mediaservices/assets | [listContainerSas](/rest/api/media/assets/listcontainersas) | | Microsoft.Media/mediaservices/assets | [listStreamingLocators](/rest/api/media/assets/liststreaminglocators) |
azure-resource-manager Resource Name Rules https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/resource-name-rules.md
In the following tables, the term alphanumeric refers to:
> | locks | scope of assignment | 1-90 | Alphanumerics, periods, underscores, hyphens, and parenthesis.<br><br>Can't end in period. | > | policyAssignments | scope of assignment | 1-128 display name<br><br>1-64 resource name<br><br>1-24 resource name at management group scope | Display name can contain any characters.<br><br>Resource name can't use:<br>`<>*%&:\?.+/` or control characters. <br><br>Can't end with period or space. | > | policyDefinitions | scope of definition | 1-128 display name<br><br>1-64 resource name | Display name can contain any characters.<br><br>Resource name can't use:<br>`<>*%&:\?.+/` or control characters. <br><br>Can't end with period or space. |
+> | policyExemptions | scope of exemption | 1-128 display name<br><br>1-64 resource name | Display name can contain any characters.<br><br>Resource name can't use:<br>`<>*%&:\?.+/` or control characters. <br><br>Can't end with period or space. |
> | policySetDefinitions | scope of definition | 1-128 display name<br><br>1-64 resource name | Display name can contain any characters.<br><br>Resource name can't use:<br>`<>*%&:\?.+/` or control characters. <br><br>Can't end with period or space. | > | roleAssignments | tenant | 36 | Must be a globally unique identifier (GUID). | > | roleDefinitions | tenant | 36 | Must be a globally unique identifier (GUID). |
azure-resource-manager Template Functions Resource https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/template-functions-resource.md
The possible uses of `list*` are shown in the following table.
| Microsoft.Logic/workflows/versions/triggers | [listCallbackUrl](/rest/api/logic/workflowversions/listcallbackurl) | | Microsoft.MachineLearning/webServices | [listkeys](/rest/api/machinelearning/webservices/listkeys) | | Microsoft.MachineLearning/Workspaces | listworkspacekeys |
-| Microsoft.MachineLearningServices/workspaces/computes | [listKeys](/rest/api/azureml/2022-10-01/compute/list-keys) |
-| Microsoft.MachineLearningServices/workspaces/computes | [listNodes](/rest/api/azureml/2022-10-01/compute/list-nodes) |
-| Microsoft.MachineLearningServices/workspaces | [listKeys](/rest/api/azureml/2022-10-01/workspaces/list-keys) |
+| Microsoft.MachineLearningServices/workspaces/computes | [listKeys](/rest/api/azureml/compute/list-keys) |
+| Microsoft.MachineLearningServices/workspaces/computes | [listNodes](/rest/api/azureml/compute/list-nodes) |
+| Microsoft.MachineLearningServices/workspaces | [listKeys](/rest/api/azureml/workspaces/list-keys) |
| Microsoft.Maps/accounts | [listKeys](/rest/api/maps-management/accounts/listkeys) | | Microsoft.Media/mediaservices/assets | [listContainerSas](/rest/api/media/assets/listcontainersas) | | Microsoft.Media/mediaservices/assets | [listStreamingLocators](/rest/api/media/assets/liststreaminglocators) |
azure-video-indexer Accounts Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/accounts-overview.md
This section talks about limited access features in Azure Video Indexer.
|When did I create the account?|Trial account (free)| Paid account <br/>(classic or ARM-based)| ||||
-|Existing VI accounts <br/><br/>created before June 21, 2022|Able to access face identification, customization and celebrities recognition till June 2023. <br/><br/>**Recommended**: Move to a paid account and afterward fill in the [intake form](https://aka.ms/facerecognition) and based on the eligibility criteria we will enable the features also after the grace period. |Able to access face identification, customization and celebrities recognition till June 2023\*.<br/><br/>**Recommended**: fill in the [intake form](https://aka.ms/facerecognition) and based on the eligibility criteria we will enable the features also after the grace period. <br/><br/>We proactively sent emails to these customers + AEs.|
+|Existing VI accounts <br/><br/>created before June 21, 2022|Able to access face identification, customization and celebrities recognition till June 2023. <br/><br/>**Recommended**: Move to a paid account and afterward fill in the [intake form](https://aka.ms/facerecognition) and based on the eligibility criteria we will enable the features also after the grace period. |Able to access face identification, customization and celebrities recognition till June 2023\*.<br/><br/>**Recommended**: fill in the [intake form](https://aka.ms/facerecognition) and based on the eligibility criteria we will enable the features also after the grace period.|
|New VI accounts <br/><br/>created after June 21, 2022 |Not able to access face identification, customization and celebrities recognition as of today. <br/><br/>**Recommended**: Move to a paid account and afterward fill in the [intake form](https://aka.ms/facerecognition). Based on the eligibility criteria we will enable the features (after max 10 days).|Azure Video Indexer disables access to face identification, customization and celebrities recognition as of today by default, but gives the option to enable it. <br/><br/>**Recommended**: Fill in the [intake form](https://aka.ms/facerecognition) and based on the eligibility criteria we will enable the features (after max 10 days).| \*In Brazil South we also disabled the face detection.
azure-video-indexer Limited Access Features https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/limited-access-features.md
This section talks about limited access features in Azure Video Indexer.
|When did I create the account?|Trial Account (Free)| Paid Account <br/>(classic or ARM-based)| ||||
-|Existing VI accounts <br/><br/>created before June 21, 2022|Able to access face identification, customization and celebrities recognition till June 2023. <br/><br/>**Recommended**: Move to a paid account and afterward fill in the [intake form](https://aka.ms/facerecognition) and based on the eligibility criteria we will enable the features also after the grace period. |Able to access face identification, customization and celebrities recognition till June 2023\*.<br/><br/>**Recommended**: fill in the [intake form](https://aka.ms/facerecognition) and based on the eligibility criteria we will enable the features also after the grace period. <br/><br/>We proactively sent emails to these customers + AEs.|
+|Existing VI accounts <br/><br/>created before June 21, 2022|Able to access face identification, customization and celebrities recognition till June 2023. <br/><br/>**Recommended**: Move to a paid account and afterward fill in the [intake form](https://aka.ms/facerecognition) and based on the eligibility criteria we will enable the features also after the grace period. |Able to access face identification, customization and celebrities recognition till June 2023\*.<br/><br/>**Recommended**: fill in the [intake form](https://aka.ms/facerecognition) and based on the eligibility criteria we will enable the features also after the grace period.|
|New VI accounts <br/><br/>created after June 21, 2022 |Not able to access face identification, customization and celebrities recognition as of today. <br/><br/>**Recommended**: Move to a paid account and afterward fill in the [intake form](https://aka.ms/facerecognition). Based on the eligibility criteria we will enable the features (after max 10 days).|Azure Video Indexer disables access to face identification, customization and celebrities recognition as of today by default, but gives the option to enable it. <br/><br/>**Recommended**: Fill in the [intake form](https://aka.ms/facerecognition) and based on the eligibility criteria we will enable the features (after max 10 days).| \*In Brazil South we also disabled the face detection.
azure-vmware Attach Azure Netapp Files To Azure Vmware Solution Hosts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/attach-azure-netapp-files-to-azure-vmware-solution-hosts.md
Before you begin the prerequisites, review the [Performance best practices](#per
Azure VMware Solution currently supports the following regions:
-**America** : East US, East US 2, West US, Central US, South Central US, North Central US, Canada East, Canada Central .
-
-**Europe** : West Europe, North Europe, UK West, UK South, France Central, Switzerland West, Germany West Central.
-
-**Asia** : East Asia, Southeast Asia, Japan East, Japan West.
+**Asia** : East Asia, Japan East, Japan West, Southeast Asia.
**Australia** : Australia East, Australia Southeast. **Brazil** : Brazil South.
+**Europe** : France Central, Germany West Central, North Europe, Switzerland West, UK South, UK West, West Europe.
+
+**North America** : Canada Central, Canada East, Central US, East US, East US 2, North Central US, South Central US, West US.
+ The list of supported regions will expand as the preview progresses. ## Performance best practices
backup Backup Azure Reports Data Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-reports-data-model.md
Title: Data model for Azure Backup diagnostics events description: This data model is in reference to the Resource Specific Mode of sending diagnostic events to Log Analytics (LA). Previously updated : 10/30/2019 Last updated : 10/19/2022+++ # Data Model for Azure Backup Diagnostics Events
This table provides information about core backup entities, such as vaults and b
| BackupItemFriendlyName | Text | Friendly name of the backup item | | BackupItemName | Text | Name of the backup item | | BackupItemProtectionState | Text | Protection State of the Backup Item |
-| BackupItemFrontEndSize | Text | Front-end size of the backup item |
+| BackupItemFrontEndSize | Text | Front-end size (in MBs) of the backup item |
| BackupItemType | Text | Type of backup item. For example: VM, FileFolder | | BackupItemUniqueId | Text | Unique identifier of the backup item | | BackupManagementServerType | Text | Type of the Backup Management Server, as in MABS, SC DPM |
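To see how these fields can be consumed, here is a minimal Python sketch that queries them from a Log Analytics workspace with the `azure-monitor-query` SDK. The workspace ID is a placeholder, and the `CoreAzureBackup` table name is an assumption based on the resource-specific naming of this data model.

```python
from datetime import timedelta

from azure.identity import DefaultAzureCredential
from azure.monitor.query import LogsQueryClient

client = LogsQueryClient(DefaultAzureCredential())

# List VM backup items with their protection state and front-end size.
query = """
CoreAzureBackup
| where BackupItemType == "VM"
| project BackupItemFriendlyName, BackupItemProtectionState, BackupItemFrontEndSize
"""

response = client.query_workspace(
    workspace_id="<log-analytics-workspace-id>",  # placeholder
    query=query,
    timespan=timedelta(days=1),
)

for table in response.tables:
    for row in table.rows:
        print(list(row))
```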
backup Backup Vault Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-vault-overview.md
Title: Overview of Backup vaults description: An overview of Backup vaults. Previously updated : 09/26/2022 Last updated : 10/19/2022
This section explains how to move a Backup vault (configured for Azure Backup) a
### Supported regions
-The vault move across subscriptions and resource groups is supported in all public regions.
+The vault move across subscriptions and resource groups is supported in all public and national cloud regions.
### Use Azure portal to move Backup vault to a different resource group
baremetal-infrastructure Concepts Baremetal Infrastructure Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/baremetal-infrastructure/concepts-baremetal-infrastructure-overview.md
Last updated 09/27/2021
Microsoft Azure offers a cloud infrastructure with a wide range of integrated cloud services to meet your business needs. In some cases, though, you may need to run services on bare metal servers without a virtualization layer. You may need root access and control over the operating system (OS). To meet this need, Azure offers BareMetal Infrastructure for several high-value, mission-critical applications. BareMetal Infrastructure is made up of dedicated BareMetal instances (compute instances). It features:-- High-performance storage appropriate to the application (NFS, ISCSI, and Fiber Channel). Storage can also be shared across BareMetal instances to enable features like scale-out clusters or high availability pairs with STONITH.
+- High-performance storage appropriate to the application (NFS, iSCSI, and Fibre Channel). Storage can also be shared across BareMetal instances to enable features like scale-out clusters or high availability pairs with failed-node-fencing capability.
- A set of function-specific virtual LANs (VLANs) in an isolated environment. This environment also has special VLANs you can access if you're running virtual machines (VMs) on one or more Azure Virtual Networks (VNets) in your Azure subscription. The entire environment is represented as a resource group in your Azure subscription.
bastion Bastion Connect Vm Rdp Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/bastion/bastion-connect-vm-rdp-linux.md
description: Learn how to use Azure Bastion to connect to Linux VM using RDP.
Previously updated : 08/08/2022 Last updated : 10/18/2022
In order to make a connection, the following roles are required:
* Reader role on the virtual machine * Reader role on the NIC with private IP of the virtual machine * Reader role on the Azure Bastion resource
+* Reader role on the virtual network of the target virtual machine (if the Bastion deployment is in a peered virtual network).
+ ### Ports
To connect to the Linux VM via RDP, you must have the following ports open on yo
## Next steps
-Read the [Bastion FAQ](bastion-faq.md).
+Read the [Bastion FAQ](bastion-faq.md) for more information.
bastion Bastion Connect Vm Ssh Windows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/bastion/bastion-connect-vm-ssh-windows.md
description: Learn how to use Azure Bastion to connect to Windows VM using SSH.
Previously updated : 08/18/2022 Last updated : 10/18/2022
In order to make a connection, the following roles are required:
* Reader role on the virtual machine * Reader role on the NIC with private IP of the virtual machine * Reader role on the Azure Bastion resource
+* Reader role on the virtual network of the target virtual machine (if the Bastion deployment is in a peered virtual network).
### Ports
In order to connect to the Windows VM via SSH, you must have the following ports
* Inbound port: SSH (22) *or* * Inbound port: Custom value (you will then need to specify this custom port when you connect to the VM via Azure Bastion)
+See the [Azure Bastion FAQ](bastion-faq.md) for additional requirements.
+ ### Supported configurations Currently, Azure Bastion only supports connecting to Windows VMs via SSH using **OpenSSH**.
center-sap-solutions Deploy S4hana https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/center-sap-solutions/deploy-s4hana.md
Title: Deploy S/4HANA infrastructure (preview)
-description: Learn how to deploy S/4HANA infrastructure with Azure Center for SAP solutions (ACSS) through the Azure portal. You can deploy High Availability (HA), non-HA, and single-server configurations.
+description: Learn how to deploy S/4HANA infrastructure with Azure Center for SAP solutions through the Azure portal. You can deploy High Availability (HA), non-HA, and single-server configurations.
Previously updated : 07/19/2022 Last updated : 10/19/2022 #Customer intent: As a developer, I want to deploy S/4HANA infrastructure using Azure Center for SAP solutions so that I can manage SAP workloads in the Azure portal.
[!INCLUDE [Preview content notice](./includes/preview.md)]
-In this how-to guide, you'll learn how to deploy S/4HANA infrastructure in *Azure Center for SAP solutions (ACSS)*. There are [three deployment options](#deployment-types): distributed with High Availability (HA), distributed non-HA, and single server.
+In this how-to guide, you'll learn how to deploy S/4HANA infrastructure in *Azure Center for SAP solutions*. There are [three deployment options](#deployment-types): distributed with High Availability (HA), distributed non-HA, and single server.
## Prerequisites
There are three deployment options that you can select for your infrastructure,
1. In the search bar, enter and select **Azure Center for SAP solutions**.
-1. On the ACSS landing page, select **Create a new SAP system**.
+1. On the Azure Center for SAP solutions landing page, select **Create a new SAP system**.
1. On the **Create Virtual Instance for SAP solutions** page, on the **Basics** tab, fill in the details for your project.
There are three deployment options that you can select for your infrastructure,
1. For **SAP FQDN**, provide the FQDN for your system, such as "sap.contoso.com".
-1. Under **User assigned managed identity**, provide the identity which ACSS will use to deploy infrastructure.
+1. Under **User assigned managed identity**, provide the identity which Azure Center for SAP solutions will use to deploy infrastructure.
1. For **Managed identity source**, choose if you want to create a new identity or use an existing identity.
There are three deployment options that you can select for your infrastructure,
1. Select **Next: Virtual machines**.
-1. In the **Virtual machines** tab, generate SKU size and total VM count recommendations for each SAP instance from ACSS.
+1. In the **Virtual machines** tab, generate SKU size and total VM count recommendations for each SAP instance from Azure Center for SAP solutions.
1. For **Generate Recommendation based on**, under **Get virtual machine recommendations**, select **SAP Application Performance Standard (SAPS)**.
There are three deployment options that you can select for your infrastructure,
The number of VMs for ASCS and Database instances aren't editable. The default number for each is **2**.
- ACSS automatically configures a database disk layout for the deployment. To view the layout for a single database server, make sure to select a VM SKU. Then, select **View disk configuration**. If there's more than one database server, the layout applies to each server.
+ Azure Center for SAP solutions automatically configures a database disk layout for the deployment. To view the layout for a single database server, make sure to select a VM SKU. Then, select **View disk configuration**. If there's more than one database server, the layout applies to each server.
1. Select **Next: Tags**.
-1. Optionally, enter tags to apply to all resources created by the ACSS process. These resources include the VIS, ASCS instances, Application Server instances, Database instances, VMs, disks, and NICs.
+1. Optionally, enter tags to apply to all resources created by the Azure Center for SAP solutions process. These resources include the VIS, ASCS instances, Application Server instances, Database instances, VMs, disks, and NICs.
1. Select **Review + Create**.
center-sap-solutions Get Quality Checks Insights https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/center-sap-solutions/get-quality-checks-insights.md
Title: Get quality checks and insights for a Virtual Instance for SAP solutions (preview)
-description: Learn how to get quality checks and insights for a Virtual Instance for SAP solutions (VIS) resource in Azure Center for SAP solutions (ACSS) through the Azure portal.
+description: Learn how to get quality checks and insights for a Virtual Instance for SAP solutions (VIS) resource in Azure Center for SAP solutions through the Azure portal.
Previously updated : 07/19/2022 Last updated : 10/19/2022 #Customer intent: As a developer, I want to use the quality checks feature so that I can learn more insights about virtual machines within my Virtual Instance for SAP resource.
[!INCLUDE [Preview content notice](./includes/preview.md)]
-The *Quality Insights* Azure workbook in *Azure Center for SAP solutions (ACSS)* provides insights about the SAP system resources. The feature is part of the monitoring capabilities built in to the *Virtual Instance for SAP solutions (VIS)*. These quality checks make sure that your SAP system uses Azure and SAP best practices for reliability and performance.
+The *Quality Insights* Azure workbook in *Azure Center for SAP solutions* provides insights about the SAP system resources. The feature is part of the monitoring capabilities built in to the *Virtual Instance for SAP solutions (VIS)*. These quality checks make sure that your SAP system uses Azure and SAP best practices for reliability and performance.
In this how-to guide, you'll learn how to use quality checks and insights to get more information about virtual machine (VM) configurations within your SAP system. ## Prerequisites -- An SAP system that you've [created with ACSS](deploy-s4hana.md) or [registered with ACSS](register-existing-system.md).
+- An SAP system that you've [created with Azure Center for SAP solutions](deploy-s4hana.md) or [registered with Azure Center for SAP solutions](register-existing-system.md).
## Open Quality Insights workbook
To open the workbook:
:::image type="content" source="media/get-quality-checks-insights/quality-insights.png" lightbox="media/get-quality-checks-insights/quality-insights.png" alt-text="Screenshot of Azure portal, showing the Quality Insights workbook page selected in the sidebar menu for a virtual Instance for SAP solutions."::: There are multiple sections in the workbook:-- Select the default **Advisor Recommendations** tab to [see the list of recommendations made by ACSS for the different instances in your VIS](#get-advisor-recommendations)
+- Select the default **Advisor Recommendations** tab to [see the list of recommendations made by Azure Center for SAP solutions for the different instances in your VIS](#get-advisor-recommendations)
- Select the **Virtual Machine** tab to [find information about the VMs in your VIS](#get-vm-information) - Select the **Configuration Checks** tab to [see configuration checks for your VIS](#run-configuration-checks) ## Get Advisor Recommendations
-The **Quality checks** feature in ACSS runs validation checks for all VIS resources. These quality checks validate the SAP system configurations follow the best practices recommended by SAP and Azure. If a VIS doesn't follow these best practices, you receive a recommendation from Azure Advisor.
+The **Quality checks** feature in Azure Center for SAP solutions runs validation checks for all VIS resources. These quality checks validate that the SAP system configurations follow the best practices recommended by SAP and Azure. If a VIS doesn't follow these best practices, you receive a recommendation from Azure Advisor.
The table in the **Advisor Recommendations** tab shows all the recommendations for ASCS, Application and Database instances in the VIS.
The following checks are run for each VIS:
> [!NOTE] > These quality checks run on all VIS instances at a regular frequency of 12 hours. The corresponding recommendations in Azure Advisor also refresh at the same 12-hour frequency.
-If you take action on one or more recommendations from ACSS, wait for the next refresh to see any new recommendations from Azure Advisor.
+If you take action on one or more recommendations from Azure Center for SAP solutions, wait for the next refresh to see any new recommendations from Azure Advisor.
## Get VM information
center-sap-solutions Install Software https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/center-sap-solutions/install-software.md
Title: Install SAP software (preview)
-description: Learn how to install software on your SAP system created using Azure Center for SAP solutions (ACSS).
+description: Learn how to install software on your SAP system created using Azure Center for SAP solutions.
Previously updated : 07/19/2022 Last updated : 10/19/2022 #Customer intent: As a developer, I want to install SAP software so that I can use Azure Center for SAP solutions.
[!INCLUDE [Preview content notice](./includes/preview.md)]
-After you've created infrastructure for your new SAP system using *Azure Center for SAP solutions (ACSS)*, you need to install the SAP software.
+After you've created infrastructure for your new SAP system using *Azure Center for SAP solutions*, you need to install the SAP software.
In this how-to guide, you'll learn how to upload and install all the required components in your Azure account. You can either [run a pre-installation script to automate the upload process](#option-1-upload-software-components-with-script) or [manually upload the components](#option-2-upload-software-components-manually). Then, you can [run the software installation wizard](#install-software).
In this how-to guide, you'll learn how to upload and install all the required co
## Supported software
-ACSS supports the following SAP software version: **S/4HANA 1909 SPS 03, S/4HANA 2020 SPS 03, S/4HANA 2021 ISS 00**.
+Azure Center for SAP solutions supports the following SAP software versions: **S/4HANA 1909 SPS 03, S/4HANA 2020 SPS 03, S/4HANA 2021 ISS 00**.
The following table shows operating system (OS) version compatibility with each supported SAP software version: | Publisher | Version | Generation SKU | Patch version name | Supported SAP Software Version |
The following components are necessary for the SAP installation:
- `jq` version 1.6 - `ansible` version 2.9.27 - `netaddr` version 0.8.0-- The SAP Bill of Materials (BOM), as generated by ACSS. These YAML files list all required SAP packages for the SAP software installation. There's a main BOM (`S41909SPS03_v0011ms.yaml`, `S42020SPS03_v0003ms.yaml`, `S4HANA_2021_ISS_v0001ms.yaml`) and there are dependent BOMs (`HANA_2_00_059_v0003ms.yaml`, `HANA_2_00_064_v0001ms.yaml` `SUM20SP14_latest.yaml`, `SWPM20SP12_latest.yaml`). They provide the following information:
+- The SAP Bill of Materials (BOM), as generated by Azure Center for SAP solutions. These YAML files list all required SAP packages for the SAP software installation. There's a main BOM (`S41909SPS03_v0011ms.yaml`, `S42020SPS03_v0003ms.yaml`, `S4HANA_2021_ISS_v0001ms.yaml`) and there are dependent BOMs (`HANA_2_00_059_v0003ms.yaml`, `HANA_2_00_064_v0001ms.yaml`, `SUM20SP14_latest.yaml`, `SWPM20SP12_latest.yaml`). They provide the following information (see the sketch after this list):
- The full name of the SAP package (`name`) - The package name with its file extension as downloaded (`archive`) - The checksum of the package as specified by SAP (`checksum`)
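To illustrate how the `name`, `archive`, and `checksum` fields can be used, here is a minimal Python sketch that verifies downloaded archives against a BOM file. The YAML layout assumed here (a `materials`/`media` list) and the SHA-256 algorithm are assumptions, not guaranteed details of the generated BOM files.

```python
import hashlib
from pathlib import Path

import yaml  # pip install pyyaml


def verify_bom(bom_file: str, download_dir: str) -> None:
    bom = yaml.safe_load(Path(bom_file).read_text())
    for item in bom["materials"]["media"]:  # assumed structure
        archive = Path(download_dir) / item["archive"]
        if not archive.exists():
            print(f"MISSING            {item['name']} ({item['archive']})")
            continue
        digest = hashlib.sha256()
        with archive.open("rb") as f:
            # Hash in chunks; SAP archives can be several GB.
            for chunk in iter(lambda: f.read(1 << 20), b""):
                digest.update(chunk)
        ok = digest.hexdigest().lower() == str(item["checksum"]).lower()
        print(f"{'OK' if ok else 'CHECKSUM MISMATCH':18} {item['name']}")


verify_bom("S41909SPS03_v0011ms.yaml", "/tmp/sap-media")
```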
You can use the following method to download and upload the SAP components to yo
You also can [run scripts to automate this process](#option-1-upload-software-components-with-script) instead. 1. Create a new Azure storage account for storing the software components.
-1. Grant the ACSS application *Azure SAP Workloads Management* **Storage Blob Data Reader** and **Reader and Data Access** role access to this storage account.
+1. Grant the Azure Center for SAP solutions application, *Azure SAP Workloads Management*, **Storage Blob Data Reader** and **Reader and Data Access** role access to this storage account.
1. Create a container within the storage account. You can choose any container name; for example, **sapbits**. 1. Create two folders within the container, named **deployervmpackages** and **sapfiles**. > [!WARNING]
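The container and folder steps above can also be scripted. The following Python sketch uses the `azure-storage-blob` SDK; the account URL is a placeholder, and it assumes the signed-in identity already has data-plane access to the storage account.

```python
from azure.identity import DefaultAzureCredential
from azure.storage.blob import BlobServiceClient

service = BlobServiceClient(
    account_url="https://<storage-account>.blob.core.windows.net",  # placeholder
    credential=DefaultAzureCredential(),
)

container = service.get_container_client("sapbits")
if not container.exists():
    container.create_container()

# Blob storage has no true directories; uploading under a prefix creates the
# "deployervmpackages" and "sapfiles" folders described above.
for folder in ("deployervmpackages", "sapfiles"):
    container.upload_blob(name=f"{folder}/.keep", data=b"", overwrite=True)
```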
Now, you can [install the SAP software](#install-software) using the installatio
## Install software
-To install the SAP software on Azure, use the ACSS installation wizard.
+To install the SAP software on Azure, use the Azure Center for SAP solutions installation wizard.
1. Sign in to the [Azure portal](https://portal.azure.com).
center-sap-solutions Manage Virtual Instance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/center-sap-solutions/manage-virtual-instance.md
Title: Manage a Virtual Instance for SAP solutions (preview)
-description: Learn how to configure a Virtual Instance for SAP solutions (VIS) resource in Azure Center for SAP solutions (ACSS) through the Azure portal.
+description: Learn how to configure a Virtual Instance for SAP solutions (VIS) resource in Azure Center for SAP solutions through the Azure portal.
Previously updated : 07/19/2022 Last updated : 10/19/2022 #Customer intent: As a developer, I want to configure my Virtual Instance for SAP solutions resource so that I can find system properties and connect to databases.
[!INCLUDE [Preview content notice](./includes/preview.md)]
-In this article, you'll learn how to view the *Virtual Instance for SAP solutions (VIS)* resource created in *Azure Center for SAP solutions (ACSS)* through the Azure portal. You can use these steps to find your SAP system's properties and connect parts of the VIS to other resources like databases.
+In this article, you'll learn how to view the *Virtual Instance for SAP solutions (VIS)* resource created in *Azure Center for SAP solutions* through the Azure portal. You can use these steps to find your SAP system's properties and connect parts of the VIS to other resources like databases.
## Prerequisites
To configure your VIS in the Azure portal:
1. On the **Azure Center for SAP solutions** overview page, search for and select **Virtual Instances for SAP solutions** in the sidebar menu. 1. On the **Virtual Instances for SAP solutions** page, select the VIS that you want to view.
- :::image type="content" source="media/configure-virtual-instance/select-vis.png" lightbox="media/configure-virtual-instance/select-vis.png" alt-text="Screenshot of Azure portal, showing the VIS page in the ACSS service with a table of available VIS resources.":::
+ :::image type="content" source="media/configure-virtual-instance/select-vis.png" lightbox="media/configure-virtual-instance/select-vis.png" alt-text="Screenshot of Azure portal, showing the VIS page in the Azure Center for SAP solutions service with a table of available VIS resources.":::
## Monitor VIS
In the sidebar menu, look under the section **SAP resources**:
## Connect to HANA database
-If you've deployed an SAP system using ACSS, [find the SAP system's main password and HANA database passwords](#find-sap-and-hana-passwords).
+If you've deployed an SAP system using Azure Center for SAP solutions, [find the SAP system's main password and HANA database passwords](#find-sap-and-hana-passwords).
The HANA database username is either `system` or `SYSTEM` for:
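For the connection itself, here is a minimal Python sketch using the SAP `hdbcli` driver. The host and SQL port (30013 for the SYSTEMDB of instance 00) are assumptions; the `SYSTEM` user and the password retrieved from the key vault are as described above.

```python
from hdbcli import dbapi  # SAP HANA Python client: pip install hdbcli

conn = dbapi.connect(
    address="<hana-database-host>",        # placeholder
    port=30013,                            # assumed: SYSTEMDB SQL port for instance 00
    user="SYSTEM",
    password="<password-from-key-vault>",  # placeholder, retrieved as described above
)

cursor = conn.cursor()
cursor.execute("SELECT DATABASE_NAME, ACTIVE_STATUS FROM M_DATABASES")
for row in cursor.fetchall():
    print(row)
conn.close()
```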
To delete a VIS:
1. Select **Delete** to delete the VIS. 1. Wait for the deletion operation to complete for the VIS and related resources.
-After you delete a VIS, you can register the SAP system again. Open ACSS in the Azure portal, and select **Register an existing SAP system**.
+After you delete a VIS, you can register the SAP system again. Open Azure Center for SAP solutions in the Azure portal, and select **Register an existing SAP system**.
## Next steps
center-sap-solutions Monitor Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/center-sap-solutions/monitor-portal.md
Title: Monitor SAP system from the Azure portal (preview)
-description: Learn how to monitor the health and status of your SAP system, along with important SAP metrics, using the Azure Center for SAP solutions (ACSS) within the Azure portal.
+description: Learn how to monitor the health and status of your SAP system, along with important SAP metrics, using the Azure Center for SAP solutions within the Azure portal.
Previously updated : 07/19/2022 Last updated : 10/19/2022 #Customer intent: As a developer, I want to set up monitoring for my Virtual Instance for SAP solutions, so that I can monitor the health and status of my SAP system in Azure Center for SAP solutions.
[!INCLUDE [Preview content notice](./includes/preview.md)]
-In this how-to guide, you'll learn how to monitor the health and status of your SAP system with *Azure Center for SAP solutions (ACSS)* through the Azure portal. The following capabilities are available for your *Virtual Instance for SAP solutions* resource:
+In this how-to guide, you'll learn how to monitor the health and status of your SAP system with *Azure Center for SAP solutions* through the Azure portal. The following capabilities are available for your *Virtual Instance for SAP solutions* resource:
- Monitor your SAP system, along with its instances and VMs. - Analyze important SAP infrastructure metrics.-- Create and/or register an instance of Azure Monitor for SAP solutions (AMS) to monitor SAP platform metrics.
+- Create and/or register an instance of Azure Monitor for SAP solutions to monitor SAP platform metrics.
## System health
-The *health* of an SAP system within ACSS is based on the status of its underlying instances. Codes for health are also determined by the collective impact of these instances on the performance of the SAP system.
+The *health* of an SAP system within Azure Center for SAP solutions is based on the status of its underlying instances. Codes for health are also determined by the collective impact of these instances on the performance of the SAP system.
Possible values for health are:
Possible values for health are:
## System status
-The *status* of an SAP system within ACSS indicates the current state of the system.
+The *status* of an SAP system within Azure Center for SAP solutions indicates the current state of the system.
Possible values for status are:
To check basic health and status settings:
1. On the page for the VIS, review the table of instances. There is an overview of health and status information for each VIS.
- :::image type="content" source="media/monitor-portal/all-vis-statuses.png" lightbox="media/monitor-portal/all-vis-statuses.png" alt-text="Screenshot of the ACSS service in the Azure portal, showing a page of all VIS resources with their health and status information.":::
+ :::image type="content" source="media/monitor-portal/all-vis-statuses.png" lightbox="media/monitor-portal/all-vis-statuses.png" alt-text="Screenshot of the Azure Center for SAP solutions service in the Azure portal, showing a page of all VIS resources with their health and status information.":::
1. Select the VIS you want to check.
To see information about SAP application server instances:
## Monitor SAP infrastructure
-ACSS enables you to analyze important SAP infrastructure metrics from the Azure portal.
+Azure Center for SAP solutions enables you to analyze important SAP infrastructure metrics from the Azure portal.
1. Sign in to the [Azure portal](https://portal.azure.com).
ACSS enables you to analyze important SAP infrastructure metrics from the Azure
## Configure Azure Monitor
-You can also set up or register AMS to monitor SAP platform-level metrics.
+You can also set up or register Azure Monitor for SAP solutions to monitor SAP platform-level metrics.
1. Sign in to the [Azure portal](https://portal.azure.com).
You can also set up or register AMS to monitor SAP platform-level metrics.
1. On the page for the VIS, select the VIS from the table.
-1. In the sidebar menu for the VIS, under **Monitoring**, select **Azure Monitor for SAP**.
+1. In the sidebar menu for the VIS, under **Monitoring**, select **Azure Monitor for SAP solutions**.
-1. Select whether you want to [create a new AMS instance](#create-new-ams-resource), or [register an existing AMS instance](#register-existing-ams-resource). If you don't see this option, you've already configured this setting.
+1. Select whether you want to [create a new Azure Monitor for SAP solutions instance](#create-new-azure-monitor-for-sap-solutions-resource), or [register an existing Azure Monitor for SAP solutions instance](#register-existing-azure-monitor-for-sap-solutions-resource). If you don't see this option, you've already configured this setting.
- :::image type="content" source="media/monitor-portal/monitoring-setup.png" lightbox="media/monitor-portal/monitoring-setup.png" alt-text="Screenshot of AMS page inside a VIS resource in the Azure portal, showing the option to create or register a new instance.":::
+ :::image type="content" source="media/monitor-portal/monitoring-setup.png" lightbox="media/monitor-portal/monitoring-setup.png" alt-text="Screenshot of Azure Monitor for SAP solutions page inside a VIS resource in the Azure portal, showing the option to create or register a new instance.":::
-1. After you create or register your AMS instance, you are redirected to the AMS instance.
+1. After you create or register your Azure Monitor for SAP solutions instance, you are redirected to the Azure Monitor for SAP solutions instance.
-### Create new AMS resource
+### Create new Azure Monitor for SAP solutions resource
-To configure a new AMS resource:
+To configure a new Azure Monitor for SAP solutions resource:
-1. On the **Create new AMS resource** page, select the **Basics** tab.
+1. On the **Create new Azure Monitor for SAP solutions resource** page, select the **Basics** tab.
- :::image type="content" source="media/monitor-portal/ams-creation.png" lightbox="media/monitor-portal/ams-creation.png" alt-text="Screenshot of AMS creation page, showing the Basics tab and required fields.":::
+ :::image type="content" source="media/monitor-portal/ams-creation.png" lightbox="media/monitor-portal/ams-creation.png" alt-text="Screenshot of Azure Monitor for SAP solutions creation page, showing the Basics tab and required fields.":::
1. Under **Project details**, configure your resource. 1. For **Subscription**, select your Azure subscription.
- 1. For **AMS resource group**, select the same resource group as the VIS.
+ 1. For **Azure Monitor for SAP solutions resource group**, select the same resource group as the VIS.
> [!IMPORTANT] > If you select a resource group that's different from the resource group of the VIS, the deployment fails.
-1. Under **AMS instance details**, configure your AMS instance.
+1. Under **Azure Monitor for SAP solutions instance details**, configure your Azure Monitor for SAP solutions instance.
- 1. For **Resource name**, enter a name for your AMS resource.
+ 1. For **Resource name**, enter a name for your Azure Monitor for SAP solutions resource.
1. For **Workload region**, select an Azure region for your workload.
To configure a new AMS resource:
1. Select the **Review + Create** tab.
-### Register existing AMS resource
+### Register existing Azure Monitor for SAP solutions resource
-To register an existing **AMS resource**, select the instance from the drop-down menu on the **Register AMS** page.
+To register an existing Azure Monitor for SAP solutions resource, select the instance from the drop-down menu on the registration page.
> [!NOTE]
-> You can only view and select the current version of AMS resources. AMS (classic) resources aren't available.
+> You can only view and select the current version of Azure Monitor for SAP solutions resources. Azure Monitor for SAP solutions (classic) resources aren't available.
- :::image type="content" source="media/monitor-portal/ams-registration.png" lightbox="media/monitor-portal/ams-registration.png" alt-text="Screenshot of AMS registration page, showing the selection of an existing AMS resource.":::
+ :::image type="content" source="media/monitor-portal/ams-registration.png" lightbox="media/monitor-portal/ams-registration.png" alt-text="Screenshot of Azure Monitor for SAP solutions registration page, showing the selection of an existing Azure Monitor for SAP solutions resource.":::
-## Unregister AMS from VIS
+## Unregister Azure Monitor for SAP solutions from VIS
> [!NOTE]
-> This operation only unregisters the AMS resource from the VIS. To delete the AMS resource, you need to delete the AMS instance.
+> This operation only unregisters the Azure Monitor for SAP solutions resource from the VIS. To delete the Azure Monitor for SAP solutions resource, you need to delete the Azure Monitor for SAP solutions instance.
-To remove the link between your AMS resource and your VIS:
+To remove the link between your Azure Monitor for SAP solutions resource and your VIS:
1. Sign in to the [Azure portal](https://portal.azure.com).
-1. In the sidebar menu, under **Monitoring**, select **Azure Monitor for SAP**.
+1. In the sidebar menu, under **Monitoring**, select **Azure Monitor for SAP solutions**.
-1. On the AMS page, select **Delete** to unregister the resource.
+1. On the Azure Monitor for SAP solutions page, select **Delete** to unregister the resource.
1. Wait for the confirmation message, **Azure Monitor for SAP solutions has been unregistered successfully**.
center-sap-solutions Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/center-sap-solutions/overview.md
Title: Azure Center for SAP solutions (preview)
-description: Azure Center for SAP solutions (ACSS) is an Azure offering that makes SAP a top-level workload on Azure. You can use ACSS to deploy or manage SAP systems on Azure seamlessly.
+description: Azure Center for SAP solutions is an Azure offering that makes SAP a top-level workload on Azure. You can use Azure Center for SAP solutions to deploy or manage SAP systems on Azure seamlessly.
Previously updated : 07/19/2022 Last updated : 10/19/2022 #Customer intent: As a developer, I want to learn about Azure Center for SAP solutions so that I can decide to use the service with a new or existing SAP system.
[!INCLUDE [Preview content notice](./includes/preview.md)]
-*Azure Center for SAP solutions (ACSS)* is an Azure offering that makes SAP a top-level workload on Azure. ACSS is an end-to-end solution that enables you to create and run SAP systems as a unified workload on Azure and provides a more seamless foundation for innovation. You can take advantage of the management capabilities for both new and existing Azure-based SAP systems.
+*Azure Center for SAP solutions* is an Azure offering that makes SAP a top-level workload on Azure. Azure Center for SAP solutions is an end-to-end solution that enables you to create and run SAP systems as a unified workload on Azure and provides a more seamless foundation for innovation. You can take advantage of the management capabilities for both new and existing Azure-based SAP systems.
-The guided deployment experience takes care of creating the necessary compute, storage and networking components needed to run your SAP system. ACSS then helps automate the installation of the SAP software according to Microsoft best practices.
+The guided deployment experience takes care of creating the compute, storage, and networking components needed to run your SAP system. Azure Center for SAP solutions then helps automate the installation of the SAP software according to Microsoft best practices.
-In ACSS, you either create a new SAP system or register an existing one, which then creates a *Virtual Instance for SAP solutions (VIS)*. The VIS brings SAP awareness to Azure by providing management capabilities, such as being able to see the status and health of your SAP systems. Another example is quality checks and insights, which allow you to know when your system isn't following documented best practices and standards.
+In Azure Center for SAP solutions, you either create a new SAP system or register an existing one, which then creates a *Virtual Instance for SAP solutions (VIS)*. The VIS brings SAP awareness to Azure by providing management capabilities, such as being able to see the status and health of your SAP systems. Another example is quality checks and insights, which allow you to know when your system isn't following documented best practices and standards.
-You can use ACSS to deploy the following types of SAP systems:
+You can use Azure Center for SAP solutions to deploy the following types of SAP systems:
- Single server - Distributed
For existing SAP systems that run on Azure, there's a simple registration experi
- SAP systems that run on Windows, SUSE and RHEL Linux operating systems - SAP systems that run on HANA, DB2, SQL Server, Oracle, Max DB, or SAP ASE databases
-ACSS brings services, tools and frameworks together to provide an end-to-end unified experience for deployment and management of SAP workloads on Azure, creating the foundation for you to build innovative solutions for your unique requirements.
+Azure Center for SAP solutions brings services, tools and frameworks together to provide an end-to-end unified experience for deployment and management of SAP workloads on Azure, creating the foundation for you to build innovative solutions for your unique requirements.
## What is a Virtual Instance for SAP solutions?
-When you use ACSS, you'll create a *Virtual Instance for SAP solutions (VIS)* resource. The VIS is a logical representation of an SAP system on Azure.
+When you use Azure Center for SAP solutions, you'll create a *Virtual Instance for SAP solutions (VIS)* resource. The VIS is a logical representation of an SAP system on Azure.
-Every time that you [create a new SAP system through ACSS](deploy-s4hana.md), or [register an existing SAP system to ACSS](register-existing-system.md), Azure creates a VIS. A VIS contains the metadata for the entire SAP system.
+Every time that you [create a new SAP system through Azure Center for SAP solutions](deploy-s4hana.md), or [register an existing SAP system to Azure Center for SAP solutions](register-existing-system.md), Azure creates a VIS. A VIS contains the metadata for the entire SAP system.
Each VIS consists of:
Each VIS consists of:
Inside the VIS, the SID is the parent resource. Your VIS resource is named after the SID of your SAP system. Any ASCS, Application Server, or database instances are child resources of the SID. The child resources are associated with one or more VM resources outside of the VIS. A standalone system has all three instances mapped to a single VM. A distributed system has one ASCS and one Database instance, with each mapped to a VM. High Availability (HA) deployments have the ASCS and Database instances mapped to multiple VMs to enable HA. A distributed or HA type SAP system can have multiple Application Server instances linked to their respective VMs.
-## What can you do with ACSS?
+## What can you do with Azure Center for SAP solutions?
After you create a VIS, you can:
After you create a VIS, you can:
## Next steps - [Create a network for a new VIS deployment](prepare-network.md)-- [Register an existing SAP system in ACSS](register-existing-system.md)
+- [Register an existing SAP system in Azure Center for SAP solutions](register-existing-system.md)
center-sap-solutions Prepare Network https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/center-sap-solutions/prepare-network.md
Title: Prepare network for infrastructure deployment (preview)
-description: Learn how to prepare a network for use with an S/4HANA infrastructure deployment with Azure Center for SAP solutions (ACSS) through the Azure portal.
+description: Learn how to prepare a network for use with an S/4HANA infrastructure deployment with Azure Center for SAP solutions through the Azure portal.
Previously updated : 07/19/2022 Last updated : 10/19/2022 #Customer intent: As a developer, I want to create a virtual network so that I can deploy S/4HANA infrastructure in Azure Center for SAP solutions.
[!INCLUDE [Preview content notice](./includes/preview.md)]
-In this how-to guide, you'll learn how to prepare a virtual network to deploy S/4 HANA infrastructure using *Azure Center for SAP solutions (ACSS)*. This article provides general guidance about creating a virtual network. Your individual environment and use case will determine how you need to configure your own network settings for use with a *Virtual Instance for SAP (VIS)* resource.
+In this how-to guide, you'll learn how to prepare a virtual network to deploy S/4HANA infrastructure using *Azure Center for SAP solutions*. This article provides general guidance about creating a virtual network. Your individual environment and use case will determine how you need to configure your own network settings for use with a *Virtual Instance for SAP (VIS)* resource.
-If you have an existing network that you're ready to use with ACSS, [go to the deployment guide](deploy-s4hana.md) instead of following this guide.
+If you have an existing network that you're ready to use with Azure Center for SAP solutions, [go to the deployment guide](deploy-s4hana.md) instead of following this guide.
## Prerequisites - An Azure subscription. - [Review the quotas for your Azure subscription](../azure-portal/supportability/view-quotas.md). If the quotas are low, you might need to create a support request before creating your infrastructure deployment. Otherwise, you might experience deployment failures or an **Insufficient quota** error. - It's recommended to have multiple IP addresses in the subnet or subnets before you begin deployment. For example, it's always better to have a `/26` mask instead of `/29`. -- Note the SAP Application Performance Standard (SAPS) and database memory size that you need to allow ACSS to size your SAP system. If you're not sure, you can also select the VMs. There are:
+- Note the SAP Application Performance Standard (SAPS) and database memory size that you need to allow Azure Center for SAP solutions to size your SAP system. If you're not sure, you can also select the VMs. There are:
- A single or cluster of ASCS VMs, which make up a single ASCS instance in the VIS. - A single or cluster of Database VMs, which make up a single Database instance in the VIS. - A single Application Server VM, which makes up a single Application instance in the VIS. Depending on the number of Application Servers being deployed or registered, there can be multiple application instances.
If you're using Red Hat for the VMs, [allowlist the Red Hat endpoints](../virtua
### Allowlist storage accounts
-ACSS needs access to the following storage accounts to install SAP software correctly:
+Azure Center for SAP solutions needs access to the following storage accounts to install SAP software correctly:
- The storage account where you're storing the SAP media that is required during software installation.-- The storage account created by ACSS in a managed resource group, which ACSS also owns and manages.
+- The storage account created by Azure Center for SAP solutions in a managed resource group, which Azure Center for SAP solutions also owns and manages.
There are multiple options to allow access to these storage accounts:
There are multiple options to allow access to these storage accounts:
### Allowlist Key Vault
-ACSS creates a key vault to store and access the secret keys during software installation. This key vault also stores the SAP system password. To allow access to this key vault, you can:
+Azure Center for SAP solutions creates a key vault to store and access the secret keys during software installation. This key vault also stores the SAP system password. To allow access to this key vault, you can:
- Allow internet connectivity - Configure a [**AzureKeyVault** service tag](../virtual-network/service-tags-overview.md#available-service-tags)
ACSS creates a key vault to store and access the secret keys during software ins
### Allowlist Azure AD
-ACSS uses Azure AD to get the authentication token for obtaining secrets from a managed key vault during SAP installation. To allow access to Azure AD, you can:
+Azure Center for SAP solutions uses Azure AD to get the authentication token for obtaining secrets from a managed key vault during SAP installation. To allow access to Azure AD, you can:
- Allow internet connectivity - Configure an [**AzureActiveDirectory** service tag](../virtual-network/service-tags-overview.md#available-service-tags). ### Allowlist Azure Resource Manager
-ACSS uses a managed identity for software installation. Managed identity authentication requires a call to the Azure Resource Manager endpoint. To allow access to this endpoint, you can:
+Azure Center for SAP solutions uses a managed identity for software installation. Managed identity authentication requires a call to the Azure Resource Manager endpoint. To allow access to this endpoint, you can:
- Allow internet connectivity - Configure an [**AzureResourceManager** service tag](../virtual-network/service-tags-overview.md#available-service-tags).
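As one way to configure the three service tags named above, the following Python sketch creates outbound HTTPS rules on a network security group through the Azure CLI. The resource group, NSG name, and priorities are placeholders, and it assumes the Azure CLI is installed and signed in.

```python
import subprocess

# Outbound 443 rules for the service tags referenced above.
rules = {
    "Allow-AzureKeyVault": "AzureKeyVault",
    "Allow-AzureActiveDirectory": "AzureActiveDirectory",
    "Allow-AzureResourceManager": "AzureResourceManager",
}

for priority, (name, tag) in enumerate(rules.items(), start=200):
    subprocess.run([
        "az", "network", "nsg", "rule", "create",
        "--resource-group", "contoso-sap-rg",   # placeholder
        "--nsg-name", "sap-subnet-nsg",          # placeholder
        "--name", name,
        "--priority", str(priority),
        "--direction", "Outbound",
        "--access", "Allow",
        "--protocol", "Tcp",
        "--source-address-prefixes", "VirtualNetwork",
        "--destination-address-prefixes", tag,
        "--destination-port-ranges", "443",
    ], check=True)
```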
center-sap-solutions Register Existing System https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/center-sap-solutions/register-existing-system.md
Title: Register existing SAP system (preview)
-description: Learn how to register an existing SAP system in Azure Center for SAP solutions (ACSS) through the Azure portal. You can visualize, manage, and monitor your existing SAP system through ACSS.
+description: Learn how to register an existing SAP system in Azure Center for SAP solutions through the Azure portal. You can visualize, manage, and monitor your existing SAP system through Azure Center for SAP solutions.
Previously updated : 07/19/2022 Last updated : 10/19/2022
-#Customer intent: As a developer, I want to register my existing SAP system so that I can use the system with Azure Center for SAP solutions (ACSS).
+#Customer intent: As a developer, I want to register my existing SAP system so that I can use the system with Azure Center for SAP solutions.
# Register existing SAP system (preview) [!INCLUDE [Preview content notice](./includes/preview.md)]
-In this how-to guide, you'll learn how to register an existing SAP system with *Azure Center for SAP solutions (ACSS)*. After you register an SAP system with ACSS, you can use its visualization, management and monitoring capabilities through the Azure portal. For example, you can:
+In this how-to guide, you'll learn how to register an existing SAP system with *Azure Center for SAP solutions*. After you register an SAP system with Azure Center for SAP solutions, you can use its visualization, management and monitoring capabilities through the Azure portal. For example, you can:
- View and track the SAP system as an Azure resource, called the *Virtual Instance for SAP solutions (VIS)*. - Get recommendations for your SAP infrastructure, based on quality checks that evaluate best practices for SAP on Azure.
In this how-to guide, you'll learn how to register an existing SAP system with *
- Check that you're trying to register a [supported SAP system configuration](#supported-systems) - Check that your Azure account has **Contributor** role access on the subscription or resource groups where you have the SAP system resources. - Register the **Microsoft.Workloads** Resource Provider in the subscription where you have the SAP system.-- A **User-assigned managed identity** which has **Contributor** role access to the Compute, Network and Storage resource groups of the SAP system. ACSS service uses this identity to discover your SAP system resources and register the system as a VIS resource.
+- A **User-assigned managed identity** which has **Contributor** role access to the Compute, Network and Storage resource groups of the SAP system. Azure Center for SAP solutions service uses this identity to discover your SAP system resources and register the system as a VIS resource.
- Make sure each virtual machine (VM) in the SAP system is currently running on Azure. These VMs include: - The ABAP SAP Central Services (ASCS) Server instance - The Application Server instance or instances
In this how-to guide, you'll learn how to register an existing SAP system with *
## Supported systems
-You can register SAP systems with ACSS that run on the following configurations:
+You can register SAP systems with Azure Center for SAP solutions that run on the following configurations:
- SAP NetWeaver or ABAP stacks - Windows, SUSE and RHEL Linux operating systems - HANA, DB2, SQL Server, Oracle, Max DB, and SAP ASE databases
-The following SAP system configurations aren't supported in ACSS:
+The following SAP system configurations aren't supported in Azure Center for SAP solutions:
- HANA Large Instance (HLI) - Systems with HANA Scale-out configuration
The following SAP system configurations aren't supported in ACSS:
- Systems distributed across peered virtual networks - Systems using IPv6 addresses
-## Enable ACSS resource permissions
+## Enable resource permissions
-When you register an existing SAP system as a VIS, ACSS service needs a **User-assigned managed identity** which has **Contributor** role access to the Compute, Network and Storage resource groups of the SAP system. Before you register an SAP system with ACSS, either [create a new user-assigned managed identity or update role access for an existing managed identity](#setup-user-assigned-managed-identity).
+When you register an existing SAP system as a VIS, Azure Center for SAP solutions service needs a **User-assigned managed identity** which has **Contributor** role access to the Compute, Network and Storage resource groups of the SAP system. Before you register an SAP system with Azure Center for SAP solutions, either [create a new user-assigned managed identity or update role access for an existing managed identity](#setup-user-assigned-managed-identity).
-ACSS uses this user-assigned managed identity to install VM extensions on the ASCS, Application Server and DB VMs. This step allows ACSS to discover the SAP system components, and other SAP system metadata. ACSS also needs this user-assigned managed identity to enable SAP system monitoring and management capabilities.
+Azure Center for SAP solutions uses this user-assigned managed identity to install VM extensions on the ASCS, Application Server and DB VMs. This step allows Azure Center for SAP solutions to discover the SAP system components, and other SAP system metadata. Azure Center for SAP solutions also needs this user-assigned managed identity to enable SAP system monitoring and management capabilities.
### Setup User-assigned managed identity To provide permissions to the SAP system resources to a user-assigned managed identity:
-1. [Create a new user-assigned managed identity](https://learn.microsoft.com/azure/active-directory/managed-identities-azure-resources/how-manage-user-assigned-managed-identities?pivots=identity-mi-methods-azp#create-a-user-assigned-managed-identity) if needed or use an existing one.
-1. [Assign **Contributor** role access](https://learn.microsoft.com/azure/active-directory/managed-identities-azure-resources/how-manage-user-assigned-managed-identities?pivots=identity-mi-methods-azp#manage-access-to-user-assigned-managed-identities) to the user-assigned managed identity on all Resource Groups in which the SAP system resources exist. That is, Compute, Network and Storage Resource Groups.
-1. Once the permissions are assigned, this managed identity can be used in ACSS to register and manage SAP systems.
+1. [Create a new user-assigned managed identity](../active-directory/managed-identities-azure-resources/how-manage-user-assigned-managed-identities.md#create-a-user-assigned-managed-identity) if needed or use an existing one.
+1. [Assign **Contributor** role access](../active-directory/managed-identities-azure-resources/how-manage-user-assigned-managed-identities.md#manage-access-to-user-assigned-managed-identities) to the user-assigned managed identity on all Resource Groups in which the SAP system resources exist. That is, Compute, Network and Storage Resource Groups.
+1. Once the permissions are assigned, this managed identity can be used in Azure Center for SAP solutions to register and manage SAP systems.
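The two steps above can also be scripted. The following Python sketch drives the Azure CLI to create the identity and assign the Contributor role on each resource group; the identity and resource group names are placeholders, and it assumes the Azure CLI is installed and signed in.

```python
import json
import subprocess


def az(*args: str) -> dict:
    """Run an Azure CLI command and return its JSON output."""
    out = subprocess.run(["az", *args, "--output", "json"],
                         check=True, capture_output=True, text=True)
    return json.loads(out.stdout)


# 1. Create (or reuse) the user-assigned managed identity.
identity = az("identity", "create",
              "--resource-group", "contoso-sap-rg",      # placeholder
              "--name", "sap-deployment-identity")        # placeholder

# 2. Grant it Contributor on each resource group holding SAP resources.
for rg in ("contoso-sap-compute-rg", "contoso-sap-network-rg", "contoso-sap-storage-rg"):
    scope = az("group", "show", "--name", rg)["id"]
    az("role", "assignment", "create",
       "--assignee-object-id", identity["principalId"],
       "--assignee-principal-type", "ServicePrincipal",
       "--role", "Contributor",
       "--scope", scope)
```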
## Register SAP system
-To register an existing SAP system in ACSS:
+To register an existing SAP system in Azure Center for SAP solutions:
-1. Sign in to the [Azure portal](https://portal.azure.com). Make sure to sign in with an Azure account that has **Contributor** role access to the subscription or resource groups where the SAP system exists. For more information, see the [resource permissions explanation](#enable-acss-resource-permissions).
+1. Sign in to the [Azure portal](https://portal.azure.com). Make sure to sign in with an Azure account that has **Contributor** role access to the subscription or resource groups where the SAP system exists. For more information, see the [resource permissions explanation](#enable-resource-permissions).
1. Search for and select **Azure Center for SAP solutions** in the Azure portal's search bar. 1. On the **Azure Center for SAP solutions** page, select **Register an existing SAP system**.
- :::image type="content" source="media/register-existing-system/register-button.png" alt-text="Screenshot of ACSS service overview page in the Azure portal, showing button to register an existing SAP system." lightbox="media/register-existing-system/register-button.png":::
+ :::image type="content" source="media/register-existing-system/register-button.png" alt-text="Screenshot of Azure Center for SAP solutions service overview page in the Azure portal, showing button to register an existing SAP system." lightbox="media/register-existing-system/register-button.png":::
1. On the **Basics** tab of the **Register existing SAP system** page, provide information about the SAP system. 1. For **ASCS virtual machine**, select **Select ASCS virtual machine** and select the ASCS VM resource.
To register an existing SAP system in ACSS:
1. For **SAP product**, select the SAP system product from the drop-down menu. 1. For **Environment**, select the environment type from the drop-down menu. For example, production or non-production environments. 1. For **Managed identity source**, select **Use existing user-assigned managed identity** option.
- 1. For **Managed identity name**, select a **User-assigned managed identity** which has **Contributor** role access to the [resources of this SAP system.](#enable-acss-resource-permissions)
+ 1. For **Managed identity name**, select a **User-assigned managed identity** which has **Contributor** role access to the [resources of this SAP system.](#enable-resource-permissions)
1. Select **Review + register** to discover the SAP system and begin the registration process.
- :::image type="content" source="media/register-existing-system/registration-page.png" alt-text="Screenshot of ACSS registration page, highlighting mandatory fields to identify the existing SAP system." lightbox="media/register-existing-system/registration-page.png":::
+ :::image type="content" source="media/register-existing-system/registration-page.png" alt-text="Screenshot of Azure Center for SAP solutions registration page, highlighting mandatory fields to identify the existing SAP system." lightbox="media/register-existing-system/registration-page.png":::
1. On the **Review + register** pane, make sure your settings are correct. Then, select **Register**.
To register an existing SAP system in ACSS:
You can now review the VIS resource in the Azure portal. The resource page shows the SAP system resources, and information about the system.
-If the registration doesn't succeed, see [what to do when an SAP system registration fails in ACSS](#fix-registration-failure).
+If the registration doesn't succeed, see [what to do when an SAP system registration fails in Azure Center for SAP solutions](#fix-registration-failure).
## Fix registration failure
-The process of registering an SAP system in ACSS might fail for the following reasons:
+The process of registering an SAP system in Azure Center for SAP solutions might fail for the following reasons:
- The selected ASCS VM and SID don't match. Make sure to select the correct ASCS VM for the SAP system that you chose, and vice versa. - The ASCS instance or VM isn't running. Make sure the instance and VM are in the **Running** state.
The process of registering an SAP system in ACSS might fail for the following re
- Command to start up sapstartsrv process on SAP VMs: /usr/sap/hostctrl/exe/hostexecstart -start - At least one Application Server and the Database aren't running for the SAP system that you chose. Make sure the Application Servers and Database VMs are in the **Running** state. - The user trying to register the SAP system doesn't have **Contributor** role permissions. For more information, see the [prerequisites for registering an SAP system](#prerequisites).-- The user-assigned managed identity doesn't have **Contributor** role access to the Azure subscription or resource groups where the SAP system exists. For more information, see [how to enable ACSS resource permissions](#enable-acss-resource-permissions).
+- The user-assigned managed identity doesn't have **Contributor** role access to the Azure subscription or resource groups where the SAP system exists. For more information, see [how to enable Azure Center for SAP solutions resource permissions](#enable-resource-permissions).
There's also a known issue with registering *S/4HANA 2021* version SAP systems. You might receive the error message: **Failed to discover details from the Db VM**. This error happens when the Database identifier is incorrectly configured on the SAP system. One possible cause is that the Application Server profile parameter `rsdb/dbid` has an incorrect identifier for the HANA Database. To fix the error:
center-sap-solutions Start Stop Sap Systems https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/center-sap-solutions/start-stop-sap-systems.md
Title: Start and stop SAP systems (preview)
-description: Learn how to start or stop an SAP system through the Virtual Instance for SAP solutions (VIS) resource in Azure Center for SAP solutions (ACSS) through the Azure portal.
+description: Learn how to start or stop an SAP system through the Virtual Instance for SAP solutions (VIS) resource in Azure Center for SAP solutions through the Azure portal.
Previously updated : 07/19/2022 Last updated : 10/19/2022
-#Customer intent: As a developer, I want to start and stop SAP systems in ACSS so that I can control instances through the Virtual Instance for SAP resource.
+#Customer intent: As a developer, I want to start and stop SAP systems in Azure Center for SAP solutions so that I can control instances through the Virtual Instance for SAP resource.
# Start and stop SAP systems (preview) [!INCLUDE [Preview content notice](./includes/preview.md)]
-In this how-to guide, you'll learn to start and stop your SAP systems through the *Virtual Instance for SAP solutions (VIS)* resource in *Azure Center for SAP solutions (ACSS)*.
+In this how-to guide, you'll learn to start and stop your SAP systems through the *Virtual Instance for SAP solutions (VIS)* resource in *Azure Center for SAP solutions*.
Through the Azure portal, you can start and stop:
Through the Azure portal, you can start and stop:
## Prerequisites -- An SAP system that you've [created in ACSS](prepare-network.md) or [registered with ACSS](register-existing-system.md).
+- An SAP system that you've [created in Azure Center for SAP solutions](prepare-network.md) or [registered with Azure Center for SAP solutions](register-existing-system.md).
- For the start operation to work, all virtual machines (VMs) inside the SAP system must be running. This capability starts or stops the SAP application instances, not the VMs that make up the SAP system resources. - The `sapstartsrv` service must be running on all VMs related to the SAP system. - For HA deployments, the HA interface cluster connector for SAP (`sap_vendor_cluster_connector`) must be installed on the ASCS instance. For more information, see the [SUSE connector specifications](https://www.suse.com/c/sap-netweaver-suse-cluster-integration-new-sap_suse_cluster_connector-version-3-0-0/) and [RHEL connector specifications](https://access.redhat.com/solutions/3606101).
center-sap-solutions View Cost Analysis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/center-sap-solutions/view-cost-analysis.md
Title: View post-deployment cost analysis in Azure Center for SAP solutions (preview)
-description: Learn how to view the cost of running an SAP system through the Virtual Instance for SAP solutions (VIS) resource in Azure Center for SAP solutions (ACSS).
+description: Learn how to view the cost of running an SAP system through the Virtual Instance for SAP solutions (VIS) resource in Azure Center for SAP solutions.
Previously updated : 09/23/2022 Last updated : 10/19/2022 #Customer intent: As an SAP Basis Admin, I want to understand the cost incurred for running SAP systems on Azure.
[!INCLUDE [Preview content notice](./includes/preview.md)]
-In this how-to guide, you'll learn how to view the running cost of your SAP systems through the *Virtual Instance for SAP solutions (VIS)* resource in *Azure Center for SAP solutions (ACSS)*.
+In this how-to guide, you'll learn how to view the running cost of your SAP systems through the *Virtual Instance for SAP solutions (VIS)* resource in *Azure Center for SAP solutions*.
After you deploy or register an SAP system as a VIS resource, you can [view the cost of running that SAP system on the VIS resource's page](#view-cost-analysis). This feature shows the post-deployment running costs in the context of your SAP system. When you have Azure resources of multiple SAP systems in a single resource group, you no longer need to analyze the cost for each system. Instead, you can easily view the system-level cost from the VIS resource. ## How does cost analysis work?
-When you deploy infrastructure for a new SAP system with ACSS or register an existing system with ACSS, the **costanalysis-parent** tag is added to all virtual machines (VMs), disks, and load balancers related to that SAP system. The cost is determined by the total cost of all the Azure resources in the system with the **costanalysis-parent** tag.
+When you deploy infrastructure for a new SAP system with Azure Center for SAP solutions or register an existing system with Azure Center for SAP solutions, the **costanalysis-parent** tag is added to all virtual machines (VMs), disks, and load balancers related to that SAP system. The cost is determined by the total cost of all the Azure resources in the system with the **costanalysis-parent** tag.
Whenever there are changes to the SAP system, such as the addition or removal of Application Server Instance VMs, tags are updated on the relevant Azure resources. > [!NOTE]
cognitive-services Migrate Face Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/how-to/migrate-face-data.md
Title: "Migrate your face data across subscriptions - Face"
description: This guide shows you how to migrate your stored face data from one Face subscription to another. -+ Last updated 02/22/2021-+ ms.devlang: csharp
cognitive-services Intro To Spatial Analysis Public Preview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/intro-to-spatial-analysis-public-preview.md
Title: What is Spatial Analysis?
description: This document explains the basic concepts and features of the Azure Spatial Analysis container. -+ -+
cognitive-services Overview Image Analysis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/overview-image-analysis.md
For a more structured approach, follow a Learn module for Image Analysis.
You can analyze images to provide insights about their visual features and characteristics. All of the features in the list below are provided by the [Analyze Image](https://westcentralus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-2/operations/56f91f2e778daf14a499f21b) API. Follow a [quickstart](./quickstarts-sdk/image-analysis-client-library.md) to get started.
+### Extract text from images (preview)
+
+Version 4.0 preview of Image Analysis offers the ability to extract text from images. Contextual information like line number and position is also returned. Text reading is also available through the main [OCR service](overview-ocr.md), but in Image Analysis this feature is optimized for image inputs as opposed to documents. [Reading text in images](concept-ocr.md)
+
+### Detect people in images (preview)
+
+Version 4.0 preview of Image Analysis offers the ability to detect people appearing in images. The bounding box coordinates of each detected person are returned, along with a confidence score. [People detection](concept-people-detection.md)
### Tag visual features
Analyze the contents of an image to return the coordinates of the *area of inter
You can use Computer Vision to [detect adult content](concept-detecting-adult-content.md) in an image and return confidence scores for different classifications. The threshold for flagging content can be set on a sliding scale to accommodate your preferences.
-### Read text in images (preview)
-
-Version 4.0 of Image Analysis offers the ability to extract text from images. Contextual information like line number and position is also returned. Text reading is also available through the main [OCR service](overview-ocr.md), but in Image Analysis this feature is optimized for image inputs as opposed to documents. [Reading text in images](concept-ocr.md)
-
-### Detect people in images (preview)
-
-Version 4.0 of Image Analysis offers the ability to detect people appearing in images. The bounding box coordinates of each detected person are returned, along with a confidence score. [People detection](concept-people-detection.md)
- ## Image requirements #### [Version 3.2](#tab/3-2)
cognitive-services Speech Container Howto https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/speech-container-howto.md
Use the [docker run](https://docs.docker.com/engine/reference/commandline/run/)
## Run the container in disconnected environments
-Starting in container version 3.0.0, select customers can run speech-to-text containers in an environment without internet accessibility. For more information, see [Run Cognitive Services containers in disconnected environments](../containers/disconnected-containers.md).
+You must request access to use containers disconnected from the internet. For more information, see [Request access to use containers in disconnected environments](../containers/disconnected-containers.md#request-access-to-use-containers-in-disconnected-environments).
-Starting in container version 2.0.0, select customers can run neural-text-to-speech containers in an environment without internet accessibility. For more information, see [Run Cognitive Services containers in disconnected environments](../containers/disconnected-containers.md).
+> [!NOTE]
+> For general container requirements, see [Container requirements and recommendations](#container-requirements-and-recommendations).
# [Speech-to-text](#tab/stt)
cognitive-services Cognitive Services Limited Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/cognitive-services-limited-access.md
Title: Limited Access features for Cognitive Services
description: Azure Cognitive Services that are available with Limited Access are described below. -+ Last updated 06/16/2022-+ # Limited Access features for Cognitive Services
cognitive-services Create Account Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/create-account-bicep.md
Title: Create an Azure Cognitive Services resource using Bicep | Microsoft Docs
description: Create an Azure Cognitive Service resource with Bicep. keywords: cognitive services, cognitive solutions, cognitive intelligence, cognitive artificial intelligence -+ Last updated 04/29/2022-+
cognitive-services Authoring https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/question-answering/how-to/authoring.md
Last updated 11/23/2021
The question answering Authoring API is used to automate common tasks like adding new question answer pairs, as well as creating, publishing, and maintaining projects/knowledge bases. > [!NOTE]
-> Authoring functionality is available via the REST API and [Authoring SDK (preview)](/dotnet/api/overview/azure/ai.language.questionanswering-readme-pre). This article provides examples of using the REST API with cURL. For full documentation of all parameters and functionality available consult the [REST API reference content](/rest/api/cognitiveservices/questionanswering/question-answering-projects).
+> Authoring functionality is available via the REST API and [Authoring SDK (preview)](/dotnet/api/overview/azure/ai.language.questionanswering-readme). This article provides examples of using the REST API with cURL. For full documentation of all parameters and functionality available consult the [REST API reference content](/rest/api/cognitiveservices/questionanswering/question-answering-projects).
## Prerequisites
cognitive-services Power Virtual Agents https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/question-answering/tutorials/power-virtual-agents.md
In this tutorial, you learn how to:
> * Publish Power Virtual Agents > * Test Power Virtual Agents, and receive an answer from your Question Answering project
-> [!Note]
+> [!NOTE]
> The QnA Maker service is being retired on the 31st of March, 2025. A newer version of the question and answering capability is now available as part of [Azure Cognitive Service for Language](/azure/cognitive-services/language-service/). For question answering capabilities within the Language Service, see [question answering](../overview.md). Starting 1st October, 2022, you won't be able to create new QnA Maker resources. For information on migrating existing QnA Maker knowledge bases to question answering, consult the [migration guide](../how-to/migrate-qnamaker.md). ## Create and publish a project
cognitive-services Call Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/text-analytics-for-health/how-to/call-api.md
By default, Text Analytics for health will use the latest available AI model on
| Supported Versions | latest version | |--|--|
+| `2022-08-15-preview` | `2022-08-15-preview` |
| `2022-03-01` | `2022-03-01` | | `2021-05-15` | `2021-05-15` |
The [Text Analytics for health container](use-containers.md) uses separate model
### Input languages
-Currently the Text Analytics for health hosted API only [supports](../language-support.md) the English language. Additional languages are currently in preview when deploying the API in a container, as detailed [under Text Analytics for health languages support](../language-support.md).
+Text Analytics for health supports English in addition to multiple languages that are currently in preview. You can use the hosted API or deploy the API in a container, as detailed [under Text Analytics for health languages support](../language-support.md).
## Submitting data
Analysis is performed upon receipt of the request. If you send a request using t
[!INCLUDE [asynchronous-result-availability](../../includes/async-result-availability.md)]
+## Submitting a Fast Healthcare Interoperability Resources (FHIR) request
+
+To receive your result using the **FHIR** structure, you must send the FHIR version in the API request body. You can also send the **document type** as a parameter to the FHIR API request body. If the request doesn't specify a document type, the value is set to none. A sample request body is shown after the following table.
+
+| Parameter Name | Type | Value |
+|--|--|--|
+| fhirVersion | string | `4.0.1` |
+| documentType | string | `ClinicalTrial`, `Consult`, `DischargeSummary`, `HistoryAndPhysical`, `Imaging`, `None`, `Pathology`, `ProcedureNote`, `ProgressNote`|
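
For illustration, a sketch of a request body that supplies both parameters is shown below. The placement of `fhirVersion` and `documentType` inside the task's `parameters` object is an assumption based on the table above; confirm the exact request shape against the REST API reference for your API version.

```json
{
  "analysisInput": {
    "documents": [
      {
        "id": "1",
        "language": "en",
        "text": "The patient was prescribed 200 mg of ibuprofen."
      }
    ]
  },
  "tasks": [
    {
      "taskName": "analyze with fhir",
      "kind": "Healthcare",
      "parameters": {
        "fhirVersion": "4.0.1",
        "documentType": "DischargeSummary"
      }
    }
  ]
}
```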
+++ ## Getting results from the feature Depending on your API request, and the data you submit to the Text Analytics for health, you will get:
cognitive-services Language Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/text-analytics-for-health/language-support.md
Use this article to learn which natural languages are supported by Text Analytic
## Hosted API Service
-The hosted API service supports English language, model version 03-01-2022.
+The hosted API service supports the English language with model version 03-01-2022. With model version 2022-08-15-preview, the following languages are supported: English, Spanish, French, German, Italian, Portuguese, and Hebrew.
+
+When structuring the API request, the relevant language tags must be added for these languages:
+
+```
+English - "en"
+Spanish - "es"
+French - "fr"
+German - "de"
+Italian - "it"
+Portuguese - "pt"
+Hebrew - "he"
+```
+```json
+
+{
+ "analysisInput": {
+ "documents": [
+ {
+ "text": "El médico prescrió 200 mg de ibuprofeno.",
+ "language": "es",
+ "id": "1"
+ }
+ ]
+ },
+ "tasks": [
+ {
+ "taskName": "analyze 1",
+        "kind": "Healthcare"
+ }
+ ]
+}
+```
## Docker container
cognitive-services Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/text-analytics-for-health/quickstart.md
zone_pivot_groups: programming-languages-text-analytics
# Quickstart: Using Text Analytics for health client library and REST API > [!IMPORTANT]
-> Fast Healthcare Interoperability Resources (FHIR) structuring is available for preview using the Language REST API. The client libraries are not currently supported.
+> Fast Healthcare Interoperability Resources (FHIR) structuring is available for preview using the Language REST API. The client libraries are not currently supported. [Learn more](./how-to/call-api.md) on how to use FHIR structuring in your API call.
::: zone pivot="programming-language-csharp"
cognitive-services Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/whats-new.md
Azure Cognitive Service for Language is updated on an ongoing basis. To stay up-
* Expanded language support for: * [Sentiment analysis](./sentiment-opinion-mining/language-support.md) * [Key phrase extraction](./key-phrase-extraction/language-support.md)
- * [Named entity recognition](./key-phrase-extraction/language-support.md)
+ * [Named entity recognition](./named-entity-recognition/language-support.md)
+ * [Text Analytics for health](./text-analytics-for-health/language-support.md)
* [Multi-region deployment](./concepts/custom-features/multi-region-deployment.md) and [project asset versioning](./concepts/custom-features/project-versioning.md) for: * [Conversational language understanding](./conversational-language-understanding/overview.md) * [Orchestration workflow](./orchestration-workflow/overview.md)
Azure Cognitive Service for Language is updated on an ongoing basis. To stay up-
* [Conversational language understanding](./conversational-language-understanding/service-limits.md#regional-availability) * [Orchestration workflow](./orchestration-workflow/service-limits.md#regional-availability) * [Custom text classification](./custom-text-classification/service-limits.md#regional-availability)
- * [Custom named entity recognition](./custom-named-entity-recognition/service-limits.md#regional-availability)
+ * [Custom named entity recognition](./custom-named-entity-recognition/service-limits.md#regional-availability)
+* Document type as an input supported for [Text Analytics for health](./text-analytics-for-health/how-to/call-api.md) FHIR requests
## September 2022
cognitive-services Manage Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/manage-resources.md
Title: Recover deleted Cognitive Services resource
description: This article provides instructions on how to recover an already-deleted Cognitive Services resource. -+ Last updated 07/02/2021-+ # Recover deleted Cognitive Services resources
cognitive-services Responsible Use Of Ai Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/responsible-use-of-ai-overview.md
Title: Overview of Responsible use of AI
description: Azure Cognitive Services provides information and guidelines on how to responsibly use our AI services in applications. Below are the links to articles that provide this guidance for the different services within the Cognitive Services suite. -+ Last updated 1/10/2022-+ # Responsible use of AI with Cognitive Services
cognitive-services Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Cognitive Services
description: Lists Azure Policy Regulatory Compliance controls available for Azure Cognitive Services. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Last updated 10/12/2022 --++
communication-services Credentials Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/credentials-best-practices.md
leaveChatBtn.addEventListener('click', function() {
}); ```
+If you want to cancel subsequent refresh tasks, [dispose](#clean-up-resources) of the Credential object.
+ ### Clean up resources
-Since the Credential object can be passed to multiple Chat or Calling client instances, the SDK will make no assumptions about its lifetime and leaves the responsibility of its disposal to the developer. It's up to the Communication Services applications to dispose the Credential instance when it's no longer needed. Disposing the credential is also the recommended way of canceling scheduled refresh actions when the proactive refreshing is enabled.
+Since the Credential object can be passed to multiple Chat or Calling client instances, the SDK will make no assumptions about its lifetime and leaves the responsibility of its disposal to the developer. It's up to the Communication Services applications to dispose the Credential instance when it's no longer needed. Disposing the credential will also cancel scheduled refresh actions when the proactive refreshing is enabled.
Call the `.dispose()` function.
const chatClient = new ChatClient("<endpoint-url>", tokenCredential);
tokenCredential.dispose() ```
+## Handle a sign-out
+
+Depending on your scenario, you may want to sign a user out from one or more services:
+
+- To sign a user out from a single service, [dispose](#clean-up-resources) of the Credential object.
+- To sign a user out from multiple services, implement a signaling mechanism to notify all services to [dispose](#clean-up-resources) of the Credential object, and additionally, [revoke all access tokens](../quickstarts/access-tokens.md?tabs=windows&pivots=programming-language-javascript#revoke-access-tokens) for a given identity.
+ ## Next steps
communication-services Manage Video https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/how-tos/calling-sdk/manage-video.md
Last updated 08/10/2021
-zone_pivot_groups: acs-web-ios-android
+zone_pivot_groups: acs-plat-web-ios-android-windows
#Customer intent: As a developer, I want to manage video calls with the acs sdks so that I can create a calling application that provides video capabilities.
Learn how to manage video calls with the Azure Communication Services SDKS. We'l
[!INCLUDE [Manage Video Calls iOS](./includes/manage-video/manage-video-ios.md)] ::: zone-end + ## Next steps - [Learn how to manage calls](./manage-calls.md) - [Learn how to record calls](./record-calls.md)
confidential-computing Quick Create Marketplace https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/confidential-computing/quick-create-marketplace.md
If you don't have an Azure subscription, [create an account](https://azure.micro
1. Fill in the following information in the Basics tab:
- * **Authentication type**: Select **SSH public key** if you're creating a Linux VM.
+ * **Authentication type**: Select **SSH public key** if you're creating a Linux VM.
> [!NOTE] > You have the choice of using an SSH public key or a Password for authentication. SSH is more secure. For instructions on how to generate an SSH key, see [Create SSH keys on Linux and Mac for Linux VMs in Azure](../virtual-machines/linux/mac-create-ssh-keys.md).
If you don't have an Azure subscription, [create an account](https://azure.micro
## Connect to the Linux VM
-If you already use a BASH shell, connect to the Azure VM using the **ssh** command. In the following command, replace the VM user name and IP address to connect to your Linux VM.
+Open your SSH client of choice, like Bash on Linux or PowerShell on Windows. The `ssh` command is typically included in Linux, macOS, and Windows. If you are using Windows 7 or older, where Win32 OpenSSH is not included by default, consider installing [WSL](/windows/wsl/about) or using [Azure Cloud Shell](../cloud-shell/overview.md) from the browser. In the following command, replace the VM user name and IP address to connect to your Linux VM.
```bash ssh azureadmin@40.55.55.555
You can find the Public IP address of your VM in the Azure portal, under the Ove
:::image type="content" source="media/quick-create-portal/public-ip-virtual-machine.png" alt-text="IP address in Azure portal":::
-If you're running on Windows and don't have a BASH shell, install an SSH client, such as PuTTY.
-
-1. [Download and install PuTTY](https://www.chiark.greenend.org.uk/~sgtatham/putty/download.html).
-
-1. Run PuTTY.
-
-1. On the PuTTY configuration screen, enter your VM's public IP address.
-
-1. Select **Open** and enter your username and password at the prompts.
-
-For more information about connecting to Linux VMs, see [Create a Linux VM on Azure using the Portal](../virtual-machines/linux/quick-create-portal.md).
-
-> [!NOTE]
-> If you see a PuTTY security alert about the server's host key not being cached in the registry, choose from the following options. If you trust this host, select **Yes** to add the key to PuTTy's cache and continue connecting. If you want to carry on connecting just once, without adding the key to the cache, select **No**. If you don't trust this host, select **Cancel** to abandon the connection.
- ## Intel SGX Drivers > [!NOTE]
confidential-computing Quick Create Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/confidential-computing/quick-create-portal.md
If you don't have an Azure subscription, [create an account](https://azure.micro
## Connect to the Linux VM
-If you already use a BASH shell, connect to the Azure VM using the **ssh** command. In the following command, replace the VM user name and IP address to connect to your Linux VM.
+Open your SSH client of choice, like Bash on Linux or PowerShell on Windows. The `ssh` command is typically included in Linux, macOS, and Windows. If you are using Windows 7 or older, where Win32 OpenSSH is not included by default, consider installing [WSL](/windows/wsl/about) or using [Azure Cloud Shell](../cloud-shell/overview.md) from the browser. In the following command, replace the VM user name and IP address to connect to your Linux VM.
```bash ssh azureadmin@40.55.55.555
You can find the Public IP address of your VM in the Azure portal, under the Ove
:::image type="content" source="media/quick-create-portal/public-ip-virtual-machine.png" alt-text="IP address in Azure portal":::
-If you're running on Windows and don't have a BASH shell, install an SSH client, such as PuTTY.
-
-1. [Download and install PuTTY](https://www.chiark.greenend.org.uk/~sgtatham/putty/latest.html).
-
-1. Run PuTTY.
-
-1. On the PuTTY configuration screen, enter your VM's public IP address.
-
-1. Select **Open** and enter your username and password at the prompts.
For more information about connecting to Linux VMs, see [Create a Linux VM on Azure using the Portal](../virtual-machines/linux/quick-create-portal.md).
-> [!NOTE]
-> If you see a PuTTY security alert about the server's host key not being cached in the registry, choose from the following options. If you trust this host, select **Yes** to add the key to PuTTy's cache and continue connecting. If you want to carry on connecting just once, without adding the key to the cache, select **No**. If you don't trust this host, select **Cancel** to abandon the connection.
- ## Install Azure DCAP Client > [!NOTE]
container-apps Environment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/environment.md
Previously updated : 12/05/2021 Last updated : 10/18/2022
Reasons to deploy container apps to the same environment include situations when
- Manage related services - Deploy different applications to the same virtual network-- Have applications communicate with each other using Dapr
+- Instrument Dapr applications that communicate via the Dapr service invocation API
- Have applications to share the same Dapr configuration - Have applications share the same log analytics workspace Reasons to deploy container apps to different environments include situations when you want to ensure: - Two applications never share the same compute resources-- Two applications can't communicate with each other via Dapr
+- Two Dapr applications can't communicate via the Dapr service invocation API
## Logs
container-registry Container Registry Content Trust https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-content-trust.md
Important to any distributed system designed with security in mind is verifying
As an image publisher, content trust allows you to **sign** the images you push to your registry. Consumers of your images (people or systems pulling images from your registry) can configure their clients to pull *only* signed images. When an image consumer pulls a signed image, their Docker client verifies the integrity of the image. In this model, consumers are assured that the signed images in your registry were indeed published by you, and that they've not been modified since being published.
+> [!NOTE]
+> Azure Container Registry (ACR) does not support `acr import` to import images signed with Docker Content Trust (DCT). By design, the signatures are not visible after the import, and the notary v2 stores these signatures as artifacts.
+ ### Trusted images Content trust works with the **tags** in a repository. Image repositories can contain images with both signed and unsigned tags. For example, you might sign only the `myimage:stable` and `myimage:latest` images, but not `myimage:dev`.
cosmos-db Index Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/index-policy.md
In some situations, you may want to override this automatic behavior to better s
Azure Cosmos DB supports two indexing modes: - **Consistent**: The index is updated synchronously as you create, update or delete items. This means that the consistency of your read queries will be the [consistency configured for the account](consistency-levels.md).-- **None**: Indexing is disabled on the container. This is commonly used when a container is used as a pure key-value store without the need for secondary indexes. It can also be used to improve the performance of bulk operations. After the bulk operations are complete, the index mode can be set to Consistent and then monitored using the [IndexTransformationProgress](how-to-manage-indexing-policy.md#dotnet-sdk) until complete.
+- **None**: Indexing is disabled on the container. This mode is commonly used when a container is used as a pure key-value store without the need for secondary indexes. It can also be used to improve the performance of bulk operations. After the bulk operations are complete, the index mode can be set to Consistent and then monitored using the [IndexTransformationProgress](how-to-manage-indexing-policy.md#dotnet-sdk) until complete.
> [!NOTE] > Azure Cosmos DB also supports a Lazy indexing mode. Lazy indexing performs updates to the index at a much lower priority level when the engine is not doing any other work. This can result in **inconsistent or incomplete** query results. If you plan to query an Azure Cosmos DB container, you should not select lazy indexing. New containers cannot select lazy indexing. You can request an exemption by contacting cosmoslazyindexing@microsoft.com (except if you are using an Azure Cosmos DB account in [serverless](serverless.md) mode which doesn't support lazy indexing).
-By default, indexing policy is set to `automatic`. It's achieved by setting the `automatic` property in the indexing policy to `true`. Setting this property to `true` allows Azure Cosmos DB to automatically index documents as they are written.
+By default, indexing policy is set to `automatic`. It's achieved by setting the `automatic` property in the indexing policy to `true`. Setting this property to `true` allows Azure Cosmos DB to automatically index documents as they're written.
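
As a reference, a minimal indexing policy that captures the default behavior described above (consistent mode with automatic indexing of every path) might look like the following sketch:

```json
{
  "indexingMode": "consistent",
  "automatic": true,
  "includedPaths": [
    {
      "path": "/*"
    }
  ],
  "excludedPaths": []
}
```

For bulk loads, you could set `indexingMode` to `none` instead and switch back to `consistent` once the operations complete, as described above.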
## <a id="index-size"></a>Index size
Taking the same example again:
- the path to anything under `headquarters` is `/headquarters/*`
-For example, we could include the `/headquarters/employees/?` path. This path would ensure that we index the employees property but would not index additional nested JSON within this property.
+For example, we could include the `/headquarters/employees/?` path. This path would ensure that we index the employees property but wouldn't index additional nested JSON within this property.
## Include/exclude strategy Any indexing policy has to include the root path `/*` as either an included or an excluded path. -- Include the root path to selectively exclude paths that don't need to be indexed. This is the recommended approach as it lets Azure Cosmos DB proactively index any new property that may be added to your model.-- Exclude the root path to selectively include paths that need to be indexed.
+- Include the root path to selectively exclude paths that don't need to be indexed. This approach is recommended as it lets Azure Cosmos DB proactively index any new property that may be added to your model.
-- For paths with regular characters that include: alphanumeric characters and _ (underscore), you don't have to escape the path string around double quotes (for example, "/path/?"). For paths with other special characters, you need to escape the path string around double quotes (for example, "/\"path-abc\"/?"). If you expect special characters in your path, you can escape every path for safety. Functionally, it doesn't make any difference if you escape every path Vs just the ones that have special characters.
+- Exclude the root path to selectively include paths that need to be indexed. The partition key property path isn't indexed by default with the exclude strategy and should be explicitly included if needed.
+
+- For paths with regular characters that include: alphanumeric characters and _ (underscore), you don't have to escape the path string around double quotes (for example, "/path/?"). For paths with other special characters, you need to escape the path string around double quotes (for example, "/\"path-abc\"/?"). If you expect special characters in your path, you can escape every path for safety. Functionally, it doesn't make any difference if you escape every path or just the ones that have special characters.
- The system property `_etag` is excluded from indexing by default, unless the etag is added to the included path for indexing.
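
Putting these rules together, a sketch of a policy that includes the root path, excludes the `headquarters` subtree, and re-includes only the scalar `employees` value (the paths are reused from the earlier example and are illustrative) might look like this:

```json
{
  "indexingMode": "consistent",
  "automatic": true,
  "includedPaths": [
    {
      "path": "/*"
    },
    {
      "path": "/headquarters/employees/?"
    }
  ],
  "excludedPaths": [
    {
      "path": "/headquarters/*"
    }
  ]
}
```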
When including and excluding paths, you may encounter the following attributes:
- `precision` is a number defined at the index level for included paths. A value of `-1` indicates maximum precision. We recommend always setting this value to `-1`. -- `dataType` can be either `String` or `Number`. This indicates the types of JSON properties which will be indexed.
+- `dataType` can be either `String` or `Number`. This indicates the types of JSON properties that will be indexed.
-It is no longer necessary to set these properties. When not specified, these properties will have the following default values:
+It's no longer necessary to set these properties. When not specified, these properties will have the following default values:
| **Property Name** | **Default Value** | | -- | -- |
Here's an example:
**Excluded Path**: `/food/ingredients/*`
-In this case, the included path takes precedence over the excluded path because it is more precise. Based on these paths, any data in the `food/ingredients` path or nested within would be excluded from the index. The exception would be data within the included path: `/food/ingredients/nutrition/*`, which would be indexed.
+In this case, the included path takes precedence over the excluded path because it's more precise. Based on these paths, any data in the `food/ingredients` path or nested within would be excluded from the index. The exception would be data within the included path: `/food/ingredients/nutrition/*`, which would be indexed.
Here are some rules for included and excluded paths precedence in Azure Cosmos DB:
When you define a spatial path in the indexing policy, you should define which i
* LineString
-Azure Cosmos DB, by default, will not create any spatial indexes. If you would like to use spatial SQL built-in functions, you should create a spatial index on the required properties. See [this section](sql-query-geospatial-index.md) for indexing policy examples for adding spatial indexes.
+Azure Cosmos DB, by default, won't create any spatial indexes. If you would like to use spatial SQL built-in functions, you should create a spatial index on the required properties. See [this section](sql-query-geospatial-index.md) for indexing policy examples for adding spatial indexes.
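
For example, a hedged sketch of a policy that adds a spatial index on an illustrative `/location` property (adjust the path and the geospatial types to match your data) might be:

```json
{
  "indexingMode": "consistent",
  "automatic": true,
  "includedPaths": [
    {
      "path": "/*"
    }
  ],
  "excludedPaths": [],
  "spatialIndexes": [
    {
      "path": "/location/*",
      "types": [
        "Point",
        "Polygon",
        "MultiPolygon",
        "LineString"
      ]
    }
  ]
}
```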
## Composite indexes Queries that have an `ORDER BY` clause with two or more properties require a composite index. You can also define a composite index to improve the performance of many equality and range queries. By default, no composite indexes are defined so you should [add composite indexes](how-to-manage-indexing-policy.md#composite-index) as needed.
-Unlike with included or excluded paths, you can't create a path with the `/*` wildcard. Every composite path has an implicit `/?` at the end of the path that you don't need to specify. Composite paths lead to a scalar value and this is the only value that is included in the composite index.
+Unlike with included or excluded paths, you can't create a path with the `/*` wildcard. Every composite path has an implicit `/?` at the end of the path that you don't need to specify. Composite paths lead to a scalar value that is the only value included in the composite index.
When defining a composite index, you specify:
When defining a composite index, you specify:
The following considerations are used when using composite indexes for queries with an `ORDER BY` clause with two or more properties: -- If the composite index paths do not match the sequence of the properties in the `ORDER BY` clause, then the composite index can't support the query.
+- If the composite index paths don't match the sequence of the properties in the `ORDER BY` clause, then the composite index can't support the query.
- The order of composite index paths (ascending or descending) should also match the `order` in the `ORDER BY` clause.
You should customize your indexing policy so you can serve all necessary `ORDER
If a query has filters on two or more properties, it may be helpful to create a composite index for these properties.
-For example, consider the following query which has both an equality and range filter:
+For example, consider the following query that has both an equality and range filter:
```sql SELECT *
FROM c
WHERE c.name = "John" AND c.age > 18 ```
-This query will be more efficient, taking less time and consuming fewer RU's, if it is able to leverage a composite index on `(name ASC, age ASC)`.
+This query will be more efficient, taking less time and consuming fewer RUs, if it's able to leverage a composite index on `(name ASC, age ASC)`.
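
For reference, the `(name ASC, age ASC)` combination could be declared in the `compositeIndexes` section of the indexing policy, roughly as follows (only the relevant fragment is shown):

```json
{
  "compositeIndexes": [
    [
      {
        "path": "/name",
        "order": "ascending"
      },
      {
        "path": "/age",
        "order": "ascending"
      }
    ]
  ]
}
```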
Queries with multiple range filters can also be optimized with a composite index. However, each individual composite index can only optimize a single range filter. Range filters include `>`, `<`, `<=`, `>=`, and `!=`. The range filter should be defined last in the composite index.
FROM c
WHERE c.name = "John" AND c.age > 18 AND c._ts > 1612212188 ```
-This query will be more efficient with a composite index on `(name ASC, age ASC)` and `(name ASC, _ts ASC)`. However, the query would not utilize a composite index on `(age ASC, name ASC)` because the properties with equality filters must be defined first in the composite index. Two separate composite indexes are required instead of a single composite index on `(name ASC, age ASC, _ts ASC)` since each composite index can only optimize a single range filter.
+This query will be more efficient with a composite index on `(name ASC, age ASC)` and `(name ASC, _ts ASC)`. However, the query wouldn't utilize a composite index on `(age ASC, name ASC)` because the properties with equality filters must be defined first in the composite index. Two separate composite indexes are required instead of a single composite index on `(name ASC, age ASC, _ts ASC)` since each composite index can only optimize a single range filter.
The following considerations are used when creating composite indexes for queries with filters on multiple properties - Filter expressions can use multiple composite indexes.-- The properties in the query's filter should match those in composite index. If a property is in the composite index but is not included in the query as a filter, the query will not utilize the composite index.-- If a query has additional properties in the filter that were not defined in a composite index, then a combination of composite and range indexes will be used to evaluate the query. This will require fewer RU's than exclusively using range indexes.
+- The properties in the query's filter should match those in composite index. If a property is in the composite index but isn't included in the query as a filter, the query won't utilize the composite index.
+- If a query has other properties in the filter that aren't defined in a composite index, then a combination of composite and range indexes will be used to evaluate the query. This will require fewer RUs than exclusively using range indexes.
- If a property has a range filter (`>`, `<`, `<=`, `>=`, or `!=`), then this property should be defined last in the composite index. If a query has more than one range filter, it may benefit from multiple composite indexes. - When creating a composite index to optimize queries with multiple filters, the `ORDER` of the composite index will have no impact on the results. This property is optional.
ORDER BY c.firstName, c.lastName
The following considerations apply when creating composite indexes to optimize a query with a filter and `ORDER BY` clause:
-* If you do not define a composite index on a query with a filter on one property and a separate `ORDER BY` clause using a different property, the query will still succeed. However, the RU cost of the query can be reduced with a composite index, particularly if the property in the `ORDER BY` clause has a high cardinality.
-* If the query filters on properties, these should be included first in the `ORDER BY` clause.
+* If you don't define a composite index on a query with a filter on one property and a separate `ORDER BY` clause using a different property, the query will still succeed. However, the RU cost of the query can be reduced with a composite index, particularly if the property in the `ORDER BY` clause has a high cardinality.
+* If the query filters on properties, these properties should be included first in the `ORDER BY` clause.
* If the query filters on multiple properties, the equality filters must be the first properties in the `ORDER BY` clause. * If the query filters on multiple properties, you can have a maximum of one range filter or system function utilized per composite index. The property used in the range filter or system function should be defined last in the composite index. * All considerations for creating composite indexes for `ORDER BY` queries with multiple properties as well as queries with filters on multiple properties still apply.
The following considerations apply when creating composite indexes to optimize a
* If the query filters on multiple properties, the equality filters must be the first properties in the composite index. * You can have a maximum of one range filter per composite index and it must be on the property in the aggregate system function. * The property in the aggregate system function should be defined last in the composite index.
-* The `order` (`ASC` or `DESC`) does not matter.
+* The `order` (`ASC` or `DESC`) doesn't matter.
| **Composite Index** | **Sample Query** | **Supported by Composite Index?** | | - | | |
A container's indexing policy can be updated at any time [by using the Azure por
> [!NOTE] > You can track the progress of index transformation in the Azure portal or [by using one of the SDKs](how-to-manage-indexing-policy.md).
-There is no impact to write availability during any index transformations. The index transformation uses your provisioned RUs but at a lower priority than your CRUD operations or queries.
+There's no impact to write availability during any index transformations. The index transformation uses your provisioned RUs but at a lower priority than your CRUD operations or queries.
-There is no impact to read availability when adding new indexed paths. Queries will only utilize new indexed paths once an index transformation is complete. In other words, when adding a new indexed paths, queries that benefit from that indexed path will have the same performance before and during the index transformation. After the index transformation is complete, the query engine will begin to use the new indexed paths.
+There's no impact to read availability when adding new indexed paths. Queries will only utilize new indexed paths once an index transformation is complete. In other words, when adding a new indexed path, queries that benefit from that indexed path will have the same performance before and during the index transformation. After the index transformation is complete, the query engine will begin to use the new indexed paths.
-When removing indexed paths, you should group all your changes into one indexing policy transformation. If you remove multiple indexes and do so in one single indexing policy change, the query engine provides consistent and complete results throughout the index transformation. However, if you remove indexes through multiple indexing policy changes, the query engine will not provide consistent or complete results until all index transformations complete. Most developers do not drop indexes and then immediately try to run queries that utilize these indexes so, in practice, this situation is unlikely.
+When removing indexed paths, you should group all your changes into one indexing policy transformation. If you remove multiple indexes and do so in one single indexing policy change, the query engine provides consistent and complete results throughout the index transformation. However, if you remove indexes through multiple indexing policy changes, the query engine won't provide consistent or complete results until all index transformations complete. Most developers don't drop indexes and then immediately try to run queries that utilize these indexes so, in practice, this situation is unlikely.
-When you drop an indexed path, the query engine will immediately stop using it and instead do a full scan.
+When you drop an indexed path, the query engine will immediately stop using it, and will do a full scan instead.
> [!NOTE] > Where possible, you should always try to group multiple indexing changes into one single indexing policy modification
When you drop an indexed path, the query engine will immediately stop using it a
Using the [Time-to-Live (TTL) feature](time-to-live.md) requires indexing. This means that: -- it is not possible to activate TTL on a container where the indexing mode is set to `none`,-- it is not possible to set the indexing mode to None on a container where TTL is activated.
+- it isn't possible to activate TTL on a container where the indexing mode is set to `none`,
+- it isn't possible to set the indexing mode to None on a container where TTL is activated.
For scenarios where no property path needs to be indexed, but TTL is required, you can use an indexing policy with an indexing mode set to `consistent`, no included paths, and `/*` as the only excluded path.
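
A sketch of such a policy, which keeps TTL working while indexing nothing, would be:

```json
{
  "indexingMode": "consistent",
  "automatic": true,
  "includedPaths": [],
  "excludedPaths": [
    {
      "path": "/*"
    }
  ]
}
```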
cosmos-db How To Use Stored Procedures Triggers Udfs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/how-to-use-stored-procedures-triggers-udfs.md
The following code shows how to register a pre-trigger using the JavaScript SDK:
```javascript const container = client.database("myDatabase").container("myContainer"); const triggerId = "trgPreValidateToDoItemTimestamp";
-await container.triggers.create({
+await container.scripts.triggers.create({
id: triggerId, body: require(`../js/${triggerId}`), triggerOperation: "create",
The following code shows how to register a post-trigger using the JavaScript SDK
```javascript const container = client.database("myDatabase").container("myContainer"); const triggerId = "trgPostUpdateMetadata";
-await container.triggers.create({
+await container.scripts.triggers.create({
id: triggerId, body: require(`../js/${triggerId}`), triggerOperation: "create",
cosmos-db Javascript Query Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/javascript-query-api.md
# JavaScript query API in Azure Cosmos DB [!INCLUDE[NoSQL](../includes/appliesto-nosql.md)]
-In addition to issuing queries using the API for NoSQL in Azure Cosmos DB, the [Azure Cosmos DB server-side SDK](https://github.com/Azure/azure-cosmosdb-js-server/) provides a JavaScript interface for performing optimized queries in Azure Cosmos DB Stored Procedures and Triggers. You don't have to be aware of the SQL language to use this JavaScript interface. The JavaScript query API allows you to programmatically build queries by passing predicate functions into sequence of function calls, with a syntax familiar to ECMAScript5's array built-ins and popular JavaScript libraries like Lodash. Queries are parsed by the JavaScript runtime and efficiently executed using Azure Cosmos DB indices.
+In addition to issuing queries using the API for NoSQL in Azure Cosmos DB, the [Azure Cosmos DB server-side SDK](https://github.com/Azure/azure-cosmosdb-js-server/) provides a JavaScript interface for performing optimized queries in Azure Cosmos DB Stored Procedures and Triggers. You don't have to be aware of the SQL language to use this JavaScript interface. The JavaScript query API allows you to programmatically build queries by passing predicate functions into sequence of function calls, with a syntax similar to ECMAScript5's array built-ins and popular JavaScript libraries like Lodash. Queries are parsed by the JavaScript runtime and efficiently executed using Azure Cosmos DB indices.
## Supported JavaScript functions
cosmos-db Getting Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/query/getting-started.md
Here are some examples of how to do **Point reads** with each SDK:
- [Java SDK](/java/api/com.azure.cosmos.cosmoscontainer.readitem#com-azure-cosmos-cosmoscontainer-(t)readitem(java-lang-string-com-azure-cosmos-models-partitionkey-com-azure-cosmos-models-cosmositemrequestoptions-java-lang-class(t))) - [Node.js SDK](/javascript/api/@azure/cosmos/item#@azure-cosmos-item-read) - [Python SDK](/python/api/azure-cosmos/azure.cosmos.containerproxy#azure-cosmos-containerproxy-read-item)
+- [Go SDK](https://pkg.go.dev/github.com/Azure/azure-sdk-for-go/sdk/data/azcosmos#ContainerClient.ReadItem)
**SQL queries** - You can query data by writing queries using the Structured Query Language (SQL) as a JSON query language. Queries always cost at least 2.3 request units and, in general, will have a higher and more variable latency than point reads. Queries can return many items.
Here are some examples of how to do **SQL queries** with each SDK:
- [Java SDK](../samples-java.md#query-examples) - [Node.js SDK](../samples-nodejs.md#item-examples) - [Python SDK](../samples-python.md#item-examples)
+- [Go SDK](../samples-go.md#item-examples)
The remainder of this doc shows how to get started writing SQL queries in Azure Cosmos DB. SQL queries can be run through either the SDK or Azure portal.
cosmos-db Samples Go https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/samples-go.md
Sample solutions that do CRUD operations and other common operations on Azure Co
## Database examples
-The [cosmos_client.go](https://github.com/Azure/azure-sdk-for-go/blob/sdk/dat) conceptual article.
+To learn about the Azure Cosmos DB databases before running the following samples, see [Working with databases, containers, and items](../account-databases-containers-items.md) conceptual article.
| Task | API reference | | | |
The [cosmos_client.go](https://github.com/Azure/azure-sdk-for-go/blob/sdk/data/a
## Container examples
-The [cosmos_database.go](https://github.com/Azure/azure-sdk-for-go/blob/sdk/dat) conceptual article.
+To learn about the Azure Cosmos DB collections before running the following samples, see [Working with databases, containers, and items](../account-databases-containers-items.md) conceptual article.
| Task | API reference | | | |
cosmos-db Tutorial Deploy App Bicep Aks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/tutorial-deploy-app-bicep-aks.md
+
+ Title: 'Tutorial: Deploy an ASP.NET web application using Azure Cosmos DB for NoSQL, managed identity, and Azure Kubernetes Service using Bicep'
+description: Deploy an ASP.NET MVC web application with Azure Cosmos DB for NoSQL, managed identity, and Azure Kubernetes Service using Bicep.
++++++ Last updated : 10/17/2022++
+# Tutorial: Deploy an ASP.NET web application using Azure Cosmos DB for NoSQL, managed identity, and Azure Kubernetes Service using Bicep
++
+In this tutorial, you'll deploy a reference ASP.NET web application on an Azure Kubernetes Service (AKS) cluster that connects to Azure Cosmos DB for NoSQL.
+
+**[Azure Cosmos DB](../introduction.md)** is a fully managed distributed database platform for modern application development with NoSQL or relational databases.
+
+**[Azure Kubernetes Service (AKS)](../../aks/intro-kubernetes.md)** is a managed Kubernetes service that lets you quickly deploy and manage clusters.
+
+> [!IMPORTANT]
+>
+> - This article requires the latest version of Azure CLI. For more information, see [install Azure CLI](/cli/azure/install-azure-cli). If you are using the Azure Cloud Shell, the latest version is already installed.
+> - This article also requires the latest version of the Bicep CLI within Azure CLI. For more information, see [install Bicep tools](../../azure-resource-manager/bicep/install.md#azure-cli)
+> - If you are running the commands in this tutorial locally instead of in the Azure Cloud Shell, ensure you run the commands using an administrator account.
+>
+
+## Prerequisites
+
+The following tools are required to compile the ASP.NET web application and create its container image.
+
+- [Docker Desktop](https://docs.docker.com/desktop/)
+- [Visual Studio Code](https://code.visualstudio.com/)
+ - [C# extension for Visual Studio Code](https://marketplace.visualstudio.com/items?itemName=ms-dotnettools.csharp)
+ - [Docker extension for Visual Studio Code](https://code.visualstudio.com/docs/containers/overview)
+ - [Azure Account extension for Visual Studio Code](https://marketplace.visualstudio.com/items?itemName=ms-vscode.azure-account)
+
+## Overview
+
+This tutorial uses an [Infrastructure as Code (IaC)](/devops/deliver/what-is-infrastructure-as-code) approach to deploy the resources to Azure. We'll use **[Bicep](../../azure-resource-manager/bicep/overview.md)**, which is a new declarative language that offers the same capabilities as [ARM templates](../../azure-resource-manager/templates/overview.md). However, Bicep's syntax is more concise and easier to use.
+
+The Bicep modules will deploy the following Azure resources within the targeted subscription scope.
+
+1. A [resource group](../../azure-resource-manager/management/overview.md#resource-groups) to organize the resources
+1. A [managed identity](../../active-directory/managed-identities-azure-resources/overview.md) for authentication
+1. An [Azure Container Registry (ACR)](../../container-registry/container-registry-intro.md) for storing container images
+1. An [Azure Kubernetes Service (AKS)](../../aks/intro-kubernetes.md) cluster
+1. An [Azure Virtual Network (VNET)](../../virtual-network/network-overview.md) required for configuring AKS
+1. An [Azure Cosmos DB for NoSQL account](../introduction.md) along with a database, container, and the [SQL role](/cli/azure/cosmosdb/sql/role)
+1. An [Azure Key Vault](../../key-vault/general/overview.md) to store secure keys
+1. (Optional) An [Azure Log Analytics workspace](../../azure-monitor/logs/log-analytics-overview.md)
+
+This tutorial uses the following security best practices with Azure Cosmos DB.
+
+1. Implements access control using [role-based access control](../../role-based-access-control/overview.md) and [managed identity](../../active-directory/managed-identities-azure-resources/overview.md). These features eliminate the need for developers to manage secrets, credentials, certificates, and keys used to secure communication between services.
+1. Limits Azure Cosmos DB access to the AKS subnet by [configuring a virtual network service endpoint](../how-to-configure-vnet-service-endpoint.md).
+1. Sets `disableLocalAuth = true` in the **databaseAccount** resource to [enforce role-based access control as the only authentication method](../how-to-setup-rbac.md#disable-local-auth).
+
+> [!TIP]
+> The steps in this tutorial use [Azure Cosmos DB for NoSQL](./quickstart-dotnet.md). However, the same concepts can also be applied to **[Azure Cosmos DB for MongoDB](../mongodb/introduction.md)**.
+
+## Download the Bicep modules
+
+Download or [clone](https://docs.github.com/repositories/creating-and-managing-repositories/cloning-a-repository) the Bicep modules from the **Bicep** folder of the [azure-samples/cosmos-aks-samples](https://github.com/Azure-Samples/cosmos-aks-samples/tree/main/Bicep) GitHub repository.
+
+```bash
+git clone https://github.com/Azure-Samples/cosmos-aks-samples.git
+
+cd cosmos-aks-samples/Bicep/
+```
+
+## Connect to your Azure subscription
+
+Use [`az login`](/cli/azure/authenticate-azure-cli) to connect to your default Azure subscription.
+
+```azurecli
+az login
+```
+
+Optionally, use [`az account set`](/cli/azure/account#az-account-set) with the name or ID of a specific subscription to set the active subscription if you have multiple subscriptions.
+
+```azurecli
+az account set \
+ --subscription <subscription-id>
+```
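+
+If you're not sure which subscription to use, you can first list the subscriptions available to your signed-in account and copy the ID from the output:
+
+```azurecli
+az account list \
+    --output table
+```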
+
+## Initialize the deployment parameters
+
+Create a **param.json** file by using the JSON in this example. Replace the `{resource group name}`, `{Azure Cosmos DB account name}`, and `{Azure Container Registry instance name}` placeholders with your own values.
+
+> [!IMPORTANT]
+> All resource names used in the following steps must comply with the **[naming rules and restrictions for Azure resources](../../azure-resource-manager/management/resource-name-rules.md)**. Also ensure that the placeholder values are replaced consistently and match the values supplied in **param.json**.
+
+```json
+{
+ "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentParameters.json#",
+ "contentVersion": "1.0.0.0",
+ "parameters": {
+ "rgName": {
+ "value": "{resource group name}"
+ },
+ "cosmosName" :{
+ "value": "{Azure Cosmos DB account name}"
+ },
+ "acrName" :{
+ "value": "{Azure Container Registry instance name}"
+ }
+ }
+}
+```
+
+## Create a Bicep deployment
+
+Set shell variables by using the following commands, replacing the `{deployment name}` and `{location}` placeholders with your own values.
+
+```bash
+deploymentName='{deployment name}' # Name of the Deployment
+location='{location}' # Location for deploying the resources
+```
+
+Within the **Bicep** folder, use [`az deployment sub create`](/cli/azure/deployment/sub#az-deployment-sub-create) to deploy the template to the current subscription scope.
+
+```azurecli
+az deployment sub create \
+ --name $deploymentName \
+ --location $location \
+ --template-file main.bicep \
+ --parameters @param.json
+```
+
+During deployment, the console will output a message indicating that the deployment is still running:
+
+```output
+ / Running ..
+```
+
+The deployment can take approximately 20 to 30 minutes. Once provisioning is completed, the console outputs JSON with `Succeeded` as the provisioning state.
+
+```output
+ }
+ ],
+ "provisioningState": "Succeeded",
+ "templateHash": "0000000000000000",
+ "templateLink": null,
+ "timestamp": "2022-01-01T00:00:00.000000+00:00",
+ "validatedResources": null
+ },
+ "tags": null,
+ "type": "Microsoft.Resources/deployments"
+}
+```
+
+You can also see the deployment status in the resource group in the Azure portal, or query it from the command line as shown below.
++
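+
+You can query the deployment state from the command line as an optional check. It reuses the `$deploymentName` variable you set earlier:
+
+```azurecli
+az deployment sub show \
+    --name $deploymentName \
+    --query properties.provisioningState \
+    --output tsv
+```
+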
+> [!NOTE]
+> When creating an AKS cluster, a second resource group is automatically created to store the AKS resources. For more information, see [why are two resource groups created with AKS?](../../aks/faq.md#why-are-two-resource-groups-created-with-aks)
+
+## Link Azure Container Registry with AKS
+
+Replace the `{Azure Container Registry instance name}` and `{resource group name}` placeholders with your own values.
+
+```bash
+acrName='{Azure Container Registry instance name}'
+rgName='{resource group name}'
+aksName=$rgName'aks'
+```
+
+Run [`az aks update`](/cli/azure/aks#az-aks-update) to attach the existing ACR instance to the AKS cluster.
+
+```azurecli
+az aks update \
+ --resource-group $rgName \
+ --name $aksName \
+ --attach-acr $acrName
+```
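+
+Optionally, validate that the cluster can pull images from the registry. The `az aks check-acr` command runs a connectivity check from the cluster against the attached registry:
+
+```azurecli
+az aks check-acr \
+    --resource-group $rgName \
+    --name $aksName \
+    --acr "$acrName.azurecr.io"
+```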
+
+## Connect to the AKS cluster
+
+To manage a Kubernetes cluster, you use [kubectl](https://kubernetes.io/docs/reference/kubectl/overview/), the Kubernetes command-line client. If you use Azure Cloud Shell, `kubectl` is already installed. To install `kubectl` locally, use [`az aks install-cli`](/cli/azure/aks#az-aks-install-cli):
+
+```azurecli
+az aks install-cli
+```
+
+To configure `kubectl` to connect to your Kubernetes cluster, use [`az aks get-credentials`](/cli/azure/aks#az-aks-get-credentials). This command downloads credentials and configures the Kubernetes CLI to use them.
+
+```azurecli
+az aks get-credentials \
+ --resource-group $rgName \
+ --name $aksName
+```
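+
+To verify that `kubectl` is connected to the cluster, list the nodes. The agent nodes created by the Bicep deployment should show a `Ready` status:
+
+```bash
+kubectl get nodes
+```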
+
+## Connect the AKS pods to Azure Key Vault
+
+Azure Active Directory (Azure AD) pod-managed identities use AKS primitives to associate managed identities for Azure resources and identities in Azure AD with pods. We'll use these identities to grant access to the Azure Key Vault Secrets Provider for Secrets Store CSI driver.
+
+Use the following command to find the value of the tenant ID (`homeTenantId`).
+
+```azurecli
+az account show
+```
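+
+If you want just the tenant ID, you can add a JMESPath query for the `homeTenantId` attribute that the YAML below refers to:
+
+```azurecli
+az account show \
+    --query homeTenantId \
+    --output tsv
+```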
+
+Use this YAML template to create a **secretproviderclass.yml** file. Make sure to replace the `{Tenant Id}` and `{resource group name}` placeholders with your own values, and ensure that the resource group name matches the value supplied in **param.json**.
+
+```yml
+# This is a SecretProviderClass example using aad-pod-identity to access the key vault
+apiVersion: secrets-store.csi.x-k8s.io/v1
+kind: SecretProviderClass
+metadata:
+ name: azure-kvname-podid
+spec:
+ provider: azure
+ parameters:
+ usePodIdentity: "true"
+ keyvaultName: "{resource group name}kv" # Replace resource group name. Key Vault name is generated by Bicep
+ tenantId: "{Tenant Id}" # The tenant ID of your account, use 'homeTenantId' attribute value from the 'az account show' command output
+```
+
+## Apply the SecretProviderClass to the AKS cluster
+
+Use [`kubectl apply`](https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#apply) to apply the SecretProviderClass defined in the YAML file to the cluster.
+
+```bash
+kubectl apply \
+ --filename secretproviderclass.yml
+```
+
+## Build the ASP.NET web application
+
+Download or clone the web application source code from the **Application** folder of the [azure-samples/cosmos-aks-samples](https://github.com/Azure-Samples/cosmos-aks-samples/tree/main/Application) GitHub repository.
+
+```bash
+git clone https://github.com/Azure-Samples/cosmos-aks-samples.git
+
+cd cosmos-aks-samples/Application/
+```
+
+Open the **Application** folder in **Visual Studio Code**. Run the application by using either **F5** or the **Debug: Start Debugging** command.
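+
+If you prefer the command line to the Visual Studio Code debugger, you can also run the app with the .NET CLI. This is only a sketch: it assumes the .NET SDK is installed and that you run it from the folder that contains the web app's project file (adjust the path if the sample keeps the project in a subfolder).
+
+```bash
+# Build and run the ASP.NET web application locally
+dotnet run
+```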
+
+## Push the Docker container image to Azure Container Registry
+
+1. To create a container image from the Explorer tab in **Visual Studio Code**, open the context menu for the **Dockerfile** and select **Build Image...**. You'll then get a prompt asking for the name and version to tag the image. Enter the name `todo:latest`.
+
+ :::image type="content" source="./media/tutorial-deploy-app-bicep-aks/context-menu-build-docker-image.png" alt-text="Screenshot of the context menu in Visual Studio Code with the Build Image option selected.":::
+
+1. Use the Docker pane to push the built image to ACR. You'll find the built image under the **Images** node. Open the `todo` node, open the context menu for the **latest** tag, and then select **Push...**.
+
+1. You'll then get prompts to select your Azure subscription, ACR resource, and image tags. The image tag format should be `{acrname}.azurecr.io/todo:latest`.
+
+ :::image type="content" source="./media/tutorial-deploy-app-bicep-aks/context-menu-push-docker-image.png" alt-text="Screenshot of the context menu in Visual Studio Code with the Push option selected.":::
+
+1. Wait for **Visual Studio Code** to push the container image to ACR.
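+
+If you prefer the command line to the Visual Studio Code UI, the following sketch shows an equivalent build-and-push flow. It assumes Docker is running locally, reuses the `$acrName` variable you set earlier, and is run from the folder that contains the **Dockerfile**:
+
+```bash
+# Sign in to the registry, then build, tag, and push the image
+az acr login --name $acrName
+
+docker build --tag todo:latest .
+docker tag todo:latest "$acrName.azurecr.io/todo:latest"
+docker push "$acrName.azurecr.io/todo:latest"
+```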
+
+## Prepare Deployment YAML
+
+Use this YAML template to create an **akstododeploy.yml** file. Make sure to replace the `{ACR name}`, `{Image name}`, `{Version}`, and `{resource group name}` placeholders with your own values.
+
+```yml
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+ name: todo
+ labels:
+ aadpodidbinding: "cosmostodo-apppodidentity"
+ app: todo
+spec:
+ replicas: 2
+ selector:
+ matchLabels:
+ app: todo
+ template:
+ metadata:
+ labels:
+ app: todo
+ aadpodidbinding: "cosmostodo-apppodidentity"
+ spec:
+ containers:
+ - name: mycontainer
+ image: "{ACR name}/{Image name}:{Version}" # update as per your environment, example myacrname.azurecr.io/todo:latest. Do NOT add https:// in ACR Name
+ ports:
+ - containerPort: 80
+ env:
+ - name: KeyVaultName
+ value: "{resource group name}kv" # Replace resource group name. Key Vault name is generated by Bicep
+ nodeSelector:
+ kubernetes.io/os: linux
+ volumes:
+ - name: secrets-store01-inline
+ csi:
+ driver: secrets-store.csi.k8s.io
+ readOnly: true
+ volumeAttributes:
+ secretProviderClass: "azure-kvname-podid"
+
+---
+kind: Service
+apiVersion: v1
+metadata:
+ name: todo
+spec:
+ selector:
+ app: todo
+ aadpodidbinding: "cosmostodo-apppodidentity"
+ type: LoadBalancer
+ ports:
+ - protocol: TCP
+ port: 80
+ targetPort: 80
+```
+
+## Apply deployment YAML
+
+Use `kubectl apply` again to deploy the application pods and expose the pods via a load balancer.
+
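+The command targets a namespace named `my-app`. If that namespace doesn't exist in your cluster yet, create it first (an optional step, shown here in case the Bicep modules didn't create it for you):
+
+```bash
+kubectl create namespace my-app
+```
+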
+```bash
+kubectl apply \
+ --filename akstododeploy.yml \
+ --namespace 'my-app'
+```
+
+## Test the application
+
+When the application runs, a Kubernetes service exposes the application front end to the internet. This process can take a few minutes to complete.
+
+Use [`kubectl get`](https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#get) to view the external IP exposed by the load balancer.
+
+```bash
+kubectl get services \
+ --namespace "my-app"
+```
+
+Open the external IP address from the output in a browser to access the application.
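+
+Once the `EXTERNAL-IP` column shows an address instead of `<pending>`, you can also capture the IP and probe the site directly from the shell. This sketch assumes the service keeps the `todo` name defined in the deployment YAML:
+
+```bash
+# Read the load balancer's public IP for the todo service
+externalIp=$(kubectl get service todo \
+    --namespace my-app \
+    --output jsonpath='{.status.loadBalancer.ingress[0].ip}')
+
+# Request the home page
+curl "http://$externalIp"
+```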
+
+## Clean up the resources
+
+To avoid Azure charges, clean up the resources when the cluster is no longer needed. Use [`az group delete`](/cli/azure/group#az-group-delete) and [`az deployment sub delete`](/cli/azure/deployment/sub#az-deployment-sub-delete) to delete the resource group and the subscription deployment respectively.
+
+```azurecli
+az group delete \
+  --resource-group $rgName \
+ --yes
+
+az deployment sub delete \
+ --name $deploymentName
+```
+
+## Next steps
+
+- Learn how to [Develop a web application with Azure Cosmos DB](./tutorial-dotnet-web-app.md)
+- Learn how to [Query Azure Cosmos DB for NoSQL](./tutorial-query.md).
+- Learn how to [upgrade your cluster](../../aks/tutorial-kubernetes-upgrade-cluster.md)
+- Learn how to [scale your cluster](../../aks/tutorial-kubernetes-scale.md)
+- Learn how to [enable continuous deployment](../../aks/deployment-center-launcher.md)
cosmos-db Howto Ingest Azure Blob Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/postgresql/howto-ingest-azure-blob-storage.md
+
+ Title: Data ingestion with Azure Blob Storage - Azure Cosmos DB for PostgreSQL
+description: How to ingest data using Azure Blob Storage as a staging area
+++++ Last updated : 10/19/2022++
+# How to ingest data using Azure Blob Storage
++
+[Azure Blob Storage](https://azure.microsoft.com/services/storage/blobs/#features) (ABS) is a cloud-native, scalable, durable, and secure storage service. These characteristics make ABS a good choice for storing existing data and moving it into the cloud.
+
+This article shows how to use the pg_azure_storage PostgreSQL extension to
+manipulate and load data into your Azure Cosmos DB for PostgreSQL cluster directly from
+Azure Blob Storage.
+
+## Prepare database and blob storage
+
+To load data from Azure Blob Storage, install the `pg_azure_storage` PostgreSQL
+extension in your database:
+
+```sql
+SELECT * FROM create_extension('azure_storage');
+```
+
+We've prepared a public demonstration dataset for this article. To use your own
+dataset, follow [migrate your on-premises data to cloud
+storage](../../storage/common/storage-use-azcopy-migrate-on-premises-data.md)
+to learn how to get your datasets efficiently into Azure Blob Storage.
+
+> [!NOTE]
+>
+> Selecting "Container (anonymous read access for containers and blobs)" will allow you to ingest files from Azure Blob Storage using their public URLs and enumerating the container contents without the need to configure an account key in pg_azure_storage. Containers set to access level "Private (no anonymous access)" or "Blob (anonymous read access for blobs only)" will require an access key.
+
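+Because the demonstration container used in the next sections allows anonymous read access, you can also peek at a file directly over its public URL from any shell before loading anything into the database. This optional check assumes `curl` and `gunzip` are available and that the demonstration container is still publicly accessible:
+
+```bash
+# Fetch the compressed users file and show the first few CSV rows
+curl --silent https://pgquickstart.blob.core.windows.net/github/users.csv.gz \
+    | gunzip -c \
+    | head -n 3
+```
+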
+## List container contents
+
+There's a demonstration Azure Blob Storage account and container pre-created for this how-to. The container's name is `github`, and it's in the `pgquickstart` account. We can easily see which files are present in the container by using the `azure_storage.blob_list(account, container)` function.
+
+```sql
+SELECT path, bytes, pg_size_pretty(bytes), content_type
+ FROM azure_storage.blob_list('pgquickstart','github');
+```
+
+```
+-[ RECORD 1 ]--+-
+path | events.csv.gz
+bytes | 41691786
+pg_size_pretty | 40 MB
+content_type | application/x-gzip
+-[ RECORD 2 ]--+-
+path | users.csv.gz
+bytes | 5382831
+pg_size_pretty | 5257 kB
+content_type | application/x-gzip
+```
+
+You can filter the output either by using a regular SQL `WHERE` clause, or by using the `prefix` parameter of the `blob_list` UDF. The latter will filter the returned rows on the Azure Blob Storage side.
++
+> [!NOTE]
+>
+> Listing container contents requires an account and access key or a container with enabled anonymous access.
++
+```sql
+SELECT * FROM azure_storage.blob_list('pgquickstart','github','e');
+```
+
+```
+-[ RECORD 1 ]-+
+path | events.csv.gz
+bytes | 41691786
+last_modified | 2022-10-12 18:49:51+00
+etag | 0x8DAAC828B970928
+content_type | application/x-gzip
+content_encoding |
+content_hash | 473b6ad25b7c88ff6e0a628889466aed
+```
+
+```sql
+SELECT *
+ FROM azure_storage.blob_list('pgquickstart','github')
+ WHERE path LIKE 'e%';
+```
+
+```
+-[ RECORD 1 ]-+
+path | events.csv.gz
+bytes | 41691786
+last_modified | 2022-10-12 18:49:51+00
+etag | 0x8DAAC828B970928
+content_type | application/x-gzip
+content_encoding |
+content_hash | 473b6ad25b7c88ff6e0a628889466aed
+```
+
+## Load data from ABS
+
+### Load data with the COPY command
+
+Start by creating a sample schema.
+
+```sql
+CREATE TABLE github_users
+(
+ user_id bigint,
+ url text,
+ login text,
+ avatar_url text,
+ gravatar_id text,
+ display_login text
+);
+
+CREATE TABLE github_events
+(
+ event_id bigint,
+ event_type text,
+ event_public boolean,
+ repo_id bigint,
+ payload jsonb,
+ repo jsonb,
+ user_id bigint,
+ org jsonb,
+ created_at timestamp
+);
+
+CREATE INDEX event_type_index ON github_events (event_type);
+CREATE INDEX payload_index ON github_events USING GIN (payload jsonb_path_ops);
+
+SELECT create_distributed_table('github_users', 'user_id');
+SELECT create_distributed_table('github_events', 'user_id');
+```
+
+Loading data into the tables becomes as simple as calling the `COPY` command.
+
+```sql
+-- download users and store in table
+
+COPY github_users
+FROM 'https://pgquickstart.blob.core.windows.net/github/users.csv.gz';
+
+-- download events and store in table
+
+COPY github_events
+FROM 'https://pgquickstart.blob.core.windows.net/github/events.csv.gz';
+```
+
+Notice how the extension recognized that the URLs provided to the `COPY` command are from Azure Blob Storage, that the files we pointed to were gzip compressed, and that decompression was handled automatically for us.
+
+The `COPY` command supports more parameters and formats. In the above example, the format and compression were auto-selected based on the file extensions. You can, however, provide the format explicitly, just as with the regular `COPY` command.
+
+```sql
+COPY github_users
+FROM 'https://pgquickstart.blob.core.windows.net/github/users.csv.gz'
+WITH (FORMAT 'csv');
+```
+
+Currently the extension supports the following file formats:
+
+|format|description|
+||--|
+|csv|Comma-separated values format used by PostgreSQL COPY|
+|tsv|Tab-separated values, the default PostgreSQL COPY format|
+|binary|Binary PostgreSQL COPY format|
+|text|A file containing a single text value (for example, large JSON or XML)|
+
+### Load data with blob_get()
+
+The `COPY` command is convenient, but limited in flexibility. Internally COPY uses the `blob_get` function, which you can use directly to manipulate data in much more complex scenarios.
+
+```sql
+SELECT *
+ FROM azure_storage.blob_get(
+ 'pgquickstart', 'github',
+ 'users.csv.gz', NULL::github_users
+ )
+ LIMIT 3;
+```
+
+```
+-[ RECORD 1 ]-+--
+user_id | 21
+url | https://api.github.com/users/technoweenie
+login | technoweenie
+avatar_url | https://avatars.githubusercontent.com/u/21?
+gravatar_id |
+display_login | technoweenie
+-[ RECORD 2 ]-+--
+user_id | 22
+url | https://api.github.com/users/macournoyer
+login | macournoyer
+avatar_url | https://avatars.githubusercontent.com/u/22?
+gravatar_id |
+display_login | macournoyer
+-[ RECORD 3 ]-+--
+user_id | 38
+url | https://api.github.com/users/atmos
+login | atmos
+avatar_url | https://avatars.githubusercontent.com/u/38?
+gravatar_id |
+display_login | atmos
+```
+
+> [!NOTE]
+>
+> In the above query, the file is fully fetched before `LIMIT 3` is applied.
+
+With this function, you can manipulate data on the fly in complex queries, and do imports with `INSERT INTO ... SELECT`.
+
+```sql
+INSERT INTO github_users
+ SELECT user_id, url, UPPER(login), avatar_url, gravatar_id, display_login
+ FROM azure_storage.blob_get('pgquickstart', 'github', 'users.csv.gz', NULL::github_users)
+ WHERE gravatar_id IS NOT NULL;
+```
+
+```
+INSERT 0 264308
+```
+
+In the above command, we filtered the data to accounts with a `gravatar_id` present and uppercased their logins on the fly.
+
+#### Options for blob_get()
+
+In some situations, you may need to control exactly what `blob_get` attempts to do by using the `decoder`, `compression` and `options` parameters.
+
+Decoder can be set to `auto` (default) or any of the following values:
+
+|format|description|
+||--|
+|csv|Comma-separated values format used by PostgreSQL COPY|
+|tsv|Tab-separated values, the default PostgreSQL COPY format|
+|binary|Binary PostgreSQL COPY format|
+|text|A file containing a single text value (for example, large JSON or XML)|
+
+`compression` can be either `auto` (default), `none` or `gzip`.
+
+Finally, the `options` parameter is of type `jsonb`. There are four utility functions that help build values for it.
+Each utility function corresponds to the decoder matching its name.
+
+|decoder|options function |
+|-||
+|csv |`options_csv_get` |
+|tsv |`options_tsv` |
+|binary |`options_binary` |
+|text |`options_copy` |
+
+By looking at the function definitions, you can see which parameters are supported by which decoder.
+
+- `options_csv_get` - delimiter, null_string, header, quote, escape, force_not_null, force_null, content_encoding
+- `options_tsv` - delimiter, null_string, content_encoding
+- `options_copy` - delimiter, null_string, header, quote, escape, force_quote, force_not_null, force_null, content_encoding
+- `options_binary` - content_encoding
+
+Knowing the above, we can discard records with a null `gravatar_id` during parsing.
+
+```sql
+INSERT INTO github_users
+ SELECT user_id, url, UPPER(login), avatar_url, gravatar_id, display_login
+ FROM azure_storage.blob_get('pgquickstart', 'github', 'users.csv.gz', NULL::github_users,
+ options := azure_storage.options_csv_get(force_not_null := ARRAY['gravatar_id']));
+```
++
+```
+INSERT 0 264308
+```
+
+## Access private storage
+
+1. Obtain your account name and access key
+
+ Without an access key, we won't be allowed to list containers that are set to Private or Blob access levels.
+
+ ```sql
+ SELECT * FROM azure_storage.blob_list('mystorageaccount','privdatasets');
+ ```
+
+ ```
+ ERROR: azure_storage: missing account access key
+ HINT: Use SELECT azure_storage.account_add('<account name>', '<access key>')
+ ```
+
+ In your storage account, open **Access keys**. Copy the **Storage account name** and copy the **Key** from **key1** section (you have to select **Show** next to the key first).
+
+ :::image type="content" source="media/howto-ingestion/azure-blob-storage-account-key.png" alt-text="Screenshot of Security + networking > Access keys section of an Azure Blob Storage page in the Azure portal." border="true":::
+
+1. Add the storage account to pg_azure_storage
+
+ ```sql
+ SELECT azure_storage.account_add('mystorageaccount', 'SECRET_ACCESS_KEY');
+ ```
+
+ Now you can list containers set to Private and Blob access levels for that storage account, but only as the `citus` user, which has the `azure_storage_admin` role granted to it. If you create a new user named `support`, it won't be allowed to access container contents by default.
+
+ ```sql
+ SELECT * FROM azure_storage.blob_list('pgabs','dataverse');
+ ```
+
+ ```
+ ERROR: azure_storage: current user support is not allowed to use storage account pgabs
+ ```
+
+1. Allow the `support` user to use a specific Azure Blob Storage account
+
+ Granting the permission is as simple as calling `account_user_add`.
+
+ ```sql
+ SELECT * FROM azure_storage.account_user_add('mystorageaccount', 'support');
+ ```
+
+ We can see the allowed users in the output of `account_list`, which shows all accounts with access keys defined.
+
+ ```sql
+ SELECT * FROM azure_storage.account_list();
+ ```
+
+ ```
+ account_name | allowed_users
+ +
+ mystorageaccount | {support}
+ (1 row)
+ ```
+
+ If you ever decide that the user should no longer have access, just call `account_user_remove`.
+
+
+ ```sql
+ SELECT * FROM azure_storage.account_user_remove('mystorageaccount', 'support');
+ ```
+
+## Next steps
+
+Congratulations, you just learned how to load data into Azure Cosmos DB for PostgreSQL directly from Azure Blob Storage.
+
+Learn how to create a [real-time dashboard](tutorial-design-database-realtime.md) with Azure Cosmos DB for PostgreSQL.
cosmos-db Quickstart Distribute Tables https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/postgresql/quickstart-distribute-tables.md
Previously updated : 08/11/2022 Last updated : 10/14/2022 # Create and distribute tables
SELECT create_distributed_table('github_events', 'user_id');
We're ready to fill the tables with sample data. For this quickstart, we'll use a dataset previously captured from the GitHub API.
-Run the following commands to download example CSV files and load them into the
-database tables. (The `curl` command downloads the files, and comes
-pre-installed in the Azure Cloud Shell.)
+We're going to use the pg_azure_storage extension to load the data directly from a public container in Azure Blob Storage. First, we need to create the extension in our database:
-```
+```sql
+SELECT * FROM create_extension('azure_storage');
+```
+
+Run the following commands to have the database fetch the example CSV files and load them into the
+database tables.
+
+```sql
-- download users and store in table
-\COPY github_users FROM PROGRAM 'curl https://examples.citusdata.com/users.csv' WITH (FORMAT CSV)
+COPY github_users FROM 'https://pgquickstart.blob.core.windows.net/github/users.csv.gz';
-- download events and store in table
-\COPY github_events FROM PROGRAM 'curl https://examples.citusdata.com/events.csv' WITH (FORMAT CSV)
+COPY github_events FROM 'https://pgquickstart.blob.core.windows.net/github/events.csv.gz';
```
+Notice how the extension recognized that the URLs provided to the `COPY` command are from Azure Blob Storage, that the files we pointed to were gzip compressed, and that decompression was handled automatically for us.
+ We can review details of our distributed tables, including their sizes, with the `citus_tables` view:
cost-management-billing Reporting Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/costs/reporting-get-started.md
Title: Get started with Cost Management + Billing reporting - Azure
description: This article helps you to get started with Cost Management + Billing to understand, report on, and analyze your invoiced Microsoft Cloud and AWS costs. Previously updated : 07/26/2022 Last updated : 10/18/2022
While cost analysis offers a rich, interactive experience for analyzing and surf
Need to go beyond the basics with Power BI? The Cost Management connector for Power BI lets you choose the data you need to help you seamlessly integrate costs with your own datasets or easily build out more complete dashboards and reports to meet your organization's needs. For more information about the connector, see [Connect to Cost Management data in Power BI Desktop](/power-bi/connect-data/desktop-connect-azure-cost-management).
-## Usage details and exports
+## Cost details and exports
If you're looking for raw data to automate business processes or integrate with other systems, start by exporting data to a storage account. Scheduled exports allow you to automatically publish your raw cost data to a storage account on a daily, weekly, or monthly basis. With special handling for large datasets, scheduled exports are the most scalable option for building first-class cost data integration. For more information, see [Create and manage exported data](tutorial-export-acm-data.md).
-If you need more fine-grained control over your data requests, the Usage Details API offers a bit more flexibility to pull raw data the way you need it. For more information, see the [Usage Details REST API](/rest/api/consumption/usage-details/list).
- :::image type="content" source="./media/reporting-get-started/exports.png" alt-text="Screenshot showing the list of exports." lightbox="./media/reporting-get-started/exports.png" :::
+If you need more fine-grained control over your data requests, the Cost Details API offers a bit more flexibility to pull raw data the way you need it. For more information, see [Cost Details API](../automate/usage-details-best-practices.md#cost-details-api).
+ ## Invoices and credits Cost analysis is a great tool for reviewing estimated, unbilled charges or for tracking historical cost trends, but it may not show your total billed amount because credits, taxes, and other refunds and charges not available in Cost Management. To estimate your projected bill at the end of the month, start in cost analysis to understand your forecasted costs, then review any available credit or prepaid commitment balance from **Credits** or **Payment methods** for your billing account or billing profile within the Azure portal. To review your final billed charges after the invoice is available, see **Invoices** for your billing account or billing profile.
cost-management-billing Understand Cost Mgt Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/costs/understand-cost-mgt-data.md
The following examples illustrate how billing periods could end:
* Enterprise Agreement (EA) subscriptions ΓÇô If the billing month ends on March 31, estimated charges are updated up to 72 hours later. In this example, by midnight (UTC) April 4. * Pay-as-you-go subscriptions ΓÇô If the billing month ends on May 15, then the estimated charges might get updated up to 72 hours later. In this example, by midnight (UTC) May 19.
-Once cost and usage data becomes available in Cost Management, it will be retained for at least seven years. Only the last 13 months is available from the portal. For historical data before 13 months, please use [Exports](tutorial-export-acm-data.md) or the [UsageDetails API](/rest/api/consumption/usage-details/list).
+Once cost and usage data becomes available in Cost Management, it will be retained for at least seven years. Only the last 13 months is available from the portal. For historical data before 13 months, please use [Exports](tutorial-export-acm-data.md) or the [Cost Details API](../automate/usage-details-best-practices.md#cost-details-api).
### Rerated data
cost-management-billing Subscription Transfer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/subscription-transfer.md
Dev/Test products aren't shown in the following table. Transfers for Dev/Test pr
| EA | MCA - individual | ΓÇó For details, see [Transfer Azure subscription billing ownership for a Microsoft Customer Agreement](mca-request-billing-ownership.md).<br><br> ΓÇó Self-service reservation transfers are supported. | | EA | EA | ΓÇó Transferring between EA enrollments requires a [billing support ticket](https://azure.microsoft.com/support/create-ticket/).<br><br> ΓÇó Self-service reservation transfers are supported.<br><br> ΓÇó Transfer within the same enrollment is the same action as changing the account owner. For details, see [Change EA subscription or account ownership](ea-portal-administration.md#change-azure-subscription-or-account-ownership). | | EA | MCA - Enterprise | ΓÇó Transferring all enrollment products is completed as part of the MCA transition process from an EA. For more information, see [Complete Enterprise Agreement tasks in your billing account for a Microsoft Customer Agreement](mca-enterprise-operations.md).<br><br> ΓÇó If you want to transfer specific products, not all of the products in an enrollment, see [Transfer Azure subscription billing ownership for a Microsoft Customer Agreement](mca-request-billing-ownership.md). - Self-service reservation transfers are supported. |
-| EA | MPA | ΓÇó Transfer is only allowed for direct EA to MPA. A direct EA is signed between Microsoft and an EA customer.<br><br>ΓÇó Only CSP direct bill partners certified as an [Azure Expert Managed Services Provider (MSP)](https://partner.microsoft.com/membership/azure-expert-msp) can request to transfer Azure products for their customers that have a Direct Enterprise Agreement (EA). For more information, see [Get billing ownership of Azure subscriptions to your MPA account](mpa-request-ownership.md). Product transfers are allowed only for customers who have accepted a Microsoft Customer Agreement (MCA) and purchased an Azure plan with the CSP Program.<br><br> ΓÇó Transfer from EA Government to MPA isn't supported.<br><br>ΓÇó There are limitations and restrictions. For more information, see [Transfer EA subscriptions to a CSP partner](transfer-subscriptions-subscribers-csp.md#transfer-ea-subscriptions-to-a-csp-partner). |
+| EA | MPA | ΓÇó Transfer is only allowed for direct EA to MPA. A direct EA is signed between Microsoft and an EA customer.<br><br>ΓÇó Only CSP direct bill partners certified as an [Azure Expert Managed Services Provider (MSP)](https://partner.microsoft.com/membership/azure-expert-msp) can request to transfer Azure products for their customers that have a Direct Enterprise Agreement (EA). For more information, see [Get billing ownership of Azure subscriptions to your MPA account](mpa-request-ownership.md). Product transfers are allowed only for customers who have accepted a Microsoft Customer Agreement (MCA) and purchased an Azure plan with the CSP Program.<br><br> ΓÇó Transfer from EA Government to MPA isn't supported.<br><br>ΓÇó There are limitations and restrictions. For more information, see [Transfer EA subscriptions to a CSP partner](transfer-subscriptions-subscribers-csp.md#transfer-ea-or-mca-enterprise-subscriptions-to-a-csp-partner). |
| MCA - individual | MOSP (PAYG) | ΓÇó For details, see [Transfer billing ownership of an Azure subscription to another account](billing-subscription-transfer.md).<br><br> ΓÇó Reservations don't automatically transfer and transferring them isn't supported. | | MCA - individual | MCA - individual | ΓÇó For details, see [Transfer Azure subscription billing ownership for a Microsoft Customer Agreement](mca-request-billing-ownership.md).<br><br> ΓÇó Self-service reservation transfers are supported. | | MCA - individual | EA | ΓÇó For details, see [Transfer a subscription to an EA](mosp-ea-transfer.md#transfer-the-subscription-to-the-ea).<br><br> ΓÇó Self-service reservation transfers are supported. |
Dev/Test products aren't shown in the following table. Transfers for Dev/Test pr
| MCA - Enterprise | MOSP | ΓÇó Requires a [billing support ticket](https://azure.microsoft.com/support/create-ticket/).<br><br> ΓÇó Reservations don't automatically transfer and transferring them isn't supported. | | MCA - Enterprise | MCA - individual | ΓÇó For details, see [Transfer Azure subscription billing ownership for a Microsoft Customer Agreement](mca-request-billing-ownership.md).<br><br> ΓÇó Self-service reservation transfers are supported. | | MCA - Enterprise | MCA - Enterprise | ΓÇó For details, see [Transfer Azure subscription billing ownership for a Microsoft Customer Agreement](mca-request-billing-ownership.md).<br><br> ΓÇó Self-service reservation transfers are supported. |
-| MCA - Enterprise | MPA | ΓÇó Only CSP direct bill partners certified as an [Azure Expert Managed Services Provider (MSP)](https://partner.microsoft.com/membership/azure-expert-msp) can request to transfer Azure products for their customers that have a Microsoft Customer Agreement with a Microsoft representative. For more information, see [Get billing ownership of Azure subscriptions to your MPA account](mpa-request-ownership.md). Product transfers are allowed only for customers who have accepted a Microsoft Customer Agreement (MCA) and purchased an Azure plan with the CSP Program.<br><br> ΓÇó Self-service reservation transfers are supported.<br><br> ΓÇó There are limitations and restrictions. For more information, see [Transfer EA subscriptions to a CSP partner](transfer-subscriptions-subscribers-csp.md#transfer-ea-subscriptions-to-a-csp-partner). |
+| MCA - Enterprise | MPA | ΓÇó Only CSP direct bill partners certified as an [Azure Expert Managed Services Provider (MSP)](https://partner.microsoft.com/membership/azure-expert-msp) can request to transfer Azure products for their customers that have a Microsoft Customer Agreement with a Microsoft representative. For more information, see [Get billing ownership of Azure subscriptions to your MPA account](mpa-request-ownership.md). Product transfers are allowed only for customers who have accepted a Microsoft Customer Agreement (MCA) and purchased an Azure plan with the CSP Program.<br><br> ΓÇó Self-service reservation transfers are supported.<br><br> ΓÇó There are limitations and restrictions. For more information, see [Transfer EA subscriptions to a CSP partner](transfer-subscriptions-subscribers-csp.md#transfer-ea-or-mca-enterprise-subscriptions-to-a-csp-partner). |
| Previous Azure offer in CSP | Previous Azure offer in CSP | ΓÇó Requires a [billing support ticket](https://azure.microsoft.com/support/create-ticket/).<br><br> ΓÇó Reservations don't automatically transfer and transferring them isn't supported. | | Previous Azure offer in CSP | MPA | For details, see [Transfer a customer's Azure subscriptions to a different CSP (under an Azure plan)](/partner-center/transfer-azure-subscriptions-under-azure-plan). | | MPA | EA | ΓÇó Automatic transfer isn't supported. Any transfer requires resources to move from the existing MPA product manually to a newly created or an existing EA product.<br><br> ΓÇó Use the information in the [Perform resource transfers](#perform-resource-transfers) section. <br><br> ΓÇó Reservations don't automatically transfer and transferring them isn't supported. |
cost-management-billing Transfer Subscriptions Subscribers Csp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/transfer-subscriptions-subscribers-csp.md
Title: Transfer Azure subscriptions between subscribers and CSPs description: Learn how you can transfer Azure subscriptions between subscribers and CSPs. -+ Previously updated : 09/23/2022 Last updated : 10/19/2022 # Transfer Azure subscriptions between subscribers and CSPs
-This article provides high-level steps used to transfer Azure subscriptions to and from Cloud Solution Provider (CSP) partners and their customers. The information here is intended for the Azure subscriber to help them coordinate with their partner. Information that Microsoft partners use for the transfer process is documented at [Learn how to transfer a customer's Azure subscriptions to another partner](/partner-center/switch-azure-subscriptions-to-a-different-partner).
+This article provides high-level steps used to transfer Azure subscriptions to and from Cloud Solution Provider (CSP) partners and their customers. This information is intended for the Azure subscriber to help them coordinate with their partner. Information that Microsoft partners use for the transfer process is documented at [Transfer subscriptions under an Azure plan from one partner to another](azure-plan-subscription-transfer-partners.md).
-Before you start a transfer request, you should download or export any cost and billing information that you want to keep. Billing and utilization information doesn't transfer with the subscription. For more information about exporting cost management data, see [Create and manage exported data](../costs/tutorial-export-acm-data.md). For more information about downloading your invoice and usage data, see [Download or view your Azure billing invoice and daily usage data](download-azure-invoice-daily-usage-date.md).
+Download or export cost and billing information that you want to keep before you start a transfer request. Billing and utilization information doesn't transfer with the subscription. For more information about exporting cost management data, see [Create and manage exported data](../costs/tutorial-export-acm-data.md). For more information about downloading your invoice and usage data, see [Download or view your Azure billing invoice and daily usage data](download-azure-invoice-daily-usage-date.md).
-## Transfer EA subscriptions to a CSP partner
+## Transfer EA or MCA enterprise subscriptions to a CSP partner
-CSP direct bill partners certified as an [Azure Expert Managed Services Provider (MSP)](https://partner.microsoft.com/membership/azure-expert-msp) can request to transfer Azure subscriptions for their customers that have a Direct Enterprise Agreement (EA). Subscription transfers are allowed only for customers who have accepted a Microsoft Customer Agreement (MCA) and purchased an Azure plan with the CSP Program.
+CSP direct bill partners certified as an [Azure Expert Managed Services Provider (MSP)](https://partner.microsoft.com/membership/azure-expert-msp) can request to transfer Azure subscriptions for their customers. The customers must have a Direct Enterprise Agreement (EA) or a Microsoft account team (Microsoft Customer Agreement enterprise). Subscription transfers are allowed only for customers who have accepted an MCA and purchased an Azure plan with the CSP Program.
When the request is approved, the CSP can then provide a combined invoice to their customers. To learn more about CSPs transferring subscriptions, see [Get billing ownership of Azure subscriptions for your MPA account](mpa-request-ownership.md). >[!IMPORTANT]
-> After transfering an EA subscription to a CSP partner, any quota increases previously applied to the EA subscription will be reset to the default value. If additional quota is required after the subscription transfer, have your CSP provider submit a [quota increase](../../azure-portal/supportability/regional-quota-requests.md) request.
+> After transfering an EA or MCA enterprise subscription to a CSP partner, any quota increases previously applied to the EA subscription will be reset to the default value. If additional quota is required after the subscription transfer, have your CSP provider submit a [quota increase](../../azure-portal/supportability/regional-quota-requests.md) request.
## Other subscription transfers to a CSP partner
-To transfer any other Azure subscriptions to a CSP partner, the subscriber needs to move resources from source subscriptions to CSP subscriptions. Use the following guidance to move resources between subscriptions.
+To transfer any other Azure subscriptions that aren't supported for billing transfer to MPA as documented in the [Azure subscription transfer hub](subscription-transfer.md#product-transfer-support) article, the subscriber needs to move resources from source subscriptions to CSP subscriptions. Use the following guidance to move resources between subscriptions.
1. Establish a [reseller relationship](/partner-center/request-a-relationship-with-a-customer) with the customer. Review the [CSP Regional Authorization Overview](/partner-center/regional-authorization-overview) to ensure both customer and Partner tenant are within the same authorized regions. 1. Work with your CSP partner to create target Azure CSP subscriptions.
To transfer any other Azure subscriptions to a CSP partner, the subscriber needs
> [!IMPORTANT] > - Moving Azure resources between subscriptions might result in service downtime, based on resources in the subscriptions.
-## Transfer CSP subscription to other offer
+## Transfer CSP subscription to other offers
-To transfer any other subscriptions from a CSP Partner to any other Azure offer, the subscriber needs to move resources between source CSP subscriptions and target subscriptions. This is work done by a partner and a customer - it is not work done by a Microsoft representative.
+It's possible to transfer other subscriptions from a CSP partner to other Azure offers that aren't supported for billing transfer from MPA, as documented in the [Azure subscription transfer hub](subscription-transfer.md#product-transfer-support) article. However, the subscriber needs to manually move resources between source CSP subscriptions and target subscriptions. All of the work is done by a partner and a customer; it isn't done by a Microsoft representative.
1. The customer creates target Azure subscriptions. 1. Ensure that the source and target subscriptions are in the same Azure Active Directory (Azure AD) tenant. For more information about changing an Azure AD tenant, see [Associate or add an Azure subscription to your Azure Active Directory tenant](../../active-directory/fundamentals/active-directory-how-subscriptions-associated-directory.md).
- Note that the change directory option isn't supported for the CSP subscription. For example, you're transferring from a CSP to a pay-as-you-go subscription. You need change the directory of the pay-as-you-go subscription to match the directory.
+ The change directory option isn't supported for the CSP subscription. For example, you're transferring from a CSP to a pay-as-you-go subscription. You need to change the directory of the pay-as-you-go subscription to match the directory.
> [!IMPORTANT] > - When you associate a subscription to a different directory, users that have roles assigned using [Azure RBAC](../../role-based-access-control/role-assignments-portal.md) lose their access. Classic subscription administrators, including Service Administrator and Co-Administrators, also lose access.
To transfer any other subscriptions from a CSP Partner to any other Azure offer,
> - Moving Azure resources between subscriptions might result in service downtime, based on resources in the subscriptions. ## Next steps+ - [Get billing ownership of Azure subscriptions for your MPA account](mpa-request-ownership.md). - Read about how to [Manage accounts and subscriptions with Azure Billing](../index.yml).
cost-management-billing Discount Application https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/savings-plan/discount-application.md
The benefit is first applied to the product that has the greatest savings plan d
A savings plan discount only applies to resources associated with Enterprise Agreement, Microsoft Partner Agreement, and Microsoft Customer Agreements. Resources that run in a subscription with other offer types don't receive the discount.
+## Savings plans and VM reserved instances
+
+If you have both dynamic and stable workloads, you likely will have both Azure savings plans and VM reserved instances. Since reservation benefits are more restrictive than savings plans, and usually have greater discounts, Azure applies reservation benefits first.
+
+For example, VM *X* has the highest savings plan discount of all savings plan-eligible resources you used in a particular hour. If you have an available VM reservation that's compatible with *X*, the reservation is consumed instead of the savings plan. The approach reduces the possibility of waste and it ensures that you're always getting the best benefit.
+
+## Savings plan and Azure consumption discounts
+
+In most situations, an Azure savings plan provides the best combination of flexibility and pricing. If you're operating under an Azure consumption discount (ACD), in rare occasions, you may have some pay-as-you-go rates that are lower than the savings plan rate. In these cases, Azure uses the lower of the two rates.
+
+For example, VM *X* has the highest savings plan discount of all savings plan-eligible resources you used in a particular hour. If you have an ACD rate that is lower than the savings plan rate, the ACD rate is applied to your hourly usage. The result is decremented from your hourly commitment. The approach ensures you always get the best available rate.
+ ## Benefit allocation window With an Azure savings plan, you get significant and flexible discounts off your pay-as-you-go rates in exchange for a one or three-year spend commitment. When you use an Azure resource, usage details are periodically reported to the Azure billing system. The billing system is tasked with quickly applying your savings plan in the most beneficial manner possible. The plan benefits are applied to usage that has the largest discount percentage first. For the application to be most effective, the billing system needs visibility to your usage in a timely manner.
-The Azure savings plan benefit application operates under a best fit benefit model. When your benefit application is evaluated for a given hour, the billing system incorporates usage arriving up to 48 hours after the given hour. During the sliding 48-hour window, you may see changes to charges, including the possibility of savings plan utilization that's greater than 100%. This situation happens because the system is constantly working to provide the best possible benefit application. Keep the 48-hour window in mind when you inspect your usage.
+The Azure savings plan benefit application operates under a best fit benefit model. When your benefit application is evaluated for a given hour, the billing system incorporates usage arriving up to 48 hours after the given hour. During the sliding 48-hour window, you may see changes to charges, including the possibility of savings plan utilization that's greater than 100%. The situation happens because the system is constantly working to provide the best possible benefit application. Keep the 48-hour window in mind when you inspect your usage.
+
+## Utilize multiple savings plans
+
+Azure's intent is to always maximize the benefit you receive from savings plans. When you have multiple savings plans with different term lengths, Azure applies the benefits from the three-year plan first. The intent is to ensure that the best rates are applied first. If you have multiple savings plans that have different benefit scopes, Azure applies benefits from the more restrictively scoped plan first. The intent is to reduce the possibility of waste.
## When the savings plan term expires
data-factory Continuous Integration Delivery Improvements https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/continuous-integration-delivery-improvements.md
npm run build export C:\DataFactories\DevDataFactory /subscriptions/xxxxxxxx-xxx
- `FactoryId` is a mandatory field that represents the Data Factory resource ID in the format `/subscriptions/<subId>/resourceGroups/<rgName>/providers/Microsoft.DataFactory/factories/<dfName>`. - `OutputFolder` is an optional parameter that specifies the relative path to save the generated ARM template.
-If you would like to stop/ start only the updated triggers, instead use the below command (currently this capability is in preview and the functionality will be transparently merged into the above command during GA):
-```dos
-npm run build-preview export C:\DataFactories\DevDataFactory /subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/testResourceGroup/providers/Microsoft.DataFactory/factories/DevDataFactory ArmTemplateOutput
-```
-- `RootFolder` is a mandatory field that represents where the Data Factory resources are located.-- `FactoryId` is a mandatory field that represents the Data Factory resource ID in the format `/subscriptions/<subId>/resourceGroups/<rgName>/providers/Microsoft.DataFactory/factories/<dfName>`.-- `OutputFolder` is an optional parameter that specifies the relative path to save the generated ARM template.
+The ability to stop/start only the updated triggers is now generally available and is merged into the command shown above.
+ > [!NOTE] > The ARM template generated isn't published to the live version of the factory. Deployment should be done by using a CI/CD pipeline. - ### Validate Run `npm run build validate <rootFolder> <factoryId>` to validate all the resources of a given folder. Here's an example:
Follow these steps to get started:
{ "scripts":{ "build":"node node_modules/@microsoft/azure-data-factory-utilities/lib/index",
- "build-preview":"node node_modules/@microsoft/azure-data-factory-utilities/lib/index --preview"
}, "dependencies":{ "@microsoft/azure-data-factory-utilities":"^1.0.0"
data-factory Solution Templates Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/solution-templates-introduction.md
Previously updated : 09/22/2022 Last updated : 10/18/2022 # Templates
After checking the **My templates** box in the **Template gallery** page, you ca
> [!NOTE] > To use the My Templates feature, you have to enable GIT integration. Both Azure DevOps GIT and GitHub are supported.+
+### Community Templates
+
+Community members are now welcome to contribute to the Template Gallery. You will be able to see these templates when you filter by **Contributor**.
++
+To learn how you can contribute to the template gallery, please read our [introduction](https://techcommunity.microsoft.com/t5/azure-data-factory-blog/introducing-azure-data-factory-community-templates/ba-p/3650989) and [instructions](https://github.com/Azure/Azure-DataFactory/tree/main/community%20templates).
+
+> [!NOTE]
+> Community template submissions will be reviewed by the Azure Data Factory team. If your submission doesn't meet our guidelines or quality checks, we won't merge your template into the gallery.
++
databox-online Azure Stack Edge Gpu Configure Gpu Modules https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-gpu-configure-gpu-modules.md
Previously updated : 03/12/2021 Last updated : 10/19/2022 # Configure and run a module on GPU on Azure Stack Edge Pro device [!INCLUDE [applies-to-GPU-and-pro-r-skus](../../includes/azure-stack-edge-applies-to-gpu-pro-r-sku.md)]
+> [!NOTE]
+> We strongly recommend that you deploy the latest IoT Edge version in a Linux VM. The managed IoT Edge on Azure Stack Edge uses an older version of IoT Edge runtime that doesn't have the latest features and patches. For instructions, see how to [Deploy an Ubuntu VM](azure-stack-edge-gpu-deploy-iot-edge-linux-vm.md). For more information on other supported Linux distributions that can run IoT Edge, see [Azure IoT Edge supported systems – Container engines](../iot-edge/support.md#linux-containers).
+ Your Azure Stack Edge Pro device contains one or more Graphics Processing Unit (GPU). GPUs are a popular choice for AI computations as they offer parallel processing capabilities and are faster at image rendering than Central Processing Units (CPUs). For more information on the GPU contained in your Azure Stack Edge Pro device, go to [Azure Stack Edge Pro device technical specifications](azure-stack-edge-gpu-technical-specifications-compliance.md). This article describes how to configure and run a module on the GPU on your Azure Stack Edge Pro device. In this article, you will use a publicly available container module **Digits** written for Nvidia T4 GPUs. This procedure can be used to configure any other modules published by Nvidia for these GPUs. - ## Prerequisites Before you begin, make sure that:
databox-online Azure Stack Edge Gpu Deploy Compute Module Simple https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-gpu-deploy-compute-module-simple.md
Previously updated : 02/22/2021 Last updated : 10/19/2022 # Customer intent: As an IT admin, I need to understand how to configure compute on Azure Stack Edge Pro so I can use it to transform the data before sending it to Azure.
[!INCLUDE [applies-to-GPU-and-pro-r-skus](../../includes/azure-stack-edge-applies-to-gpu-pro-r-sku.md)]
+> [!NOTE]
+> We strongly recommend that you deploy the latest IoT Edge version in a Linux VM. The managed IoT Edge on Azure Stack Edge uses an older version of IoT Edge runtime that doesn't have the latest features and patches. For instructions, see how to [Deploy an Ubuntu VM](azure-stack-edge-gpu-deploy-iot-edge-linux-vm.md). For more information on other supported Linux distributions that can run IoT Edge, see [Azure IoT Edge supported systems – Container engines](../iot-edge/support.md#linux-containers).
+ This tutorial describes how to run a compute workload using an IoT Edge module on your Azure Stack Edge Pro GPU device. After you configure the compute, the device will transform the data before sending it to Azure. This procedure can take around 10 to 15 minutes to complete.
databox-online Azure Stack Edge Gpu Deploy Sample Module Marketplace https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-gpu-deploy-sample-module-marketplace.md
Previously updated : 02/22/2021 Last updated : 10/19/2022
[!INCLUDE [applies-to-GPU-and-pro-r-skus](../../includes/azure-stack-edge-applies-to-gpu-pro-r-sku.md)]
+> [!NOTE]
+> We strongly recommend that you deploy the latest IoT Edge version in a Linux VM. The managed IoT Edge on Azure Stack Edge uses an older version of IoT Edge runtime that doesn't have the latest features and patches. For instructions, see how to [Deploy an Ubuntu VM](azure-stack-edge-gpu-deploy-iot-edge-linux-vm.md). For more information on other supported Linux distributions that can run IoT Edge, see [Azure IoT Edge supported systems – Container engines](../iot-edge/support.md#linux-containers).
+ This article describes how to deploy a Graphics Processing Unit (GPU) enabled IoT Edge module from Azure Marketplace on your Azure Stack Edge Pro device. In this article, you learn how to:
databox-online Azure Stack Edge Gpu Deploy Sample Module https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-gpu-deploy-sample-module.md
Previously updated : 06/28/2022 Last updated : 10/19/2022
[!INCLUDE [applies-to-gpu-pro-pro2-and-pro-r-skus](../../includes/azure-stack-edge-applies-to-gpu-pro-pro-2-pro-r-sku.md)]
+> [!NOTE]
+> We strongly recommend that you deploy the latest IoT Edge version in a Linux VM. The managed IoT Edge on Azure Stack Edge uses an older version of IoT Edge runtime that doesn't have the latest features and patches. For instructions, see how to [Deploy an Ubuntu VM](azure-stack-edge-gpu-deploy-iot-edge-linux-vm.md). For more information on other supported Linux distributions that can run IoT Edge, see [Azure IoT Edge supported systems – Container engines](../iot-edge/support.md#linux-containers).
+ This article describes how to deploy a GPU enabled IoT Edge module on your Azure Stack Edge Pro GPU device. In this article, you learn how to:
databox-online Azure Stack Edge Gpu Modify Fpga Modules Gpu https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-gpu-modify-fpga-modules-gpu.md
Previously updated : 02/22/2021 Last updated : 10/18/2021
[!INCLUDE [applies-to-GPU-and-pro-r-skus](../../includes/azure-stack-edge-applies-to-gpu-pro-r-sku.md)]
+> [!NOTE]
+> We strongly recommend that you deploy the latest IoT Edge version in a Linux VM. The managed IoT Edge on Azure Stack Edge uses an older version of IoT Edge runtime that doesn't have the latest features and patches. For instructions, see how to [Deploy an Ubuntu VM](azure-stack-edge-gpu-deploy-iot-edge-linux-vm.md). For more information on other supported Linux distributions that can run IoT Edge, see [Azure IoT Edge supported systems – Container engines](../iot-edge/support.md#linux-containers).
+ This article details the changes needed for a docker-based IoT Edge module that runs on Azure Stack Edge Pro FPGA so it can run on a Kubernetes-based IoT Edge platform on Azure Stack Edge Pro GPU device. ## About IoT Edge implementation
ddos-protection Ddos Protection Sku Comparison https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ddos-protection/ddos-protection-sku-comparison.md
Title: 'Azure DDoS Protection SKU Comparison'
+ Title: 'About Azure DDoS Protection SKU Comparison'
description: Learn about the available SKUs for Azure DDoS Protection. Previously updated : 10/13/2022 Last updated : 10/19/2022
devops-project Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devops-project/overview.md
# Overview of DevOps Starter
+> [!IMPORTANT]
+> DevOps Starter will be retired on March 31, 2023. [Learn more](/azure/devops-project/retirement-and-migration).
+ DevOps Starter makes it easy to get started on Azure using either GitHub actions or Azure DevOps. It helps you launch your favorite app on the Azure service of your choice in just a few quick steps from the Azure portal. DevOps Starter sets up everything you need for developing, deploying, and monitoring your application. You can use the DevOps Starter dashboard to monitor code commits, builds, and deployments, all from a single view in the Azure portal.
devtest-labs Automate Add Lab User https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/automate-add-lab-user.md
The role definition ID is the string identifier for the existing role definition
The subscription ID is obtained by using `subscription().subscriptionId` template function.
-You need to get the role definition for the `DevTest Labs User` built-in role. To get the GUID for the [DevTest Labs User](../role-based-access-control/built-in-roles.md#devtest-labs-user) role, you can use the [Role Assignments REST API](/rest/api/authorization/roleassignments) or the [Get-AzRoleDefinition](/powershell/module/az.resources/get-azroledefinition) cmdlet.
+You need to get the role definition for the `DevTest Labs User` built-in role. To get the GUID for the [DevTest Labs User](../role-based-access-control/built-in-roles.md#devtest-labs-user) role, you can use the [Role Assignments REST API](/rest/api/authorization/role-assignments) or the [Get-AzRoleDefinition](/powershell/module/az.resources/get-azroledefinition) cmdlet.
```powershell
$dtlUserRoleDefId = (Get-AzRoleDefinition -Name "DevTest Labs User").Id
```
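If you're scripting with the Azure CLI instead of Azure PowerShell, the same GUID can be looked up with `az role definition list`; a minimal sketch (the `--query` expression is just one way to pull out the `name` property, which holds the role definition GUID):

```azurecli
# The "name" property of the built-in role definition is its GUID
az role definition list --name "DevTest Labs User" --query "[0].name" --output tsv
```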
education-hub Access Education Hub https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/education-hub/access-education-hub.md
Previously updated : 06/30/2020 Last updated : 10/19/2022
To access the Azure Education Hub, you should have already received an email not
> [!IMPORTANT] > Confirm that you are signing-in with an Organizational/Work Account (like your institution's @domain.edu). If so, select this option on the left-side of the window first. This will take you to a different login screen.
- :::image type="content" source="media/access-education-hub/sign-in.png" alt-text="Organization sign-in dialog box." border="false":::
+ :::image type="content" source="media/access-education-hub/modern-sign-in.png" alt-text="Organization sign-in dialog box." border="false":::
1. After you're signed in, you'll be directed to the Azure portal. To find the Education Hub, go to the **All Services** menu and search for **Education**. The first time you log in, the Get Started page is displayed.
expressroute Expressroute Locations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/expressroute-locations.md
If you are remote and do not have fiber connectivity or you want to explore othe
| **[Data Foundry](https://www.datafoundry.com/services/cloud-connect)** | Megaport | Dallas | | **[Epsilon Telecommunications Limited](https://www.epsilontel.com/solutions/cloud-connect/)** | Equinix | London, Singapore, Washington DC | | **[Eurofiber](https://eurofiber.nl/microsoft-azure/)** | Equinix | Amsterdam |
-| **[Exponential E](https://www.exponential-e.com/services/connectivity-services/cloud-connect-exchange)** | Equinix | London |
+| **[Exponential E](https://www.exponential-e.com/services/connectivity-services/)** | Equinix | London |
| **[Fastweb S.p.A](https://www.fastweb.it/grandi-aziende/connessione-voce-e-wifi/scheda-prodotto/rete-privata-virtuale/)** | Equinix | Amsterdam | | **[Fibrenoire](https://www.fibrenoire.ca/en/cloudextn)** | Megaport | Quebec City | | **[FPT Telecom International](https://cloudconnect.vn/en)** |Equinix |Singapore|
firewall Tutorial Firewall Deploy Portal Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/tutorial-firewall-deploy-portal-policy.md
Previously updated : 08/26/2021 Last updated : 10/18/2022 #Customer intent: As an administrator new to this service, I want to control outbound network access from resources located in an Azure subnet.
Network traffic is subjected to the configured firewall rules when you route you
For this tutorial, you create a simplified single VNet with two subnets for easy deployment.
-For production deployments, a [hub and spoke model](/azure/architecture/reference-architectures/hybrid-networking/hub-spoke) is recommended, where the firewall is in its own VNet. The workload servers are in peered VNets in the same region with one or more subnets.
- * **AzureFirewallSubnet** - the firewall is in this subnet. * **Workload-SN** - the workload server is in this subnet. This subnet's network traffic goes through the firewall. ![Tutorial network infrastructure](media/tutorial-firewall-deploy-portal/tutorial-network.png)
+For production deployments, a [hub and spoke model](/azure/architecture/reference-architectures/hybrid-networking/hub-spoke) is recommended, where the firewall is in its own VNet. The workload servers are in peered VNets in the same region with one or more subnets.
+ In this tutorial, you learn how to: > [!div class="checklist"]
First, create a resource group to contain the resources needed to deploy the fir
The resource group contains all the resources for the tutorial. 1. Sign in to the Azure portal at [https://portal.azure.com](https://portal.azure.com).
-2. On the Azure portal menu, select **Resource groups** or search for and select *Resource groups* from any page. Then select **Add**.
-4. For **Subscription**, select your subscription.
-1. For **Resource group name**, enter *Test-FW-RG*.
-1. For **Region**, select a region. All other resources that you create must be in the same region.
+1. On the Azure portal menu, select **Resource groups** or search for and select *Resource groups* from any page, then select **Add**. Enter or select the following values:
+
+ | Setting | Value |
+ | -- | |
+ | Subscription | Select your Azure subscription. |
+ | Resource group | Enter *Test-FW-RG*. |
+ | Region | Select a region. All other resources that you create must be in the same region. |
+
1. Select **Review + create**. 1. Select **Create**.
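If you'd rather script this step, an equivalent Azure CLI sketch (the `eastus` location below is only an example; substitute the region you plan to use for every resource in this tutorial):

```azurecli
# Create the resource group that holds all tutorial resources
az group create --name Test-FW-RG --location eastus
```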
This VNet will have two subnets.
1. On the Azure portal menu or from the **Home** page, select **Create a resource**. 1. Select **Networking**. 1. Search for **Virtual network** and select it.
-1. Select **Create**.
-1. For **Subscription**, select your subscription.
-1. For **Resource group**, select **Test-FW-RG**.
-1. For **Name**, type **Test-FW-VN**.
-1. For **Region**, select the same location that you used previously.
+1. Select **Create**, then enter or select the following values:
+
+ | Setting | Value |
+ | -- | |
+ | Subscription | Select your Azure subscription. |
+ | Resource group | Select **Test-FW-RG**. |
+ | Name | Enter *Test-FW-VN*. |
+ | Region | Select the same location that you used previously. |
+ 1. Select **Next: IP addresses**. 1. For **IPv4 Address space**, accept the default **10.0.0.0/16**. 1. Under **Subnet**, select **default**.
This VNet will have two subnets.
Next, create a subnet for the workload server. 1. Select **Add subnet**.
-4. For **Subnet name**, type **Workload-SN**.
-5. For **Subnet address range**, type **10.0.2.0/24**.
-6. Select **Add**.
-7. Select **Review + create**.
-8. Select **Create**.
+1. For **Subnet name**, type **Workload-SN**.
+1. For **Subnet address range**, type **10.0.2.0/24**.
+1. Select **Add**.
+1. Select **Review + create**.
+1. Select **Create**.
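The same network can also be created from the Azure CLI. The sketch below assumes an address range of 10.0.1.0/26 for the firewall subnet; any range inside 10.0.0.0/16 that doesn't overlap **Workload-SN** works:

```azurecli
# Create the virtual network with the firewall subnet
az network vnet create \
  --resource-group Test-FW-RG \
  --name Test-FW-VN \
  --address-prefixes 10.0.0.0/16 \
  --subnet-name AzureFirewallSubnet \
  --subnet-prefixes 10.0.1.0/26

# Add the workload subnet
az network vnet subnet create \
  --resource-group Test-FW-RG \
  --vnet-name Test-FW-VN \
  --name Workload-SN \
  --address-prefixes 10.0.2.0/24
```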
### Create a virtual machine Now create the workload virtual machine, and place it in the **Workload-SN** subnet. 1. On the Azure portal menu or from the **Home** page, select **Create a resource**.
-2. Select **Windows Server 2016 Datacenter**.
-4. Enter these values for the virtual machine:
-
- |Setting |Value |
- |||
- |Resource group |**Test-FW-RG**|
- |Virtual machine name |**Srv-Work**|
- |Region |Same as previous|
- |Image|Windows Server 2016 Datacenter|
- |Administrator user name |Type a user name|
- |Password |Type a password|
-
-4. Under **Inbound port rules**, **Public inbound ports**, select **None**.
-6. Accept the other defaults and select **Next: Disks**.
-7. Accept the disk defaults and select **Next: Networking**.
-8. Make sure that **Test-FW-VN** is selected for the virtual network and the subnet is **Workload-SN**.
-9. For **Public IP**, select **None**.
-11. Accept the other defaults and select **Next: Management**.
-12. Select **Disable** to disable boot diagnostics. Accept the other defaults and select **Review + create**.
-13. Review the settings on the summary page, and then select **Create**.
+1. Select **Windows Server 2019 Datacenter**.
+1. Enter or select these values for the virtual machine:
+
+ | Setting | Value |
+ | - | -- |
+ | Subscription | Select your Azure subscription. |
+ | Resource group | Select **Test-FW-RG**. |
+ | Virtual machine name | Enter *Srv-Work*.|
+ | Region | Select the same location that you used previously. |
+ | Username | Enter a username. |
+ | Password | Enter a password. |
+
+1. Under **Inbound port rules**, **Public inbound ports**, select **None**.
+1. Accept the other defaults and select **Next: Disks**.
+1. Accept the disk defaults and select **Next: Networking**.
+1. Make sure that **Test-FW-VN** is selected for the virtual network and the subnet is **Workload-SN**.
+1. For **Public IP**, select **None**.
+1. Accept the other defaults and select **Next: Management**.
+1. Select **Disable** to disable boot diagnostics. Accept the other defaults and select **Review + create**.
+1. Review the settings on the summary page, and then select **Create**.
1. After the deployment completes, select the **Srv-Work** resource and note the private IP address for later use. ## Deploy the firewall and policy
Deploy the firewall into the VNet.
3. Select **Firewall** and then select **Create**. 4. On the **Create a Firewall** page, use the following table to configure the firewall:
- |Setting |Value |
- |||
- |Subscription |\<your subscription\>|
- |Resource group |**Test-FW-RG** |
- |Name |**Test-FW01**|
- |Region |Select the same location that you used previously|
- |Firewall management|**Use a Firewall Policy to manage this firewall**|
- |Firewall policy|**Add new**:<br>**fw-test-pol**<br>your selected region
- |Choose a virtual network |**Use existing**: **Test-FW-VN**|
- |Public IP address |**Add new**:<br>**Name**: **fw-pip**|
+ | Setting | Value |
+ | - | -- |
+ | Subscription | Select your Azure subscription. |
+ | Resource group | Select **Test-FW-RG**. |
+ | Name | Enter *Test-FW01*. |
+ | Region | Select the same location that you used previously. |
+ | Firewall management | Select **Use a Firewall Policy to manage this firewall**. |
+   | Firewall policy | Select **Add new**, and enter *fw-test-pol*. <br> Select the same region that you used previously. |
+ | Choose a virtual network | Select **Use existing**, and then select **Test-FW-VN**. |
+ | Public IP address | Select **Add new**, and enter *fw-pip* for the **Name**. |
5. Accept the other default values, then select **Review + create**. 6. Review the summary, and then select **Create** to create the firewall.
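For an unattended deployment, the firewall, its policy, and the public IP can also be created with the Azure CLI. This is a sketch only; it assumes the `azure-firewall` CLI extension is installed and that the virtual network from the previous section already exists:

```azurecli
# The firewall commands live in the azure-firewall extension
az extension add --name azure-firewall

# Create the firewall policy and a Standard public IP address
az network firewall policy create --resource-group Test-FW-RG --name fw-test-pol
az network public-ip create --resource-group Test-FW-RG --name fw-pip --sku Standard --allocation-method Static

# Create the firewall, attach the policy, and bind it to the VNet and public IP
az network firewall create --resource-group Test-FW-RG --name Test-FW01 --firewall-policy fw-test-pol
az network firewall ip-config create \
  --resource-group Test-FW-RG \
  --firewall-name Test-FW01 \
  --name fw-config \
  --vnet-name Test-FW-VN \
  --public-ip-address fw-pip
```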
Deploy the firewall into the VNet.
For the **Workload-SN** subnet, configure the outbound default route to go through the firewall. 1. On the Azure portal menu, select **All services** or search for and select *All services* from any page.
-2. Under **Networking**, select **Route tables**.
-3. Select **Add**.
-5. For **Subscription**, select your subscription.
-6. For **Resource group**, select **Test-FW-RG**.
-7. For **Region**, select the same location that you used previously.
-4. For **Name**, type **Firewall-route**.
+1. Under **Networking**, select **Route tables**.
+1. Select **Create**, then enter or select the following values:
+
+ | Setting | Value |
+ | - | -- |
+ | Subscription | Select your Azure subscription. |
+ | Resource group | Select **Test-FW-RG**. |
+ | Region | Select the same location that you used previously. |
+ | Name | Enter *Firewall-route*. |
+ 1. Select **Review + create**. 1. Select **Create**. After deployment completes, select **Go to resource**.
-1. On the Firewall-route page, select **Subnets** and then select **Associate**.
+1. On the **Firewall-route** page, select **Subnets** and then select **Associate**.
1. Select **Virtual network** > **Test-FW-VN**. 1. For **Subnet**, select **Workload-SN**. Make sure that you select only the **Workload-SN** subnet for this route, otherwise your firewall won't work correctly.
-13. Select **OK**.
-14. Select **Routes** and then select **Add**.
-15. For **Route name**, type **fw-dg**.
-16. For **Address prefix**, type **0.0.0.0/0**.
-17. For **Next hop type**, select **Virtual appliance**.
-
+1. Select **OK**.
+1. Select **Routes** and then select **Add**.
+1. For **Route name**, enter *fw-dg*.
+1. For **Address prefix**, enter *0.0.0.0/0*.
+1. For **Next hop type**, select **Virtual appliance**.
Azure Firewall is actually a managed service, but virtual appliance works in this situation.
-18. For **Next hop address**, type the private IP address for the firewall that you noted previously.
-19. Select **OK**.
+1. For **Next hop address**, enter the private IP address for the firewall that you noted previously.
+1. Select **OK**.
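A CLI sketch of the same routing setup; replace `<firewall-private-ip>` with the firewall private IP address you noted earlier:

```azurecli
# Create the route table and associate it with the Workload-SN subnet
az network route-table create --resource-group Test-FW-RG --name Firewall-route
az network vnet subnet update \
  --resource-group Test-FW-RG \
  --vnet-name Test-FW-VN \
  --name Workload-SN \
  --route-table Firewall-route

# Send all outbound traffic (0.0.0.0/0) to the firewall's private IP address
az network route-table route create \
  --resource-group Test-FW-RG \
  --route-table-name Firewall-route \
  --name fw-dg \
  --address-prefix 0.0.0.0/0 \
  --next-hop-type VirtualAppliance \
  --next-hop-ip-address <firewall-private-ip>
```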
## Configure an application rule This is the application rule that allows outbound access to `www.google.com`.
-1. Open the **Test-FW-RG**, and select the **fw-test-pol** firewall policy.
+1. Open the **Test-FW-RG** resource group, and select the **fw-test-pol** firewall policy.
1. Select **Application rules**. 1. Select **Add a rule collection**.
-1. For **Name**, type **App-Coll01**.
-1. For **Priority**, type **200**.
+1. For **Name**, enter *App-Coll01*.
+1. For **Priority**, enter *200*.
1. For **Rule collection action**, select **Allow**.
-1. Under **Rules**, for **Name**, type **Allow-Google**.
+1. Under **Rules**, for **Name**, enter *Allow-Google*.
1. For **Source type**, select **IP address**.
-1. For **Source**, type **10.0.2.0/24**.
-1. For **Protocol:port**, type **http, https**.
+1. For **Source**, enter *10.0.2.0/24*.
+1. For **Protocol:port**, enter *http, https*.
1. For **Destination Type**, select **FQDN**.
-1. For **Destination**, type **`www.google.com`**
+1. For **Destination**, enter *`www.google.com`*.
1. Select **Add**. Azure Firewall includes a built-in rule collection for infrastructure FQDNs that are allowed by default. These FQDNs are specific for the platform and can't be used for other purposes. For more information, see [Infrastructure FQDNs](infrastructure-fqdns.md).
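Firewall policy rules can also be scripted. The following is an illustrative sketch only: it assumes the `azure-firewall` CLI extension and that an application rule collection group (named `DefaultApplicationRuleCollectionGroup` here, matching what the portal creates) already exists in the `fw-test-pol` policy, so verify both before running it:

```azurecli
# Add an application rule collection that allows HTTP/HTTPS to www.google.com from the workload subnet
az network firewall policy rule-collection-group collection add-filter-collection \
  --resource-group Test-FW-RG \
  --policy-name fw-test-pol \
  --rule-collection-group-name DefaultApplicationRuleCollectionGroup \
  --name App-Coll01 \
  --collection-priority 200 \
  --action Allow \
  --rule-name Allow-Google \
  --rule-type ApplicationRule \
  --protocols Http=80 Https=443 \
  --source-addresses 10.0.2.0/24 \
  --target-fqdns www.google.com
```

The network and DNAT rule collections in the next sections follow the same pattern with `add-filter-collection` (rule type `NetworkRule`) and `add-nat-collection`, respectively.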
This is the network rule that allows outbound access to two IP addresses at port
1. Select **Network rules**. 2. Select **Add a rule collection**.
-3. For **Name**, type **Net-Coll01**.
-4. For **Priority**, type **200**.
+3. For **Name**, enter *Net-Coll01*.
+4. For **Priority**, enter *200*.
5. For **Rule collection action**, select **Allow**. 1. For **Rule collection group**, select **DefaultNetworkRuleCollectionGroup**.
-1. Under **Rules**, for **Name**, type **Allow-DNS**.
+1. Under **Rules**, for **Name**, enter *Allow-DNS*.
1. For **Source type**, select **IP Address**.
-1. For **Source**, type **10.0.2.0/24**.
+1. For **Source**, enter *10.0.2.0/24*.
1. For **Protocol**, select **UDP**.
-1. For **Destination Ports**, type **53**.
+1. For **Destination Ports**, enter *53*.
1. For **Destination type** select **IP address**.
-1. For **Destination**, type **209.244.0.3,209.244.0.4**.<br>These are public DNS servers operated by CenturyLink.
+1. For **Destination**, enter *209.244.0.3,209.244.0.4*.<br>These are public DNS servers operated by CenturyLink.
2. Select **Add**. ## Configure a DNAT rule
-This rule allows you to connect a remote desktop to the Srv-Work virtual machine through the firewall.
+This rule allows you to connect a remote desktop to the **Srv-Work** virtual machine through the firewall.
1. Select the **DNAT rules**. 2. Select **Add a rule collection**.
-3. For **Name**, type **rdp**.
-1. For **Priority**, type **200**.
+3. For **Name**, enter *rdp*.
+1. For **Priority**, enter *200*.
1. For **Rule collection group**, select **DefaultDnatRuleCollectionGroup**.
-1. Under **Rules**, for **Name**, type **rdp-nat**.
+1. Under **Rules**, for **Name**, enter *rdp-nat*.
1. For **Source type**, select **IP address**.
-1. For **Source**, type **\***.
+1. For **Source**, enter *\**.
1. For **Protocol**, select **TCP**.
-1. For **Destination Ports**, type **3389**.
+1. For **Destination Ports**, enter *3389*.
1. For **Destination Type**, select **IP Address**.
-1. For **Destination**, type the firewall public IP address.
-1. For **Translated address**, type the **Srv-work** private IP address.
-1. For **Translated port**, type **3389**.
+1. For **Destination**, enter the firewall public IP address.
+1. For **Translated address**, enter the **Srv-Work** private IP address.
+1. For **Translated port**, enter *3389*.
1. Select **Add**.
For testing purposes in this tutorial, configure the server's primary and second
2. Select the network interface for the **Srv-Work** virtual machine. 3. Under **Settings**, select **DNS servers**. 4. Under **DNS servers**, select **Custom**.
-5. Type **209.244.0.3** in the **Add DNS server** text box, and **209.244.0.4** in the next text box.
+5. Enter *209.244.0.3* in the **Add DNS server** text box, and *209.244.0.4* in the next text box.
6. Select **Save**. 7. Restart the **Srv-Work** virtual machine.
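The DNS change can also be scripted; a sketch that assumes a NIC named `srv-work-nic` (substitute the actual NIC name shown on the VM's networking page):

```azurecli
# Point the workload VM's NIC at the DNS servers used in this tutorial
az network nic update \
  --resource-group Test-FW-RG \
  --name srv-work-nic \
  --dns-servers 209.244.0.3 209.244.0.4

# Restart the VM so it picks up the new DNS settings
az vm restart --resource-group Test-FW-RG --name Srv-Work
```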
governance Agent Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/machine-configuration/agent-notes.md
The guest configuration agent receives improvements on an ongoing basis. To stay
- Known issues - Bug fixes
-For information on release notes for the connected machine agent, please see [What's new with the connected machine agent](/azure/azure-arc/servers/agent-release-notes).
+For information on release notes for the connected machine agent, please see [What's new with the connected machine agent](../../azure-arc/servers/agent-release-notes.md).
## Release notes
az vm extension set --publisher Microsoft.GuestConfiguration --name Configurati
- [Assign your custom policy definition](../policy/assign-policy-portal.md) using Azure portal. - Learn how to view
- [compliance details for machine configuration](../policy/how-to/determine-non-compliance.md) policy assignments.
+ [compliance details for machine configuration](../policy/how-to/determine-non-compliance.md) policy assignments.
governance Assignment Structure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/concepts/assignment-structure.md
resource properties with different needs for compliance.
You use JavaScript Object Notation (JSON) to create a policy assignment. The policy assignment contains elements for:
-- display name
-- description
-- metadata
-- enforcement mode
-- excluded scopes
-- policy definition
-- non-compliance messages
-- parameters
-- identity
-- resource selectors (preview)
-- overrides (preview)
+- [display name](#display-name-and-description)
+- [description](#display-name-and-description)
+- [metadata](#metadata)
+- [resource selectors (preview)](#resource-selectors-preview)
+- [overrides (preview)](#overrides-preview)
+- [enforcement mode](#enforcement-mode)
+- [excluded scopes](#excluded-scopes)
+- [policy definition](#policy-definition-id)
+- [non-compliance messages](#non-compliance-messages)
+- [parameters](#parameters)
+- [identity](#identity)
For example, the following JSON shows a policy assignment in _DoNotEnforce_ mode with dynamic parameters:
shows our policy assignment with two additional Azure regions added to the **SDP
Resource selectors have the following properties:
- `name`: The name of the resource selector.
-- `selectors`: The factor used to determine which subset of resources applicable to the policy assignment should be evaluated for compliance.
- - `kind`: The property of a `selector` that describes what characteristic will narrow down the set of evaluated resources. Each 'kind' can only be used once in a single resource selector. Allowed values are:
- - `resourceLocation`: This is used to select resources based on their type. Can be used in up to 10 resource selectors. Cannot be used in the same resource selector as `resourceWithoutLocation`.
+
+- `selectors`: (Optional) The property used to determine which subset of resources applicable to the policy assignment should be evaluated for compliance.
+
+ - `kind`: The property of a selector that describes what characteristic will narrow down the set of evaluated resources. Each kind can only be used once in a single resource selector. Allowed values are:
+
+   - `resourceLocation`: This is used to select resources based on their location. Cannot be used in the same resource selector as `resourceWithoutLocation`.
+   - `resourceType`: This is used to select resources based on their type.
+   - `resourceWithoutLocation`: This is used to select resources at the subscription level which do not have a location. Currently only supports `subscriptionLevelResources`. Cannot be used in the same resource selector as `resourceLocation`.
+   - `in`: The list of allowed values for the specified `kind`. Cannot be used with `notIn`. Can contain up to 50 values.
+   - `notIn`: The list of not-allowed values for the specified `kind`. Cannot be used with `in`. Can contain up to 50 values.
-A **resource selector** can contain multiple **selectors**. To be applicable to a resource selector, a resource must meet requirements specified by all its selectors. Further, multiple **resource selectors** can be specified in a single assignment. In-scope resources are evaluated when they satisfy any one of these resource selectors.
+A **resource selector** can contain multiple **selectors**. To be applicable to a resource selector, a resource must meet requirements specified by all its selectors. Further, up to 10 **resource selectors** can be specified in a single assignment. In-scope resources are evaluated when they satisfy any one of these resource selectors.
## Overrides (preview)
Let's take a look at an example. Imagine you have a policy initiative named _Cos
} ```
-Note that one override can be used to replace the effect of many policies by specifying multiple values in the policyDefinitionReferenceId array. A single override can be used for up to 50 policyDefinitionReferenceIds, and a single policy assignment can contain up to 10 overrides, evaluated in the order in which they are specified. Before the assignment is created, the effect chosen in the override is validated against the policy rule and parameter allowed value list, in cases where the effect is [parameterized](definition-structure.md#parameters).
+Overrides have the following properties:
+- `kind`: The property the assignment will override. The supported kind is `policyEffect`.
+
+- `value`: The new value which will override the existing value. The supported values are [effects](effects.md).
+
+- `selectors`: (Optional) The property used to determine what scope of the policy assignment should take on the override.
+
+ - `kind`: The property of a selector that describes what characteristic will narrow down the scope of the override. Allowed value for `kind: policyEffect` is:
+
+ - `policyDefinitionReferenceId`: This specifies which policy definitions within an initiative assignment should take on the effect override.
+
+ - `in`: The list of allowed values for the specified `kind`. Cannot be used with `notIn`. Can contain up to 50 values.
+
+ - `notIn`: The list of not-allowed values for the specified `kind`. Cannot be used with `in`. Can contain up to 50 values.
+
+Note that one override can be used to replace the effect of many policies by specifying multiple values in the policyDefinitionReferenceId array. A single override can be used for up to 50 policyDefinitionReferenceIds, and a single policy assignment can contain up to 10 overrides, evaluated in the order in which they are specified. Before the assignment is created, the effect chosen in the override is validated against the policy rule and parameter allowed value list (in cases where the effect is [parameterized](definition-structure.md#parameters)).
## Enforcement mode
governance Attestation Structure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/concepts/attestation-structure.md
Attestations are used by Azure Policy to set compliance states of resources or scopes targeted by [manual policies](effects.md#manual-preview). They also allow users to provide additional metadata or link to evidence which accompanies the attested compliance state. > [!NOTE]
-> In preview, Attestations are available only through the Azure Resource Manager (ARM) API.
+> In preview, Attestations are available only through the [Azure Resource Manager (ARM) API](/rest/api/policy/attestations).
-Below is an example of creating a new attestation resource which sets the compliance state for resources within a desired resource group:
+## Best practices
+
+Attestations can be used to set the compliance state of an individual resource for a given manual policy. This means that each applicable resource requires one attestation per manual policy assignment. For ease of management, manual policies should be designed to target the scope which defines the boundary of resources whose compliance state needs to be attested.
+
+For example, suppose an organization divides teams by resource group, and each team is required to attest to development of procedures for handling resources within that resource group. In this scenario, the conditions of the policy rule should specify that type equals `Microsoft.Resources/resourceGroups`. This way, one attestation is required for the resource group, rather than for each individual resource within. Similarly, if the organization divides teams by subscriptions, the policy rule should target `Microsoft.Resources/subscriptions`.
+
+Typically, the provided evidence should correspond with relevant scopes of the organizational structure. This pattern prevents the need to duplicate evidence across many attestations. Such duplications would make manual policies difficult to manage, and indicate that the policy definition targets the wrong resource(s).
+
+## Example attestation
+
+Below is an example of creating a new attestation resource which sets the compliance state for a resource group targeted by a manual policy assignment:
```http PUT http://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.PolicyInsights/attestations/{name}?api-version=2019-10-01 ```
-Attestations can be used to set the compliance state of an individual resource or a scope. A resource can have one attestation for an individual manual policy assignment.
## Request body
governance Effects https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/concepts/effects.md
Example: Gatekeeper v2 admission control rule to allow only the specified contai
## Manual (preview)
-The new `manual` (preview) effect enables you to define and track your own custom attestation
-resources. Unlike other Policy definitions that actively scan for evaluation, the Manual effect
-allows for manual changes to the compliance state. To change the compliance for a manual policy,
-you'll need to create an attestation for that compliance state.
+The new `manual` (preview) effect enables you to self-attest the compliance of resources or scopes. Unlike other policy definitions that actively scan for evaluation, the Manual effect allows for manual changes to the compliance state. To change the compliance of a resource or scope targeted by a manual policy, you'll need to create an [attestation](attestation-structure.md). The [best practice](attestation-structure.md#best-practices) is to design manual policies that target the scope which defines the boundary of resources whose compliance needs attesting.
> [!NOTE] > During Public Preview, support for manual policy is available through various Microsoft Defender
governance Exemption Structure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/concepts/exemption-structure.md
see [Understand scope in Azure Policy](./scope.md). Azure Policy exemptions only
You use JavaScript Object Notation (JSON) to create a policy exemption. The policy exemption contains elements for:
-- display name
-- description
-- metadata
-- policy assignment
-- policy definitions within an initiative
-- exemption category
-- expiration
+- [display name](#display-name-and-description)
+- [description](#display-name-and-description)
+- [metadata](#metadata)
+- [policy assignment](#policy-assignment-id)
+- [policy definitions within an initiative](#policy-definition-ids)
+- [exemption category](#exemption-category)
+- [expiration](#expiration)
+- [resource selectors](#resource-selectors-preview)
+- [assignment scope validation](#assignment-scope-validation-preview)
> [!NOTE] > A policy exemption is created as a child object on the resource hierarchy or the individual
two of the policy definitions in the initiative, the `customOrgPolicy` custom po
"allowedLocations" ], "exemptionCategory": "waiver",
- "expiresOn": "2020-12-31T23:59:00.0000000Z"
+ "expiresOn": "2020-12-31T23:59:00.0000000Z",
+ "assignmentScopeValidation": "Default"
} } ```
format `yyyy-MM-ddTHH:mm:ss.fffffffZ`.
> The policy exemption isn't deleted when the `expiresOn` date is reached. The object is preserved
> for record-keeping, but the exemption is no longer honored.
+## Resource selectors (preview)
+
+Exemptions support an optional property `resourceSelectors`. This property works the same way in exemptions as it does in assignments, allowing for gradual rollout or rollback of an _exemption_ to certain subsets of resources in a controlled manner based on resource type, resource location, or whether the resource has a location. More details about how to use resource selectors can be found in the [assignment structure](assignment-structure.md#resource-selectors-preview). Below is an example exemption JSON which leverages resource selectors. In this example, only resources in `westcentralus` will be exempt from the policy assignment:
+
+```json
+{
+ "properties": {
+ "policyAssignmentId": "/subscriptions/{subId}/providers/Microsoft.Authorization/policyAssignments/CostManagement",
+ "policyDefinitionReferenceIds": [
+ "limitSku", "limitType"
+ ],
+ "exemptionCategory": "Waiver",
+ "resourceSelectors": [
+ {
+ "name": "TemporaryMitigation",
+ "selectors": [
+ {
+ "kind": "resourceLocation",
+ "in": [ "westcentralus" ]
+ }
+ ]
+ }
+ ]
+ },
+ "systemData": { ... },
+ "id": "/subscriptions/{subId}/resourceGroups/demoCluster/providers/Microsoft.Authorization/policyExemptions/DemoExpensiveVM",
+ "type": "Microsoft.Authorization/policyExemptions",
+ "name": "DemoExpensiveVM"
+}
+```
+
+Regions can be added or removed from the `resourceLocation` list in the example above. Resource selectors allow for greater flexibility of where and how exemptions can be created and managed.
+
+## Assignment scope validation (preview)
+
+In most scenarios, the exemption scope is validated to ensure it is at or under the policy assignment scope. The optional `assignmentScopeValidation` property can allow an exemption to bypass this validation and be created outside of the assignment scope. This is intended for situations where a subscription needs to be moved from one management group (MG) to another, but the move would be blocked by policy due to properties of resources within the subscription. In this scenario, an exemption could be created for the subscription in its current MG to exempt its resources from a policy assignment on the destination MG. That way, when the subscription is moved into the destination MG, the operation is not blocked because resources are already exempt from the policy assignment in question. The use of this property is illustrated below:
+
+```json
+{
+ "properties": {
+ "policyAssignmentId": "/providers/Microsoft.Management/managementGroups/{mgB}/providers/Microsoft.Authorization/policyAssignments/CostManagement",
+ "policyDefinitionReferenceIds": [
+ "limitSku", "limitType"
+ ],
+ "exemptionCategory": "Waiver",
+    "assignmentScopeValidation": "DoNotValidate"
+ },
+ "systemData": { ... },
+ "id": "/subscriptions/{subIdA}/providers/Microsoft.Authorization/policyExemptions/DemoExpensiveVM",
+ "type": "Microsoft.Authorization/policyExemptions",
+ "name": "DemoExpensiveVM"
+}
+```
+
+Allowed values for `assignmentScopeValidation` are `Default` and `DoNotValidate`. If not specified, the default validation process will occur.
+ ## Required permissions The Azure RBAC permissions needed to manage Policy exemption objects are in the
assignment.
- Learn how to [get compliance data](../how-to/get-compliance-data.md). - Learn how to [remediate non-compliant resources](../how-to/remediate-resources.md). - Review what a management group is with
- [Organize your resources with Azure management groups](../../management-groups/overview.md).
+ [Organize your resources with Azure management groups](../../management-groups/overview.md).
governance Policy Applicability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/concepts/policy-applicability.md
Condition(s) in the `if` block of the policy rule are evaluated for applicabilit
> [!NOTE] > Applicability is different from compliance, and the logic used to determine each is different. If a resource is **applicable** that means it is relevant to the policy. If a resource is **compliant** that means it adheres to the policy. Sometimes only certain conditions from the policy rule impact applicability, while all conditions of the policy rule impact compliance state.
-## Applicability logic for Append/Modify/Audit/Deny/DataPlane effects
+## Applicability logic for Append/Modify/Audit/Deny/RP Mode specific effects
Azure Policy evaluates only `type`, `name`, and `kind` conditions in the policy rule `if` expression and treats other conditions as true (or false when negated). If the final evaluation result is true, the policy is applicable. Otherwise, it's not applicable.
governance Policy For Kubernetes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/concepts/policy-for-kubernetes.md
Azure Policy for Kubernetes supports the following cluster environments:
- [Azure Arc enabled Kubernetes](../../../azure-arc/kubernetes/overview.md) > [!IMPORTANT]
-> The Azure Policy Add-on Helm model and the add-on for AKS Engine have been _deprecated_. Instructions can be found below for [removal of those add-ons](#remove-the-add-on). The Azure Policy Extension for Azure Arc enabled Kubernetes is in _preview_.
+> The Azure Policy Add-on Helm model and the add-on for AKS Engine have been _deprecated_. Instructions can be found below for [removal of those add-ons](#remove-the-add-on).
## Overview
similar to the following output:
"identity": null } ```
-## <a name="install-azure-policy-extension-for-azure-arc-enabled-kubernetes"></a>Install Azure Policy Extension for Azure Arc enabled Kubernetes (preview)
+## <a name="install-azure-policy-extension-for-azure-arc-enabled-kubernetes"></a>Install Azure Policy Extension for Azure Arc enabled Kubernetes
[Azure Policy for Kubernetes](./policy-for-kubernetes.md) makes it possible to manage and report on the compliance state of your Kubernetes clusters from one place.
hdinsight Benefits Of Migrating To Hdinsight 40 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/benefits-of-migrating-to-hdinsight-40.md
Hive metastore operation takes much time and thus slow down Hive compilation. In
## Troubleshooting guide
-[HDInsight 3.6 to 4.0 troubleshooting guide for Hive workloads](/azure/hdinsight/interactive-query/interactive-query-troubleshoot-migrate-36-to-40.md) provides answers to common issues faced when migrating Hive workloads from HDInsight 3.6 to HDInsight 4.0.
+[HDInsight 3.6 to 4.0 troubleshooting guide for Hive workloads](./interactive-query/interactive-query-troubleshoot-migrate-36-to-40.md) provides answers to common issues faced when migrating Hive workloads from HDInsight 3.6 to HDInsight 4.0.
## References
https://hadoop.apache.org/docs/r3.1.1/hadoop-project-dist/hadoop-common/release/
## Further reading
-* [HDInsight 4.0 Announcement](/azure/hdinsight/hdinsight-version-release.md)
-* [HDInsight 4.0 deep dive](https://azure.microsoft.com/blog/deep-dive-into-azure-hdinsight-4-0.md)
+* [HDInsight 4.0 Announcement](./hdinsight-version-release.md)
+* [HDInsight 4.0 deep dive](https://azure.microsoft.com/blog/deep-dive-into-azure-hdinsight-4-0.md)
hdinsight Hdinsight Overview Before You Start https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-overview-before-you-start.md
HDInsight have two options to configure the databases in the clusters.
During cluster creation, the default configuration uses the internal database. Once the cluster is created, you can't change the database type. Hence, it's recommended to create and use the external database. You can create custom databases for Ambari, Hive, and Ranger.
-For more information, see how to [Set up HDInsight clusters with a custom Ambari DB](/azure/hdinsight/hdinsight-custom-ambari-db.md)
+For more information, see how to [Set up HDInsight clusters with a custom Ambari DB](./hdinsight-custom-ambari-db.md)
## Keep your clusters up to date
As part of the best practices, we recommend you keep your clusters updated on re
HDInsight releases happen every 30 to 60 days. It's always good to move to the latest release as early as possible. The recommended maximum duration for cluster upgrades is less than six months.
-For more information, see how to [Migrate HDInsight cluster to a newer version](/azure/hdinsight/hdinsight-upgrade-cluster.md)
+For more information, see how to [Migrate HDInsight cluster to a newer version](./hdinsight-upgrade-cluster.md)
## Next steps * [Create Apache Hadoop cluster in HDInsight](./hadoop/apache-hadoop-linux-create-cluster-get-started-portal.md) * [Create Apache Spark cluster - Portal](./spark/apache-spark-jupyter-spark-sql-use-portal.md)
-* [Enterprise security in Azure HDInsight](./domain-joined/hdinsight-security-overview.md)
+* [Enterprise security in Azure HDInsight](./domain-joined/hdinsight-security-overview.md)
hdinsight Hdinsight Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-release-notes.md
This new feature allows you to add more disks in cluster, which will be used as
> The added disks are only configured for node manager local directories. >
-For more information, [see here](/azure/hdinsight/hdinsight-hadoop-provision-linux-clusters#configuration--pricing)
+For more information, [see here](./hdinsight-hadoop-provision-linux-clusters.md#configuration--pricing)
**2. Selective logging analysis**
Selective logging analysis is now available on all regions for public preview. Y
1. Selective Logging uses script action to disable/enable tables and their log types. Since it doesn't open any new ports or change any existing security setting hence, there are no security changes. 1. Script Action runs in parallel on all specified nodes and changes the configuration files for disabling/enabling tables and their log types.
-For more information, [see here](/azure/hdinsight/selective-logging-analysis)
+For more information, [see here](./selective-logging-analysis.md)
![Icon_showing_bug_fixes](media/hdinsight-release-notes/icon-for-bugfix.png)
https://hdiconfigactions.blob.core.windows.net/log-analytics-patch/OMSUPGRADE14.
### Known issues
-HDInsight is compatible with Apache HIVE 3.1.2. Due to a bug in this release, the Hive version is shown as 3.1.0 in hive interfaces. However, there's no impact on the functionality.
+HDInsight is compatible with Apache HIVE 3.1.2. Due to a bug in this release, the Hive version is shown as 3.1.0 in hive interfaces. However, there's no impact on the functionality.
hdinsight Manage Clusters Runbooks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/manage-clusters-runbooks.md
If you don't have an Azure subscription, create a [free account](https://azure
## Prerequisites
-* An existing [Azure Automation account](/azure/automation/quickstarts/create-azure-automation-account-portal).
+* An existing [Azure Automation account](../automation/quickstarts/create-azure-automation-account-portal.md).
* An existing [Azure Storage account](../storage/common/storage-account-create.md), which will be used as cluster storage. ## Install HDInsight modules
When no longer needed, delete the Azure Automation Account that was created to a
## Next steps > [!div class="nextstepaction"]
-> [Manage Apache Hadoop clusters in HDInsight by using Azure PowerShell](hdinsight-administer-use-powershell.md)
+> [Manage Apache Hadoop clusters in HDInsight by using Azure PowerShell](hdinsight-administer-use-powershell.md)
healthcare-apis Deploy Iot Connector In Azure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/deploy-iot-connector-in-azure.md
For more information about the Quickstart template and the Deploy to Azure butto
Azure provides Azure PowerShell and Azure CLI to speed up your configurations when used in enterprise environments. Deploying MedTech service with Azure PowerShell or Azure CLI can be useful for adding automation so that you can scale your deployment for a large number of developers. This method is more detailed but provides extra speed and efficiency because it allows you to automate your deployment.
-For more information about Using an ARM template with Azure PowerShell and Azure CLI, see [Using Azure PowerShell and Azure CLI to deploy the MedTech service using Azure Resource Manager templates](/deploy-08-new-ps-cli.md).
+For more information about using an ARM template with Azure PowerShell and Azure CLI, see [Using Azure PowerShell and Azure CLI to deploy the MedTech service using Azure Resource Manager templates](/azure/healthcare-apis/iot/deploy-08-new-ps-cli).
## Manual deployment The manual deployment method uses Azure portal to implement each deployment task individually. There are no shortcuts. Because you will be able to see all the details of how to complete the sequence of each task, this procedure can be beneficial if you need to customize or troubleshoot your deployment process. This is the most complex method, but it provides valuable technical information and developmental options that will enable you to fine-tune your deployment very precisely.
-For more information about manual deployment with portal, see [Overview of how to manually deploy the MedTech service using the Azure portal](/deploy-03-new-manual.md).
+For more information about manual deployment with portal, see [Overview of how to manually deploy the MedTech service using the Azure portal](/azure/healthcare-apis/iot/deploy-03-new-manual).
## Deployment architecture overview
For information about granting access to the FHIR service, see [Granting access
In this article, you learned about the different types of deployment for MedTech service. To learn more about MedTech service, see >[!div class="nextstepaction"]
->[What is MedTech service?](/iot-connector-overview.md).
+>[What is MedTech service?](/rest/api/healthcareapis/iot-connectors).
FHIR&#174; is a registered trademark of Health Level Seven International, registered in the U.S. Trademark Office and is used with their permission.
healthcare-apis Device Data Through Iot Hub https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/device-data-through-iot-hub.md
Use your device (real or simulated) to send the sample heart rate message shown
This message will get routed to MedTech service, where the message will be transformed into a FHIR Observation resource and stored into FHIR service. > [!IMPORTANT]
-> To avoid device spoofing in device-to-cloud messages, Azure IoT Hub enriches all messages with additional properties. To learn more about these properties, see [Anti-spoofing properties](/azure/iot-hub/iot-hub-devguide-messages-construct#anti-spoofing-properties).
+> To avoid device spoofing in device-to-cloud messages, Azure IoT Hub enriches all messages with additional properties. To learn more about these properties, see [Anti-spoofing properties](../../iot-hub/iot-hub-devguide-messages-construct.md#anti-spoofing-properties).
> > To learn about IoT Hub device message enrichment and IotJsonPathContentTemplate mappings usage with the MedTech service device mapping, see [How to use IotJsonPathContentTemplate mappings](how-to-use-iot-jsonpath-content-mappings.md).
To learn about the different stages of data flow within MedTech service, see
>[!div class="nextstepaction"] >[MedTech service data flow](iot-data-flow.md)
-FHIR&#174; is a registered trademark of Health Level Seven International, registered in the U.S. Trademark Office and is used with their permission.
+FHIR&#174; is a registered trademark of Health Level Seven International, registered in the U.S. Trademark Office and is used with their permission.
healthcare-apis How To Display Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/how-to-display-metrics.md
Metric category|Metric name|Metric description|
> [!TIP] >
- > To learn more about advanced metrics display and sharing options, see [Getting started with Azure Metrics Explorer](/azure/azure-monitor/essentials/metrics-getting-started)
+ > To learn more about advanced metrics display and sharing options, see [Getting started with Azure Metrics Explorer](../../azure-monitor/essentials/metrics-getting-started.md)
> [!IMPORTANT] >
To learn about the MedTech service frequently asked questions (FAQs), see
> [!div class="nextstepaction"] > [Frequently asked questions about the MedTech service](iot-connector-faqs.md)
-(FHIR&#174;) is a registered trademark of Health Level Seven International, registered in the U.S. Trademark Office and is used with their permission.
+(FHIR&#174;) is a registered trademark of Health Level Seven International, registered in the U.S. Trademark Office and is used with their permission.
healthcare-apis How To Use Iot Jsonpath Content Mappings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/how-to-use-iot-jsonpath-content-mappings.md
With each of these examples, you're provided with:
* An example of what the MedTech service device message will look like after normalization. > [!IMPORTANT]
-> To avoid device spoofing in device-to-cloud messages, Azure IoT Hub enriches all messages with additional properties. To learn more about these properties, see [Anti-spoofing properties](/azure/iot-hub/iot-hub-devguide-messages-construct#anti-spoofing-properties).
+> To avoid device spoofing in device-to-cloud messages, Azure IoT Hub enriches all messages with additional properties. To learn more about these properties, see [Anti-spoofing properties](../../iot-hub/iot-hub-devguide-messages-construct.md#anti-spoofing-properties).
> [!TIP] > [Visual Studio Code with the Azure IoT Hub extension](https://marketplace.visualstudio.com/items?itemName=vsciot-vscode.azure-iot-tools) is a recommended method for sending IoT device messages to your IoT Hub for testing and troubleshooting.
In this article, you learned how to use IotJsonPathContentTemplate mappings with
>[!div class="nextstepaction"] >[How to use the FHIR destination mapping](how-to-use-fhir-mappings.md)
-FHIR&#174; is a registered trademark of Health Level Seven International, registered in the U.S. Trademark Office and is used with their permission.
+FHIR&#174; is a registered trademark of Health Level Seven International, registered in the U.S. Trademark Office and is used with their permission.
healthcare-apis Iot Metrics Diagnostics Export https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/iot-metrics-diagnostics-export.md
# How to configure diagnostic settings for exporting the MedTech service metrics
-In this article, you'll learn how to configure diagnostic settings for the MedTech service to export metrics to different destinations (for example: to [Azure storage](/azure/storage/) or an [Azure event hub](/azure/event-hubs/)) for audit, analysis, or backup.
+In this article, you'll learn how to configure diagnostic settings for the MedTech service to export metrics to different destinations (for example: to [Azure storage](../../storage/index.yml) or an [Azure event hub](../../event-hubs/index.yml)) for audit, analysis, or backup.
## Create a diagnostic setting for the MedTech service 1. To enable metrics export for your MedTech service, select **MedTech service** in your workspace under **Services**.
In this article, you'll learn how to configure diagnostic settings for the MedTe
To view the frequently asked questions (FAQs) about the MedTech service, see >[!div class="nextstepaction"]
->[MedTech service FAQs](iot-connector-faqs.md)
+>[MedTech service FAQs](iot-connector-faqs.md)
industrial-iot Overview What Is Industrial Iot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/industrial-iot/overview-what-is-industrial-iot.md
Azure IIoT solutions are built from specific components:
The [Azure IoT Hub](https://azure.microsoft.com/services/iot-hub/) acts as a central message hub for secure, bi-directional communications between any IoT application and the devices it manages. It's an open and flexible cloud platform as a service (PaaS) that supports open-source SDKs and multiple protocols.
-Gathering your industrial and business data onto an IoT Hub lets you store your data securely, perform business and efficiency analyses on it, and generate reports from it. You can process your combined data with Microsoft Azure services and tools, for example [Azure Stream Analytics](/azure/stream-analytics), or visualize in your Business Intelligence platform of choice such as [Power BI](https://powerbi.microsoft.com).
+Gathering your industrial and business data onto an IoT Hub lets you store your data securely, perform business and efficiency analyses on it, and generate reports from it. You can process your combined data with Microsoft Azure services and tools, for example [Azure Stream Analytics](../stream-analytics/index.yml), or visualize in your Business Intelligence platform of choice such as [Power BI](https://powerbi.microsoft.com).
### IoT Edge devices
You can read more about the OPC Publisher or get started with deploying the IIoT
> [!div class="nextstepaction"] > [Deploy the Industrial IoT Platform](tutorial-deploy-industrial-iot-platform.md)
->
+>
iot-develop Iot Device Selection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-develop/iot-device-selection.md
Please submit an issue!
Other helpful resources include: -- [Overview of Azure IoT device types](/azure/iot-develop/concepts-iot-device-types)-- [Overview of Azure IoT Device SDKs](/azure/iot-develop/about-iot-sdks)-- [Quickstart: Send telemetry from an IoT Plug and Play device to Azure IoT Hub](/azure/iot-develop/quickstart-send-telemetry-iot-hub?pivots=programming-language-ansi-c)-- [AzureRTOS ThreadX Documentation](/azure/rtos/threadx/)
+- [Overview of Azure IoT device types](./concepts-iot-device-types.md)
+- [Overview of Azure IoT Device SDKs](./about-iot-sdks.md)
+- [Quickstart: Send telemetry from an IoT Plug and Play device to Azure IoT Hub](./quickstart-send-telemetry-iot-hub.md?pivots=programming-language-ansi-c)
+- [AzureRTOS ThreadX Documentation](/azure/rtos/threadx/)
iot-edge Configure Connect Verify Gpu https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/configure-connect-verify-gpu.md
This tutorial shows you how to build a GPU-enabled virtual machine (VM). From th
We'll use the Azure portal, the Azure Cloud Shell, and your VM's command line to: * Build a GPU-capable VM
-* Install the [NVIDIA driver extension](/azure/virtual-machines/extensions/hpccompute-gpu-linux) on the VM
+* Install the [NVIDIA driver extension](../virtual-machines/extensions/hpccompute-gpu-linux.md) on the VM
* Configure a module on an IoT Edge device to allocate work to a GPU ## Prerequisites
We'll use the Azure portal, the Azure Cloud Shell, and your VM's command line to
* Azure IoT Edge device
- If you don't already have an IoT Edge device and need to quickly create one, run the following command. Use the [Azure Cloud Shell](/azure/cloud-shell/overview) located in the Azure portal. Create a new device name for `<DEVICE-NAME>` and replace the IoT `<IOT-HUB-NAME>` with your own.
+ If you don't already have an IoT Edge device and need to quickly create one, run the following command. Use the [Azure Cloud Shell](../cloud-shell/overview.md) located in the Azure portal. Create a new device name for `<DEVICE-NAME>` and replace the IoT `<IOT-HUB-NAME>` with your own.
```azurecli az iot hub device-identity create --device-id <YOUR-DEVICE-NAME> --edge-enabled --hub-name <YOUR-IOT-HUB-NAME>
We'll use the Azure portal, the Azure Cloud Shell, and your VM's command line to
## Create a GPU-optimized virtual machine
-To create a GPU-optimized virtual machine (VM), choosing the right size is important. Not all VM sizes will accommodate GPU processing. In addition, there are different VM sizes for different workloads. For more information, see [GPU optimized virtual machine sizes](/azure/virtual-machines/sizes-gpu) or try the [Virtual machines selector](https://azure.microsoft.com/pricing/vm-selector/).
+To create a GPU-optimized virtual machine (VM), choosing the right size is important. Not all VM sizes will accommodate GPU processing. In addition, there are different VM sizes for different workloads. For more information, see [GPU optimized virtual machine sizes](../virtual-machines/sizes-gpu.md) or try the [Virtual machines selector](https://azure.microsoft.com/pricing/vm-selector/).
-Let's create an IoT Edge VM with the [Azure Resource Manager (ARM)](/azure/azure-resource-manager/management/overview) template in GitHub, then configure it to be GPU-optimized.
+Let's create an IoT Edge VM with the [Azure Resource Manager (ARM)](../azure-resource-manager/management/overview.md) template in GitHub, then configure it to be GPU-optimized.
1. Go to the IoT Edge VM deployment template in GitHub: [Azure/iotedge-vm-deploy](https://github.com/Azure/iotedge-vm-deploy/tree/1.4).
Let's create an IoT Edge VM with the [Azure Resource Manager (ARM)](/azure/azure
> > Check which GPU VMs are supported in each region: [Products available by region](https://azure.microsoft.com/global-infrastructure/services/?regions=us-central,us-east,us-east-2,us-north-central,us-south-central,us-west-central,us-west,us-west-2,us-west-3&products=virtual-machines). >
- > To check [which region your Azure subscription allows](/azure/azure-resource-manager/troubleshooting/error-sku-not-available?tabs=azure-cli#solution), try this Azure command from the Azure portal. The `N` in `Standard_N` means it's a GPU-enabled VM.
+ > To check [which region your Azure subscription allows](../azure-resource-manager/troubleshooting/error-sku-not-available.md?tabs=azure-cli#solution), try this Azure command from the Azure portal. The `N` in `Standard_N` means it's a GPU-enabled VM.
> ```azurecli > az vm list-skus --location <YOUR-REGION> --size Standard_N --all --output table > ```
Let's create an IoT Edge VM with the [Azure Resource Manager (ARM)](/azure/azure
1. Select the **Review + create** button at the bottom, then the **Create** button. Deployment can take up one minute to complete. ## Install the NVIDIA extension
-Now that we have a GPU-optimized VM, let's install the [NVIDIA extension](/azure/virtual-machines/extensions/hpccompute-gpu-linux) on the VM using the Azure portal.
+Now that we have a GPU-optimized VM, let's install the [NVIDIA extension](../virtual-machines/extensions/hpccompute-gpu-linux.md) on the VM using the Azure portal.
1. Open your VM in the Azure portal and select **Extensions + applications** from the left menu.
Now that we have a GPU-optimized VM, let's install the [NVIDIA extension](/azure
:::image type="content" source="media/configure-connect-verify-gpu/nvidia-driver-installed.png" alt-text="Screenshot of the NVIDIA driver table."::: > [!NOTE]
-> The NVIDIA extension is a simplified way to install the NVIDIA drivers, but you may need more customization. For more information about custom installations on N-series VMs, see [Install NVIDIA GPU drivers on N-series VMs running Linux](/azure/virtual-machines/linux/n-series-driver-setup).
+> The NVIDIA extension is a simplified way to install the NVIDIA drivers, but you may need more customization. For more information about custom installations on N-series VMs, see [Install NVIDIA GPU drivers on N-series VMs running Linux](../virtual-machines/linux/n-series-driver-setup.md).
## Enable a module with GPU acceleration
az group list
## Next steps
-This article helped you set up your virtual machine and IoT Edge device to be GPU-accelerated. To run an application with a similar setup, try the learning path for [NVIDIA DeepStream development with Microsoft Azure](/training/paths/nvidia-deepstream-development-with-microsoft-azure/?WT.mc_id=iot-47680-cxa). The Learn tutorial shows you how to develop optimized Intelligent Video Applications that can consume multiple video, image, and audio sources.
+This article helped you set up your virtual machine and IoT Edge device to be GPU-accelerated. To run an application with a similar setup, try the learning path for [NVIDIA DeepStream development with Microsoft Azure](/training/paths/nvidia-deepstream-development-with-microsoft-azure/?WT.mc_id=iot-47680-cxa). The Learn tutorial shows you how to develop optimized Intelligent Video Applications that can consume multiple video, image, and audio sources.
iot-edge How To Provision Devices At Scale Linux Symmetric https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/how-to-provision-devices-at-scale-linux-symmetric.md
Have the following information ready:
1. Update the values of `id_scope`, `registration_id`, and `symmetric_key` with your DPS and device information.
- The symmetric key parameter can accept a value of an inline key, a file URI, or a PKCS#11 URI. Uncomment just one symmetric key line, based on which format you're using.
+ The symmetric key parameter can accept an inline key, a file URI, or a PKCS#11 URI. Uncomment just one symmetric key line, based on the format you're using. If you use an inline key, provide a base64-encoded key as shown in the example. If you use a file URI, the file should contain the raw bytes of the key.
If you use any PKCS#11 URIs, find the **PKCS#11** section in the config file and provide information about your PKCS#11 configuration.
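If you still need the `id_scope`, `registration_id`, and `symmetric_key` values referenced above, one way to generate them is an individual DPS enrollment with symmetric key attestation. The following Azure CLI sketch is illustrative only and assumes the `azure-iot` CLI extension is installed; all names are placeholders, not values from the article.

```azurecli
# Create an individual enrollment that uses symmetric key attestation (DPS generates the keys)
az iot dps enrollment create \
    --dps-name <YOUR-DPS-NAME> \
    --resource-group <YOUR-RESOURCE-GROUP> \
    --enrollment-id <YOUR-REGISTRATION-ID> \
    --attestation-type symmetrickey \
    --edge-enabled true

# Retrieve the generated primary key to use as the inline symmetric_key value
az iot dps enrollment show \
    --dps-name <YOUR-DPS-NAME> \
    --resource-group <YOUR-RESOURCE-GROUP> \
    --enrollment-id <YOUR-REGISTRATION-ID> \
    --show-keys true

# Look up the ID scope for the id_scope value
az iot dps show --name <YOUR-DPS-NAME> --resource-group <YOUR-RESOURCE-GROUP> --query properties.idScope
```

The enrollment ID you choose doubles as the `registration_id` in the device configuration.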
iot-edge How To Vs Code Develop Module https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/how-to-vs-code-develop-module.md
Previously updated : 08/30/2022 Last updated : 10/18/2022
To build and deploy your module image, you need Docker to build the module image
> You can use a local Docker registry for prototype and testing purposes instead of a cloud registry. - Install the [Azure CLI](/cli/azure/install-azure-cli)++ - Install the Python-based [Azure IoT Edge Dev Tool](https://pypi.org/project/iotedgedev/) in order to set up your local development environment to debug, run, and test your IoT Edge solution. If you haven't already done so, install [Python (3.6/3.7/3.8) and Pip3](https://www.python.org/) and then install the IoT Edge Dev Tool (iotedgedev) by running this command in your terminal. ```cmd
To build and deploy your module image, you need Docker to build the module image
> > For more information setting up your development machine, see [iotedgedev development setup](https://github.com/Azure/iotedgedev/blob/main/docs/environment-setup/manual-dev-machine-setup.md). + Install prerequisites specific to the language you're developing in: # [C](#tab/c)
iot-edge Troubleshoot Iot Edge For Linux On Windows Common Errors https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/troubleshoot-iot-edge-for-linux-on-windows-common-errors.md
The following section addresses the common errors when installing the EFLOW MSI
## Provisioning and IoT Edge runtime The following section addresses the common errors when provisioning the EFLOW virtual machine and interact with the IoT Edge runtime. Ensure you have an understanding of the following EFLOW concepts:-- [What is Azure IoT Hub Device Provisioning Service?](/azure/iot-dps/about-iot-dps)
+- [What is Azure IoT Hub Device Provisioning Service?](../iot-dps/about-iot-dps.md)
- [Understand the Azure IoT Edge runtime and its architecture](./iot-edge-runtime.md) - [Troubleshoot your IoT Edge device](./troubleshoot.md)
The following section addresses the common errors related to EFLOW networking an
Do you think that you found a bug in the IoT Edge for Linux on Windows? [Submit an issue](https://github.com/Azure/iotedge-eflow/issues) so that we can continue to improve.
-If you have more questions, create a [Support request](https://portal.azure.com/#create/Microsoft.Support) for help.
+If you have more questions, create a [Support request](https://portal.azure.com/#create/Microsoft.Support) for help.
iot-hub-device-update Device Update Configure Repo https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub-device-update/device-update-configure-repo.md
Following this document, learn how to configure a package repository using [OSCo
You need an Azure account with an [IoT Hub](../iot-hub/iot-concepts-and-iot-hub.md) and Microsoft Azure Portal or Azure CLI to interact with devices via your IoT Hub. Follow the next steps to get started: - Create a Device Update account and instance in your IoT Hub. See [how to create it](create-device-update-account.md).-- Install the [IoT Hub Identity Service](https://azure.github.io/iot-identity-service/installation.html) (or skip if [IoT Edge 1.2](/azure/iot-edge/how-to-provision-single-device-linux-symmetric?view=iotedge-2020-11&preserve-view=true&tabs=azure-portal%2Cubuntu#install-iot-edge) or higher is already installed on the device).
+- Install the [IoT Hub Identity Service](https://azure.github.io/iot-identity-service/installation.html) (or skip if [IoT Edge 1.2](../iot-edge/how-to-provision-single-device-linux-symmetric.md?preserve-view=true&tabs=azure-portal%2cubuntu&view=iotedge-2020-11#install-iot-edge) or higher is already installed on the device).
- Install the Device Update agent on the device. See [how to](device-update-ubuntu-agent.md#manually-prepare-a-device). - Install the OSConfig agent on the device. See [how to](/azure/osconfig/howto-install?tabs=package#step-11-connect-a-device-to-packagesmicrosoftcom). - Now that both the agent and IoT Hub Identity Service are present on the device, the next step is to configure the device with an identity so it can connect to Azure. See example [here](/azure/osconfig/howto-install?tabs=package#job-2--connect-to-azure)
Follow the below steps to update Azure IoT Edge on Ubuntu Server 18.04 x64 by co
2. Upload your packages to the above configured repository. 3. Create an [APT manifest](device-update-apt-manifest.md) to provide the Device Update agent with the information it needs to download and install the packages (and their dependencies) from the repository. 4. Follow steps from [here](device-update-ubuntu-agent.md#prerequisites) to do a package update with Device Update. Device Update is used to deploy package updates to a large number of devices and at scale.
-5. Monitor results of the package update by following these [steps](device-update-ubuntu-agent.md#monitor-the-update-deployment).
+5. Monitor results of the package update by following these [steps](device-update-ubuntu-agent.md#monitor-the-update-deployment).
iot-hub Iot Hub Bulk Identity Mgmt https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-bulk-identity-mgmt.md
The **ImportDevicesAsync** method takes two parameters:
SharedAccessBlobPermissions.Read ```
-* A *string* that contains a URI of an [Azure Storage](/azure/storage/) blob container to use as *output* from the job. The job creates a block blob in this container to store any error information from the completed import **Job**. The SAS token must include these permissions:
+* A *string* that contains a URI of an [Azure Storage](../storage/index.yml) blob container to use as *output* from the job. The job creates a block blob in this container to store any error information from the completed import **Job**. The SAS token must include these permissions:
```csharp
SharedAccessBlobPermissions.Write | SharedAccessBlobPermissions.Read
```
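If you prefer to generate the container SAS outside of your application code, the Azure CLI can produce one. This is a sketch under the assumption that you're signed in with access to the storage account key; the account, container, and expiry values are placeholders.

```azurecli
# Generate a SAS token with read and write permissions for the output container
az storage container generate-sas \
    --account-name <STORAGE-ACCOUNT-NAME> \
    --name <OUTPUT-CONTAINER-NAME> \
    --permissions rw \
    --expiry 2023-01-01T00:00Z \
    --auth-mode key \
    --output tsv
```

Append the returned token to the container URI (`https://<STORAGE-ACCOUNT-NAME>.blob.core.windows.net/<OUTPUT-CONTAINER-NAME>?<SAS-TOKEN>`) and pass that string to **ImportDevicesAsync**.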
To further explore the capabilities of IoT Hub, see:
To explore using the IoT Hub Device Provisioning Service to enable zero-touch, just-in-time provisioning, see:
-* [Azure IoT Hub Device Provisioning Service](../iot-dps/index.yml)
+* [Azure IoT Hub Device Provisioning Service](../iot-dps/index.yml)
iot-hub Iot Hub Create Use Iot Toolkit https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-create-use-iot-toolkit.md
This article shows you how to use the [Azure IoT Tools for Visual Studio Code](h
- [Azure IoT Tools](https://marketplace.visualstudio.com/items?itemName=vsciot-vscode.azure-iot-tools) installed for Visual Studio Code -- An Azure resource group: [create a resource group](/azure/azure-resource-manager/management/manage-resource-groups-portal#create-resource-groups) in the Azure portal
+- An Azure resource group: [create a resource group](../azure-resource-manager/management/manage-resource-groups-portal.md#create-resource-groups) in the Azure portal
## Create an IoT hub without an IoT Project
Now that you've deployed an IoT hub using the Azure IoT Tools for Visual Studio
* [Use the Azure IoT Tools for Visual Studio Code for Azure IoT Hub device management](iot-hub-device-management-iot-toolkit.md)
-* [See the Azure IoT Hub for VS Code wiki page](https://github.com/microsoft/vscode-azure-iot-toolkit/wiki).
+* [See the Azure IoT Hub for VS Code wiki page](https://github.com/microsoft/vscode-azure-iot-toolkit/wiki).
iot-hub Iot Hub Create Using Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-create-using-cli.md
For a complete list of options to update an IoT hub, see the [**az iot hub updat
## Register a new device in the IoT hub
-In this section, you create a device identity in the identity registry in your IoT hub. A device can't connect to a hub unless it has an entry in the identity registry. For more information, see [Understand the identity registry in your IoT hub](iot-hub-devguide-identity-registry.md). This device identity is [IoT Edge](/azure/iot-edge) enabled.
+In this section, you create a device identity in the identity registry in your IoT hub. A device can't connect to a hub unless it has an entry in the identity registry. For more information, see [Understand the identity registry in your IoT hub](iot-hub-devguide-identity-registry.md). This device identity is [IoT Edge](../iot-edge/index.yml) enabled.
Run the following command to create a device identity. Use your IoT hub name and create a new device ID name in place of `{iothub_name}` and `{device_id}`. This command creates a device identity with default authorization (shared private key).
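The command itself isn't shown in this change summary; an equivalent call looks similar to the following sketch. The parameter names come from the `az iot` extension, and the placeholders mirror the ones in the paragraph above.

```azurecli
# Creates an IoT Edge-enabled device identity that uses default (shared access key) authentication
az iot hub device-identity create \
    --hub-name {iothub_name} \
    --device-id {device_id} \
    --edge-enabled
```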
iot-hub Iot Hub Devguide Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-devguide-endpoints.md
The following list describes the endpoints:
* **Service endpoints**. Each IoT hub exposes a set of endpoints for your solution back end to communicate with your devices. With one exception, these endpoints are only exposed using the [AMQP](https://www.amqp.org/) and AMQP over WebSockets protocols. The direct method invocation endpoint is exposed over the HTTPS protocol.
- * *Receive device-to-cloud messages*. This endpoint is compatible with [Azure Event Hubs](/azure/event-hubs/). A back-end service can use it to read the [device-to-cloud messages](iot-hub-devguide-messages-d2c.md) sent by your devices. You can create custom endpoints on your IoT hub in addition to this built-in endpoint.
+ * *Receive device-to-cloud messages*. This endpoint is compatible with [Azure Event Hubs](../event-hubs/index.yml). A back-end service can use it to read the [device-to-cloud messages](iot-hub-devguide-messages-d2c.md) sent by your devices. You can create custom endpoints on your IoT hub in addition to this built-in endpoint.
* *Send cloud-to-device messages and receive delivery acknowledgments*. These endpoints enable your solution back end to send reliable [cloud-to-device messages](iot-hub-devguide-messages-c2d.md), and to receive the corresponding delivery or expiration acknowledgments.
Other reference topics in this IoT Hub developer guide include:
* [IoT Hub query language for device twins, jobs, and message routing](iot-hub-devguide-query-language.md) * [Quotas and throttling](iot-hub-devguide-quotas-throttling.md) * [IoT Hub MQTT support](iot-hub-mqtt-support.md)
-* [Understand your IoT hub IP address](iot-hub-understand-ip-address.md)
+* [Understand your IoT hub IP address](iot-hub-understand-ip-address.md)
iot-hub Iot Hub Devguide Identity Registry https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-devguide-identity-registry.md
Device identities can also be exported and imported from an IoT Hub via the Serv
The device data that a given IoT solution stores depends on the specific requirements of that solution. But, as a minimum, a solution must store device identities and authentication keys. Azure IoT Hub includes an identity registry that can store values for each device such as IDs, authentication keys, and status codes. A solution can use other Azure services such as Table storage, Blob storage, or Azure Cosmos DB to store any additional device data.
-*Device provisioning* is the process of adding the initial device data to the stores in your solution. To enable a new device to connect to your hub, you must add a device ID and keys to the IoT Hub identity registry. As part of the provisioning process, you might need to initialize device-specific data in other solution stores. You can also use the Azure IoT Hub Device Provisioning Service to enable zero-touch, just-in-time provisioning to one or more IoT hubs without requiring human intervention. To learn more, see the [provisioning service documentation](/azure/iot-dps).
+*Device provisioning* is the process of adding the initial device data to the stores in your solution. To enable a new device to connect to your hub, you must add a device ID and keys to the IoT Hub identity registry. As part of the provisioning process, you might need to initialize device-specific data in other solution stores. You can also use the Azure IoT Hub Device Provisioning Service to enable zero-touch, just-in-time provisioning to one or more IoT hubs without requiring human intervention. To learn more, see the [provisioning service documentation](../iot-dps/index.yml).
## Device heartbeat
To try out some of the concepts described in this article, see the following IoT
To explore using the IoT Hub Device Provisioning Service to enable zero-touch, just-in-time provisioning, see:
-* [Azure IoT Hub Device Provisioning Service](/azure/iot-dps)
+* [Azure IoT Hub Device Provisioning Service](../iot-dps/index.yml)
iot-hub Iot Hub Devguide Messages Read Builtin https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-devguide-messages-read-builtin.md
# Read device-to-cloud messages from the built-in endpoint
-By default, messages are routed to the built-in service-facing endpoint (**messages/events**) that is compatible with [Event Hubs](/azure/event-hubs/). This endpoint is currently only exposed using the [AMQP](https://www.amqp.org/) protocol on port 5671 and [AMQP over WebSockets](http://docs.oasis-open.org/amqp-bindmap/amqp-wsb/v1.0/cs01/amqp-wsb-v1.0-cs01.html) on port 443. An IoT hub exposes the following properties to enable you to control the built-in Event Hub-compatible messaging endpoint **messages/events**.
+By default, messages are routed to the built-in service-facing endpoint (**messages/events**) that is compatible with [Event Hubs](../event-hubs/index.yml). This endpoint is currently only exposed using the [AMQP](https://www.amqp.org/) protocol on port 5671 and [AMQP over WebSockets](http://docs.oasis-open.org/amqp-bindmap/amqp-wsb/v1.0/cs01/amqp-wsb-v1.0-cs01.html) on port 443. An IoT hub exposes the following properties to enable you to control the built-in Event Hub-compatible messaging endpoint **messages/events**.
| Property | Description | | - | -- |
You can use the Event Hubs SDKs to read from the built-in endpoint in environmen
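For a quick look at the messages flowing through the built-in endpoint without writing any SDK code, the Azure CLI can read it directly. This sketch assumes the `azure-iot` CLI extension is installed; the hub name is a placeholder.

```azurecli
# Read device-to-cloud messages from the built-in Event Hub-compatible endpoint
az iot hub monitor-events \
    --hub-name <YOUR-IOT-HUB-NAME> \
    --consumer-group '$Default'
```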
For more detail, see the [Process IoT Hub device-to-cloud messages using routes](tutorial-routing.md) tutorial.
-* If you want to route your device-to-cloud messages to custom endpoints, see [Use message routes and custom endpoints for device-to-cloud messages](iot-hub-devguide-messages-read-custom.md).
+* If you want to route your device-to-cloud messages to custom endpoints, see [Use message routes and custom endpoints for device-to-cloud messages](iot-hub-devguide-messages-read-custom.md).
iot-hub Iot Hub How To Order Connection State Events https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-how-to-order-connection-state-events.md
# Order device connection events from Azure IoT Hub using Azure Cosmos DB
-[Azure Event Grid](/azure/event-grid/overview) helps you build event-based applications and easily integrates IoT events in your business solutions. This article walks you through a setup using Cosmos DB, Logic App, IoT Hub Events, and a simulated Raspberry Pi to collect and store connection and disconnection events of a device.
+[Azure Event Grid](../event-grid/overview.md) helps you build event-based applications and easily integrates IoT events in your business solutions. This article walks you through a setup using Cosmos DB, Logic App, IoT Hub Events, and a simulated Raspberry Pi to collect and store connection and disconnection events of a device.
From the moment your device runs, an order of operations activates:
This sample app will trigger a device connected event.
:::image type="content" source="media/iot-hub-how-to-order-connection-state-events/raspmsg.png" alt-text="Screenshot of what to expect in your output console when you run the Raspberry Pi." lightbox="media/iot-hub-how-to-order-connection-state-events/raspmsg.png":::
-1. You can check your Logic App **Overview** page to check if your logic is being triggered. It'll say **Succeeded** or **Failed**. Checking here let's you know your logic app state if troubleshooting is needed. Expect a 15-30 second delay from when your trigger runs. If you need to troubleshoot your logic app, view this [Troubleshoot errors](/azure/logic-apps/logic-apps-diagnosing-failures?tabs=consumption) article.
+1. You can check your logic app's **Overview** page to see whether your logic is being triggered. It'll say **Succeeded** or **Failed**. Checking here lets you know your logic app's state if troubleshooting is needed. Expect a 15-30 second delay from when your trigger runs. If you need to troubleshoot your logic app, see the [Troubleshoot errors](../logic-apps/logic-apps-diagnosing-failures.md?tabs=consumption) article.
:::image type="content" source="media/iot-hub-how-to-order-connection-state-events/logic-app-log.jpg" alt-text="Screenshot of the status updates on your logic app Overview page." lightbox="media/iot-hub-how-to-order-connection-state-events/logic-app-log.jpg":::
To remove an Azure Cosmos DB account from the Azure portal, go to your resource
* Learn about what else you can do with [Event Grid](../event-grid/overview.md)
-* Learn how to use Event Grid and Azure Monitor to [Monitor, diagnose, and troubleshoot device connectivity to IoT Hub](iot-hub-troubleshoot-connectivity.md)
+* Learn how to use Event Grid and Azure Monitor to [Monitor, diagnose, and troubleshoot device connectivity to IoT Hub](iot-hub-troubleshoot-connectivity.md)
iot-hub Iot Hub Rm Rest https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-rm-rest.md
You can use the [IoT Hub Resource](/rest/api/iothub/iothubresource) REST API to
## Prerequisites
-* [Azure PowerShell module](/powershell/azure/install-az-ps) or [Azure Cloud Shell](/azure/cloud-shell/overview)
+* [Azure PowerShell module](/powershell/azure/install-az-ps) or [Azure Cloud Shell](../cloud-shell/overview.md)
* [Postman](/rest/api/azure/#how-to-call-azure-rest-apis-with-postman) or [cURL](https://curl.se/)
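If you'd rather not manage the bearer token yourself in Postman or cURL, `az rest` acquires it for you. The following sketch reads an IoT hub through the IoT Hub Resource REST API; the subscription, resource group, hub name, and API version are assumptions you should replace with your own values.

```azurecli
# GET the IoT hub resource; swap in a current GA api-version for Microsoft.Devices/IotHubs if needed
az rest \
    --method get \
    --url "https://management.azure.com/subscriptions/<SUBSCRIPTION-ID>/resourceGroups/<RESOURCE-GROUP>/providers/Microsoft.Devices/IotHubs/<IOT-HUB-NAME>?api-version=2021-07-02"
```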
To learn more about developing for IoT Hub, see the following articles:
To further explore the capabilities of IoT Hub, see:
-* [Deploying AI to edge devices with Azure IoT Edge](../iot-edge/quickstart-linux.md)
+* [Deploying AI to edge devices with Azure IoT Edge](../iot-edge/quickstart-linux.md)
iot-hub Tutorial Routing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/tutorial-routing.md
In this tutorial, you perform the following tasks:
* Make sure that port 8883 is open in your firewall. The sample in this tutorial uses MQTT protocol, which communicates over port 8883. This port may be blocked in some corporate and educational network environments. For more information and ways to work around this issue, see [Connecting to IoT Hub (MQTT)](iot-hub-mqtt-support.md#connecting-to-iot-hub).
-* Optionally, install [Azure IoT Explorer](https://github.com/Azure/azure-iot-explorer). This tool helps you observe the messages as they arrive at your IoT hub.
+* Optionally, install [Azure IoT Explorer](https://github.com/Azure/azure-iot-explorer). This tool helps you observe the messages as they arrive at your IoT hub. This article uses Azure IoT Explorer.
# [Azure portal](#tab/portal)
Now that you have a device ID and key, use the sample code to start sending devi
dotnet restore ```
-1. In an editor of your choice, open the `Paramaters.cs` file. This file shows the parameters that are supported by the sample. Only the first three required parameters will be used in this article when running the sample. Review the code in this file. No changes are needed.
+1. In an editor of your choice, open the `Parameters.cs` file. This file shows the parameters that are supported by the sample. Only the `PrimaryConnectionString` parameter will be used in this article when running the sample. Review the code in this file. No changes are needed.
+ 1. Build and run the sample code using the following command:
- * Replace `<myDeviceId>` with the device ID that you assigned when registering the device.
- * Replace `<iotHubUri>` with the hostname of your IoT hub, which takes the format `IOTHUB_NAME.azure-devices.net`.
- * Replace `<deviceKey>` with the device key that you copied from the device identity information.
+ Replace `<myDevicePrimaryConnectionString>` with the primary connection string for your device in your IoT hub.
```cmd
- dotnet run --d <myDeviceId> --u <iotHubUri> --k <deviceKey>
+ dotnet run --PrimaryConnectionString <myDevicePrimaryConnectionString>
``` 1. You should start to see messages printed to output as they are sent to IoT Hub. Leave this program running for the duration of the tutorial.
Now, use that connection string to configure IoT Explorer for your IoT hub.
1. Select **Save**. 1. Once you connect to your IoT hub, you should see a list of devices. Select the device ID that you created for this tutorial. 1. Select **Telemetry**.
-1. Select **Start**.
+1. With your device still running, select **Start**. If your device isn't running, you won't see telemetry.
![Start monitoring device telemetry in IoT Explorer.](./media/tutorial-routing/iot-explorer-start-monitoring-telemetry.png)
Now, use that connection string to configure IoT Explorer for your IoT hub.
![View messages arriving at IoT hub on the built-in endpoint.](./media/tutorial-routing/iot-explorer-view-messages.png)
-Watch the incoming messages for a few moments to verify that you see three different types of messages: normal, storage, and critical.
+ Watch the incoming messages for a few moments to verify that you see three different types of messages: normal, storage, and critical. After you see all three, you can stop your device.
These messages are all arriving at the default built-in endpoint for your IoT hub. In the next sections, we're going to create a custom endpoint and route some of these messages to storage based on the message properties. Those messages will stop appearing in IoT Explorer because messages only go to the built-in endpoint when they don't match any other routes in IoT hub.
Create an Azure Storage account and a container within that account, which will
1. In the storage account menu, select **Containers** from the **Data storage** section.
-1. Select **Container** to create a new container.
+1. Select **+ Container** to create a new container.
![Create a storage container](./media/tutorial-routing/create-storage-container.png)
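If you'd rather script this step than use the portal, a minimal Azure CLI sketch follows; the account and container names are placeholders, and `--auth-mode login` assumes your signed-in identity has a data-plane role on the account.

```azurecli
az storage container create \
    --account-name <STORAGE-ACCOUNT-NAME> \
    --name <CONTAINER-NAME> \
    --auth-mode login
```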
Now set up the routing for the storage account. In this section you define a new
1. Select **Message Routing** from the **Hub settings** section of the menu.
-1. In the **Routes** tab, select **Add**.
+1. In the **Routes** tab, select **+ Add**.
![Add a new message route.](./media/tutorial-routing/add-route.png)
-1. Select **Add endpoint** next to the **Endpoint** field, then select **Storage** from the dropdown menu.
+1. Select **+ Add endpoint** next to the **Endpoint** field, then select **Storage** from the dropdown menu.
![Add a new endpoint for a route.](./media/tutorial-routing/add-storage-endpoint.png)
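The endpoint and route can also be created from the Azure CLI. The following is only a sketch: resource names, the connection string, and the routing query are placeholders, and parameter names can vary slightly between CLI versions (check `az iot hub route create --help`).

```azurecli
# Create a storage custom endpoint on the hub
az iot hub routing-endpoint create \
    --hub-name <YOUR-IOT-HUB-NAME> \
    --resource-group <YOUR-RESOURCE-GROUP> \
    --endpoint-name storage-endpoint \
    --endpoint-type azurestoragecontainer \
    --endpoint-resource-group <STORAGE-RESOURCE-GROUP> \
    --endpoint-subscription-id <SUBSCRIPTION-ID> \
    --connection-string "<STORAGE-ACCOUNT-CONNECTION-STRING>" \
    --container-name <CONTAINER-NAME>

# Route messages whose application property 'level' equals 'storage' to that endpoint
az iot hub route create \
    --hub-name <YOUR-IOT-HUB-NAME> \
    --resource-group <YOUR-RESOURCE-GROUP> \
    --route-name storage-route \
    --source devicemessages \
    --endpoint-name storage-endpoint \
    --condition 'level="storage"' \
    --enabled true
```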
Once the route is created in IoT Hub and enabled, it will immediately start rout
### Monitor the built-in endpoint with IoT Explorer
-Return to the IoT Explorer session on your development machine. Recall that the IoT Explorer monitors the built-in endpoint for your IoT hub. That means that now you should be seeing only the messages that are *not* being routed by the custom route we created. Watch the incoming messages for a few moments and you should only see messages where `level` is set to `normal` or `critical`.
+Return to the IoT Explorer session on your development machine. Recall that the IoT Explorer monitors the built-in endpoint for your IoT hub. That means that now you should be seeing only the messages that are *not* being routed by the custom route we created.
+
+Start the sample again by running the code. Watch the incoming messages for a few moments and you should only see messages where `level` is set to `normal` or `critical`.
### View messages in the storage container
Verify that the messages are arriving in the storage container.
![Find routed messages in storage.](./media/tutorial-routing/view-messages-in-storage.png)
-1. Download the JSON file and confirm that it contains messages from your device that have the `level` property set to `storage`.
+1. Select the JSON file, and then select **Download**. Confirm that the downloaded file contains messages from your device that have the `level` property set to `storage`.
+
+1. Stop running the sample.
## Clean up resources
key-vault Integrate Databricks Blob Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/general/integrate-databricks-blob-storage.md
az keyvault secret set --vault-name contosoKeyVault10 --name storageKey --value
## Create an Azure Databricks workspace and add Key Vault secret scope
-This section can't be completed through the command line. Follow this [guide](/azure/key-vault/general/integrate-databricks-blob-storage#create-an-azure-databricks-workspace-and-add-a-secret-scope). You'll need to access the [Azure portal](https://portal.azure.com/#home) to:
+This section can't be completed through the command line. You'll need to access the [Azure portal](https://portal.azure.com/#home) to:
1. Create your Azure Databricks resource 1. Launch your workspace
This section can't be completed through the command line. Follow this [guide](/a
## Access your blob container from Azure Databricks workspace
-This section can't be completed through the command line. Follow this [guide](/azure/key-vault/general/integrate-databricks-blob-storage#access-your-blob-container-from-azure-databricks). You'll need to use the Azure Databricks workspace to:
+This section can't be completed through the command line. You'll need to use the Azure Databricks workspace to:
1. Create a **New Cluster** 1. Create a **New Notebook**
key-vault Versions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/general/versions.md
Private endpoints now available in preview. Azure Private Link Service enables y
New features and integrations released this year: - Integration with Azure Functions. For an example scenario leveraging [Azure Functions](../../azure-functions/index.yml) for key vault operations, see [Automate the rotation of a secret](../secrets/tutorial-rotation.md). -- [Integration with Azure Databricks](/azure/key-vault/general/integrate-databricks-blob-storage). With this, Azure Databricks now supports two types of secret scopes: Azure Key Vault-backed and Databricks-backed. For more information, see [Create an Azure Key Vault-backed secret scope](/azure/databricks/security/secrets/secret-scopes#--create-an-azure-key-vault-backed-secret-scope)
+- [Integration with Azure Databricks](./integrate-databricks-blob-storage.md). With this, Azure Databricks now supports two types of secret scopes: Azure Key Vault-backed and Databricks-backed. For more information, see [Create an Azure Key Vault-backed secret scope](/azure/databricks/security/secrets/secret-scopes#--create-an-azure-key-vault-backed-secret-scope)
- [Virtual network service endpoints for Azure Key Vault](overview-vnet-service-endpoints.md). ## 2016
First preview version (version 2014-12-08-preview) was announced on January 8, 2
- [About keys, secrets, and certificates](about-keys-secrets-certificates.md) - [Keys](../keys/index.yml) - [Secrets](../secrets/index.yml)-- [Certificates](../certificates/index.yml)
+- [Certificates](../certificates/index.yml)
key-vault Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/general/whats-new.md
Private endpoints now available in preview. Azure Private Link Service enables y
New features and integrations released this year: - Integration with Azure Functions. For an example scenario leveraging [Azure Functions](../../azure-functions/index.yml) for key vault operations, see [Automate the rotation of a secret](../secrets/tutorial-rotation.md).-- [Integration with Azure Databricks](/azure/key-vault/general/integrate-databricks-blob-storage). With this, Azure Databricks now supports two types of secret scopes: Azure Key Vault-backed and Databricks-backed. For more information, see [Create an Azure Key Vault-backed secret scope](/azure/databricks/security/secrets/secret-scopes#--create-an-azure-key-vault-backed-secret-scope)
+- [Integration with Azure Databricks](./integrate-databricks-blob-storage.md). With this, Azure Databricks now supports two types of secret scopes: Azure Key Vault-backed and Databricks-backed. For more information, see [Create an Azure Key Vault-backed secret scope](/azure/databricks/security/secrets/secret-scopes#--create-an-azure-key-vault-backed-secret-scope)
- [Virtual network service endpoints for Azure Key Vault](overview-vnet-service-endpoints.md). ## 2016
First preview version (version 2014-12-08-preview) was announced on January 8, 2
## Next steps
-If you have additional questions, please contact us through [support](https://azure.microsoft.com/support/options/).
+If you have additional questions, please contact us through [support](https://azure.microsoft.com/support/options/).
key-vault Access Control https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/managed-hsm/access-control.md
The following table shows the endpoints for the management and data planes.
## Management plane and Azure RBAC
-In the management plane, you use Azure RBAC to authorize the operations a caller can execute. In the Azure RBAC model, each Azure subscription has an instance of Azure Active Directory. You grant access to users, groups, and applications from this directory. Access is granted to manage resources in the Azure subscription that use the Azure Resource Manager deployment model. To grant access, use the [Azure portal](https://portal.azure.com/), the [Azure CLI](/cli/azure/install-classic-cli), [Azure PowerShell](/powershell/azureps-cmdlets-docs), or the [Azure Resource Manager REST APIs](/rest/api/authorization/roleassignments).
+In the management plane, you use Azure RBAC to authorize the operations a caller can execute. In the Azure RBAC model, each Azure subscription has an instance of Azure Active Directory. You grant access to users, groups, and applications from this directory. Access is granted to manage resources in the Azure subscription that use the Azure Resource Manager deployment model. To grant access, use the [Azure portal](https://portal.azure.com/), the [Azure CLI](/cli/azure/install-classic-cli), [Azure PowerShell](/powershell/azureps-cmdlets-docs), or the [Azure Resource Manager REST APIs](/rest/api/authorization/role-assignments).
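As an illustration only, a management-plane grant with the Azure CLI looks like the following sketch; the role, principal, and scope are example values, not prescriptions from this article.

```azurecli
az role assignment create \
    --assignee "user@contoso.com" \
    --role "Key Vault Contributor" \
    --scope "/subscriptions/<SUBSCRIPTION-ID>/resourceGroups/<RESOURCE-GROUP>"
```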
You create a key vault in a resource group and manage access by using Azure Active Directory. You grant users or groups the ability to manage the key vaults in a resource group. You grant the access at a specific scope level by assigning appropriate Azure roles. To grant a user access to manage key vaults, you assign a predefined `key vault Contributor` role to the user at a specific scope. The following scope levels can be assigned to an Azure role:
lab-services Azure Polices For Lab Services https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/azure-polices-for-lab-services.md
Azure Policy helps you manage and prevent IT issues by applying policy definitio
1. Lab Services should require non-admin user for labs 1. Lab Services should restrict allowed virtual machine SKU sizes
-For a full list of built-in policies, including policies for Lab Services, see [Azure Policy built-in policy definitions](/azure/governance/policy/samples/built-in-policies#lab-services).
+For a full list of built-in policies, including policies for Lab Services, see [Azure Policy built-in policy definitions](../governance/policy/samples/built-in-policies.md#lab-services).
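To apply one of these built-in policies, you can create an assignment such as the following Azure CLI sketch. The definition ID, scope, and the `effect` parameter name are placeholders and assumptions; confirm them on the built-in policy's definition page before assigning.

```azurecli
az policy assignment create \
    --name "lab-services-shutdown-audit" \
    --display-name "Audit labs that do not enable all shutdown options" \
    --policy "<BUILT-IN-POLICY-DEFINITION-ID>" \
    --scope "/subscriptions/<SUBSCRIPTION-ID>/resourceGroups/<RESOURCE-GROUP>" \
    --params '{ "effect": { "value": "Audit" } }'
```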
This policy enforces that all [shutdown options](how-to-configure-auto-shutdown-
|**Effect**|**Behavior**| |--|--|
-|**Audit**|Labs will show on the [compliance dashboard](/azure/governance/policy/assign-policy-portal#identify-non-compliant-resources) as non-compliant when all shutdown options are not enabled for a lab. |
+|**Audit**|Labs will show on the [compliance dashboard](../governance/policy/assign-policy-portal.md#identify-non-compliant-resources) as non-compliant when all shutdown options are not enabled for a lab. |
|**Deny**|Lab creation will fail if all shutdown options are not enabled. | ## Lab Services should not allow template virtual machines for labs
This policy can be used to restrict [customization of lab templates](tutorial-se
|**Effect**|**Behavior**| |--|--|
-|**Audit**|Labs will show on the [compliance dashboard](/azure/governance/policy/assign-policy-portal#identify-non-compliant-resources) as non-compliant when a template virtual machine is used for a lab.|
+|**Audit**|Labs will show on the [compliance dashboard](../governance/policy/assign-policy-portal.md#identify-non-compliant-resources) as non-compliant when a template virtual machine is used for a lab.|
|**Deny**|Lab creation will fail if the "create a template virtual machine" option is used for a lab.| ## Lab Services requires non-admin user for labs
During the policy assignment, the lab administrator can choose the following eff
|**Effect**|**Behavior**| |--|--|
-|**Audit**|Labs show on the [compliance dashboard](/azure/governance/policy/assign-policy-portal#identify-non-compliant-resources) as non-compliant when non-admin accounts are not used while creating the lab.|
+|**Audit**|Labs show on the [compliance dashboard](../governance/policy/assign-policy-portal.md#identify-non-compliant-resources) as non-compliant when non-admin accounts are not used while creating the lab.|
|**Deny**|Lab creation will fail if "Give lab users a non-admin account on their virtual machines" is not checked while creating a lab.| ## Lab Services should restrict allowed virtual machine SKU sizes
During the policy assignment, the Lab Administrator can choose the following eff
|**Effect**|**Behavior**| |--|--|
-|**Audit**|Labs show on the [compliance dashboard](/azure/governance/policy/assign-policy-portal#identify-non-compliant-resources) as non-compliant when a non-allowed SKU is used while creating the lab.|
+|**Audit**|Labs show on the [compliance dashboard](../governance/policy/assign-policy-portal.md#identify-non-compliant-resources) as non-compliant when a non-allowed SKU is used while creating the lab.|
|**Deny**|Lab creation will fail if the SKU chosen while creating a lab is not allowed per the policy assignment.| ## Custom policies
Learn how to create custom policies:
See the following articles: - [How to use the Lab Services should restrict allowed virtual machine SKU sizes Azure policy](how-to-use-restrict-allowed-virtual-machine-sku-sizes-policy.md)-- [Built-in Policies](/azure/governance/policy/samples/built-in-policies#lab-services)-- [What is Azure policy?](/azure/governance/policy/overview)
+- [Built-in Policies](../governance/policy/samples/built-in-policies.md#lab-services)
+- [What is Azure policy?](../governance/policy/overview.md)
lab-services How To Connect Vnet Injection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/how-to-connect-vnet-injection.md
Before you configure advanced networking for your lab plan, complete the followi
1. [Create a subnet](../virtual-network/virtual-network-manage-subnet.md) for the virtual network. 1. [Delegate the subnet](#delegate-the-virtual-network-subnet-for-use-with-a-lab-plan) to **Microsoft.LabServices/labplans**. 1. [Create a network security group (NSG)](../virtual-network/manage-network-security-group.md).
-1. [Create an inbound rule to allow traffic from SSH and RDP ports](/azure/virtual-network/manage-network-security-group).
+1. [Create an inbound rule to allow traffic from SSH and RDP ports](../virtual-network/manage-network-security-group.md).
1. [Associate the NSG to the delegated subnet](#associate-delegated-subnet-with-nsg). Now that the prerequisites have been completed, you can [use advanced networking to connect your virtual network during lab plan creation](#connect-the-virtual-network-during-lab-plan-creation).
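A rough Azure CLI equivalent of the delegation and inbound-rule prerequisites is sketched below. The resource names are placeholders, and the rule assumes the default SSH (22) and RDP (3389) ports.

```azurecli
# Delegate the subnet to Azure Lab Services lab plans
az network vnet subnet update \
    --resource-group <RESOURCE-GROUP> \
    --vnet-name <VNET-NAME> \
    --name <SUBNET-NAME> \
    --delegations Microsoft.LabServices/labplans

# Allow inbound SSH and RDP traffic on the network security group
az network nsg rule create \
    --resource-group <RESOURCE-GROUP> \
    --nsg-name <NSG-NAME> \
    --name AllowSshRdpInbound \
    --priority 1000 \
    --direction Inbound \
    --access Allow \
    --protocol Tcp \
    --destination-port-ranges 22 3389
```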
See the following articles:
- As an admin, [attach a compute gallery to a lab plan](how-to-attach-detach-shared-image-gallery.md). - As an admin, [configure automatic shutdown settings for a lab plan](how-to-configure-auto-shutdown-lab-plans.md).-- As an admin, [add lab creators to a lab plan](add-lab-creator.md).
+- As an admin, [add lab creators to a lab plan](add-lab-creator.md).
lab-services How To Use Restrict Allowed Virtual Machine Sku Sizes Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/how-to-use-restrict-allowed-virtual-machine-sku-sizes-policy.md
Now you have a lab plan resource ID, you can use it to exclude the lab plan as y
## Next steps See the following articles: - [What's new with Azure Policy for Lab Services?](azure-polices-for-lab-services.md)- [Built-in Policies](/azure/governance/policy/samples/built-in-policies#lab-services)- [What is Azure policy?](/azure/governance/policy/overview)-
+- [Built-in Policies](../governance/policy/samples/built-in-policies.md#lab-services)
+- [What is Azure policy?](../governance/policy/overview.md)
lab-services Reliability In Azure Lab Services https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/reliability-in-azure-lab-services.md
Last updated 08/18/2022
# What is reliability in Azure Lab Services?
-This article describes reliability support in Azure Lab Services, and covers regional resiliency with availability zones. For a more detailed overview of reliability in Azure, see [Azure resiliency](/azure/availability-zones/overview).
+This article describes reliability support in Azure Lab Services, and covers regional resiliency with availability zones. For a more detailed overview of reliability in Azure, see [Azure resiliency](../availability-zones/overview.md).
## Availability zone support
-Azure availability zones are at least three physically separate groups of datacenters within each Azure region. Datacenters within each zone are equipped with independent power, cooling, and networking infrastructure. In the case of a local zone failure, availability zones allow the services to fail over to the other availability zones to provide continuity in service with minimal interruption. Failures can range from software and hardware failures to events such as earthquakes, floods, and fires. Tolerance to failures is achieved with redundancy and logical isolation of Azure services. For more detailed information on availability zones in Azure, see [Regions and availability zones](/azure/availability-zones/az-overview).
+Azure availability zones are at least three physically separate groups of datacenters within each Azure region. Datacenters within each zone are equipped with independent power, cooling, and networking infrastructure. In the case of a local zone failure, availability zones allow the services to fail over to the other availability zones to provide continuity in service with minimal interruption. Failures can range from software and hardware failures to events such as earthquakes, floods, and fires. Tolerance to failures is achieved with redundancy and logical isolation of Azure services. For more detailed information on availability zones in Azure, see [Regions and availability zones](../availability-zones/az-overview.md).
Azure availability zones-enabled services are designed to provide the right level of resiliency and flexibility. They can be configured in two ways. They can be either zone redundant, with automatic replication across zones, or zonal, with instances pinned to a specific zone. You can also combine these approaches. For more information on zonal vs. zone-redundant architecture, see [Build solutions with availability zones](/azure/architecture/high-availability/building-solutions-for-high-availability).
Currently, the service is not zonal. That is, you can't configure a lab or the
There are no increased SLAs available for availability in Azure Lab Services. For the monthly uptime SLAs for Azure Lab Services, see [SLA for Azure Lab Services](https://azure.microsoft.com/support/legal/sla/lab-services/v1_0/).
-The Azure Lab Services infrastructure uses Azure Cosmos DB storage. The Azure Cosmos DB storage region is the same as the region where the lab plan is located. All the regional Azure Cosmos DB accounts are single region. In the zone-redundant regions listed in this article, the Azure Cosmos DB accounts are single region with Availability Zones. In the other regions, the accounts are single region without Availability Zones. For high availability capabilities for these account types, see [SLAs for Azure Cosmos DB](/azure/cosmos-db/high-availability#slas).
+The Azure Lab Services infrastructure uses Azure Cosmos DB storage. The Azure Cosmos DB storage region is the same as the region where the lab plan is located. All the regional Azure Cosmos DB accounts are single region. In the zone-redundant regions listed in this article, the Azure Cosmos DB accounts are single region with Availability Zones. In the other regions, the accounts are single region without Availability Zones. For high availability capabilities for these account types, see [SLAs for Azure Cosmos DB](../cosmos-db/high-availability.md#slas).
### Zone down experience
In the event of a zone outage in these regions, you can still perform the follow
- Configure lab schedules - Create/manage labs and VMs in regions unaffected by the zone outage.
-Data loss may occur only with an unrecoverable disaster in the Azure Cosmos DB region. For more information, see [Region outages](/azure/cosmos-db/high-availability#region-outages).
+Data loss may occur only with an unrecoverable disaster in the Azure Cosmos DB region. For more information, see [Region outages](../cosmos-db/high-availability.md#region-outages).
For regions not listed, access to the Azure Lab Services infrastructure is not guaranteed when there is a zone outage in the region containing the lab plan. You will only be able to perform the following tasks:
Azure Lab Services does not provide regional failover support. If you want to pr
### Outage detection, notification, and management
-Azure Lab Services does not provide any service-specific signals about an outage, but is dependent on Azure communications that inform customers about outages. For more information on service health, see [Resource health overview](/azure/service-health/resource-health-overview).
+Azure Lab Services does not provide any service-specific signals about an outage, but is dependent on Azure communications that inform customers about outages. For more information on service health, see [Resource health overview](../service-health/resource-health-overview.md).
## Next steps > [!div class="nextstepaction"]
-> [Resiliency in Azure](/azure/availability-zones/overview)
+> [Resiliency in Azure](../availability-zones/overview.md)
lab-services Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/troubleshoot.md
This article provides several common reasons why an educator might not be able t
Possible issues: -- The Azure Compute Gallery is not connected to the lab plan. To connect an Azure Compute Gallery, see [Attach or detach a compute gallery](/azure/lab-services/how-to-attach-detach-shared-image-gallery).
+- The Azure Compute Gallery is not connected to the lab plan. To connect an Azure Compute Gallery, see [Attach or detach a compute gallery](./how-to-attach-detach-shared-image-gallery.md).
- The image is not enabled by the administrator. This applies to both Marketplace images and Azure Compute Gallery images. To enable images, see [Specify marketplace images for labs](specify-marketplace-images.md). -- The image in the attached Azure Compute Gallery is not replicated to the same location as the lab plan. For more information, see [Store and share images in an Azure Compute Gallery](/azure/virtual-machines/shared-image-galleries).
+- The image in the attached Azure Compute Gallery is not replicated to the same location as the lab plan. For more information, see [Store and share images in an Azure Compute Gallery](../virtual-machines/shared-image-galleries.md).
- Image sizes greater than 127GB or with multiple disks are not supported.
lab-services Tutorial Create Lab With Advanced Networking https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/tutorial-create-lab-with-advanced-networking.md
An Azure account with an active subscription. [Create an account for free](https
[!INCLUDE [resource group definition](../../includes/resource-group.md)]
-The following steps show how to use the Azure portal to [create a resource group](/azure/azure-resource-manager/management/manage-resource-groups-portal). For simplicity, we'll put all resources for this tutorial in the same resource group.
+The following steps show how to use the Azure portal to [create a resource group](../azure-resource-manager/management/manage-resource-groups-portal.md). For simplicity, we'll put all resources for this tutorial in the same resource group.
1. Sign in to the [Azure portal](https://portal.azure.com). 1. Select **Resource groups**.
The following steps show how to use the Azure portal to create a virtual network
1. On the **IP Addresses** tab, create a subnet that will be used by the labs. 1. Select **+ Add subnet** 1. For **Subnet name**, enter **labservices-subnet**.
- 1. For **Subnet address range**, enter range in CIDR notation. For example, 10.0.1.0/24 will have enough IP addresses for 251 lab VMs. (Five IP addresses are reserved by Azure for every subnet.) To create a subnet with more available IP addresses for VMs, use a different CIDR prefix length. For example, 10.0.0.0/20 would have room for over 4000 IP addresses for lab VMs. For more information about adding subnets, see [Add a subnet](/azure/virtual-network/virtual-network-manage-subnet).
+ 1. For **Subnet address range**, enter a range in CIDR notation. For example, 10.0.1.0/24 will have enough IP addresses for 251 lab VMs. (Five IP addresses are reserved by Azure for every subnet.) To create a subnet with more available IP addresses for VMs, use a different CIDR prefix length. For example, 10.0.0.0/20 would have room for over 4000 IP addresses for lab VMs. For more information about adding subnets, see [Add a subnet](../virtual-network/virtual-network-manage-subnet.md).
1. Select **OK**. 1. Select **Review + Create**.
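The same virtual network and subnet could also be created with the Azure CLI. The sketch below reuses the example ranges from the step above; the resource group name is a placeholder.

```azurecli
az network vnet create \
    --resource-group <RESOURCE-GROUP> \
    --name MyVirtualNetwork \
    --address-prefixes 10.0.0.0/16 \
    --subnet-name labservices-subnet \
    --subnet-prefixes 10.0.1.0/24
```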
The following steps show how to use the Azure portal to create a virtual network
## Delegate subnet to Azure Lab Services
-In this section, we'll configure the subnet to be used with Azure Lab Services. To tell Azure Lab Services that a subnet may be used, the subnet must be [delegated to the service](/azure/virtual-network/manage-subnet-delegation).
+In this section, we'll configure the subnet to be used with Azure Lab Services. To tell Azure Lab Services that a subnet may be used, the subnet must be [delegated to the service](../virtual-network/manage-subnet-delegation.md).
1. Open the **MyVirtualNetwork** resource. 1. Select the **Subnets** item on the left menu.
If you're not going to continue to use this application, delete the virtual netw
## Next steps >[!div class="nextstepaction"]
->[Add students to the labs](how-to-configure-student-usage.md)
+>[Add students to the labs](how-to-configure-student-usage.md)
load-balancer Backend Pool Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/backend-pool-management.md
az vm create \
* Limit of 100 IP addresses in the backend pool for IP based LBs * The backend resources must be in the same virtual network as the load balancer for IP based LBs * A load balancer with IP based Backend Pool can't function as a Private Link service
- * [Private endpoint resources](/azure/private-link/private-endpoint-overview) can't be placed in a IP based backend pool
+ * [Private endpoint resources](../private-link/private-endpoint-overview.md) can't be placed in a IP based backend pool
* ACI containers aren't currently supported by IP based LBs * Load balancers or services such as Application Gateway canΓÇÖt be placed in the backend pool of the load balancer * Inbound NAT Rules canΓÇÖt be specified by IP address
In this article, you learned about Azure Load Balancer backend pool management a
Learn more about [Azure Load Balancer](load-balancer-overview.md).
-Review the [REST API](/rest/api/load-balancer/loadbalancerbackendaddresspools/createorupdate) for IP based backend pool management.
+Review the [REST API](/rest/api/load-balancer/loadbalancerbackendaddresspools/createorupdate) for IP based backend pool management.
load-balancer Move Across Regions Internal Load Balancer Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/move-across-regions-internal-load-balancer-powershell.md
The following steps show how to prepare the internal load balancer for the move
}, ``` For more information on the differences between basic and standard sku load balancers, see [Azure Standard Load Balancer overview](./load-balancer-overview.md)
+
+ * **Availability zone**. You can change the zone(s) of the load balancer's frontend by changing the zone property. If the zone property isn't specified, the frontend will be created as no-zone. You can specify a single zone to create a zonal frontend or all 3 zones for a zone-redundant frontend.
+ ```json
+ "frontendIPConfigurations": [
+ {
+ "name": "myfrontendIPinbound",
+ "id": "[concat(resourceId('Microsoft.Network/loadBalancers', parameters('loadBalancers_myLoadBalancer_name')), '/frontendIPConfigurations/myfrontendIPinbound')]",
+ "type": "Microsoft.Network/loadBalancers/frontendIPConfigurations",
+ "properties": {
+ "provisioningState": "Succeeded",
+ "privateIPAddress": "10.0.0.1",
+ "privateIPAllocationMethod": "Static",
+ "subnet": {
+ "id": "[concat(resourceId('Microsoft.Network/virtualNetworks', parameters('virtualNetworks_myVNET1_name')), '/subnets/subnet-1')]"
+ },
+ "privateIPAddressVersion": "IPv4"
+ },
+ "zones": [
+ "1",
+ "2",
+ "3"
+ ]
+ }
+ ],
+ ```
* **Load balancing rules** - You can add or remove load balancing rules in the configuration by adding or removing entries to the **loadBalancingRules** section of the **\<resource-group-name>.json** file:
load-balancer Upgrade Basic Standard Virtual Machine Scale Sets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/upgrade-basic-standard-virtual-machine-scale-sets.md
The script migrates the following from the Basic load balancer to the Standard l
- Inbound NAT Rules: - All NAT rules will be migrated to the new Standard load balancer - Outbound Rules:
- - Basic load balancers don't support configured outbound rules. The script will create an outbound rule in the Standard load balancer to preserve the outbound behavior of the Basic load balancer. For more information about outbound rules, see [Outbound rules](/azure/load-balancer/outbound-rules).
+ - Basic load balancers don't support configured outbound rules. The script will create an outbound rule in the Standard load balancer to preserve the outbound behavior of the Basic load balancer. For more information about outbound rules, see [Outbound rules](./outbound-rules.md).
- Network security group - Basic load balancer doesn't require a network security group to allow outbound connectivity. In case there's no network security group associated with the virtual machine scale set, a new network security group will be created to preserve the same functionality. This new network security group will be associated to the virtual machine scale set backend pool member network interfaces. It will allow the same load balancing rules ports and protocols and preserve the outbound connectivity. - Backend pools:
The script migrates the following from the Basic load balancer to the Standard l
- If there's a virtual machine scale set using Rolling Upgrade policy, the script will update the virtual machine scale set upgrade policy to "Manual" during the migration process and revert it back to "Rolling" after the migration is completed. >[!NOTE]
-> Network security group are not configured as part of Internal Load Balancer upgrade. To learn more about NSGs, see [Network security groups](/azure/virtual-network/network-security-groups-overview)
+> Network security groups are not configured as part of an Internal Load Balancer upgrade. To learn more about NSGs, see [Network security groups](../virtual-network/network-security-groups-overview.md)
### What happens if my upgrade fails mid-migration? The module is designed to accommodate failures, either due to unhandled errors or unexpected script termination. The failure design is a 'fail forward' approach, where instead of attempting to move back to the Basic load balancer, you should correct the issue causing the failure (see the error output or log file), and retry the migration again, specifying the `-FailedMigrationRetryFilePathLB <BasicLoadBalancerbackupFilePath> -FailedMigrationRetryFilePathVMSS <VMSSBackupFile>` parameters. For public load balancers, because the Public IP Address SKU has been updated to Standard, moving the same IP back to a Basic load balancer won't be possible. The basic failure recovery procedure is: 1. Address the cause of the migration failure. Check the log file `Start-AzBasicLoadBalancerUpgrade.log` for details
- 1. [Remove the new Standard load balancer](/azure/load-balancer/update-load-balancer-with-vm-scale-set) (if created). Depending on which stage of the migration failed, you may have to remove the Standard load balancer reference from the virtual machine scale set network interfaces (IP configurations) and health probes in order to remove the Standard load balancer and try again.
+ 1. [Remove the new Standard load balancer](./update-load-balancer-with-vm-scale-set.md) (if created). Depending on which stage of the migration failed, you may have to remove the Standard load balancer reference from the virtual machine scale set network interfaces (IP configurations) and health probes in order to remove the Standard load balancer and try again.
1. Locate the basic load balancer state backup file. This will either be in the directory where the script was executed, or at the path specified with the `-RecoveryBackupPath` parameter during the failed execution. The file will be named: `State_<basicLBName>_<basicLBRGName>_<timestamp>.json` 1. Rerun the migration script, specifying the `-FailedMigrationRetryFilePathLB <BasicLoadBalancerbackupFilePath> -FailedMigrationRetryFilePathVMSS <VMSSBackupFile>` parameters instead of -BasicLoadBalancerName or passing the Basic load balancer over the pipeline ## Next steps
-[Learn about Azure Load Balancer](load-balancer-overview.md)
+[Learn about Azure Load Balancer](load-balancer-overview.md)
logic-apps Create Managed Service Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/create-managed-service-identity.md
[!INCLUDE [logic-apps-sku-consumption-standard](../../includes/logic-apps-sku-consumption-standard.md)]
-In logic app workflows, some triggers and actions support using a managed identity for authenticating access to resources protected by Azure Active Directory (Azure AD). When you use a managed identity to authenticate your connection, you don't have to provide credentials, secrets, or Azure AD tokens. Azure manages this identity and helps keep authentication information secure because you don't have to manage this sensitive information. For more information, see [What are managed identities for Azure resources?](/active-directory/managed-identities-azure-resources/overview.md).
+In logic app workflows, some triggers and actions support using a managed identity for authenticating access to resources protected by Azure Active Directory (Azure AD). When you use a managed identity to authenticate your connection, you don't have to provide credentials, secrets, or Azure AD tokens. Azure manages this identity and helps keep authentication information secure because you don't have to manage this sensitive information. For more information, see [What are managed identities for Azure resources?](../active-directory/managed-identities-azure-resources/overview.md).
Azure Logic Apps supports the [*system-assigned* managed identity](../active-directory/managed-identities-azure-resources/overview.md) and the [*user-assigned* managed identity](../active-directory/managed-identities-azure-resources/overview.md). The following list describes some differences between these identity types:
logic-apps Logic Apps Create Logic Apps From Templates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/logic-apps-create-logic-apps-from-templates.md
Title: Create logic app workflows faster with prebuilt templates
-description: Quickly build logic app workflows with prebuilt templates in Azure Logic Apps.
+description: Quickly build logic app workflows with prebuilt templates in Azure Logic Apps, and find out about available templates.
ms.suite: integration Previously updated : 08/01/2022 Last updated : 10/12/2022
+#Customer intent: As an Azure Logic Apps developer, I want to build a logic app workflow from a template so that I can reduce development time.
-# Create logic app workflows from prebuilt templates
+# Create a logic app workflow from a prebuilt template
[!INCLUDE [logic-apps-sku-consumption](../../includes/logic-apps-sku-consumption.md)]
-To get you started creating workflows more quickly,
-Logic Apps provides templates, which are prebuilt
-logic apps that follow commonly used patterns.
-Use these templates as provided or edit them to fit your scenario.
+To get you started creating workflows quickly, Azure Logic Apps provides templates, which are prebuilt logic app workflows that follow commonly used patterns.
+
+This how-to guide shows how to use these templates as provided or edit them to fit your scenario.
Here are some template categories:

| Template type | Description |
| - | -- |
-| Enterprise cloud templates | For integrating Azure Blob, Dynamics CRM, Salesforce, Box, and includes other connectors for your enterprise cloud needs. For example, you can use these templates to organize business leads or back up your corporate file data. |
-| Personal productivity templates | Improve personal productivity by setting daily reminders, turning important work items into to-do lists, and automating lengthy tasks down to a single user approval step. |
-| Consumer cloud templates | For integrating social media services such as Twitter, Slack, and email. Useful for strengthening social media marketing initiatives. These templates also include tasks such as cloud copying, which increases productivity by saving time on traditionally repetitive tasks. |
-| Enterprise integration pack templates | For configuring VETER (validate, extract, transform, enrich, route) pipelines, receiving an X12 EDI document over AS2 and transforming to XML, and handling X12, EDIFACT, and AS2 messages. |
-| Protocol pattern templates | For implementing protocol patterns such as request-response over HTTP and integrations across FTP and SFTP. Use these templates as provided, or build on them for complex protocol patterns. |
+| Enterprise cloud | For integrating Azure Blob Storage, Dynamics CRM, Salesforce, and Box. Also includes other connectors for your enterprise cloud needs. For example, you can use these templates to organize business leads or back up your corporate file data. |
+| Personal productivity | For improving personal productivity. You can use these templates to set daily reminders, turn important work items into to-do lists, and automate lengthy tasks down to a single user-approval step. |
+| Consumer cloud | For integrating social media services such as Twitter, Slack, and email. Useful for strengthening social media marketing initiatives. These templates also include tasks such as cloud copying, which increases productivity by saving time on traditionally repetitive tasks. |
+| Enterprise integration pack | For configuring validate, extract, transform, enrich, and route (VETER) pipelines. Also for receiving an X12 EDI document over AS2 and transforming it to XML, and for handling X12, EDIFACT, and AS2 messages. |
+| Protocol pattern | For implementing protocol patterns such as request-response over HTTP and integrations across FTP and SFTP. Use these templates as provided, or build on them for complex protocol patterns. |
|||
-If you don't have an Azure subscription,
-[sign up for a free Azure account](https://azure.microsoft.com/free/) before you begin. For more information about building a logic app, see [Create a logic app](../logic-apps/quickstart-create-first-logic-app-workflow.md).
+## Prerequisites
-## Create logic apps from templates
+- An Azure account and subscription. If you don't have a subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+- A basic understanding of how to build a logic app workflow. For more information, see [Create a Consumption logic app workflow](../logic-apps/quickstart-create-first-logic-app-workflow.md).
-1. If you haven't already, sign in to the
-[Azure portal](https://portal.azure.com "Azure portal").
+## Create a logic app workflow from a template
-2. From the main Azure menu, choose
-**Create a resource** > **Enterprise Integration** > **Logic App**.
+1. Sign in to the [Azure portal](https://portal.azure.com).
- ![Azure portal, New, Enterprise Integration, Logic App](./media/logic-apps-create-logic-apps-from-templates/azure-portal-create-logic-app.png)
+1. Select **Create a resource** > **Integration** > **Logic App**.
-3. Create your logic app with the settings in the table under this image:
+ :::image type="content" source="./media/logic-apps-create-logic-apps-from-templates/azure-portal-create-logic-app.png" alt-text="Screenshot of the Azure portal. Under 'Popular Azure services,' 'Logic App' is highlighted. On the navigation menu, 'Integration' is highlighted.":::
- ![Provide logic app details](./media/logic-apps-create-logic-apps-from-templates/logic-app-settings.png)
+1. In the **Create Logic App** page, enter the following values:
| Setting | Value | Description |
| - | -- | -- |
- | **Name** | *your-logic-app-name* | Provide a unique logic app name. |
- | **Subscription** | *your-Azure-subscription-name* | Select the Azure subscription that you want to use. |
- | **Resource group** | *your-Azure-resource-group-name* | Create or select an [Azure resource group](../azure-resource-manager/management/overview.md) for this logic app and to organize all resources associated with this app. |
- | **Location** | *your-Azure-datacenter-region* | Select the datacenter region for deploying your logic app, for example, West US. |
- | **Log Analytics** | **Off** (default) or **On** | Set up [diagnostic logging](../logic-apps/monitor-logic-apps-log-analytics.md) for your logic app by using [Azure Monitor logs](../azure-monitor/logs/log-query-overview.md). Requires that you already have a Log Analytics workspace. |
- ||||
-
-4. When you're ready, select **Pin to dashboard**.
-That way, your logic app automatically appears on
-your Azure dashboard and opens after deployment.
-Choose **Create**.
+ | **Subscription** | <*your-Azure-subscription-name*> | Select the Azure subscription that you want to use. |
+ | **Resource Group** | <*your-Azure-resource-group-name*> | Create or select an [Azure resource group](../azure-resource-manager/management/overview.md) for this logic app resource and its associated resources. |
+ | **Logic App name** | <*your-logic-app-name*> | Provide a unique logic app resource name. |
+ | **Region** | <*your-Azure-datacenter-region*> | Select the datacenter region for deploying your logic app, for example, **West US**. |
+ | **Enable log analytics** | **No** (default) or **Yes** | To set up [diagnostic logging](../logic-apps/monitor-logic-apps-log-analytics.md) for your logic app resource by using [Azure Monitor logs](../azure-monitor/logs/log-query-overview.md), select **Yes**. This selection requires that you already have a Log Analytics workspace. |
+ | **Plan type** | **Consumption** or **Standard** | Select **Consumption** to create a Consumption logic app workflow. |
+ | **Zone redundancy** | **Disabled** (default) or **Enabled** | If this option is available, select **Enabled** if you want to protect your logic app resource from a regional failure. But first [check that zone redundancy is available in your Azure region](/azure/logic-apps/set-up-zone-redundancy-availability-zones?tabs=consumption#considerations). |
+ ||||
- > [!NOTE]
- > If you don't want to pin your logic app,
- > you must manually find and open your logic app
- > after deployment so you can continue.
+ :::image type="content" source="./media/logic-apps-create-logic-apps-from-templates/logic-app-settings.png" alt-text="Screenshot of the 'Create Logic App' page. The 'Consumption' plan type is selected, and values are visible in other input fields.":::
- After Azure deploys your logic app, the Logic Apps Designer
- opens and shows a page with an introduction video.
- Under the video, you can find templates for common logic app patterns.
+1. Select **Review + Create**.
-5. Scroll past the introduction video and common triggers to **Templates**.
-Choose a prebuilt template. For example:
+1. Review the values, and then select **Create**.
- ![Choose a logic app template](./media/logic-apps-create-logic-apps-from-templates/choose-logic-app-template.png)
+ :::image type="content" source="./media/logic-apps-create-logic-apps-from-templates/create-logic-app.png" alt-text="Screenshot of the 'Create Logic App' page. The name, subscription, and other values are visible, and the 'Create' button is highlighted.":::
- > [!TIP]
- > To create your logic app from scratch, choose **Blank Logic App**.
+1. When deployment is complete, select **Go to resource**. The designer opens and shows a page with an introduction video. Under the video, you can find templates for common logic app workflow patterns.
+
+1. Scroll past the introduction video and common triggers to **Templates**. Select a prebuilt template.
+
+ :::image type="content" source="./media/logic-apps-create-logic-apps-from-templates/choose-logic-app-template.png" alt-text="Screenshot of the designer. Under 'Templates,' three templates are visible. One called 'Delete old Azure blobs' is highlighted.":::
- When you select a prebuilt template,
- you can view more information about that template.
- For example:
+ When you select a prebuilt template, you can view more information about that template.
- ![Choose a prebuilt template](./media/logic-apps-create-logic-apps-from-templates/logic-app-choose-prebuilt-template.png)
+ :::image type="content" source="./media/logic-apps-create-logic-apps-from-templates/logic-app-choose-prebuilt-template.png" alt-text="Screenshot that shows information about the 'Delete old Azure blobs' template, including a description and a diagram that shows a recurring schedule.":::
-6. To continue with the selected template,
-choose **Use this template**.
+1. To continue with the selected template, select **Use this template**.
-7. Based on the connectors in the template,
-you are prompted to perform any of these steps:
+1. Based on the connectors in the template, you're prompted to perform any of these steps:
- * Sign in with your credentials to systems or services
- that are referenced by the template.
+ * Sign in with your credentials to systems or services that are referenced by the template.
- * Create connections for any services or systems
- referenced by the template. To create a connection,
- provide a name for your connection, and if necessary,
- select the resource that you want to use.
+ * Create connections for any systems or services that are referenced by the template. To create a connection, provide a name for your connection, and if necessary, select the resource that you want to use.
- * If you already set up these connections,
- choose **Continue**.
+ > [!NOTE]
+ > Many templates include connectors that have required properties that are prepopulated. Other templates require that you provide values before you can properly deploy the logic app workflow. If you try to deploy without completing the missing property fields, you get an error message.
- For example:
+1. After you set up your required connections, select **Continue**.
- ![Create connections](./media/logic-apps-create-logic-apps-from-templates/logic-app-create-connection.png)
+ :::image type="content" source="./media/logic-apps-create-logic-apps-from-templates/logic-app-create-connection.png" alt-text="Screenshot of the designer. A connection for Azure Blob Storage is visible, and the 'Continue' button is highlighted.":::
- When you're done, your logic app opens
- and appears in the Logic Apps Designer.
+ The designer opens and displays your logic app workflow.
> [!TIP]
- > To return to the template viewer, choose **Templates**
- > on the designer toolbar. This action discards any unsaved changes,
- > so a warning message appears to confirm your request.
+ > To return to the template viewer, select **Templates** on the designer toolbar. This action discards any unsaved changes, so a warning message appears to confirm your request.
-8. Continue building your logic app.
+1. Continue building your logic app workflow.
- > [!NOTE]
- > Many templates include connectors that might
- > already have prepopulated required properties.
- > However, some templates might still require that you provide
- > values before you can properly deploy the logic app.
- > If you try to deploy without completing the missing property fields,
- > you get an error message.
+## Update a logic app workflow with a template
-## Update logic apps with templates
+1. In the [Azure portal](https://portal.azure.com), go to your logic app resource.
-1. In the [Azure portal](https://portal.azure.com "Azure portal"),
-find and open your logic app in th Logic App Designer.
+1. On the logic app navigation menu, select **Logic app designer**.
-2. On the designer toolbar, choose **Templates**.
-This action discards any unsaved changes,
-so a warning message appears so you can confirm
-that you want to continue. To confirm, choose **OK**.
-For example:
+1. On the designer toolbar, select **Templates**. This action discards any unsaved changes, so a warning message appears. To confirm that you want to continue, select **OK**.
- ![Choose "Templates"](./media/logic-apps-create-logic-apps-from-templates/logic-app-update-existing-with-template.png)
+ :::image type="content" source="./media/logic-apps-create-logic-apps-from-templates/logic-app-update-existing-with-template.png" alt-text="Screenshot of the designer. The top part of a logic app workflow is visible. On the toolbar, 'Templates' is highlighted.":::
-3. Scroll past the introduction video and common triggers to **Templates**.
-Choose a prebuilt template. For example:
+1. Scroll past the introduction video and common triggers to **Templates**. Select a prebuilt template.
- ![Choose a logic app template](./media/logic-apps-create-logic-apps-from-templates/choose-logic-app-template.png)
+ :::image type="content" source="./media/logic-apps-create-logic-apps-from-templates/choose-logic-app-template.png" alt-text="Screenshot of the designer. Under 'Templates,' three templates are visible. One template called 'Delete old Azure blobs' is highlighted.":::
- When you select a prebuilt template,
- you can view more information about that template.
- For example:
+ When you select a prebuilt template, you can view more information about that template.
- ![Choose a prebuilt template](./media/logic-apps-create-logic-apps-from-templates/logic-app-choose-prebuilt-template.png)
+ :::image type="content" source="./media/logic-apps-create-logic-apps-from-templates/logic-app-choose-prebuilt-template.png" alt-text="Screenshot that shows information about the 'Delete old Azure blobs' template. A description and diagram that shows a recurring schedule are visible.":::
-4. To continue with the selected template,
-choose **Use this template**.
+1. To continue with the selected template, select **Use this template**.
-5. Based on the connectors in the template,
-you are prompted to perform any of these steps:
+1. Based on the connectors in the template, you're prompted to perform any of these steps:
- * Sign in with your credentials to systems or
- services that are referenced by the template.
+ * Sign in with your credentials to systems or services that are referenced by the template.
- * Create connections for any services or systems
- referenced by the template. To create a connection,
- provide a name for your connection, and if necessary,
- select the resource that you want to use.
+ * Create connections for any systems or services that are referenced by the template. To create a connection, provide a name for your connection, and if necessary, select the resource that you want to use.
- * If you already set up these connections,
- choose **Continue**.
+ > [!NOTE]
+ > Many templates include connectors that have required properties that are prepopulated. Other templates require that you provide values before you can properly deploy the logic app workflow. If you try to deploy without completing the missing property fields, you get an error message.
- ![Create connections](./media/logic-apps-create-logic-apps-from-templates/logic-app-create-connection.png)
+1. After you set up your required connections, select **Continue**.
- Your logic app now opens and appears in the Logic Apps Designer.
+ :::image type="content" source="./media/logic-apps-create-logic-apps-from-templates/logic-app-create-connection-designer.png" alt-text="Screenshot of the designer, with a connection for Azure Blob Storage. The 'Continue' button is highlighted.":::
-8. Continue building your logic app.
+ The designer opens and displays your logic app workflow.
- > [!TIP]
- > If you haven't saved your changes, you can discard your work
- > and return to your previous logic app. On the designer toolbar,
- > choose **Discard**.
+1. Continue building your logic app workflow.
-> [!NOTE]
-> Many templates include connectors that might have
-> already pre-populated required properties.
-> However, some templates might still require that you provide
-> values before you can properly deploy the logic app.
-> If you try to deploy without completing the missing property fields,
-> you get an error message.
+ > [!TIP]
+ > If you haven't saved your changes, you can discard your work and return to your previous workflow. On the designer toolbar, select **Discard**.
-## Deploy logic apps built from templates
+## Deploy a logic app workflow built from a template
-After you make your changes to the template,
-you can save your changes. This action also
-automatically publishes your logic app.
+After you make your changes to the template, you can save your changes. This action automatically publishes your logic app workflow.
-On the designer toolbar, choose **Save**.
+On the designer toolbar, select **Save**.
-![Save and publish your logic app](./media/logic-apps-create-logic-apps-from-templates/logic-app-save.png)
## Get support
-* For questions, visit the [Microsoft Q&A question page for Azure Logic Apps](/answers/topics/azure-logic-apps.html).
-* To submit or vote on feature ideas, visit the [Logic Apps user feedback site](https://aka.ms/logicapps-wish).
+* For questions, go to the [Microsoft Q&A question page for Azure Logic Apps](/answers/topics/azure-logic-apps.html).
+* To submit or vote on feature ideas, go to the [Logic Apps user feedback site](https://aka.ms/logicapps-wish).
## Next steps
-Learn about building logic apps through examples,
-scenarios, customer stories, and walkthroughs.
+Learn about building logic app workflows through examples, scenarios, customer stories, and walkthroughs.
> [!div class="nextstepaction"] > [Review logic app examples, scenarios, and walkthroughs](../logic-apps/logic-apps-examples-and-scenarios.md)
machine-learning Azure Machine Learning Ci Image Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/azure-machine-learning-ci-image-release-notes.md
# Azure Machine Learning compute instance image release notes
-In this article, learn about Azure Machine Learning compute instance image releases. Azure Machine Learning maintains host operating system images for [Azure ML compute instance](/azure/machine-learning/concept-compute-instance) and [Data Science Virtual Machines](/azure/machine-learning/data-science-virtual-machine/release-notes). Due to the rapidly evolving needs and package updates, we target to release new images every month.
+In this article, learn about Azure Machine Learning compute instance image releases. Azure Machine Learning maintains host operating system images for [Azure ML compute instance](./concept-compute-instance.md) and [Data Science Virtual Machines](./data-science-virtual-machine/release-notes.md). Because of rapidly evolving needs and package updates, we aim to release new images every month.
-Azure Machine Learning checks and validates any machine learning packages that may require an upgrade. Updates incorporate the latest OS-related patches from Canonical as the original Linux OS publisher. In addition to patches applied by the original publisher, Azure Machine Learning updates system packages when updates are available. For details on the patching process, see [Vulnerability Management](/azure/machine-learning/concept-vulnerability-management).
+Azure Machine Learning checks and validates any machine learning packages that may require an upgrade. Updates incorporate the latest OS-related patches from Canonical as the original Linux OS publisher. In addition to patches applied by the original publisher, Azure Machine Learning updates system packages when updates are available. For details on the patching process, see [Vulnerability Management](./concept-vulnerability-management.md).
Main updates provided with each image version are described in the below sections.
Version `22.07.22`
Main changes: * `Azcopy` to version `10.16.0`
-* `Blob Fuse` to version `1.4.4`
+* `Blob Fuse` to version `1.4.4`
machine-learning How To Deploy Model Custom Output https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/batch-inference/how-to-deploy-model-custom-output.md
+
+ Title: "Customize outputs in batch deployments"
+
+description: Learn how to customize the outputs of batch deployment jobs.
++++++ Last updated : 10/10/2022++++
+# Customize outputs in batch deployments
++
+Sometimes you need to run inference with greater control over what is written as the output of the batch job. Those cases include:
+
+> [!div class="checklist"]
+> * You need to control how the predictions are being written in the output. For instance, you want to append the prediction to the original data (if data is tabular).
+> * You need to write your predictions in a different file format from the one supported out-of-the-box by batch deployments.
+> * Your model is a generative model that can't write the output in a tabular format. For instance, models that produce images as outputs.
+> * Your model produces multiple tabular files instead of a single one. This is the case for instance of models that perform forecasting considering multiple scenarios.
+
+In any of those cases, batch deployments let you take control of the output of the jobs by writing directly to the output of the batch deployment job. In this tutorial, we'll see how to deploy a model to perform batch inference and write the outputs in `parquet` format by appending the predictions to the original input data.
+
+## Prerequisites
++
+* A model registered in the workspace. In this tutorial, we'll use an MLflow model. Specifically, we use the *heart condition classifier* created in the tutorial [Using MLflow models in batch deployments](how-to-mlflow-batch.md).
+* You must have an endpoint already created. If you don't, follow the instructions at [Use batch endpoints for batch scoring](../how-to-use-batch-endpoint.md). This example assumes the endpoint is named `heart-classifier-batch`.
+* You must have a compute cluster created where the deployment can run. If you don't, follow the instructions at [Create compute](../how-to-use-batch-endpoint.md#create-compute). This example assumes the name of the compute is `cpu-cluster`.
+
+## About this sample
+
+This example shows how you can deploy a model to perform batch inference and customize how your predictions are written in the output. This example uses an MLflow model based on the [UCI Heart Disease Data Set](https://archive.ics.uci.edu/ml/datasets/Heart+Disease). The database contains 76 attributes, but we are using a subset of 14 of them. The model tries to predict the presence of heart disease in a patient. The output is an integer value: 0 (no presence) or 1 (presence).
+
+The model has been trained using an `XGBoost` classifier, and all the required preprocessing has been packaged as a `scikit-learn` pipeline, making this model an end-to-end pipeline that goes from raw data to predictions.
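+
+The training code isn't part of this article, but a minimal sketch of how such an end-to-end pipeline could be produced looks like the following. The column names, `X`, `y`, and the logged artifact path are illustrative placeholders, not the actual training code used for this sample.
+
+```python
+# Hedged sketch: package preprocessing and an XGBoost classifier into one scikit-learn
+# pipeline and log it as an MLflow model. X and y are placeholders for the raw
+# training features and the 0/1 heart disease labels.
+import mlflow
+from sklearn.compose import ColumnTransformer
+from sklearn.pipeline import Pipeline
+from sklearn.preprocessing import OneHotEncoder, StandardScaler
+from xgboost import XGBClassifier
+
+preprocessing = ColumnTransformer(
+    transformers=[
+        ("num", StandardScaler(), ["age", "trestbps", "chol", "thalach", "oldpeak"]),
+        ("cat", OneHotEncoder(handle_unknown="ignore"), ["sex", "cp", "thal"]),
+    ]
+)
+
+pipeline = Pipeline(
+    steps=[
+        ("preprocessing", preprocessing),
+        ("classifier", XGBClassifier()),
+    ]
+)
+
+# pipeline.fit(X, y)
+# mlflow.sklearn.log_model(pipeline, artifact_path="model")
+```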
++
+## Creating a batch deployment with a custom output
+
+In this example, we are going to create a deployment that can write directly to the output folder of the batch deployment job. The deployment will use this feature to write custom parquet files.
+
+### Registering the model
+
+Batch Endpoint can only deploy registered models. In this case, we already have a local copy of the model in the repository, so we only need to publish the model to the registry in the workspace. You can skip this step if the model you are trying to deploy is already registered.
+
+# [Azure ML CLI](#tab/cli)
+
+```azurecli
+MODEL_NAME='heart-classifier'
+az ml model create --name $MODEL_NAME --type "mlflow_model" --path "heart-classifier-mlflow/model"
+```
+
+# [Azure ML SDK for Python](#tab/sdk)
+
+```python
+model_name = 'heart-classifier'
+model = ml_client.models.create_or_update(
+    Model(name=model_name, path='heart-classifier-mlflow/model', type=AssetTypes.MLFLOW_MODEL)
+)
+```
++
+### Creating a scoring script
+
+We need to create a scoring script that can read the input data provided by the batch deployment and return the scores of the model. We are also going to write directly to the output folder of the job. In summary, the proposed scoring script does the following:
+
+1. Reads the input data as CSV files.
+2. Runs an MLflow model `predict` function over the input data.
+3. Appends the predictions to a `pandas.DataFrame` along with the input data.
+4. Writes the data to a file named after the input file, but in `parquet` format.
+
+__batch_driver.py__
+
+```python
+import os
+import mlflow
+import pandas as pd
+from pathlib import Path
+
+def init():
+ global model
+ global output_path
+
+ # AZUREML_MODEL_DIR is an environment variable created during deployment
+ # It is the path to the model folder
+ # Please provide your model's folder name if there's one:
+ model_path = os.path.join(os.environ["AZUREML_MODEL_DIR"], "model")
+ output_path = os.environ['AZUREML_BI_OUTPUT_PATH']
+ model = mlflow.pyfunc.load_model(model_path)
+
+def run(mini_batch):
+ for file_path in mini_batch:
+ data = pd.read_csv(file_path)
+ pred = model.predict(data)
+
+ data['prediction'] = pred
+
+ output_file_name = Path(file_path).stem
+ output_file_path = os.path.join(output_path, output_file_name + '.parquet')
+ data.to_parquet(output_file_path)
+
+ return mini_batch
+```
+
+Remarks:
+* Notice how the environment variable `AZUREML_BI_OUTPUT_PATH` is used to get access to the output path of the deployment job.
+* The `init()` function populates a global variable called `output_path` that is used later to determine where to write.
+* The `run` method returns a list of the processed files. It is required for the `run` function to return a `list` or a `pandas.DataFrame` object.
+
+> [!WARNING]
+> Take into account that all the batch executors have write access to this path at the same time. This means that you need to account for concurrency. In this case, we ensure each executor writes its own file by using the input file name as the name of the output file.
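+
+For reference, if you prefer to rely on the built-in output handling instead of writing files yourself, a hedged sketch of a `run` function that returns a `pandas.DataFrame` (reusing the `model` global from `init()`) could look like the following. With this shape of output, you would typically use the `append_row` output action rather than `summary_only`.
+
+```python
+# Hedged alternative sketch: return a pandas.DataFrame from run() and let the batch
+# deployment write the rows for you (append_row), instead of writing parquet files directly.
+import pandas as pd
+from pathlib import Path
+
+def run(mini_batch):
+    results = []
+    for file_path in mini_batch:
+        data = pd.read_csv(file_path)
+        data["prediction"] = model.predict(data)
+        data["file"] = Path(file_path).name
+        results.append(data)
+    return pd.concat(results)
+```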
+
+### Creating the deployment
+
+Follow these steps to create a deployment using the previous scoring script:
+
+1. First, let's create an environment where the scoring script can be executed:
+
+ # [Azure ML CLI](#tab/cli)
+
+ No extra step is required for the Azure ML CLI. The environment definition will be included in the deployment file.
+
+ # [Azure ML SDK for Python](#tab/sdk)
+
+ Let's get a reference to the environment:
+
+ ```python
+ environment = Environment(
+ conda_file="./heart-classifier-mlflow/environment/conda.yaml",
+ image="mcr.microsoft.com/azureml/openmpi3.1.2-ubuntu18.04:latest",
+ )
+ ```
+
+2. MLflow models don't require you to indicate an environment or a scoring script when creating the deployments, because they're created for you. However, in this case we are going to indicate a scoring script and environment because we want to customize how inference is executed.
+
+ > [!NOTE]
+ > This example assumes you have an endpoint created with the name `heart-classifier-batch` and a compute cluster with name `cpu-cluster`. If you don't, please follow the steps in the doc [Use batch endpoints for batch scoring](../how-to-use-batch-endpoint.md).
+
+ # [Azure ML CLI](#tab/cli)
+
+ To create a new deployment under the created endpoint, create a `YAML` configuration like the following:
+
+ ```yaml
+ $schema: https://azuremlschemas.azureedge.net/latest/batchDeployment.schema.json
+ endpoint_name: heart-classifier-batch
+ name: classifier-xgboost-parquet
+ description: A heart condition classifier based on XGBoost
+ model: azureml:heart-classifier@latest
+ environment:
+ image: mcr.microsoft.com/azureml/openmpi3.1.2-ubuntu18.04:latest
+ conda_file: ./heart-classifier-mlflow/environment/conda.yaml
+ code_configuration:
+ code: ./heart-classifier-custom/code/
+ scoring_script: batch_driver_parquet.py
+ compute: azureml:cpu-cluster
+ resources:
+ instance_count: 2
+ max_concurrency_per_instance: 2
+ mini_batch_size: 2
+ output_action: summary_only
+ retry_settings:
+ max_retries: 3
+ timeout: 300
+ error_threshold: -1
+ logging_level: info
+ ```
+
+ Then, create the deployment with the following command:
+
+ ```azurecli
+ az ml batch-endpoint create -f endpoint.yml
+ ```
+
+ # [Azure ML SDK for Python](#tab/sdk)
+
+ To create a new deployment under the created endpoint, use the following script:
+
+ ```python
+ deployment = BatchDeployment(
+ name="classifier-xgboost-parquet",
+ description="A heart condition classifier based on XGBoost",
+ endpoint_name=endpoint.name,
+ model=model,
+ environment=environment,
+ code_configuration=CodeConfiguration(
+ code="./heart-classifier-mlflow/code/",
+ scoring_script="batch_driver_parquet.py",
+ ),
+ compute=compute_name,
+ instance_count=2,
+ max_concurrency_per_instance=2,
+ mini_batch_size=2,
+ output_action=BatchDeploymentOutputAction.SUMMARY_ONLY,
+ retry_settings=BatchRetrySettings(max_retries=3, timeout=300),
+ logging_level="info",
+ )
+ ml_client.batch_deployments.begin_create_or_update(deployment)
+ ```
+
+
+ > [!IMPORTANT]
+ > Notice that now `output_action` is set to `SUMMARY_ONLY`.
+
+3. At this point, our batch endpoint is ready to be used.
+
+## Testing out the deployment
+
+For testing our endpoint, we are going to use a sample of unlabeled data located in this repository that can be used with the model. Batch endpoints can only process data that is located in the cloud and that is accessible from the Azure Machine Learning workspace. In this example, we are going to upload it to an Azure Machine Learning data store. Specifically, we are going to create a data asset that can be used to invoke the endpoint for scoring. However, notice that batch endpoints accept data that can be placed in multiple types of locations.
+
+1. Let's create the data asset first. This data asset consists of a folder with multiple CSV files that we want to process in parallel using batch endpoints. You can skip this step if your data is already registered as a data asset or if you want to use a different input type.
+
+ # [Azure ML CLI](#tab/cli)
+
+ Create a data asset definition in `YAML`:
+
+ __heart-dataset-unlabeled.yml__
+ ```yaml
+ $schema: https://azuremlschemas.azureedge.net/latest/data.schema.json
+ name: heart-dataset-unlabeled
+ description: An unlabeled dataset for heart classification.
+ type: uri_folder
+ path: heart-dataset
+ ```
+
+ Then, create the data asset:
+
+ ```azurecli
+ az ml data create -f heart-dataset-unlabeled.yml
+ ```
+
+ # [Azure ML SDK for Python](#tab/sdk)
+
+ ```python
+ data_path = "resources/heart-dataset/"
+ dataset_name = "heart-dataset-unlabeled"
+
+ heart_dataset_unlabeled = Data(
+ path=data_path,
+ type=AssetTypes.URI_FOLDER,
+ description="An unlabeled dataset for heart classification",
+ name=dataset_name,
+ )
+ ml_client.data.create_or_update(heart_dataset_unlabeled)
+ ```
+
+1. Now that the data is uploaded and ready to be used, let's invoke the endpoint:
+
+ # [Azure ML CLI](#tab/cli)
+
+ ```azurecli
+    JOB_NAME=$(az ml batch-endpoint invoke --name $ENDPOINT_NAME --input azureml:heart-dataset-unlabeled@latest | jq -r '.name')
+ ```
+
+ > [!NOTE]
+ > The utility `jq` may not be installed on every installation. You can get instructions in [this link](https://stedolan.github.io/jq/download/).
+
+ # [Azure ML SDK for Python](#tab/sdk)
+
+ ```python
+ input = Input(type=AssetTypes.URI_FOLDER, path=heart_dataset_unlabeled.id)
+ job = ml_client.batch_endpoints.invoke(
+ endpoint_name=endpoint.name,
+ deployment_name=deployment.name,
+ input=input,
+ )
+ ```
+
+1. A batch job is started as soon as the command returns. You can monitor the status of the job until it finishes:
+
+ # [Azure ML CLI](#tab/cli)
+
+ ```azurecli
+ az ml job show --name $JOB_NAME
+ ```
+
+ # [Azure ML SDK for Python](#tab/sdk)
+
+ ```python
+ ml_client.jobs.get(job.name)
+ ```
+
+## Analyzing the outputs
+
+The job generates a named output called `score` where all the generated files are placed. Because we wrote directly into the directory, one file per input file, we can expect the same number of output files as input files. In this particular example, the output files are named the same as the inputs, but they have a parquet extension.
+
+> [!NOTE]
+> Notice that a file `predictions.csv` is also included in the output folder. This file contains the summary of the processed files.
+
+You can download the results of the job by using the job name:
+
+# [Azure ML CLI](#tab/cli)
+
+To download the predictions, use the following command:
+
+```azurecli
+az ml job download --name $JOB_NAME --output-name score --download-path ./
+```
+
+# [Azure ML SDK for Python](#tab/sdk)
+
+```python
+ml_client.jobs.download(name=job.name, output_name='score', download_path='./')
+```
++
+Once the files are downloaded, you can open them using your favorite tool. The following example loads the predictions into a `pandas` dataframe.
+
+```python
+import pandas as pd
+import glob
+
+output_files = glob.glob("named-outputs/score/*.parquet")
+score = pd.concat((pd.read_parquet(f) for f in output_files))
+```
+
+The output looks as follows:
+
+| age | sex | ... | thal | prediction |
+| --- | --- | --- | --- | --- |
+| 63 | 1 | ... | fixed | 0 |
+| 67 | 1 | ... | normal | 1 |
+| 67 | 1 | ... | reversible | 0 |
+| 37 | 1 | ... | normal | 0 |
++
+## Next steps
+
+* [Using batch deployments for image file processing](how-to-image-processing-batch.md)
+* [Using batch deployments for NLP processing](how-to-nlp-processing-batch.md)
machine-learning How To Image Processing Batch https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/batch-inference/how-to-image-processing-batch.md
+
+ Title: "Image processing tasks with batch deployments"
+
+description: Learn how to deploy a model in batch endpoints that process images
++++++ Last updated : 10/10/2022++++
+# Image processing tasks with batch deployments
++
+Batch Endpoints can be used for processing tabular data, but they can also process any other file type, like images. Those deployments are supported in both MLflow and custom models. In this tutorial, we'll learn how to deploy a model that classifies images according to the ImageNet taxonomy.
+
+## Prerequisites
++
+* You must have an endpoint already created. If you don't, follow the instructions at [Use batch endpoints for batch scoring](../how-to-use-batch-endpoint.md). This example assumes the endpoint is named `imagenet-classifier-batch`.
+* You must have a compute cluster created where the deployment can run. If you don't, follow the instructions at [Create compute](../how-to-use-batch-endpoint.md#create-compute). This example assumes the name of the compute is `cpu-cluster`.
+
+## About the model used in the sample
+
+The model we are going to work with was built using TensorFlow along with the ResNet architecture ([Identity Mappings in Deep Residual Networks](https://arxiv.org/abs/1603.05027)). This model has the following constraints that are important to keep in mind for deployment:
+
+* It works with images of size 244x244 (tensors of `(244, 244, 3)`).
+* It requires inputs to be scaled to the range `[0,1]`.
+
+A sample of this model can be downloaded from `https://azuremlexampledata.blob.core.windows.net/data/imagenet/model.zip`.
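+
+If you want to confirm these constraints after downloading the model (the download and extraction steps are shown in the next section), a minimal hedged sketch looks like the following. It assumes the model was extracted to `imagenet-classifier/model` and that TensorFlow is installed locally.
+
+```python
+# Hedged sketch: load the downloaded model locally and check its expected input shape.
+# Inputs must be scaled to the [0, 1] range before calling predict.
+import numpy as np
+from tensorflow.keras.models import load_model
+
+model = load_model("imagenet-classifier/model")
+print(model.input_shape)           # expected: (None, 244, 244, 3)
+
+dummy = np.random.rand(1, 244, 244, 3).astype("float32")  # already in [0, 1]
+print(model.predict(dummy).shape)  # one probability vector per image
+```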
+
+## Image classification with batch deployments
+
+In this example, we are going to learn how to deploy a deep learning model that can classify a given image according to the [taxonomy of ImageNet](https://image-net.org/).
+
+### Registering the model
+
+Batch Endpoint can only deploy registered models so we need to register it. You can skip this step if the model you are trying to deploy is already registered.
+
+1. Download a copy of the model:
+
+ # [Azure ML CLI](#tab/cli)
+
+ ```azurecli
+ wget https://azuremlexampledata.blob.core.windows.net/data/imagenet/model.zip
+ mkdir -p imagenet-classifier
+ unzip model.zip -d imagenet-classifier
+ ```
+
+ # [Azure ML SDK for Python](#tab/sdk)
+
+ ```python
+    import os
+    from zipfile import ZipFile
+
+    # Assumes model.zip was already downloaded from
+    # https://azuremlexampledata.blob.core.windows.net/data/imagenet/model.zip
+    os.makedirs("imagenet-classifier", exist_ok=True)
+    with ZipFile("model.zip", "r") as zip_file:
+        zip_file.extractall(path="imagenet-classifier")
+    model_path = "imagenet-classifier/model"
+ ```
+
+2. Register the model:
+
+ # [Azure ML CLI](#tab/cli)
+
+ ```azurecli
+ MODEL_NAME='imagenet-classifier'
+ az ml model create --name $MODEL_NAME --type "custom_model" --path "imagenet-classifier/model"
+ ```
+
+ # [Azure ML SDK for Python](#tab/sdk)
+
+ ```python
+ model_name = 'imagenet-classifier'
+ model = ml_client.models.create_or_update(
+        Model(name=model_name, path=model_path, type=AssetTypes.CUSTOM_MODEL)
+ )
+ ```
+
+### Creating a scoring script
+
+We need to create a scoring script that can read the images provided by the batch deployment and return the scores of the model. The scoring script does the following:
+
+> [!div class="checklist"]
+> * Indicates an `init` function that loads the model using the `keras` module in `tensorflow`.
+> * Indicates a `run` function that is executed for each mini-batch the batch deployment provides.
+> * The `run` function reads one image file at a time.
+> * The `run` method resizes the images to the expected size for the model.
+> * The `run` method rescales the images to the `[0,1]` range, which is what the model expects.
+> * It returns the classes and the probabilities associated with the predictions.
+
+__imagenet_scorer.py__
+
+```python
+import os
+import numpy as np
+import pandas as pd
+import tensorflow as tf
+from os.path import basename
+from PIL import Image
+from tensorflow.keras.models import load_model
++
+def init():
+ global model
+ global input_width
+ global input_height
+
+ # AZUREML_MODEL_DIR is an environment variable created during deployment
+ model_path = os.path.join(os.environ["AZUREML_MODEL_DIR"], "model")
+
+ # load the model
+ model = load_model(model_path)
+ input_width = 244
+ input_height = 244
+
+def run(mini_batch):
+ results = []
+
+ for image in mini_batch:
+ data = Image.open(image).resize((input_width, input_height)) # Read and resize the image
+ data = np.array(data)/255.0 # Normalize
+ data_batch = tf.expand_dims(data, axis=0) # create a batch of size (1, 244, 244, 3)
+
+ # perform inference
+ pred = model.predict(data_batch)
+
+ # Compute probabilities, classes and labels
+ pred_prob = tf.math.reduce_max(tf.math.softmax(pred, axis=-1)).numpy()
+ pred_class = tf.math.argmax(pred, axis=-1).numpy()
+
+ results.append([basename(image), pred_class[0], pred_prob])
+
+ return pd.DataFrame(results)
+```
+
+> [!TIP]
+> Although images are provided in mini-batches by the deployment, this scoring script processes one image at a time. This is a common pattern, because trying to load the entire batch and send it to the model at once may put high memory pressure on the batch executor (OOM exceptions). However, there are certain cases where doing so enables high throughput in the scoring task. This is the case, for instance, for batch deployments on GPU hardware where we want to achieve high GPU utilization. See [High throughput deployments](#high-throughput-deployments) for an example of a scoring script that takes advantage of it.
+
+> [!NOTE]
+> If you are trying to deploy a generative model (one that generates files), read how to author a scoring script as explained at [Deployment of models that produce multiple files](how-to-deploy-model-custom-output.md).
+
+### Creating the deployment
+
+Once the scoring script is created, it's time to create a batch deployment for it. Follow these steps to create it:
+
+1. We need to indicate the environment in which we are going to run the deployment. In our case, our model runs on `TensorFlow`. Azure Machine Learning already has an environment with the required software installed, so we can reuse this environment. We are just going to add a couple of dependencies in a `conda.yml` file.
+
+ # [Azure ML CLI](#tab/cli)
+
+ No extra step is required for the Azure ML CLI. The environment definition will be included in the deployment file.
+
+ # [Azure ML SDK for Python](#tab/sdk)
+
+ Let's get a reference to the environment:
+
+ ```python
+ environment = Environment(
+ conda_file="./imagenet-classifier/environment/conda.yml",
+ image="mcr.microsoft.com/azureml/tensorflow-2.4-ubuntu18.04-py37-cpu-inference:latest",
+ )
+ ```
+
+1. Now, let's create the deployment.
+
+ > [!NOTE]
+ > This example assumes you have an endpoint created with the name `imagenet-classifier-batch` and a compute cluster with name `cpu-cluster`. If you don't, please follow the steps in the doc [Use batch endpoints for batch scoring](../how-to-use-batch-endpoint.md).
+
+ # [Azure ML CLI](#tab/cli)
+
+ To create a new deployment under the created endpoint, create a `YAML` configuration like the following:
+
+ ```yaml
+ $schema: https://azuremlschemas.azureedge.net/latest/batchDeployment.schema.json
+ endpoint_name: imagenet-classifier-batch
+ name: imagenet-classifier-resnetv2
+ description: A ResNetV2 model architecture for performing ImageNet classification in batch
+ model: azureml:imagenet-classifier@latest
+ compute: azureml:cpu-cluster
+ environment:
+ image: mcr.microsoft.com/azureml/tensorflow-2.4-ubuntu18.04-py37-cpu-inference:latest
+ conda_file: ./imagenet-classifier/environment/conda.yml
+ code_configuration:
+ code: ./imagenet-classifier/code/
+ scoring_script: imagenet_scorer.py
+ resources:
+ instance_count: 2
+ max_concurrency_per_instance: 1
+ mini_batch_size: 5
+ output_action: append_row
+ output_file_name: predictions.csv
+ retry_settings:
+ max_retries: 3
+ timeout: 300
+ error_threshold: -1
+ logging_level: info
+ ```
+
+ Then, create the deployment with the following command:
+
+ ```azurecli
+ az ml batch-endpoint create -f endpoint.yml
+ ```
+
+ # [Azure ML SDK for Python](#tab/sdk)
+
+ To create a new deployment with the indicated environment and scoring script use the following code:
+
+ ```python
+ deployment = BatchDeployment(
+ name="imagenet-classifier-resnetv2",
+ description="A ResNetV2 model architecture for performing ImageNet classification in batch",
+ endpoint_name=endpoint.name,
+ model=model,
+ environment=environment,
+ code_configuration=CodeConfiguration(
+ code="./imagenet-classifier/code/",
+ scoring_script="imagenet_scorer.py",
+ ),
+ compute=compute_name,
+ instance_count=2,
+ max_concurrency_per_instance=1,
+ mini_batch_size=10,
+ output_action=BatchDeploymentOutputAction.APPEND_ROW,
+ output_file_name="predictions.csv",
+ retry_settings=BatchRetrySettings(max_retries=3, timeout=300),
+ logging_level="info",
+ )
+ ```
+
+1. At this point, our batch endpoint is ready to be used.
+
+## Testing out the deployment
+
+For testing our endpoint, we are going to use a sample of 1000 images from the original ImageNet dataset. Batch endpoints can only process data that is located in the cloud and that is accessible from the Azure Machine Learning workspace. In this example, we are going to upload it to an Azure Machine Learning data store. Specifically, we are going to create a data asset that can be used to invoke the endpoint for scoring. However, notice that batch endpoints accept data that can be placed in multiple types of locations.
+
+1. Let's download the associated sample data:
+
+ # [Azure ML CLI](#tab/cli)
+
+ ```bash
+ wget https://azuremlexampledata.blob.core.windows.net/data/imagenet-1000.zip
+ unzip imagenet-1000.zip -d /tmp/imagenet-1000
+ ```
+
+ # [Azure ML SDK for Python](#tab/sdk)
+
+ ```python
+ !wget https://azuremlexampledata.blob.core.windows.net/data/imagenet-1000.zip
+ !unzip imagenet-1000.zip -d /tmp/imagenet-1000
+ ```
+
+2. Now, let's create the data asset from the data we just downloaded:
+
+ # [Azure ML CLI](#tab/cli)
+
+ Create a data asset definition in `YAML`:
+
+ __imagenet-sample-unlabeled.yml__
+ ```yaml
+ $schema: https://azuremlschemas.azureedge.net/latest/data.schema.json
+ name: imagenet-sample-unlabeled
+ description: A sample of 1000 images from the original ImageNet dataset.
+ type: uri_folder
+ path: /tmp/imagenet-1000
+ ```
+
+ Then, create the data asset:
+
+ ```azurecli
+ az ml data create -f imagenet-sample-unlabeled.yml
+ ```
+
+ # [Azure ML SDK for Python](#tab/sdk)
+
+ ```python
+ data_path = "/tmp/imagenet-1000"
+ dataset_name = "imagenet-sample-unlabeled"
+
+ imagenet_sample = Data(
+ path=data_path,
+ type=AssetTypes.URI_FOLDER,
+ description="A sample of 1000 images from the original ImageNet dataset",
+ name=dataset_name,
+ )
+ ml_client.data.create_or_update(imagenet_sample)
+ ```
+
+3. Now that the data is uploaded and ready to be used, let's invoke the endpoint:
+
+ # [Azure ML CLI](#tab/cli)
+
+ ```azurecli
+    JOB_NAME=$(az ml batch-endpoint invoke --name $ENDPOINT_NAME --input azureml:imagenet-sample-unlabeled@latest | jq -r '.name')
+ ```
+
+ > [!NOTE]
+ > The utility `jq` may not be installed on every installation. You can get instructions in [this link](https://stedolan.github.io/jq/download/).
+
+ # [Azure ML SDK for Python](#tab/sdk)
+
+ ```python
+ input = Input(type=AssetTypes.URI_FOLDER, path=imagenet_sample.id)
+ job = ml_client.batch_endpoints.invoke(
+ endpoint_name=endpoint.name,
+ input=input,
+ )
+ ```
+
+
+ > [!TIP]
+    > Notice how we are not indicating the deployment name in the invoke operation. That's because the endpoint automatically routes the job to the default deployment. Since our endpoint only has one deployment, that one is the default. You can target a specific deployment by indicating the `deployment_name` argument/parameter.
+
+4. A batch job is started as soon as the command returns. You can monitor the status of the job until it finishes:
+
+ # [Azure ML CLI](#tab/cli)
+
+ ```azurecli
+ az ml job show --name $JOB_NAME
+ ```
+
+ # [Azure ML SDK for Python](#tab/sdk)
+
+ ```python
+ ml_client.jobs.get(job.name)
+ ```
+
+5. Once the deployment is finished, we can download the predictions:
+
+ # [Azure ML CLI](#tab/cli)
+
+ To download the predictions, use the following command:
+
+ ```azurecli
+ az ml job download --name $JOB_NAME --output-name score --download-path ./
+ ```
+
+ # [Azure ML SDK for Python](#tab/sdk)
+
+ ```python
+ ml_client.jobs.download(name=job.name, output_name='score', download_path='./')
+ ```
+
+6. The output predictions will look like the following. Notice that the predictions have been combined with the labels for the convenience of the reader. To learn more about how to achieve this, see the associated notebook.
+
+ ```python
+ import pandas as pd
+    score = pd.read_csv("named-outputs/score/predictions.csv", header=None, names=['file', 'class', 'probabilities'], sep=' ')
+    # imagenet_labels maps class indices to human-readable labels; it's built in the associated notebook
+    score['label'] = score['class'].apply(lambda pred: imagenet_labels[pred])
+ score
+ ```
+
+ | file | class | probabilities | label |
+    | --- | --- | --- | --- |
+ | n02088094_Afghan_hound.JPEG | 161 | 0.994745 | Afghan hound |
+ | n02088238_basset | 162 | 0.999397 | basset |
+ | n02088364_beagle.JPEG | 165 | 0.366914 | bluetick |
+ | n02088466_bloodhound.JPEG | 164 | 0.926464 | bloodhound |
+ | ... | ... | ... | ... |
+
+
+## High throughput deployments
+
+As mentioned before, the deployment we just created processes one image at a time, even when the batch deployment provides a batch of them. In most cases this is the best approach, because it simplifies how the models execute and avoids any possible out-of-memory problems. However, in certain cases we may want to saturate the utilization of the underlying hardware as much as possible. This is the case for GPUs, for instance.
+
+In those cases, we may want to perform inference on the entire batch of data. That implies loading the entire set of images into memory and sending them directly to the model. The following example uses `TensorFlow` to read a batch of images and score them all at once. It also uses `TensorFlow` ops to do the data preprocessing, so the entire pipeline happens on the same device (CPU or GPU).
+
+> [!WARNING]
+> Some models have a non-linear relationship between memory consumption and the size of their inputs. Rebatch the data within the scoring script (as done in this example) or decrease the size of the mini-batches created by the batch deployment to avoid out-of-memory exceptions.
+
+__imagenet_scorer_batch.py__
+
+```python
+import os
+import numpy as np
+import pandas as pd
+import tensorflow as tf
+from tensorflow.keras.models import load_model
+
+def init():
+ global model
+ global input_width
+ global input_height
+
+ # AZUREML_MODEL_DIR is an environment variable created during deployment
+ model_path = os.path.join(os.environ["AZUREML_MODEL_DIR"], "model")
+
+ # load the model
+ model = load_model(model_path)
+ input_width = 244
+ input_height = 244
+
+def decode_img(file_path):
+ file = tf.io.read_file(file_path)
+ img = tf.io.decode_jpeg(file, channels=3)
+ img = tf.image.resize(img, [input_width, input_height])
+ return img/255.
+
+def run(mini_batch):
+ images_ds = tf.data.Dataset.from_tensor_slices(mini_batch)
+ images_ds = images_ds.map(decode_img).batch(64)
+
+ # perform inference
+ pred = model.predict(images_ds)
+
+    # Compute the probability and class per image
+    pred_prob = tf.math.reduce_max(tf.math.softmax(pred, axis=-1), axis=-1).numpy()
+    pred_class = tf.math.argmax(pred, axis=-1).numpy()
+
+    return pd.DataFrame({'file': list(mini_batch), 'probability': pred_prob, 'class': pred_class})
+```
+
+Remarks:
+* Notice that this script is constructing a tensor dataset from the mini-batch sent by the batch deployment. This dataset is preprocessed to obtain the expected tensors for the model using the `map` operation with the function `decode_img`.
+* The dataset is batched again (in batches of 64) to send the data to the model. Use this parameter to control how much information you can load into memory and send to the model at once. If running on a GPU, you'll need to carefully tune this parameter to achieve the maximum utilization of the GPU just before getting an OOM exception.
+* Once predictions are computed, the tensors are converted to `numpy.ndarray`.
++
+## Considerations for MLflow models processing images
+
+MLflow models in Batch Endpoints support reading images as input data. Remember that MLflow models don't require a scoring script. Have the following considerations when using them:
+
+> [!div class="checklist"]
+> * Supported image file types include: `.png`, `.jpg`, `.jpeg`, `.tiff`, `.bmp`, and `.gif`.
+> * MLflow models should expect to receive a `np.ndarray` as input that matches the dimensions of the input image. In order to support multiple image sizes on each batch, the batch executor invokes the MLflow model once per image file.
+> * MLflow models are highly encouraged to include a signature, and if they do, it must be of type `TensorSpec`. Inputs are reshaped to match the tensor's shape if available. If no signature is available, tensors of type `np.uint8` are inferred.
+> * For models that are expected to handle variable image sizes, include a signature that can guarantee it. For instance, the following signature allows batches of 3-channel images. Specify the signature when you register the model with `mlflow.<flavor>.log_model(..., signature=signature)`.
+
+```python
+import numpy as np
+import mlflow
+from mlflow.models.signature import ModelSignature
+from mlflow.types.schema import Schema, TensorSpec
+
+input_schema = Schema([
+ TensorSpec(np.dtype(np.uint8), (-1, -1, -1, 3)),
+])
+signature = ModelSignature(inputs=input_schema)
+```
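+
+As a hedged illustration of the last point, logging a TensorFlow/Keras model together with that signature could look like the following sketch. It assumes MLflow 2.x and uses placeholder names; adjust the flavor and artifact path to your own model.
+
+```python
+# Hedged sketch (assumes MLflow 2.x): pass the signature when logging the model so image
+# inputs can be reshaped accordingly. "model" is a placeholder for your trained Keras
+# model; use the flavor that matches your framework.
+with mlflow.start_run():
+    mlflow.tensorflow.log_model(model, artifact_path="model", signature=signature)
+```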
+
+For more information about how to use MLflow models in batch deployments, read [Using MLflow models in batch deployments](how-to-mlflow-batch.md).
+
+## Next steps
+
+* [Using MLflow models in batch deployments](how-to-mlflow-batch.md)
+* [NLP tasks with batch deployments](how-to-nlp-processing-batch.md)
machine-learning How To Mlflow Batch https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/batch-inference/how-to-mlflow-batch.md
+
+ Title: "Using MLflow models in batch deployments"
+
+description: Learn how to deploy MLflow models in batch deployments
++++++ Last updated : 10/10/2022++++
+# Using MLflow models in batch deployments
++
+In this article, learn how to deploy your [MLflow](https://www.mlflow.org) model to Azure ML for batch inference using batch endpoints. Azure Machine Learning supports no-code deployment of models created and logged with MLflow. This means that you don't have to provide a scoring script or an environment.
+
+For no-code deployment, Azure Machine Learning:
+
+* Provides an MLflow base image/curated environment that contains the required dependencies to run an Azure Machine Learning Batch job.
+* Creates a batch job pipeline with a scoring script for you that can be used to process data using parallelization.
+
+> [!NOTE]
+> For more information about the supported file types in batch endpoints with MLflow, view [Considerations when deploying to batch inference](#considerations-when-deploying-to-batch-inference).
+
+## Prerequisites
++
+* You must have an MLflow model. If your model is not in MLflow format and you want to use this feature, you can [convert your custom ML model to MLflow format](../how-to-convert-custom-model-to-mlflow.md).
+
+## About this example
+
+This example shows how you can deploy an MLflow model to a batch endpoint to perform batch predictions. This example uses an MLflow model based on the [UCI Heart Disease Data Set](https://archive.ics.uci.edu/ml/datasets/Heart+Disease). The database contains 76 attributes, but we are using a subset of 14 of them. The model tries to predict the presence of heart disease in a patient. The prediction is integer valued: 0 (no presence) or 1 (presence).
+
+The model has been trained using an `XGBoost` classifier and all the required preprocessing has been packaged as a `scikit-learn` pipeline, making this model an end-to-end pipeline that goes from raw data to predictions.
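+
+As an illustration only (not the exact training code of the sample), a pipeline like this could be logged to MLflow roughly as follows. The feature arrays and estimator parameters here are placeholders:
+
+```python
+import numpy as np
+import mlflow
+from mlflow.models.signature import infer_signature
+from sklearn.pipeline import Pipeline
+from sklearn.preprocessing import StandardScaler
+from xgboost import XGBClassifier
+
+# Placeholder training data (the real sample uses the UCI Heart Disease attributes)
+X_train = np.random.rand(100, 13)
+y_train = np.random.randint(0, 2, 100)
+
+# Package the preprocessing and the classifier as a single end-to-end pipeline
+pipeline = Pipeline(steps=[
+    ("scaler", StandardScaler()),
+    ("classifier", XGBClassifier()),
+])
+pipeline.fit(X_train, y_train)
+
+# Log the pipeline so the resulting MLflow model goes from raw data to predictions
+signature = infer_signature(X_train, pipeline.predict(X_train))
+with mlflow.start_run():
+    mlflow.sklearn.log_model(pipeline, artifact_path="model", signature=signature)
+```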
++
+### Follow along in Jupyter Notebooks
+
+You can follow along with this sample in the following notebook. In the cloned repository, open the notebook: `azureml-examples/sdk/python/endpoints/batch/mlflow-for-batch-tabular.ipynb`.
+
+## Steps
+
+Follow these steps to deploy an MLflow model to a batch endpoint for running batch inference over new data:
+
+1. First, let's connect to the Azure Machine Learning workspace where we're going to work.
+
+ # [Azure ML CLI](#tab/cli)
+
+ ```bash
+ az account set --subscription <subscription>
+ az configure --defaults workspace=<workspace> group=<resource-group> location=<location>
+ ```
+
+ # [Azure ML SDK for Python](#tab/sdk)
+
+ The workspace is the top-level resource for Azure Machine Learning, providing a centralized place to work with all the artifacts you create when you use Azure Machine Learning. In this section, we'll connect to the workspace in which you'll perform deployment tasks.
+
+ 1. Import the required libraries:
+
+ ```python
+ from azure.ai.ml import MLClient, Input
+ from azure.ai.ml.entities import BatchEndpoint, BatchDeployment, Model, AmlCompute, Data, BatchRetrySettings
+ from azure.ai.ml.constants import AssetTypes, BatchDeploymentOutputAction
+ from azure.identity import DefaultAzureCredential
+ ```
+
+ 2. Configure workspace details and get a handle to the workspace:
+
+ ```python
+ subscription_id = "<subscription>"
+ resource_group = "<resource-group>"
+ workspace = "<workspace>"
+
+ ml_client = MLClient(DefaultAzureCredential(), subscription_id, resource_group, workspace)
+ ```
+
+
+2. Batch Endpoint can only deploy registered models. In this case, we already have a local copy of the model in the repository, so we only need to publish the model to the registry in the workspace. You can skip this step if the model you are trying to deploy is already registered.
+
+ # [Azure ML CLI](#tab/cli)
+
+ ```bash
+ MODEL_NAME='heart-classifier'
+ az ml model create --name $MODEL_NAME --type "mlflow_model" --path "heart-classifier-mlflow/model"
+ ```
+
+ # [Azure ML SDK for Python](#tab/sdk)
+
+ ```python
+ model_name = 'heart-classifier'
+ model_local_path = "heart-classifier-mlflow/model"
+ model = ml_client.models.create_or_update(
+ Model(name=model_name, path=model_local_path, type=AssetTypes.MLFLOW_MODEL)
+ )
+ ```
+
+3. Before moving forward, we need to make sure the batch deployments we are about to create can run on some infrastructure (compute). Batch deployments can run on any Azure ML compute that already exists in the workspace. That means that multiple batch deployments can share the same compute infrastructure. In this example, we are going to work on an Azure ML compute cluster called `cpu-cluster`. Let's verify that the compute exists in the workspace, or create it otherwise.
+
+ # [Azure ML CLI](#tab/cli)
+
+ Create a compute definition `YAML` like the following one:
+
+ __cpu-cluster.yml__
+ ```yaml
+ $schema: https://azuremlschemas.azureedge.net/latest/amlCompute.schema.json
+ name: cpu-cluster
+ type: amlcompute
+ size: STANDARD_DS3_v2
+ min_instances: 0
+ max_instances: 2
+ idle_time_before_scale_down: 120
+ ```
+
+ Create the compute using the following command:
+
+ ```bash
+ az ml compute create -f cpu-cluster.yml
+ ```
+
+ # [Azure ML SDK for Python](#tab/sdk)
+
+ To create a new compute cluster in which to create the deployment, use the following script:
+
+ ```python
+ compute_name = "cpu-cluster"
+ if not any(filter(lambda m : m.name == compute_name, ml_client.compute.list())):
+ compute_cluster = AmlCompute(name=compute_name, description="amlcompute", min_instances=0, max_instances=2)
+ ml_client.begin_create_or_update(compute_cluster)
+ ```
+
+4. Now it is time to create the batch endpoint and deployment. Let's start with the endpoint first. Endpoints only require a name and a description to be created:
+
+ # [Azure ML CLI](#tab/cli)
+
+ To create a new endpoint, create a `YAML` configuration like the following:
+
+ ```yaml
+ $schema: https://azuremlschemas.azureedge.net/latest/batchEndpoint.schema.json
+ name: heart-classifier-batch
+ description: A heart condition classifier for batch inference
+ auth_mode: aad_token
+ ```
+
+ Then, create the endpoint with the following command:
+
+ ```bash
+ ENDPOINT_NAME='heart-classifier-batch'
+ az ml batch-endpoint create -f endpoint.yml
+ ```
+
+ # [Azure ML SDK for Python](#tab/sdk)
+
+ To create a new endpoint, use the following script:
+
+ ```python
+ endpoint = BatchEndpoint(
+ name="heart-classifier-batch",
+ description="A heart condition classifier for batch inference",
+ )
+ ml_client.batch_endpoints.begin_create_or_update(endpoint)
+ ```
+
+5. Now, let's create the deployment. MLflow models don't require you to indicate an environment or a scoring script when creating the deployments, as they are created for you. However, you can specify them if you want to customize how the deployment does inference.
+
+ # [Azure ML CLI](#tab/cli)
+
+ To create a new deployment under the created endpoint, create a `YAML` configuration like the following:
+
+ ```yaml
+ $schema: https://azuremlschemas.azureedge.net/latest/batchDeployment.schema.json
+ endpoint_name: heart-classifier-batch
+ name: classifier-xgboost-mlflow
+ description: A heart condition classifier based on XGBoost
+ model: azureml:heart-classifier@latest
+ compute: azureml:cpu-cluster
+ resources:
+ instance_count: 2
+ max_concurrency_per_instance: 2
+ mini_batch_size: 2
+ output_action: append_row
+ output_file_name: predictions.csv
+ retry_settings:
+ max_retries: 3
+ timeout: 300
+ error_threshold: -1
+ logging_level: info
+ ```
+
+ Then, create the deployment with the following command:
+
+ ```bash
+ az ml batch-deployment create -f deployment.yml
+ ```
+
+ # [Azure ML SDK for Python](#tab/sdk)
+
+ To create a new deployment under the created endpoint, use the following script:
+
+ ```python
+ deployment = BatchDeployment(
+ name="classifier-xgboost-mlflow",
+ description="A heart condition classifier based on XGBoost",
+ endpoint_name=endpoint.name,
+ model=model,
+ compute=compute_name,
+ instance_count=2,
+ max_concurrency_per_instance=2,
+ mini_batch_size=2,
+ output_action=BatchDeploymentOutputAction.APPEND_ROW,
+ output_file_name="predictions.csv",
+ retry_settings=BatchRetrySettings(max_retries=3, timeout=300),
+ logging_level="info",
+ )
+ ml_client.batch_deployments.begin_create_or_update(deployment)
+ ```
+
+
+ > [!NOTE]
+ > `scoring_script` and `environment` auto generation only supports `pyfunc` model flavor. To use a different flavor, see [Using MLflow models with a scoring script](#using-mlflow-models-with-a-scoring-script).
+
+6. At this point, our batch endpoint is ready to be used.
+
+## Testing out the deployment
+
+For testing our endpoint, we are going to use a sample of unlabeled data located in this repository that can be used with the model. Batch endpoints can only process data that is located in the cloud and that is accessible from the Azure Machine Learning workspace. In this example, we are going to upload it to an Azure Machine Learning data store. Particularly, we are going to create a data asset that can be used to invoke the endpoint for scoring. However, notice that batch endpoints accept data that can be placed in multiple types of locations.
+
+1. Let's create the data asset first. This data asset consists of a folder with multiple CSV files that we want to process in parallel using batch endpoints. You can skip this step if your data is already registered as a data asset or you want to use a different input type.
+
+ # [Azure ML CLI](#tab/cli)
+
+ Create a data asset definition in `YAML`:
+
+ __heart-dataset-unlabeled.yml__
+ ```yaml
+ $schema: https://azuremlschemas.azureedge.net/latest/data.schema.json
+ name: heart-dataset-unlabeled
+ description: An unlabeled dataset for heart classification.
+ type: uri_folder
+ path: heart-dataset
+ ```
+
+ Then, create the data asset:
+
+ ```bash
+ az ml data create -f heart-dataset-unlabeled.yml
+ ```
+
+ # [Azure ML SDK for Python](#tab/sdk)
+
+ ```python
+ data_path = "resources/heart-dataset/"
+ dataset_name = "heart-dataset-unlabeled"
+
+ heart_dataset_unlabeled = Data(
+ path=data_path,
+ type=AssetTypes.URI_FOLDER,
+ description="An unlabeled dataset for heart classification",
+ name=dataset_name,
+ )
+ ml_client.data.create_or_update(heart_dataset_unlabeled)
+ ```
+
+2. Now that the data is uploaded and ready to be used, let's invoke the endpoint:
+
+ # [Azure ML CLI](#tab/cli)
+
+ ```bash
+ JOB_NAME=$(az ml batch-endpoint invoke --name $ENDPOINT_NAME --input azureml:heart-dataset-unlabeled@latest | jq -r '.name')
+ ```
+
+ > [!NOTE]
+ > The `jq` utility might not be installed on your system. You can find installation instructions at [this link](https://stedolan.github.io/jq/download/).
+
+ # [Azure ML SDK for Python](#tab/sdk)
+
+ ```python
+ input = Input(type=AssetTypes.URI_FOLDER, path=heart_dataset_unlabeled.id)
+ job = ml_client.batch_endpoints.invoke(
+ endpoint_name=endpoint.name,
+ input=input,
+ )
+ ```
+
+
+ > [!TIP]
+ > Notice how we are not indicating the deployment name in the invoke operation. That's because the endpoint automatically routes the job to the default deployment. Since our endpoint only has one deployment, that one is the default. You can target a specific deployment by indicating the argument/parameter `deployment_name`.
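+
+ As a sketch, targeting a specific deployment by name (assumed here to be the `classifier-xgboost-mlflow` deployment created earlier) would look like the following with the SDK; with the CLI, the equivalent flag is `--deployment-name`:
+
+ ```python
+ job = ml_client.batch_endpoints.invoke(
+     endpoint_name=endpoint.name,
+     deployment_name="classifier-xgboost-mlflow",  # route the job to this specific deployment
+     input=input,
+ )
+ ```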
+
+3. A batch job is started as soon as the command returns. You can monitor the status of the job until it finishes:
+
+ # [Azure ML CLI](#tab/cli)
+
+ ```bash
+ az ml job show --name $JOB_NAME
+ ```
+
+ # [Azure ML SDK for Python](#tab/sdk)
+
+ ```python
+ ml_client.jobs.get(job.name)
+ ```
+
+## Analyzing the outputs
+
+Output predictions are generated in the `predictions.csv` file as indicated in the deployment configuration. The job generates a named output called `score` where this file is placed. Only one file is generated per batch job.
+
+The file is structured as follows:
+
+* There is one row for each data point that was sent to the model. For tabular data, this means that one row is generated for each row in the input files, and hence the number of rows in the generated file (`predictions.csv`) equals the sum of all the rows in all the processed files. For other data types, there is one row per processed file.
+* The file contains two columns:
+ * The file name where the data was read from. In tabular data, use this field to know which prediction belongs to which input data. For any given file, predictions are returned in the same order they appear in the input file, so you can rely on the row number to match the corresponding prediction.
+ * The prediction associated with the input data. This value is returned "as-is", exactly as it was provided by the model's `predict()` function.
++
+You can download the results of the job by using the job name:
+
+# [Azure ML CLI](#tab/cli)
+
+To download the predictions, use the following command:
+
+```bash
+az ml job download --name $JOB_NAME --output-name score --download-path ./
+```
+
+# [Azure ML SDK for Python](#tab/sdk)
+
+```python
+ml_client.jobs.download(name=job.name, output_name='score', download_path='./')
+```
++
+Once the file is downloaded, you can open it using your favorite tool. The following example loads the predictions into a pandas DataFrame.
+
+```python
+import pandas as pd
+from ast import literal_eval
+
+with open('named-outputs/score/predictions.csv', 'r') as f:
+ pd.DataFrame(literal_eval(f.read().replace('\n', ',')), columns=['file', 'prediction'])
+```
+
+> [!WARNING]
+> The file `predictions.csv` may not be a regular CSV file and can't be read correctly using the `pandas.read_csv()` method.
+
+The output looks as follows:
+
+| file | prediction |
+| -- | -- |
+| heart-unlabeled-0.csv | 0 |
+| heart-unlabeled-0.csv | 1 |
+| ... | 1 |
+| heart-unlabeled-3.csv | 0 |
+
+> [!TIP]
+> Notice that in this example the input data was tabular data in `CSV` format and there were 4 different input files (heart-unlabeled-0.csv, heart-unlabeled-1.csv, heart-unlabeled-2.csv and heart-unlabeled-3.csv).
+
+## Considerations when deploying to batch inference
+
+Azure Machine Learning supports no-code deployment for batch inference in [managed endpoints](../concept-endpoints.md). This represents a convenient way to deploy models that require processing of large amounts of data in a batch fashion.
+
+### How work is distributed across workers
+
+Work is distributed at the file level, for both structured and unstructured data. As a consequence, only [file datasets](../v1/how-to-create-register-datasets.md#filedataset) or [URI folders](../reference-yaml-data.md) are supported for this feature. Each worker processes batches of `Mini batch size` files at a time. Further parallelism can be achieved if `Max concurrency per instance` is increased. For example, with a mini batch size of 10, a max concurrency per instance of 2, and 2 instances, up to 4 workers run at the same time, each processing 10 files.
+
+> [!WARNING]
+> Nested folder structures are not explored during inference. If you are partitioning your data using folders, make sure to flatten the structure beforehand.
+
+> [!WARNING]
+> Batch deployments will call the `predict` function of the MLflow model once per file. For CSV files containing multiple rows, this may put memory pressure on the underlying compute. When sizing your compute, take into account not only the memory consumption of the data being read but also the memory footprint of the model itself. This is especially true for models that process text, like transformer-based models, where the memory consumption is not linear with the size of the input. If you encounter several out-of-memory exceptions, consider splitting the data into smaller files with fewer rows, or implement batching at the row level inside of the model/scoring script, as sketched below.
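+
+As an illustration of row-level batching inside a custom scoring script (only a sketch; `model` is assumed to be loaded in the `init()` function as shown in [Using MLflow models with a scoring script](#using-mlflow-models-with-a-scoring-script), and the chunk size is arbitrary):
+
+```python
+import os
+import pandas as pd
+
+CHUNK_SIZE = 500  # number of rows sent to the model at a time; tune it for your model and compute
+
+def run(mini_batch):
+    results = []
+    for file_path in mini_batch:
+        # Read each CSV file in chunks instead of loading it entirely into memory
+        for chunk in pd.read_csv(file_path, chunksize=CHUNK_SIZE):
+            pred = model.predict(chunk)  # 'model' is assumed to be loaded in init()
+            df = pd.DataFrame(pred, columns=["prediction"])
+            df["file"] = os.path.basename(file_path)
+            results.extend(df.values)
+    return results
+```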
+
+### Supported file types
+
+The following data types are supported for batch inference when deploying MLflow models without an environment and a scoring script:
+
+| File extension | Type returned as model's input | Signature requirement |
+| :- | :- | :- |
+| `.csv` | `pd.DataFrame` | `ColSpec`. If not provided, column types are not enforced. |
+| `.png`, `.jpg`, `.jpeg`, `.tiff`, `.bmp`, `.gif` | `np.ndarray` | `TensorSpec`. Input is reshaped to match the tensor's shape if available. If no signature is available, tensors of type `np.uint8` are inferred. For additional guidance read [Considerations for MLflow models processing images](how-to-image-processing-batch.md#considerations-for-mlflow-models-processing-images). |
+
+> [!WARNING]
+> Be advised that any unsupported file that may be present in the input data will cause the job to fail. You will see an error entry as follows: *"ERROR:azureml:Error processing input file: '/mnt/batch/tasks/.../a-given-file.parquet'. File type 'parquet' is not supported."*.
+
+> [!TIP]
+> If you'd like to process a different file type, or execute inference in a different way than batch endpoints do by default, you can always create the deployment with a scoring script as explained in [Using MLflow models with a scoring script](#using-mlflow-models-with-a-scoring-script).
+
+### Signature enforcement for MLflow models
+
+Input data types are enforced by batch deployment jobs while reading the data, using the available MLflow model signature. This means that your data input should comply with the types indicated in the model signature. If the data can't be parsed as expected, the job will fail with an error message similar to the following one: *"ERROR:azureml:Error processing input file: '/mnt/batch/tasks/.../a-given-file.csv'. Exception: invalid literal for int() with base 10: 'value'"*.
+
+> [!TIP]
+> Signatures in MLflow models are optional, but they are highly encouraged as they provide a convenient way to detect data compatibility issues early. For more information about how to log models with signatures, read [Logging models with a custom signature, environment or samples](../how-to-log-mlflow-models.md#logging-models-with-a-custom-signature-environment-or-samples).
+
+You can inspect the model signature of your model by opening the `MLmodel` file associated with your MLflow model. For more details about how signatures work in MLflow see [Signatures in MLflow](../concept-mlflow-models.md#signatures).
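+
+You can also inspect the signature programmatically. As a sketch, assuming a local copy of the model such as the one used in this article:
+
+```python
+import mlflow
+
+# The URI can be a local MLflow model folder or a 'models:/<name>/<version>' URI
+model_info = mlflow.models.get_model_info("heart-classifier-mlflow/model")
+print(model_info.signature)
+```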
+
+### Flavor support
+
+Batch deployments only support deploying MLflow models with a `pyfunc` flavor. If you need to deploy a different flavor, see [Using MLflow models with a scoring script](#using-mlflow-models-with-a-scoring-script).
+
+## Using MLflow models with a scoring script
+
+MLflow models can be deployed to batch endpoints without indicating a scoring script in the deployment definition. However, you can opt in to indicate this file (usually referred to as the *batch driver*) to customize how inference is executed.
+
+You will typically select this workflow when:
+> [!div class="checklist"]
+> * You need to process a file type not supported by MLflow batch deployments out of the box.
+> * You need to customize the way the model is run, for instance, to use a specific flavor to load it with `mlflow.<flavor>.load_model()`.
+> * You need to do pre/post processing in your scoring routine when it is not done by the model itself.
+> * The output of the model can't be nicely represented in tabular data. For instance, it is a tensor representing an image.
+> * Your model can't process each file at once because of memory constraints and it needs to read it in chunks.
+
+> [!IMPORTANT]
+> If you choose to indicate a scoring script for an MLflow model deployment, you will also have to specify the environment where the deployment will run.
+
+### Steps
+
+Use the following steps to deploy an MLflow model with a custom scoring script.
+
+1. Create a scoring script:
+
+ __batch_driver.py__
+
+ ```python
+ import os
+ import mlflow
+ import pandas as pd
+
+ def init():
+ global model
+
+ # AZUREML_MODEL_DIR is an environment variable created during deployment
+ # It is the path to the model folder
+ model_path = os.path.join(os.environ["AZUREML_MODEL_DIR"], "model")
+ model = mlflow.pyfunc.load_model(model_path)
+
+ def run(mini_batch):
+ resultList = []
+
+ for file_path in mini_batch:
+ data = pd.read_csv(file_path)
+ pred = model.predict(data)
+
+ df = pd.DataFrame(pred, columns=['predictions'])
+ df['file'] = os.path.basename(file_path)
+ resultList.extend(df.values)
+
+ return resultList
+ ```
+
+1. Let's create an environment where the scoring script can be executed:
+
+ # [Azure ML CLI](#tab/cli)
+
+ No extra step is required for the Azure ML CLI. The environment definition will be included in the deployment file.
+
+ # [Azure ML SDK for Python](#tab/sdk)
+
+ Let's get a reference to the environment:
+
+ ```python
+ from azure.ai.ml.entities import Environment
+
+ environment = Environment(
+ conda_file="./heart-classifier-mlflow/environment/conda.yaml",
+ image="mcr.microsoft.com/azureml/openmpi3.1.2-ubuntu18.04:latest",
+ )
+ ```
+
+1. Let's create the deployment now:
+
+ # [Azure ML CLI](#tab/cli)
+
+ To create a new deployment under the created endpoint, create a `YAML` configuration like the following:
+
+ ```yaml
+ $schema: https://azuremlschemas.azureedge.net/latest/batchDeployment.schema.json
+ endpoint_name: heart-classifier-batch
+ name: classifier-xgboost-custom
+ description: A heart condition classifier based on XGBoost
+ model: azureml:heart-classifier@latest
+ environment:
+ image: mcr.microsoft.com/azureml/openmpi3.1.2-ubuntu18.04:latest
+ conda_file: ./heart-classifier-mlflow/environment/conda.yaml
+ code_configuration:
+ code: ./heart-classifier-custom/code/
+ scoring_script: batch_driver.py
+ compute: azureml:cpu-cluster
+ resources:
+ instance_count: 2
+ max_concurrency_per_instance: 2
+ mini_batch_size: 2
+ output_action: append_row
+ output_file_name: predictions.csv
+ retry_settings:
+ max_retries: 3
+ timeout: 300
+ error_threshold: -1
+ logging_level: info
+ ```
+
+ Then, create the deployment with the following command:
+
+ ```bash
+ az ml batch-deployment create -f deployment.yml
+ ```
+
+ # [Azure ML SDK for Python](#tab/sdk)
+
+ To create a new deployment under the created endpoint, use the following script:
+
+ ```python
+ from azure.ai.ml.entities import CodeConfiguration
+
+ deployment = BatchDeployment(
+ name="classifier-xgboost-custom",
+ description="A heart condition classifier based on XGBoost",
+ endpoint_name=endpoint.name,
+ model=model,
+ environment=environment,
+ code_configuration=CodeConfiguration(
+ code="./heart-classifier-mlflow/code/",
+ scoring_script="batch_driver.py",
+ ),
+ compute=compute_name,
+ instance_count=2,
+ max_concurrency_per_instance=2,
+ mini_batch_size=2,
+ output_action=BatchDeploymentOutputAction.APPEND_ROW,
+ output_file_name="predictions.csv",
+ retry_settings=BatchRetrySettings(max_retries=3, timeout=300),
+ logging_level="info",
+ )
+ ml_client.batch_deployments.begin_create_or_update(deployment)
+ ```
+
+
+1. At this point, our batch endpoint is ready to be used.
+
+## Next steps
+
+* [Customize outputs in batch deployments](how-to-deploy-model-custom-output.md)
machine-learning How To Nlp Processing Batch https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/batch-inference/how-to-nlp-processing-batch.md
+
+ Title: "NLP tasks with batch deployments"
+
+description: Learn how to use batch deployments to process text and output results.
++++++ Last updated : 10/10/2022++++
+# NLP tasks with batch deployments
++
+Batch Endpoints can be used for processing tabular data, but also any other file type like text. Those deployments are supported in both MLflow and custom models. In this tutorial, we'll learn how to deploy a model from HuggingFace that can perform text summarization of long sequences of text.
+
+## Prerequisites
++
+* You must have an endpoint already created. If you don't, please follow the instructions at [Use batch endpoints for batch scoring](../how-to-use-batch-endpoint.md). This example assumes the endpoint is named `text-summarization-batch`.
+* You must have a compute cluster created where the deployment can run. If you don't, please follow the instructions at [Create compute](../how-to-use-batch-endpoint.md#create-compute). This example assumes the name of the compute is `cpu-cluster`.
+
+## About the model used in the sample
+
+The model we are going to work with was built using the popular `transformers` library from HuggingFace along with [a pre-trained model from Facebook with the BART architecture](https://huggingface.co/facebook/bart-large-cnn). It was introduced in the paper [BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation](https://arxiv.org/abs/1910.13461). This model has the following constraints that are important to keep in mind for deployment:
+
+* It can work with sequences up to 1024 tokens.
+* It is trained for summarization of text in English.
+* We are going to use TensorFlow as a backend.
+
+Due to the size of the model, it hasn't been included in this repository. Instead, you can generate a local copy using:
+
+```python
+from transformers import pipeline
+
+summarizer = pipeline("summarization", model="facebook/bart-large-cnn")
+model_local_path = 'bart-text-summarization/model'
+summarizer.save_pretrained(model_local_path)
+```
+
+A local copy of the model will be placed at `bart-text-summarization/model`. We will use it during the course of this tutorial.
+
+## NLP tasks with batch deployments
+
+In this example, we are going to learn how to deploy a deep learning model based on the BART architecture that can perform text summarization over text in English. The text will be placed in CSV files for convenience.
+
+### Registering the model
+
+Batch Endpoint can only deploy registered models. In this case, we need to publish the model we have just downloaded from HuggingFace. You can skip this step if the model you are trying to deploy is already registered.
+
+# [Azure ML CLI](#tab/cli)
+
+```bash
+MODEL_NAME='bart-text-summarization'
+az ml model create --name $MODEL_NAME --type "custom_model" --path "bart-text-summarization/model"
+```
+
+# [Azure ML SDK for Python](#tab/sdk)
+
+```python
+model_name = 'bart-text-summarization'
+model = ml_client.models.create_or_update(
+ Model(path='bart-text-summarization/model', type=AssetTypes.CUSTOM_MODEL)
+)
+```
++
+### Creating a scoring script
+
+We need to create a scoring script that can read the CSV files provided by the batch deployment and return the scores of the model with the summary. The following script performs these tasks:
+
+> [!div class="checklist"]
+> * Indicates an `init` function that loads the model using `transformers`. Notice that the tokenizer of the model is loaded separately to account for the limitation in the sequence lengths of the model we are currently using.
+> * Indicates a `run` function that is executed for each mini-batch the batch deployment provides.
+> * The `run` function reads the entire batch using the `datasets` library. The text we need to summarize is in the column `text`.
+> * The `run` method iterates over each row of the text and runs the prediction. Since this is a very expensive model, running the prediction over entire files would result in an out-of-memory exception. Notice that the model is not executed with the `pipeline` object from `transformers`. This is done to account for long sequences of text and the limitation of 1024 tokens in the underlying model we are using.
+> * It returns the summary of the provided text.
+
+__transformer_scorer.py__
+
+```python
+import os
+import numpy as np
+from transformers import pipeline, AutoTokenizer, TFBartForConditionalGeneration
+from datasets import load_dataset
+
+def init():
+ global model
+ global tokenizer
+
+ # AZUREML_MODEL_DIR is an environment variable created during deployment
+ # Change "model" to the name of the folder used by you model, or model file name.
+ model_path = os.path.join(os.environ["AZUREML_MODEL_DIR"], "model")
+
+ # load the model
+ tokenizer = AutoTokenizer.from_pretrained(model_path, truncation=True, max_length=1024)
+ model = TFBartForConditionalGeneration.from_pretrained(model_path)
+
+def run(mini_batch):
+ resultList = []
+
+ ds = load_dataset('csv', data_files={ 'score': mini_batch})
+ for text in ds['score']['text']:
+ # perform inference
+ input_ids = tokenizer.batch_encode_plus([text], truncation=True, padding=True, max_length=1024)['input_ids']
+ summary_ids = model.generate(input_ids, max_length=130, min_length=30, do_sample=False)
+ summaries = [tokenizer.decode(s, skip_special_tokens=True, clean_up_tokenization_spaces=False) for s in summary_ids]
+
+ # Get results:
+ resultList.append(summaries[0])
+
+ return resultList
+```
+
+> [!TIP]
+> Although files are provided in mini-batches by the deployment, this scoring script processes one row at a time. This is a common pattern when dealing with expensive models (like transformers), as trying to load the entire batch and send it to the model at once may result in high memory pressure on the batch executor (OOM exceptions).
++
+### Creating the deployment
+
+Once the scoring script is created, it's time to create a batch deployment for it. Follow these steps to create it:
+
+1. We need to indicate over which environment we are going to run the deployment. In our case, our model runs on `TensorFlow`. Azure Machine Learning already has an environment with the required software installed, so we can reuse this environment. We are just going to add a couple of dependencies in a `conda.yml` file, including the libraries `transformers` and `datasets`.
+
+ # [Azure ML CLI](#tab/cli)
+
+ No extra step is required for the Azure ML CLI. The environment definition will be included in the deployment file.
+
+ # [Azure ML SDK for Python](#tab/sdk)
+
+ Let's get a reference to the environment:
+
+ ```python
+ environment = Environment(
+ conda_file="./bart-text-summarization/environment/conda.yml",
+ image="mcr.microsoft.com/azureml/tensorflow-2.4-ubuntu18.04-py37-cpu-inference:latest",
+ )
+ ```
+
+2. Now, let's create the deployment.
+
+ > [!NOTE]
+ > This example assumes you have an endpoint created with the name `text-summarization-batch` and a compute cluster with name `cpu-cluster`. If you don't, please follow the steps in the doc [Use batch endpoints for batch scoring](../how-to-use-batch-endpoint.md).
+
+ # [Azure ML CLI](#tab/cli)
+
+ To create a new deployment under the created endpoint, create a `YAML` configuration like the following:
+
+ ```yaml
+ $schema: https://azuremlschemas.azureedge.net/latest/batchDeployment.schema.json
+ endpoint_name: text-summarization-batch
+ name: text-summarization-hfbart
+ description: A text summarization deployment implemented with HuggingFace and BART architecture
+ model: azureml:bart-text-summarization@latest
+ compute: azureml:cpu-cluster
+ environment:
+ image: mcr.microsoft.com/azureml/tensorflow-2.4-ubuntu18.04-py37-cpu-inference:latest
+ conda_file: ./bart-text-summarization/environment/conda.yml
+ code_configuration:
+ code: ./bart-text-summarization/code/
+ scoring_script: transformer_scorer.py
+ resources:
+ instance_count: 2
+ max_concurrency_per_instance: 1
+ mini_batch_size: 1
+ output_action: append_row
+ output_file_name: predictions.csv
+ retry_settings:
+ max_retries: 3
+ timeout: 3000
+ error_threshold: -1
+ logging_level: info
+ ```
+
+ Then, create the deployment with the following command:
+
+ ```bash
+ az ml batch-deployment create -f deployment.yml
+ ```
+
+ # [Azure ML SDK for Python](#tab/sdk)
+
+ To create a new deployment with the indicated environment and scoring script use the following code:
+
+ ```python
+ deployment = BatchDeployment(
+ name="text-summarization-hfbart",
+ description="A text summarization deployment implemented with HuggingFace and BART architecture",
+ endpoint_name=endpoint.name,
+ model=model,
+ environment=environment,
+ code_configuration=CodeConfiguration(
+ code="./bart-text-summarization/code/",
+ scoring_script="imagenet_scorer.py",
+ ),
+ compute=compute_name,
+ instance_count=2,
+ max_concurrency_per_instance=1,
+ mini_batch_size=1,
+ output_action=BatchDeploymentOutputAction.APPEND_ROW,
+ output_file_name="predictions.csv",
+ retry_settings=BatchRetrySettings(max_retries=3, timeout=3000),
+ logging_level="info",
+ )
+
+ ml_client.batch_deployments.begin_create_or_update(deployment)
+ ```
+
+
+ > [!IMPORTANT]
+ > You will notice a high value for `timeout` in the `retry_settings` parameter of this deployment. The reason is the nature of the model we are running: this is a very expensive model and inference on a single row may take up to 60 seconds. The `timeout` parameter controls how much time the batch deployment should wait for the scoring script to finish processing each mini-batch. Since our model runs predictions row by row, processing a long file may take time. Also notice that the number of files per batch is set to 1 (`mini_batch_size=1`). This is again related to the nature of the work we are doing: processing one file at a time per batch is expensive enough to justify it. You will notice this being a pattern in NLP processing.
+
+3. At this point, our batch endpoint is ready to be used.
+
+## Considerations when deploying models that process text
+
+As mentioned in some of the notes along this tutorial, processing text may have some peculiarities that require specific configuration for batch deployments. Take the following considerations into account when designing the batch deployment:
+
+> [!div class="checklist"]
+> * Some NLP models may be very expensive in terms of memory and compute time. If this is the case, consider decreasing the number of files included on each mini-batch. In the example above, the number was taken to the minimum, 1 file per batch. While this may not be your case, take into consideration how many files your model can score at a time. Keep in mind that the relationship between the size of the input and the memory footprint of your model may not be linear for deep learning models.
+> * If your model can't even handle one file at a time (like in this example), consider reading the input data in rows/chunks. Implement batching at the row level if you need to achieve higher throughput or hardware utilization.
+> * Set the `timeout` value of your deployment according to how expensive your model is and how much data you expect to process. Remember that the `timeout` indicates the time the batch deployment waits for your scoring script to run for a given batch. If your batch has many files or files with many rows, this impacts the right value of this parameter.
+
+## Considerations for MLflow models that process text
+
+MLflow models in Batch Endpoints support reading CSVs as input data, which may contain long sequences of text. The same considerations mentioned above apply to MLflow models. However, since you are not required to provide a scoring script for your MLflow model deployment, some of the recommendations there may be harder to achieve.
+
+* Only `CSV` files are supported for MLflow deployments processing text. You will need to author a scoring script if you need to process other file types like `TXT`, `PARQUET`, etc. See [Using MLflow models with a scoring script](how-to-mlflow-batch.md#using-mlflow-models-with-a-scoring-script) for details.
+* Batch deployments will call your MLflow model's predict function with the content of an entire file as a pandas DataFrame. If your input data contains many rows, chances are that running a complex model (like the one presented in this tutorial) will result in an out-of-memory exception. If this is your case, you can consider:
+ * Customize how your model runs predictions and implement batching (a minimal sketch is shown after this list). To learn how to customize MLflow model's inference, see [Logging custom models](../how-to-log-mlflow-models.md?#logging-custom-models).
+ * Author a scoring script and load your model using `mlflow.<flavor>.load_model()`. See [Using MLflow models with a scoring script](how-to-mlflow-batch.md#using-mlflow-models-with-a-scoring-script) for details.
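+
+As a minimal sketch of such a custom model (the wrapper class, artifact name, and batch size here are hypothetical, not part of the sample):
+
+```python
+import mlflow.pyfunc
+
+class BatchingSummarizer(mlflow.pyfunc.PythonModel):
+    """Hypothetical pyfunc wrapper that scores long text inputs in small row batches."""
+
+    def __init__(self, batch_size=8):
+        self.batch_size = batch_size
+
+    def load_context(self, context):
+        from transformers import pipeline
+        # The inner model is assumed to be stored as an artifact named 'summarizer'
+        self.summarizer = pipeline("summarization", model=context.artifacts["summarizer"])
+
+    def predict(self, context, model_input):
+        texts = model_input["text"].tolist()
+        summaries = []
+        # Score the rows in small batches to keep memory pressure under control
+        for i in range(0, len(texts), self.batch_size):
+            outputs = self.summarizer(texts[i:i + self.batch_size], truncation=True)
+            summaries.extend(item["summary_text"] for item in outputs)
+        return summaries
+```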
++
machine-learning How To Use Batch Azure Data Factory https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/batch-inference/how-to-use-batch-azure-data-factory.md
+
+ Title: "Invoking batch endpoints from Azure Data Factory"
+
+description: Learn how to use Azure Data Factory to invoke Batch Endpoints.
++++++ Last updated : 10/10/2022++++
+# Invoking batch endpoints from Azure Data Factory
++
+Big data requires a service that can orchestrate and operationalize processes to refine these enormous stores of raw data into actionable business insights. [Azure Data Factory](../../data-factory/introduction.md) is a managed cloud service that's built for these complex hybrid extract-transform-load (ETL), extract-load-transform (ELT), and data integration projects.
+
+Azure Data Factory allows the creation of pipelines that can orchestrate multiple data transformations and manage them as a single unit. Batch endpoints are an excellent candidate to become a step in such a processing workflow. In this example, learn how to use batch endpoints in Azure Data Factory activities by relying on the Web Invoke activity and the REST API.
+
+## Prerequisites
+
+* This example assumes that you have a model correctly deployed as a batch endpoint. Particularly, we are using the *heart condition classifier* created in the tutorial [Using MLflow models in batch deployments](how-to-mlflow-batch.md).
+* An Azure Data Factory resource created and configured. If you have not created your data factory yet, follow the steps in [Quickstart: Create a data factory by using the Azure portal and Azure Data Factory Studio](../../data-factory/quickstart-create-data-factory-portal.md) to create one.
+* After creating it, browse to the data factory in the Azure portal:
+
+ :::image type="content" source="./../../data-factory/media/doc-common-process/data-factory-home-page.png" alt-text="Screenshot of the home page for the Azure Data Factory, with the Open Azure Data Factory Studio tile.":::
+
+* Select **Open** on the **Open Azure Data Factory Studio** tile to launch the Data Integration application in a separate tab.
+
+## Authenticating against batch endpoints
+
+Azure Data Factory can invoke the REST APIs of batch endpoints by using the [Web Invoke](../../data-factory/control-flow-web-activity.md) activity. Batch endpoints support Azure Active Directory for authorization, and hence requests made to the APIs require proper authentication handling.
+
+You can use a service principal or a [managed identity](../../active-directory/managed-identities-azure-resources/overview.md) to authenticate against Batch Endpoints. We recommend using a managed identity as it simplifies the use of secrets.
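+
+To illustrate what the authorization step does, the following sketch shows how a service principal could obtain a token for the Azure Machine Learning resource (the Web Activity in the pipeline performs an equivalent REST call; the placeholder values are assumptions):
+
+```python
+from azure.identity import ClientSecretCredential
+
+# Values from the service principal you created (placeholders)
+credential = ClientSecretCredential(
+    tenant_id="<tenant-id>",
+    client_id="<client-id>",
+    client_secret="<client-secret>",
+)
+
+# Tokens for batch endpoints are requested for the Azure Machine Learning resource
+token = credential.get_token("https://ml.azure.com/.default").token
+```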
+
+> [!IMPORTANT]
+> When your data is stored in cloud locations instead of Azure Machine Learning Data Stores, the identity of the compute is used to read the data instead of the identity used to invoke the endpoint.
+
+# [Using a Managed Identity](#tab/mi)
+
+1. You can use Azure Data Factory managed identity to communicate with Batch Endpoints. In this case, you only need to make sure that your Azure Data Factory resource was deployed with a managed identity.
+2. If you don't have an Azure Data Factory resource, or it was already deployed without a managed identity, follow these steps to create one: [Managed identity for Azure Data Factory](../../data-factory/data-factory-service-identity.md#system-assigned-managed-identity).
+
+ > [!WARNING]
+ > Notice that changing the resource identity once deployed is not possible in Azure Data Factory. Once the resource is created, you will need to recreate it if you need to change its identity.
+
+3. Once deployed, grant access for the managed identity of the resource you created to your Azure Machine Learning workspace as explained at [Grant access](../../role-based-access-control/quickstart-assign-role-user-portal.md#grant-access). In this example the managed identity will require:
+
+ 1. Permission in the workspace to read batch deployments and perform actions over them.
+ 1. Permissions to read/write in data stores.
+ 2. Permissions to read in any cloud location (storage account) indicated as a data input.
+
+# [Using a Service Principal](#tab/sp)
+
+1. Create a service principal following the steps at [Register an application with Azure AD and create a service principal](../../active-directory/develop/howto-create-service-principal-portal.md#register-an-application-with-azure-ad-and-create-a-service-principal).
+1. Create a secret to use for authentication as explained at [Option 2: Create a new application secret](../../active-directory/develop/howto-create-service-principal-portal.md#option-2-create-a-new-application-secret).
+1. Take note of the `client secret` generated.
+1. Take note of the `client ID` and the `tenant id` as explained at [Get tenant and app ID values for signing in](../../active-directory/develop/howto-create-service-principal-portal.md#option-2-create-a-new-application-secret).
+1. Grant access for the service principal you created to your workspace as explained at [Grant access](../../role-based-access-control/quickstart-assign-role-user-portal.md#grant-access). In this example the service principal will require:
+
+ 1. Permission in the workspace to read batch deployments and perform actions over them.
+ 1. Permissions to read/write in data stores.
++
+## About the pipeline
+
+We are going to create a pipeline in Azure Data Factory that can invoke a given batch endpoint over some data. The pipeline will communicate with Azure Machine Learning batch endpoints using REST. To know more about how to use the REST API of batch endpoints read [Deploy models with REST for batch scoring](../how-to-deploy-batch-with-rest.md).
+
+The pipeline will look as follows:
+
+# [Using a Managed Identity](#tab/mi)
++
+It is composed of the following activities:
+
+* __Run Batch-Endpoint__: It's a Web Activity that uses the batch endpoint URI to invoke it. It passes the input data URI where the data is located and the expected output file.
+* __Wait for job__: It's a loop activity that checks the status of the created job and waits for its completion, either as **Completed** or **Failed**. This activity, in turn, uses the following activities:
+ * __Check status__: It's a Web Activity that queries the status of the job resource that was returned as a response of the __Run Batch-Endpoint__ activity.
+ * __Wait__: It's a Wait Activity that controls the polling frequency of the job's status. We set a default of 120 (2 minutes).
+
+The pipeline requires the following parameters to be configured:
+
+| Parameter | Description | Sample value |
+| | -|- |
+| `endpoint_uri` | The endpoint scoring URI | `https://<endpoint_name>.<region>.inference.ml.azure.com/jobs` |
+| `api_version` | The API version to use with REST API calls. Defaults to `2020-09-01-preview` | `2020-09-01-preview` |
+| `poll_interval` | The number of seconds to wait before checking the job status for completion. Defaults to `120`. | `120` |
+| `endpoint_input_uri` | The endpoint's input data. Multiple data input types are supported. Ensure that the managed identity you are using for executing the job has access to the underlying location. Alternatively, if using Data Stores, ensure the credentials are indicated there. | `azureml://datastores/.../paths/.../data/` |
+| `endpoint_output_uri` | The endpoint's output data file. It must be a path to an output file in a Data Store attached to the Machine Learning workspace. No other type of URI is supported. | `azureml://datastores/azureml/paths/batch/predictions.csv` |
+
+# [Using a Service Principal](#tab/sp)
++
+It is composed of the following activities:
+
+* __Authorize__: It's a Web Activity that uses the service principal created in [Authenticating against batch endpoints](#authenticating-against-batch-endpoints) to obtain an authorization token. This token will be used to invoke the endpoint later.
+* __Run Batch-Endpoint__: It's a Web Activity that uses the batch endpoint URI to invoke it. It passes the input data URI where the data is located and the expected output file.
+* __Wait for job__: It's a loop activity that checks the status of the created job and waits for its completion, either as **Completed** or **Failed**. This activity, in turn, uses the following activities:
+ * __Authorize Management__: It's a Web Activity that uses the service principal created in [Authenticating against batch endpoints](#authenticating-against-batch-endpoints) to obtain an authorization token to be used for job's status query.
+ * __Check status__: It's a Web Activity that queries the status of the job resource that was returned as a response of the __Run Batch-Endpoint__ activity.
+ * __Wait__: It's a Wait Activity that controls the polling frequency of the job's status. We set a default of 120 (2 minutes).
+
+The pipeline requires the following parameters to be configured:
+
+| Parameter | Description | Sample value |
+| | -|- |
+| `tenant_id` | Tenant ID where the endpoint is deployed | `00000000-0000-0000-00000000` |
+| `client_id` | The client ID of the service principal used to invoke the endpoint | `00000000-0000-0000-00000000` |
+| `client_secret` | The client secret of the service principal used to invoke the endpoint | `ABCDEFGhijkLMNOPQRstUVwz` |
+| `endpoint_uri` | The endpoint scoring URI | `https://<endpoint_name>.<region>.inference.ml.azure.com/jobs` |
+| `api_version` | The API version to use with REST API calls. Defaults to `2020-09-01-preview` | `2020-09-01-preview` |
+| `poll_interval` | The number of seconds to wait before checking the job status for completion. Defaults to `120`. | `120` |
+| `endpoint_input_uri` | The endpoint's input data. Multiple data input types are supported. Ensure that the managed identity you are using for executing the job has access to the underlying location. Alternatively, if using Data Stores, ensure the credentials are indicated there. | `azureml://datastores/.../paths/.../data/` |
+| `endpoint_output_uri` | The endpoint's output data file. It must be a path to an output file in a Data Store attached to the Machine Learning workspace. No other type of URI is supported. | `azureml://datastores/azureml/paths/batch/predictions.csv` |
+++
+> [!WARNING]
+> Remember that `endpoint_output_uri` should be the path to a file that doesn't exist yet. Otherwise, the job will fail with the error *the path already exists*.
+
+> [!IMPORTANT]
+> The input data URI can be a path to an Azure Machine Learning data store, data asset, or a cloud URI. Depending on the case, further configuration may be required to ensure the deployment can read the data properly. See [Accessing storage services](../how-to-identity-based-service-authentication.md#accessing-storage-services) for details.
+
+## Steps
+
+To create this pipeline in your existing Azure Data Factory, follow these steps:
+
+1. Open Azure Data Factory Studio and under __Factory Resources__ click the plus sign.
+2. Select __Pipeline__ > __Import from pipeline template__
+3. You will be prompted to select a `zip` file. Use [the following template if using managed identities](https://azuremlexampledata.blob.core.windows.net/data/templates/batch-inference/Run-BatchEndpoint-MI.zip) or [the following one if using a service principal](https://azuremlexampledata.blob.core.windows.net/data/templates/batch-inference/Run-BatchEndpoint-SP.zip).
+4. A preview of the pipeline will show up in the portal. Click __Use this template__.
+5. The pipeline will be created for you with the name __Run-BatchEndpoint__.
+6. Configure the parameters of the batch deployment you are using:
+
+ # [Using a Managed Identity](#tab/mi)
+
+ :::image type="content" source="./media/how-to-use-batch-adf/pipeline-params-mi.png" alt-text="Screenshot of the pipeline parameters expected for the resulting pipeline.":::
+
+ # [Using a Service Principal](#tab/sp)
+
+ :::image type="content" source="./media/how-to-use-batch-adf/pipeline-params.png" alt-text="Screenshot of the pipeline parameters expected for the resulting pipeline.":::
+
+
+
+ > [!WARNING]
+ > Ensure that your batch endpoint has a default deployment configured before submitting a job to it. The created pipeline will invoke the endpoint and hence a default deployment needs to be created and configured.
+
+ > [!TIP]
+ > For best reusability, use the created pipeline as a template and call it from within other Azure Data Factory pipelines by leveraging the [Execute pipeline activity](../../data-factory/control-flow-execute-pipeline-activity.md). In that case, do not configure the parameters in the created pipeline but pass them when you are executing the pipeline.
+ >
+ > :::image type="content" source="./media/how-to-use-batch-adf/pipeline-run.png" alt-text="Screenshot of the pipeline parameters expected for the resulting pipeline when invoked from another pipeline.":::
+
+7. Your pipeline is ready to be used.
++
+## Limitations
+
+When calling Azure Machine Learning batch deployments consider the following limitations:
+
+* __Data inputs__:
+ * Only Azure Machine Learning data stores or Azure Storage Accounts (Azure Blob Storage, Azure Data Lake Storage Gen1, Azure Data Lake Storage Gen2) are supported as inputs. If your input data is in another source, use the Azure Data Factory Copy activity before the execution of the batch job to sink the data to a compatible store.
+ * Ensure the deployment has the required access to read the input data depending on the type of input you are using. See [Accessing storage services](../how-to-identity-based-service-authentication.md#accessing-storage-services) for details.
+* __Data outputs__:
+ * Only registered Azure Machine Learning data stores are supported.
+ * Only Azure Blob Storage Accounts are supported for outputs. For instance, Azure Data Lake Storage Gen2 isn't supported as output in batch deployment jobs. If you need to output the data to a different location/sink, use the Azure Data Factory Copy activity after the execution of the batch job.
+
+## Considerations when reading and writing data
+
+When reading and writing data, take into account the following considerations:
+
+* Batch endpoint jobs don't explore nested folders and hence can't work with nested folder structures. If your data is distributed in multiple folders, notice that you will have to flatten the structure.
+* Make sure that your scoring script provided in the deployment can handle the data as it is expected to be fed into the job. If the model is MLflow, read the limitation in terms of the file type supported by the moment at [Using MLflow models in batch deployments](how-to-mlflow-batch.md).
+* Batch endpoints distribute and parallelize the work across multiple workers at the file level. Make sure that each worker node has enough memory to load the entire data file at once and send it to the model. This is especially true for tabular data.
+* When estimating the memory consumption of your jobs, take into account the model memory footprint too. Some models, like transformers in NLP, don't have a linear relationship between the size of the inputs and the memory consumption. In those cases, you may want to consider further partitioning your data into multiple files to allow a greater degree of parallelization with smaller files.
machine-learning How To Use Event Grid Batch https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/batch-inference/how-to-use-event-grid-batch.md
+
+ Title: "Invoking batch endpoints from Event Grid events in storage"
+
+description: Learn how to use batch endpoints to be automatically triggered when new files are generated in storage.
++++++ Last updated : 10/10/2022++++
+# Invoking batch endpoints from Event Grid events in storage
++
+Event Grid is a fully managed service that enables you to easily manage events across many different Azure services and applications. It simplifies building event-driven and serverless applications. In this tutorial we are going to learn how to create a Logic App that can subscribe to the Event Grid event associated with new files created in a storage account and trigger a batch endpoint to process the given file.
+
+The workflow will work in the following way:
+
+1. It will be triggered when a new blob is created in a specific storage account.
+2. Since the storage account can contain multiple data assets, event filtering will be applied to only react to events happening in a specific folder inside of it. Further filtering can be done if needed.
+3. It will get an authorization token to invoke batch endpoints using the credentials from a Service Principal.
+4. It will trigger the batch endpoint (default deployment) using the newly created file as input.
+
+> [!IMPORTANT]
+> The proposed Logic App will create a batch deployment job for each file that triggers the *blob created* event. However, keep in mind that batch deployments distribute the work at the file level. Since this execution is specifying only one file, there will not be any parallelization happening in the deployment. Instead, you will be taking advantage of the capability of batch deployments of executing multiple scoring jobs under the same compute cluster. If you need to run jobs on folders, we recommend switching to [Invoking batch endpoints from Azure Data Factory](how-to-use-batch-azure-data-factory.md).
+
+## Prerequisites
+
+* This example assumes that you have a model correctly deployed as a batch endpoint. Particularly, we are using the *heart condition classifier* created in the tutorial [Using MLflow models in batch deployments](how-to-mlflow-batch.md).
+* This example assumes that your batch deployment runs in a compute cluster called `cpu-cluster`.
+* The Logic App we are creating will communicate with Azure Machine Learning batch endpoints using REST. To know more about how to use the REST API of batch endpoints read [Deploy models with REST for batch scoring](../how-to-deploy-batch-with-rest.md).
+
+## Authenticating against batch endpoints
+
+Logic Apps can invoke the REST APIs of batch endpoints by using the [HTTP](../../connectors/connectors-native-http.md) activity. Batch endpoints support Azure Active Directory for authorization, and hence requests made to the APIs require proper authentication handling.
+
+We recommend using a service principal for authentication and interaction with batch endpoints in this scenario.
+
+1. Create a service principal following the steps at [Register an application with Azure AD and create a service principal](../../active-directory/develop/howto-create-service-principal-portal.md#register-an-application-with-azure-ad-and-create-a-service-principal).
+1. Create a secret to use for authentication as explained at [Option 2: Create a new application secret](../../active-directory/develop/howto-create-service-principal-portal.md#option-2-create-a-new-application-secret).
+1. Take note of the `client secret` generated.
+1. Take note of the `client ID` and the `tenant id` as explained at [Get tenant and app ID values for signing in](../../active-directory/develop/howto-create-service-principal-portal.md#option-2-create-a-new-application-secret).
+1. Grant access for the service principal you created to your workspace as explained at [Grant access](../../role-based-access-control/quickstart-assign-role-user-portal.md#grant-access). In this example the service principal will require:
+
+ 1. Permission in the workspace to read batch deployments and perform actions over them.
+ 1. Permissions to read/write in data stores.
+
+## Enabling data access
+
+We will be using cloud URIs provided by Event Grid to indicate the input data to send to the deployment job. When reading data from cloud locations, batch deployments use the identity of the compute to gain access, instead of the identity used to submit the job. To ensure the identity of the compute has read access to the underlying data, we need to assign a user-assigned managed identity to it. Follow these steps to ensure data access:
+
+1. Create a [managed identity resource](../../active-directory/managed-identities-azure-resources/overview.md):
+
+ # [Azure ML CLI](#tab/cli)
+
+ ```azurecli
+ IDENTITY=$(az identity create -n azureml-cpu-cluster-idn --query id -o tsv)
+ ```
+
+ # [Azure ML SDK for Python](#tab/sdk)
+
+ ```python
+ # Use the Azure CLI to create the managed identity. Then copy the value of the variable IDENTITY into a Python variable
+ identity="/subscriptions/<subscription>/resourcegroups/<resource-group>/providers/Microsoft.ManagedIdentity/userAssignedIdentities/azureml-cpu-cluster-idn"
+ ```
+
+1. Update the compute cluster to use the managed identity we created:
+
+ > [!NOTE]
+ > This example assumes you have a compute cluster created named `cpu-cluster` and it is used for the default deployment in the endpoint.
+
+ # [Azure ML CLI](#tab/cli)
+
+ ```azurecli
+ az ml compute update --name cpu-cluster --identity-type user_assigned --user-assigned-identities $IDENTITY
+ ```
+
+ # [Azure ML SDK for Python](#tab/sdk)
+
+ ```python
+ from azure.ai.ml import MLClient
+ from azure.ai.ml.entities import AmlCompute, ManagedIdentityConfiguration
+ from azure.ai.ml.constants import ManagedServiceIdentityType
+
+ compute_name = "cpu-cluster"
+ compute_cluster = ml_client.compute.get(name=compute_name)
+
+ compute_cluster.identity.type = ManagedServiceIdentityType.USER_ASSIGNED
+ compute_cluster.identity.user_assigned_identities = [
+ ManagedIdentityConfiguration(resource_id=identity)
+ ]
+
+ ml_client.compute.begin_create_or_update(compute_cluster)
+ ```
+
+1. Go to the [Azure portal](https://portal.azure.com) and ensure the managed identity has the right permissions to read the data. To access storage services, you must have at least [Storage Blob Data Reader](../../role-based-access-control/built-in-roles.md#storage-blob-data-reader) access to the storage account. Only storage account owners can [change your access level via the Azure portal](../../storage/blobs/assign-azure-role-data-access.md).
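+
+   If you prefer the Azure CLI over the portal for this role assignment, the following sketch grants the managed identity created earlier read access to the storage account. The scope shown is a placeholder; point it at your own storage account.
+
+   ```azurecli
+   # Resolve the principal ID of the user-assigned managed identity created earlier.
+   PRINCIPAL_ID=$(az identity show --ids $IDENTITY --query principalId -o tsv)
+
+   # Grant read access to blobs in the storage account (scope is a placeholder).
+   az role assignment create \
+       --assignee-object-id $PRINCIPAL_ID \
+       --assignee-principal-type ServicePrincipal \
+       --role "Storage Blob Data Reader" \
+       --scope "/subscriptions/<subscription>/resourceGroups/<resource-group>/providers/Microsoft.Storage/storageAccounts/<storage-account>"
+   ```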
+
+## Create a Logic App
+
+1. In the [Azure portal](https://portal.azure.com), sign in with your Azure account.
+
+1. On the Azure home page, select **Create a resource**.
+
+1. On the Azure Marketplace menu, select **Integration** > **Logic App**.
+
+ ![Screenshot that shows Azure Marketplace menu with "Integration" and "Logic App" selected.](./../../logic-apps/media/tutorial-build-scheduled-recurring-logic-app-workflow/create-new-logic-app-resource.png)
+
+1. On the **Create Logic App** pane, on the **Basics** tab, provide the following information about your logic app resource.
+
+ ![Screenshot showing Azure portal, logic app creation pane, and info for new logic app resource.](./../../logic-apps/media/tutorial-build-scheduled-recurring-logic-app-workflow/create-logic-app-settings.png)
+
+ | Property | Required | Value | Description |
+ |-|-|-|-|
+ | **Subscription** | Yes | <*Azure-subscription-name*> | Your Azure subscription name. This example uses **Pay-As-You-Go**. |
+ | **Resource Group** | Yes | **LA-TravelTime-RG** | The [Azure resource group](../../azure-resource-manager/management/overview.md) where you create your logic app resource and related resources. This name must be unique across regions and can contain only letters, numbers, hyphens (`-`), underscores (`_`), parentheses (`(`, `)`), and periods (`.`). |
+ | **Name** | Yes | **LA-TravelTime** | Your logic app resource name, which must be unique across regions and can contain only letters, numbers, hyphens (`-`), underscores (`_`), parentheses (`(`, `)`), and periods (`.`). |
+
+1. Before you continue making selections, go to the **Plan** section. For **Plan type**, select **Consumption** to show only the settings for a Consumption logic app workflow, which runs in multi-tenant Azure Logic Apps.
+
+ The **Plan type** property also specifies the billing model to use.
+
+ | Plan type | Description |
+ |--|-|
+ | **Standard** | This logic app type is the default selection and runs in single-tenant Azure Logic Apps and uses the [Standard billing model](../../logic-apps/logic-apps-pricing.md#standard-pricing). |
+ | **Consumption** | This logic app type runs in global, multi-tenant Azure Logic Apps and uses the [Consumption billing model](../../logic-apps/logic-apps-pricing.md#consumption-pricing). |
+
+1. Now continue with the following selections:
+
+ | Property | Required | Value | Description |
+ |-|-|-|-|
+ | **Region** | Yes | **West US** | The Azure datacenter region for storing your app's information. This example deploys the sample logic app to the **West US** region in Azure. <br><br>**Note**: If your subscription is associated with an integration service environment, this list includes those environments. |
+ | **Enable log analytics** | Yes | **No** | This option appears and applies only when you select the **Consumption** logic app type. Change this option only when you want to enable diagnostic logging. For this tutorial, keep the default selection. |
+
+1. When you're done, select **Review + create**. After Azure validates the information about your logic app resource, select **Create**.
+
+1. After Azure deploys your app, select **Go to resource**.
+
+ Azure opens the workflow template selection pane, which shows an introduction video, commonly used triggers, and workflow template patterns.
+
+1. Scroll down past the video and common triggers sections to the **Templates** section, and select **Blank Logic App**.
+
+ ![Screenshot that shows the workflow template selection pane with "Blank Logic App" selected.](./../../logic-apps/media/tutorial-build-scheduled-recurring-logic-app-workflow/select-logic-app-template.png)
++
+## Configure the workflow parameters
+
+This Logic App uses parameters to store the specific pieces of information you need to run the batch deployment.
+
+1. On the workflow designer, on the toolbar, select __Parameters__ and configure them as follows:
+
+ :::image type="content" source="./media/how-to-use-event-grid-batch/parameters.png" alt-text="Screenshot of all the parameters required in the workflow.":::
+
+1. To create a parameter, use the __Add parameter__ option:
+
+ :::image type="content" source="./media/how-to-use-event-grid-batch/parameter.png" alt-text="Screenshot showing how to add one parameter in designer.":::
+
+1. Create the following parameters.
+
+ | Parameter | Description | Sample value |
+ | | -|- |
+    | `tenant_id` | Tenant ID where the endpoint is deployed | `00000000-0000-0000-0000-000000000000` |
+    | `client_id` | The client ID of the service principal used to invoke the endpoint | `00000000-0000-0000-0000-000000000000` |
+ | `client_secret` | The client secret of the service principal used to invoke the endpoint | `ABCDEFGhijkLMNOPQRstUVwz` |
+ | `endpoint_uri` | The endpoint scoring URI | `https://<endpoint_name>.<region>.inference.ml.azure.com/jobs` |
+
+ > [!IMPORTANT]
+ > `endpoint_uri` is the URI of the endpoint you are trying to execute. The endpoint must have a default deployment configured.
+
+ > [!TIP]
+ > Use the values configured at [Authenticating against batch endpoints](#authenticating-against-batch-endpoints).
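+
+   If you need to look up the scoring URI to use for `endpoint_uri`, the following CLI sketch retrieves it; the endpoint name is a placeholder.
+
+   ```azurecli
+   # Retrieve the scoring URI of the batch endpoint (replace <endpoint_name> with your endpoint's name).
+   az ml batch-endpoint show --name <endpoint_name> --query scoring_uri -o tsv
+   ```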
+
+## Add the trigger
+
+We want to trigger the Logic App each time a new file is created in a given folder (data asset) of a Storage Account. The Logic App also uses the information in the event to invoke the batch endpoint, passing the specific file to be processed.
+
+1. On the workflow designer, under the search box, select **Built-in**.
+
+1. In the search box, enter **event grid**, and select the trigger named **When a resource event occurs**.
+
+1. Configure the trigger as follows:
+
+ | Property | Value | Description |
+ |-|-|-|
+ | **Subscription** | Your subscription name | The subscription where the Azure Storage Account is placed. |
+ | **Resource Type** | `Microsoft.Storage.StorageAccounts` | The resource type emitting the events. |
+ | **Resource Name** | Your storage account name | The name of the Storage Account where the files will be generated. |
+ | **Event Type Item** | `Microsoft.Storage.BlobCreated` | The event type. |
+
+1. Click on __Add new parameter__ and select __Prefix Filter__. Add the value `/blobServices/default/containers/<container_name>/blobs/<path_to_data_folder>`.
+
+ > [!IMPORTANT]
+    > __Prefix Filter__ allows Event Grid to only notify the workflow when a blob is created in the specific path we indicated. In this case, we are assuming that files will be created by some external process in the folder `<path_to_data_folder>` inside the container `<container_name>` in the selected Storage Account. Configure this parameter to match the location of your data. Otherwise, the event will be fired for any file created at any location of the Storage Account. See [Event filtering for Event Grid](../../event-grid/event-filtering.md) for more details.
+
+ The trigger will look as follows:
+
+ :::image type="content" source="./media/how-to-use-event-grid-batch/create-trigger.png" alt-text="Screenshot of the trigger activity of the Logic App.":::
+
+## Configure the actions
+
+1. Click on __+ New step__.
+
+1. On the workflow designer, under the search box, select **Built-in** and then click on __HTTP__:
+
+1. Configure the action as follows:
+
+ | Property | Value | Notes |
+ |-|-|-|
+ | **Method** | `POST` | The HTTP method |
+ | **URI** | `concat('https://login.microsoftonline.com/', parameters('tenant_id'), '/oauth2/token')` | Click on __Add dynamic context__, then __Expression__, to enter this expression. |
+ | **Headers** | `Content-Type` with value `application/x-www-form-urlencoded` | |
+ | **Body** | `concat('grant_type=client_credentials&client_id=', parameters('client_id'), '&client_secret=', parameters('client_secret'), '&resource=https://ml.azure.com')` | Click on __Add dynamic context__, then __Expression__, to enter this expression. |
+
+ The action will look as follows:
+
+ :::image type="content" source="./media/how-to-use-event-grid-batch/authorize.png" alt-text="Screenshot of the authorize activity of the Logic App.":::
+
+1. Click on __+ New step__.
+
+1. On the workflow designer, under the search box, select **Built-in** and then click on __HTTP__:
+
+1. Configure the action as follows:
+
+ | Property | Value | Notes |
+ |-|-|-|
+ | **Method** | `POST` | The HTTP method |
+ | **URI** | `endpoint_uri` | Click on __Add dynamic context__, then select it under `parameters`. |
+ | **Headers** | `Content-Type` with value `application/json` | |
+ | **Headers** | `Authorization` with value `concat('Bearer ', body('Authorize')['access_token'])` | Click on __Add dynamic context__, then __Expression__, to enter this expression. |
+
+1. In the parameter __Body__, click on __Add dynamic context__, then __Expression__, to enter the following expression:
+
+ ```fx
+ replace('{
+ "properties": {
+ "InputData": {
+ "mnistinput": {
+ "JobInputType" : "UriFile",
+ "Uri" : "<JOB_INPUT_URI>"
+ }
+ }
+ }
+ }', '<JOB_INPUT_URI>', triggerBody()?[0]['data']['url'])
+ ```
+
+ The action will look as follows:
+
+ :::image type="content" source="./media/how-to-use-event-grid-batch/invoke.png" alt-text="Screenshot of the invoke activity of the Logic App.":::
+
+ > [!NOTE]
+    > Notice that this last action triggers the batch deployment job, but it doesn't wait for its completion. Logic Apps aren't long-running applications. If you need to wait for the job to complete, we recommend switching to [Invoking batch endpoints from Azure Data Factory](how-to-use-batch-azure-data-factory.md).
+
+1. Click on __Save__.
+
+1. The Logic App is ready to run and triggers automatically each time a new file is created under the indicated path. You can confirm that the app received the event by checking its __Run history__ (to reproduce the workflow's two HTTP calls manually, see the `curl` sketch after these steps):
+
+ :::image type="content" source="./media/how-to-use-event-grid-batch/invoke-history.png" alt-text="Screenshot of the invoke history of the Logic App.":::
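+
+To validate the two HTTP calls the workflow performs, you can reproduce them outside of Logic Apps. The following sketch mirrors the token request and the job invocation configured above; the tenant, credentials, endpoint URI, and blob URL are placeholders, and `jq` is assumed to be installed.
+
+```bash
+# 1. Request a token for the service principal (same call as the first HTTP action).
+TOKEN=$(curl -s -X POST "https://login.microsoftonline.com/<tenant_id>/oauth2/token" \
+  -H "Content-Type: application/x-www-form-urlencoded" \
+  -d "grant_type=client_credentials&client_id=<client_id>&client_secret=<client_secret>&resource=https://ml.azure.com" \
+  | jq -r '.access_token')
+
+# 2. Invoke the batch endpoint, passing a single file as input (same call as the second HTTP action).
+curl -X POST "<endpoint_uri>" \
+  -H "Content-Type: application/json" \
+  -H "Authorization: Bearer $TOKEN" \
+  -d '{
+        "properties": {
+          "InputData": {
+            "mnistinput": {
+              "JobInputType": "UriFile",
+              "Uri": "https://<storage_account>.blob.core.windows.net/<container_name>/<path_to_data_folder>/<file_name>"
+            }
+          }
+        }
+      }'
+```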
+
+## Next steps
+
+* [Invoking batch endpoints from Azure Data Factory](how-to-use-batch-azure-data-factory.md)
machine-learning How To Use Low Priority Batch https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/batch-inference/how-to-use-low-priority-batch.md
+
+ Title: "Using low priority VMs in batch deployments"
+
+description: Learn how to use low priority VMs to save costs when running batch jobs.
++++++ Last updated : 10/10/2022++++
+# Using low priority VMs in batch deployments
++
+Azure Machine Learning batch deployments support low priority VMs to reduce the cost of batch inference workloads. Low priority VMs enable a large amount of compute power to be used for a low cost. Low priority VMs take advantage of surplus capacity in Azure. When you specify low priority VMs in your compute clusters, Azure can use this surplus when available.
+
+The tradeoff for using them is that those VMs may not always be available to be allocated, or may be preempted at any time, depending on available capacity. For this reason, they are most suitable for batch and asynchronous processing workloads where the job completion time is flexible and the work is distributed across many VMs.
+
+Low priority VMs are offered at a significantly reduced price compared with dedicated VMs. For pricing details, see [Azure Machine Learning pricing](https://azure.microsoft.com/pricing/details/machine-learning/).
+
+## How batch deployment works with low priority VMs
+
+Azure Machine Learning Batch Deployments provides several capabilities that make it easy to consume and benefit from low priority VMs:
+
+- Batch deployment jobs consume low priority VMs by running on Azure Machine Learning compute clusters created with low priority VMs. Once a deployment is associated with a cluster that uses low priority VMs, all the jobs produced by that deployment use low priority VMs. Per-job configuration is not possible.
+- Batch deployment jobs automatically seek the target number of VMs in the available compute cluster based on the number of tasks to submit. If VMs are preempted or unavailable, batch deployment jobs attempt to replace the lost capacity by queuing the failed tasks to the cluster.
+- When a job is interrupted, it is resubmitted to run again. Rescheduling is done at job level, regardless of the progress. No checkpointing capability is provided.
+- Low priority VMs have a separate vCPU quota that differs from the one for dedicated VMs. Low-priority cores per region have a default limit of 100 to 3,000, depending on your subscription offer type. The number of low-priority cores per subscription can be increased and is a single value across VM families. See [Azure Machine Learning compute quotas](../how-to-manage-quotas.md#azure-machine-learning-compute).
+
+## Considerations and use cases
+
+Many batch workloads are a good fit for low priority VMs. However, using them may introduce further execution delays when VMs are deallocated. If there is flexibility in the time jobs have to complete, potential drops in capacity can be tolerated in exchange for a lower running cost.
+
+## Creating batch deployments with low priority VMs
+
+Batch deployment jobs consume low priority VMs by running on Azure Machine Learning compute clusters created with low priority VMs.
+
+> [!NOTE]
+> Once a deployment is associated with a cluster that uses low priority VMs, all the jobs produced by that deployment use low priority VMs. Per-job configuration is not possible.
+
+You can create a low priority Azure Machine Learning compute cluster as follows:
+
+ # [Azure ML CLI](#tab/cli)
+
+ Create a compute definition `YAML` like the following one:
+
+ __low-pri-cluster.yml__
+ ```yaml
+ $schema: https://azuremlschemas.azureedge.net/latest/amlCompute.schema.json
+ name: low-pri-cluster
+ type: amlcompute
+ size: STANDARD_DS3_v2
+ min_instances: 0
+ max_instances: 2
+ idle_time_before_scale_down: 120
+ tier: low_priority
+ ```
+
+ Create the compute using the following command:
+
+ ```bash
+ az ml compute create -f low-pri-cluster.yml
+ ```
+
+ # [Azure ML SDK for Python](#tab/sdk)
+
+    To create a new compute cluster with low priority VMs on which to create the deployment, use the following script:
+
+ ```python
+    # Import the compute entity class used below.
+    from azure.ai.ml.entities import AmlCompute
+
+    compute_name = "low-pri-cluster"
+ compute_cluster = AmlCompute(
+ name=compute_name,
+ description="Low priority compute cluster",
+ min_instances=0,
+ max_instances=2,
+ tier='LowPriority'
+ )
+
+ ml_client.begin_create_or_update(compute_cluster)
+ ```
+
+
+
+Once you have the new compute created, you can create or update your deployment to use the new cluster:
+
+ # [Azure ML CLI](#tab/cli)
+
+    To create or update a deployment on the new compute cluster, create a `YAML` configuration like the following and save it as `deployment.yml`:
+
+ ```yaml
+ $schema: https://azuremlschemas.azureedge.net/latest/batchDeployment.schema.json
+ endpoint_name: heart-classifier-batch
+ name: classifier-xgboost
+ description: A heart condition classifier based on XGBoost
+ model: azureml:heart-classifier@latest
+ compute: azureml:low-pri-cluster
+ resources:
+ instance_count: 2
+ max_concurrency_per_instance: 2
+ mini_batch_size: 2
+ output_action: append_row
+ output_file_name: predictions.csv
+ retry_settings:
+ max_retries: 3
+ timeout: 300
+ error_threshold: -1
+ logging_level: info
+ ```
+
+ Then, create the deployment with the following command:
+
+ ```bash
+    az ml batch-deployment create -f deployment.yml --endpoint-name heart-classifier-batch
+ ```
+
+ # [Azure ML SDK for Python](#tab/sdk)
+
+ To create or update a deployment under the new compute cluster, use the following script:
+
+ ```python
+    # Import the entity and constant classes used below. The `model` and `endpoint`
+    # objects are assumed to exist from the previous steps of your deployment.
+    from azure.ai.ml.entities import BatchDeployment, BatchRetrySettings
+    from azure.ai.ml.constants import BatchDeploymentOutputAction
+
+    deployment = BatchDeployment(
+ name="classifier-xgboost",
+ description="A heart condition classifier based on XGBoost",
+ endpoint_name=endpoint.name,
+ model=model,
+ compute=compute_name,
+ instance_count=2,
+ max_concurrency_per_instance=2,
+ mini_batch_size=2,
+ output_action=BatchDeploymentOutputAction.APPEND_ROW,
+ output_file_name="predictions.csv",
+ retry_settings=BatchRetrySettings(max_retries=3, timeout=300),
+ logging_level="info",
+ )
+
+ ml_client.batch_deployments.begin_create_or_update(deployment)
+ ```
+
+
+## View and monitor node deallocation
+
+New metrics are available in the [Azure portal](https://portal.azure.com) to monitor low priority VMs. These metrics are:
+
+- Preempted nodes
+- Preempted cores
+
+To view these metrics in the Azure portal:
+
+1. Navigate to your Azure Machine Learning workspace in the [Azure portal](https://portal.azure.com).
+2. Select **Metrics** from the **Monitoring** section.
+3. Select the metrics you desire from the **Metric** list.
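+
+You can also query these metrics programmatically. The following Azure CLI sketch is one way to do so; the workspace resource ID is a placeholder, and the metric name assumes the display name shown above (verify the exact name in your workspace's metric list).
+
+```azurecli
+# Query the "Preempted Cores" metric for the workspace over the last hour.
+az monitor metrics list \
+    --resource "/subscriptions/<subscription>/resourceGroups/<resource-group>/providers/Microsoft.MachineLearningServices/workspaces/<workspace>" \
+    --metric "Preempted Cores" \
+    --interval PT1M \
+    --offset 1h
+```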
++
+## Limitations
+
+- Once a deployment is associated with a cluster that uses low priority VMs, all the jobs produced by that deployment use low priority VMs. Per-job configuration is not possible.
+- Rescheduling is done at the job level, regardless of the progress. No checkpointing capability is provided.
+
machine-learning Concept Automated Ml https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-automated-ml.md
--+++ Last updated 03/15/2022
Classification is a type of supervised learning in which models learn using trai
The main goal of classification models is to predict which categories new data will fall into based on learnings from its training data. Common classification examples include fraud detection, handwriting recognition, and object detection.
-See an example of classification and automated machine learning in this Python notebook: [Bank Marketing](https://github.com/Azure/azureml-examples/blob/main/sdk/jobs/automl-standalone-jobs/automl-classification-task-bankmarketing/automl-classification-task-bankmarketing.ipynb).
+See an example of classification and automated machine learning in this Python notebook: [Bank Marketing](https://github.com/Azure/azureml-examples/blob/main/sdk/python/jobs/automl-standalone-jobs/automl-classification-task-bankmarketing/automl-classification-task-bankmarketing.ipynb).
### Regression
Similar to classification, regression tasks are also a common supervised learnin
Different from classification where predicted output values are categorical, regression models predict numerical output values based on independent predictors. In regression, the objective is to help establish the relationship among those independent predictor variables by estimating how one variable impacts the others. For example, automobile price based on features like, gas mileage, safety rating, etc.
-See an example of regression and automated machine learning for predictions in these Python notebooks: [Hardware Performance](https://github.com/Azure/azureml-examples/blob/main/sdk/jobs/automl-standalone-jobs/automl-regression-task-hardware-performance/automl-regression-task-hardware-performance.ipynb).
+See an example of regression and automated machine learning for predictions in these Python notebooks: [Hardware Performance](https://github.com/Azure/azureml-examples/blob/main/sdk/python/jobs/automl-standalone-jobs/automl-regression-task-hardware-performance/automl-regression-task-hardware-performance.ipynb).
### Time-series forecasting
Advanced forecasting configuration includes:
* configurable lags * rolling window aggregate features
-See an example of forecasting and automated machine learning in this Python notebook: [Energy Demand](https://github.com/Azure/azureml-examples/blob/main/sdk/jobs/automl-standalone-jobs/automl-forecasting-task-energy-demand/automl-forecasting-task-energy-demand-advanced.ipynb).
+See an example of forecasting and automated machine learning in this Python notebook: [Energy Demand](https://github.com/Azure/azureml-examples/blob/main/sdk/python/jobs/automl-standalone-jobs/automl-forecasting-task-energy-demand/automl-forecasting-task-energy-demand-advanced.ipynb).
### Computer vision
How-to articles provide additional detail into what functionality automated ML o
### Jupyter notebook samples
-Review detailed code examples and use cases in the [GitHub notebook repository for automated machine learning samples](https://github.com/Azure/azureml-examples/tree/main/python-sdk/tutorials/automl-with-azureml](https://github.com/Azure/azureml-examples/tree/main/sdk/jobs/automl-standalone-jobs).
+Review detailed code examples and use cases in the [GitHub notebook repository for automated machine learning samples](https://github.com/Azure/azureml-examples/tree/main/sdk/python/jobs/automl-standalone-jobs).
### Python SDK reference
machine-learning Concept Vulnerability Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-vulnerability-management.md
Compute instances get the latest VM images at the time of provisioning. Microsof
1. Recreate a compute instance to get the latest OS image (recommended) * Data and customizations such as installed packages that are stored on the instanceΓÇÖs OS and temporary disks will be lost.
- * [Store notebooks under "User files"](/azure/machine-learning/concept-compute-instance#accessing-files) to persist them when recreating your instance.
- * [Mount data using datasets and datastores](/azure/machine-learning/v1/concept-azure-machine-learning-architecture#datasets-and-datastores) to persist files when recreating your instance.
+ * [Store notebooks under "User files"](./concept-compute-instance.md#accessing-files) to persist them when recreating your instance.
+ * [Mount data using datasets and datastores](./v1/concept-azure-machine-learning-architecture.md#datasets-and-datastores) to persist files when recreating your instance.
* See [Compute Instance release notes](azure-machine-learning-ci-image-release-notes.md) for details on image releases. 1. Alternatively, regularly update OS and python packages.
You may install and run additional scanning software on compute instance to scan
* [Trivy](https://github.com/aquasecurity/trivy) may be used to discover OS and python package level vulnerabilities. * [ClamAV](https://www.clamav.net/) may be used to discover malware and comes pre-installed on compute instance. * Defender for Server agent installation is currently not supported.
-* Consider using [customization scripts](/azure/machine-learning/how-to-customize-compute-instance) for automation.
+* Consider using [customization scripts](./how-to-customize-compute-instance.md) for automation.
### Compute clusters
For code-based training experiences, you control which Azure Machine Learning en
* [Azure Machine Learning Base Images Repository](https://github.com/Azure/AzureML-Containers) * [Data Science Virtual Machine release notes](./data-science-virtual-machine/release-notes.md) * [AzureML Python SDK Release Notes](./azure-machine-learning-release-notes.md)
-* [Machine learning enterprise security](/azure/cloud-adoption-framework/ready/azure-best-practices/ai-machine-learning-enterprise-security)
+* [Machine learning enterprise security](/azure/cloud-adoption-framework/ready/azure-best-practices/ai-machine-learning-enterprise-security)
machine-learning Reference Ubuntu Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/data-science-virtual-machine/reference-ubuntu-vm.md
Azure Machine Learning is a fully managed cloud service that enables you to buil
After you sign in to Azure Machine Learning studio, you can use an experimentation canvas to build a logical flow for the machine learning algorithms. You also have access to a Jupyter notebook that is hosted on Azure Machine Learning and can work seamlessly with the experiments in Azure Machine Learning studio.
-Operationalize the machine learning models that you have built by wrapping them in a web service interface. Operationalizing machine learning models enables clients written in any language to invoke predictions from those models. For more information, see the [Machine Learning documentation](/azure/machine-learning/).
+Operationalize the machine learning models that you have built by wrapping them in a web service interface. Operationalizing machine learning models enables clients written in any language to invoke predictions from those models. For more information, see the [Machine Learning documentation](../index.yml).
You can also build your models in R or Python on the VM, and then deploy them in production on Azure Machine Learning. We have installed libraries in R (**AzureML**) and Python (**azureml**) to enable this functionality.
You can exit Rattle and R. Now you can modify the generated R script. Or, use th
## Next steps
-Have additional questions? Consider creating a [support ticket](https://azure.microsoft.com/support/create-ticket/).
+Have additional questions? Consider creating a [support ticket](https://azure.microsoft.com/support/create-ticket/).
machine-learning How To Attach Kubernetes Anywhere https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-attach-kubernetes-anywhere.md
AzureML Kubernetes compute supports two kinds of Kubernetes cluster:
With a simple cluster extension deployment on AKS or Arc Kubernetes cluster, Kubernetes cluster is seamlessly supported in AzureML to run training or inference workload. It's easy to enable and use an existing Kubernetes cluster for AzureML workload with the following simple steps:
-1. Prepare an [Azure Kubernetes Service cluster](/azure/aks/learn/quick-kubernetes-deploy-cli) or [Arc Kubernetes cluster](/azure/azure-arc/kubernetes/quickstart-connect-cluster).
+1. Prepare an [Azure Kubernetes Service cluster](../aks/learn/quick-kubernetes-deploy-cli.md) or [Arc Kubernetes cluster](../azure-arc/kubernetes/quickstart-connect-cluster.md).
1. [Deploy the AzureML extension](how-to-deploy-kubernetes-extension.md). 1. [Attach Kubernetes cluster to your Azure ML workspace](how-to-attach-kubernetes-to-workspace.md). 1. Use the Kubernetes compute target from CLI v2, SDK v2, and the Studio UI.
For any AzureML example, you only need to update the compute target name to your
* Explore model deployment with online endpoint samples with CLI v2 - [https://github.com/Azure/azureml-examples/tree/main/cli/endpoints/online/kubernetes](https://github.com/Azure/azureml-examples/tree/main/cli/endpoints/online/kubernetes) * Explore batch endpoint samples with CLI v2 - [https://github.com/Azure/azureml-examples/tree/main/cli/endpoints/batch](https://github.com/Azure/azureml-examples/tree/main/cli/endpoints/batch) * Explore training job samples with SDK v2 -[https://github.com/Azure/azureml-examples/tree/main/sdk/python/jobs](https://github.com/Azure/azureml-examples/tree/main/sdk/python/jobs)
-* Explore model deployment with online endpoint samples with SDK v2 -[https://github.com/Azure/azureml-examples/tree/main/sdk/python/endpoints/online/kubernetes](https://github.com/Azure/azureml-examples/tree/main/sdk/python/endpoints/online/kubernetes)
+* Explore model deployment with online endpoint samples with SDK v2 -[https://github.com/Azure/azureml-examples/tree/main/sdk/python/endpoints/online/kubernetes](https://github.com/Azure/azureml-examples/tree/main/sdk/python/endpoints/online/kubernetes)
machine-learning How To Auto Train Forecast https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-auto-train-forecast.md
Title: Set up AutoML for time-series forecasting
description: Set up Azure Machine Learning automated ML to train time-series forecasting models with the Azure Machine Learning Python SDK. --+++
machine-learning How To Configure Auto Features https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-configure-auto-features.md
Title: Featurization with automated machine learning description: Learn the data featurization settings in Azure Machine Learning and how to customize those features for your automated ML experiments.---+++
machine-learning How To Configure Auto Train https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-configure-auto-train.md
For more examples on how to do include AutoML in your pipelines, please check ou
## Next steps
-+ Learn more about [how and where to deploy a model](/azure/machine-learning/how-to-deploy-managed-online-endpoints).
++ Learn more about [how and where to deploy a model](./how-to-deploy-managed-online-endpoints.md).
machine-learning How To Create Manage Compute Instance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-create-manage-compute-instance.md
Following is a sample policy to default a shutdown schedule at 10 PM PST.
## Assign managed identity (preview)
-You can assign a system- or user-assigned [managed identity](/azure/active-directory/managed-identities-azure-resources/overview) to a compute instance, to authenticate against other Azure resources such as storage. Using managed identities for authentication helps improve workspace security and management. For example, you can allow users to access training data only when logged in to a compute instance. Or use a common user-assigned managed identity to permit access to a specific storage account.
+You can assign a system- or user-assigned [managed identity](../active-directory/managed-identities-azure-resources/overview.md) to a compute instance, to authenticate against other Azure resources such as storage. Using managed identities for authentication helps improve workspace security and management. For example, you can allow users to access training data only when logged in to a compute instance. Or use a common user-assigned managed identity to permit access to a specific storage account.
You can create compute instance with managed identity from Azure ML Studio:
To create a compute instance, you'll need permissions for the following actions:
* [Access the compute instance terminal](how-to-access-terminal.md) * [Create and manage files](how-to-manage-files.md) * [Update the compute instance to the latest VM image](concept-vulnerability-management.md#compute-instance)
-* [Submit a training job](v1/how-to-set-up-training-targets.md)
+* [Submit a training job](v1/how-to-set-up-training-targets.md)
machine-learning How To Devops Machine Learning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-devops-machine-learning.md
You should already have a resource group in Azure with [Azure Machine Learning](
:::image type="content" source="media/how-to-devops-machine-learning/machine-learning-select-variables.png" alt-text="Screenshot of variables option in pipeline edit. ":::
-1. Create a new variable, `Subscription_ID`, and select the checkbox **Keep this value secret**. Set the value to your [Azure portal subscription ID](/azure/azure-portal/get-subscription-tenant-id).
+1. Create a new variable, `Subscription_ID`, and select the checkbox **Keep this value secret**. Set the value to your [Azure portal subscription ID](../azure-portal/get-subscription-tenant-id.md).
1. Create a new variable for `Resource_Group` with the name of the resource group for Azure Machine Learning (example: `machinelearning`). 1. Create a new variable for `AzureML_Workspace_Name` with the name of your Azure ML workspace (example: `docs-ws`). 1. Select **Save** to save your variables.
steps:
## Clean up resources
-If you're not going to continue to use your pipeline, delete your Azure DevOps project. In Azure portal, delete your resource group and Azure Machine Learning instance.
+If you're not going to continue to use your pipeline, delete your Azure DevOps project. In Azure portal, delete your resource group and Azure Machine Learning instance.
machine-learning How To Generate Automl Training Code https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-generate-automl-training-code.md
print(returned_job.studio_url) # link to naviagate to submitted run in AzureML S
## Next steps
-* Learn more about [how and where to deploy a model](/azure/machine-learning/v1/how-to-deploy-and-where).
-* See how to [enable interpretability features](how-to-machine-learning-interpretability-automl.md) specifically within automated ML experiments.
+* Learn more about [how and where to deploy a model](./v1/how-to-deploy-and-where.md).
+* See how to [enable interpretability features](how-to-machine-learning-interpretability-automl.md) specifically within automated ML experiments.
machine-learning How To Github Actions Machine Learning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-github-actions-machine-learning.md
[!INCLUDE [v2](../../includes/machine-learning-dev-v2.md)] Get started with [GitHub Actions](https://docs.github.com/en/actions) to train a model on Azure Machine Learning.
-This article will teach you how to create a GitHub Actions workflow that builds and deploys a machine learning model to [Azure Machine Learning](/azure/machine-learning/overview-what-is-azure-machine-learning). You'll train a [scikit-learn](https://scikit-learn.org/) linear regression model on the NYC Taxi dataset.
+This article will teach you how to create a GitHub Actions workflow that builds and deploys a machine learning model to [Azure Machine Learning](./overview-what-is-azure-machine-learning.md). You'll train a [scikit-learn](https://scikit-learn.org/) linear regression model on the NYC Taxi dataset.
GitHub Actions uses a workflow YAML (.yml) file in the `/.github/workflows/` path in your repository. This definition contains the various steps and parameters that make up the workflow.
When your resource group and repository are no longer needed, clean up the resou
## Next steps > [!div class="nextstepaction"]
-> [Create production ML pipelines with Python SDK](tutorial-pipeline-python-sdk.md)
+> [Create production ML pipelines with Python SDK](tutorial-pipeline-python-sdk.md)
machine-learning How To High Availability Machine Learning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-high-availability-machine-learning.md
If you accidentally deleted your workspace it is currently not possible to recov
## Next steps
-To learn about repeatable infrastructure deployments with Azure Machine Learning, use an [Azure Resource Manager template](/azure/machine-learning/tutorial-create-secure-workspace-template).
+To learn about repeatable infrastructure deployments with Azure Machine Learning, use an [Azure Resource Manager template](./tutorial-create-secure-workspace-template.md).
machine-learning How To Identity Based Service Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-identity-based-service-authentication.md
There are two scenarios in which you can apply identity-based data access in Azu
- Accessing storage services - Training machine learning models
-The identity-based access allows you to use [role-based access controls (RBAC)](/azure/storage/blobs/assign-azure-role-data-access) to restrict which identities, such as users or compute resources, have access to the data.
+The identity-based access allows you to use [role-based access controls (RBAC)](../storage/blobs/assign-azure-role-data-access.md) to restrict which identities, such as users or compute resources, have access to the data.
### Accessing storage services
machine-learning How To Manage Registries https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-manage-registries.md
You can create registries in AzureML studio using the following steps:
> [!TIP] > Specifying the Azure Storage Account type and SKU is only available from the Azure CLI.
-Azure storage offers several types of storage accounts with different features and pricing. For more information, see the [Types of storage accounts](/azure/storage/common/storage-account-overview#types-of-storage-accounts) article. Once you identify the optimal storage account SKU that best suites your needs, [find the value for the appropriate SKU type](/rest/api/storagerp/srp_sku_types). In the YAML file, use your selected SKU type as the value of the `storage_account_type` field. This field is under each `location` in the `replication_locations` list.
+Azure storage offers several types of storage accounts with different features and pricing. For more information, see the [Types of storage accounts](../storage/common/storage-account-overview.md#types-of-storage-accounts) article. Once you identify the optimal storage account SKU that best suites your needs, [find the value for the appropriate SKU type](/rest/api/storagerp/srp_sku_types). In the YAML file, use your selected SKU type as the value of the `storage_account_type` field. This field is under each `location` in the `replication_locations` list.
-Next, decide if you want to use an [Azure Blob storage](/azure/storage/blobs/storage-blobs-introduction) account or [Azure Data Lake Storage Gen2](/azure/storage/blobs/data-lake-storage-introduction). To create Azure Data Lake Storage Gen2, set `storage_account_hns` to `true`. To create Azure Blob Storage, set `storage_account_hns` to `false`. The `storage_account_hns` field is under each `location` in the `replication_locations` list.
+Next, decide if you want to use an [Azure Blob storage](../storage/blobs/storage-blobs-introduction.md) account or [Azure Data Lake Storage Gen2](../storage/blobs/data-lake-storage-introduction.md). To create Azure Data Lake Storage Gen2, set `storage_account_hns` to `true`. To create Azure Blob Storage, set `storage_account_hns` to `false`. The `storage_account_hns` field is under each `location` in the `replication_locations` list.
> [!NOTE]
->The `hns` portion of `storage_account_hns` refers to the [hierarchical namespace](/azure/storage/blobs/data-lake-storage-namespace) capability of Azure Data Lake Storage Gen2 accounts.
+>The `hns` portion of `storage_account_hns` refers to the [hierarchical namespace](../storage/blobs/data-lake-storage-namespace.md) capability of Azure Data Lake Storage Gen2 accounts.
Below is an example YAML that demonstrates this advanced storage configuration:
replication_locations:
## Add users to the registry
-Decide if you want to allow users to only use assets (models, environments and components) from the registry or both use and create assets in the registry. Review [steps to assign a role](/azure/role-based-access-control/role-assignments-steps) if you aren't familiar how to manage permissions using [Azure role-based access control](/azure/role-based-access-control/overview).
+Decide if you want to allow users to only use assets (models, environments and components) from the registry or both use and create assets in the registry. Review [steps to assign a role](../role-based-access-control/role-assignments-steps.md) if you aren't familiar how to manage permissions using [Azure role-based access control](../role-based-access-control/overview.md).
### Allow users to use assets from the registry
Microsoft.MachineLearningServices/registries/assets/write | Create assets in reg
Microsoft.MachineLearningServices/registries/assets/delete| Delete assets in registries > [!WARNING]
-> The built-in __Contributor__ and __Owner__ roles allow users to create, update and delete registries. You must create a custom role if you want the user to create and use assets from the registry, but not create or update registries. Review [custom roles](/azure/role-based-access-control/custom-roles) to learn how to create custom roles from permissions.
+> The built-in __Contributor__ and __Owner__ roles allow users to create, update and delete registries. You must create a custom role if you want the user to create and use assets from the registry, but not create or update registries. Review [custom roles](../role-based-access-control/custom-roles.md) to learn how to create custom roles from permissions.
### Allow users to create and manage registries
Microsoft.MachineLearningServices/registries/delete | Allows the user to delete
## Next steps
-* [Learn how to share models, components and environments across workspaces with registries (preview)](./how-to-share-models-pipelines-across-workspaces-with-registries.md)
+* [Learn how to share models, components and environments across workspaces with registries (preview)](./how-to-share-models-pipelines-across-workspaces-with-registries.md)
machine-learning How To Manage Rest https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-manage-rest.md
providers/Microsoft.MachineLearningServices/workspaces/<YOUR-WORKSPACE-NAME>/com
-H "Authorization:Bearer <YOUR-ACCESS-TOKEN>" ```
-To create or overwrite a named compute resource, you'll use a PUT request. In the following, in addition to the now-familiar replacements of `YOUR-SUBSCRIPTION-ID`, `YOUR-RESOURCE-GROUP`, `YOUR-WORKSPACE-NAME`, and `YOUR-ACCESS-TOKEN`, replace `YOUR-COMPUTE-NAME`, and values for `location`, `vmSize`, `vmPriority`, `scaleSettings`, `adminUserName`, and `adminUserPassword`. As specified in the reference at [Machine Learning Compute - Create Or Update SDK Reference](/rest/api/azureml/2022-05-01/workspaces/create-or-update), the following command creates a dedicated, single-node Standard_D1 (a basic CPU compute resource) that will scale down after 30 minutes:
+To create or overwrite a named compute resource, you'll use a PUT request. In the following, in addition to the now-familiar replacements of `YOUR-SUBSCRIPTION-ID`, `YOUR-RESOURCE-GROUP`, `YOUR-WORKSPACE-NAME`, and `YOUR-ACCESS-TOKEN`, replace `YOUR-COMPUTE-NAME`, and values for `location`, `vmSize`, `vmPriority`, `scaleSettings`, `adminUserName`, and `adminUserPassword`. As specified in the reference at [Machine Learning Compute - Create Or Update SDK Reference](/rest/api/azureml/2022-10-01/workspaces/create-or-update), the following command creates a dedicated, single-node Standard_D1 (a basic CPU compute resource) that will scale down after 30 minutes:
```bash curl -X PUT \
machine-learning How To Monitor Online Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-monitor-online-endpoints.md
description: Monitor online endpoints and create alerts with Application Insights. --+++ Last updated 08/29/2022
machine-learning How To Troubleshoot Auto Ml https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-troubleshoot-auto-ml.md
Title: Troubleshoot automated ML experiments description: Learn how to troubleshoot and resolve issues in your automated machine learning experiments.---+++
machine-learning How To Troubleshoot Batch Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-troubleshoot-batch-endpoints.md
---+++ Last updated 03/31/2022 #Customer intent: As an ML Deployment Pro, I want to figure out why my batch endpoint doesn't run so that I can fix it.
machine-learning How To Understand Automated Ml https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-understand-automated-ml.md
Title: Evaluate AutoML experiment results
description: Learn how to view and evaluate charts and metrics for each of your automated machine learning experiment jobs. --+++ Last updated 04/08/2022
machine-learning How To Use Automated Ml For Ml Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-use-automated-ml-for-ml-models.md
description: Learn how to set up AutoML training jobs without a single line of c
---+++ Last updated 11/15/2021
machine-learning Reference Yaml Deployment Batch https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/reference-yaml-deployment-batch.md
---+++ Last updated 03/31/2022- # CLI (v2) batch deployment YAML schema
machine-learning Reference Yaml Endpoint Batch https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/reference-yaml-endpoint-batch.md
---+++ Last updated 10/21/2021- # CLI (v2) batch endpoint YAML schema
machine-learning Tutorial Automated Ml Forecast https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/tutorial-automated-ml-forecast.md
---+++ Last updated 10/21/2021 #Customer intent: As a non-coding data scientist, I want to use automated machine learning to build a demand forecasting model.
machine-learning Tutorial Create Secure Workspace Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/tutorial-create-secure-workspace-template.md
---+++ Last updated 12/02/2021
machine-learning Tutorial Create Secure Workspace https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/tutorial-create-secure-workspace.md
description: Create an Azure Machine Learning workspace and required Azure servi
---+++ Last updated 09/06/2022
machine-learning Tutorial First Experiment Automated Ml https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/tutorial-first-experiment-automated-ml.md
---+++ Last updated 10/21/2021 #Customer intent: As a non-coding data scientist, I want to use automated machine learning techniques so that I can build a classification model.
machine-learning Tutorial Train Deploy Image Classification Model Vscode https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/tutorial-train-deploy-image-classification-model-vscode.md
The first thing you have to do to build an application in Azure Machine Learning
$schema: https://azuremlschemas.azureedge.net/latest/workspace.schema.json name: TeamWorkspace location: WestUS2
- friendly_name: team-ml-workspace
+ display_name: team-ml-workspace
description: A workspace for training machine learning models tags: purpose: training
Like workspaces and compute targets, training jobs are defined using resource te
```yml $schema: https://azuremlschemas.azureedge.net/latest/commandJob.schema.json
-code:
- local_path: src
+code: src
command: > python train.py
-environment: azureml:AzureML-TensorFlow2.4-Cuda11-OpenMpi4.1.0-py36:1
-compute:
- target: azureml:gpu-cluster
+environment: azureml:AzureML-tensorflow-2.4-ubuntu18.04-py37-cuda11-gpu:48
+compute: azureml:gpu-cluster
experiment_name: tensorflow-mnist-example description: Train a basic neural network with TensorFlow on the MNIST dataset. ```
managed-grafana Known Limitations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/managed-grafana/known-limitations.md
Title: Azure Managed Grafana limitations
-description: List of known limitations in Azure Managed Grafana
+description: Learn about current limitations in Azure Managed Grafana.
Previously updated : 08/31/2022 Last updated : 10/18/2022 +
Managed Grafana has the following known limitations:
* Querying Azure Data Explorer may take a long time or return 50x errors. To resolve these issues, use a table format instead of a time series, shorten the time duration, or avoid having many panels querying the same data cluster that can trigger throttling.
-* API key usage isn't included in the audit log.
- * Users can be assigned the following Grafana Organization level roles: Admin, Editor, or Viewer. The Grafana Server Admin role isn't available to customers. * Some Data plane APIs require Grafana Server Admin permissions and can't be called by users. This includes the [Admin API](https://grafana.com/docs/grafana/latest/developers/http_api/admin/), the [User API](https://grafana.com/docs/grafana/latest/developers/http_api/user/#user-api) and the [Admin Organizations API](https://grafana.com/docs/grafana/latest/developers/http_api/org/#admin-organizations-api).
migrate How To Test Replicating Virtual Machines https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/how-to-test-replicating-virtual-machines.md
Title: Tests migrate replicating virtual machines
+ Title: Test migrate replicating virtual machines
description: Learn best practices for testing replicating virtual machines ms. Previously updated : 3/23/2022 Last updated : 10/18/2022+
Last updated 3/23/2022
This article helps you understand how to test replicating virtual machines. Test migration provides a way to test and validate migrations prior to the actual migration. - ## Prerequisites
-Before you get started, you need to perform the following steps:
+Before you get started, perform the following steps:
-- Create the Azure Migrate project.-- Deploy the appliance for your scenario and complete discovery of virtual machines.
+- [Create](create-manage-projects.md) an Azure Migrate project.
+- Deploy the appliance for your scenario and complete the discovery of virtual machines.
- Configure replication for one or more virtual machines that are to be migrated.+ > [!IMPORTANT]
-> You'll need to have at least one replicating virtual machine in the project before you can perform test migration.
+> You'll need at least one replicating virtual machine in the project before you can test the migration.
-To learn how to perform the above, review the following tutorials based on your scenarios
-- [Migrating VMware virtual machines to Azure with the agentless migration method](./tutorial-migrate-vmware.md).-- [Migrating Hyper-V VMs to Azure with Azure Migrate Server Migration](./tutorial-migrate-hyper-v.md)-- [Migrating machines as physical server to Azure with Azure Migrate.](./tutorial-migrate-physical-virtual-machines.md)
+Review the following tutorials based on your environment:
+- [Migrating VMware VMs with agentless migration](./tutorial-migrate-vmware.md).
+- [Migrating Hyper-V VMs to Azure](./tutorial-migrate-hyper-v.md).
+- [Migrating machines as physical servers to Azure](./tutorial-migrate-physical-virtual-machines.md)
## Setting up your test environment
-The requirements for a test environment can vary according to your needs. Azure Migrate gives customers complete flexibility to create their own test environment. An option to select the VNet is given during test migration. You can customize the setting of this VNet to create a test environment according to your need.
+The requirements for a test environment can vary according to your needs. Azure Migrate provides customers complete flexibility to create their own test environment. An option to select the VNet is provided during test migration. You can customize the setting of this VNet to create a test environment according to your requirement.
Furthermore, you can create 1:1 mapping between subnets of the VNet and Network Interface Cards (NICs) on VM, which gives more flexibility in creating the test environment. > [!Note] > Currently, the subnet selection feature is available only for agentless VMware migration scenario.
-The following logic is used for subnet selection for other scenarios (Migration from Hyper-V environment and physical server migration)
+The following logic is used for subnet selection for other scenarios (Migration from Hyper-V environment and physical server migration).
- If a target subnet (other than default) was specified as an input while enabling replication. Azure Migrate prioritizes using a subnet with the same name in the Virtual Network selected for the test migration. - If the subnet with the same name ins't found, then Azure Migrate selects the first subnet available alphabetically that isn't a Gateway/Application Gateway/Firewall/Bastion subnet. For example, - Suppose the target VNet is VNet-alpha and target subnet is Subnet-alpha for a replicating VM. VNet-beta is selected during test migration for this VM, then -
- - If VNet-beta has a subnet named Subnet-alpha, that subnet would be chosen for test migration.
+ - If VNet-beta has a subnet named Subnet-alpha, that subnet would be chosen for test migration.
- If VNet-beta doesn't have a Subnet-alpha, then the next alphabetically available subnet, suppose Subnet-beta, would be chosen if it isn't Gateway / Application Gateway / Firewall / Bastion subnet.
-## Precautions to take selecting the test migration virtual network
+## Precautions to take while selecting the test migration virtual network
-The test environment boundaries would depend on the network setting of the VNet you selected. The tested VM would behave exactly like it's supposed to run after migration. We don't recommend performing a test migration to a production virtual network. Problems such as duplicate VM or DNS entry changes can arise if the VNet selected for test migration has connections open to on premise VNet.
+The test environment boundaries would depend on the network setting of the VNet you selected. The tested VM would behave exactly like it's supposed to run after migration. Performing a test migration to a production virtual network is not recommended. If the VNet selected for test migration has connections open to the on-premises VNet, it may cause issues like duplication of VM or DNS entry changes.
-## Selecting test migration VNet while enabling replication (Agentless VMware migration)
+## Selecting test migration VNet while enabling replication (Agentless VMware migration)
- Select the VNet and subnet for test migration from the Target settings tab. These settings can be overridden later in Compute and Network tab of the replicating VM or while starting test migration of the replicating VM.
+ Select the VNet and subnet for test migration from the **Target settings** tab. These settings can be overridden later in the **Compute and Network** tab of the replicating VM or while starting test migration of the replicating VM.
:::image type="content" source="./media/how-to-test-replicating-virtual-machines/test-migration-subnet-selection-during-start-replication-flow.png" alt-text="Screenshot shows the Disks tab of the Replicate dialog box.":::
-## Changing test migration virtual network and subnet of a replicating machine (Agentless VMware migration)
+## Changing test migration virtual network and subnet of a replicating machine (Agentless VMware migration)
You can change the VNet and subnet of a replicating machine by following the steps below.
You can change the VNet and subnet of a replicating machine by following the ste
:::image type="content" source="./media/how-to-test-replicating-virtual-machines/test-migration-subnet-selection-step-1.png" alt-text="Screenshot shows the contents of replicating machine screen. It contains a list of replicating machine.":::
-2. Select on the Compute and Network option under the general heading.
+2. Select **Compute and Network** option under **General**.
:::image type="content" source="./media/how-to-test-replicating-virtual-machines/test-migration-subnet-selection-step-2.png" alt-text="Screenshot shows the location of network and compute option on the details page of replicating machine.":::
-3. Select the virtual network under the Test migration column. It's important to select the VNet in this drop down for test migration to be able to select subnet for each Network Interface Card (NIC) in the following steps.
+3. Select the virtual network from the **Test migration** column. It's important to select the VNet in this drop down for test migration to be able to select subnet for each Network Interface Card (NIC) in the following steps.
:::image type="content" source="./media/how-to-test-replicating-virtual-machines/test-migration-subnet-selection-step-3.png" alt-text="Screenshot shows where to select VNet in replicating machine's network and compute options.":::
-4. Select on the Network Interface Card's name to check its settings. You can select the subnet for each of the Network Interface Card (NIC) of the VM.
+4. Select the NIC's name to check its settings. You can select the subnet for each of the NICs of the VM.
:::image type="content" source="./media/how-to-test-replicating-virtual-machines/test-migration-subnet-selection-step-4.png" alt-text="Screenshot shows how to select a subnet for each Network Interface Card of replicating machine in the network and compute options of replicating machine.":::
-5. To change the settings, select on the pencil icon. Change the setting for the Network Interface Card (NIC) in the new form. Select OK.
+5. To change the settings, select edit. Change the setting for the NIC in the new form. Select **OK**.
:::image type="content" source="./media/how-to-test-replicating-virtual-machines/test-migration-subnet-selection-step-5.png" alt-text="Screenshot shows the content of the Network Interface Card page after clicking the pencil icon next to Network Interface Card's name in the network and compute screen.":::
-6. Select save. Changes aren't saved until you can see the colored square next to Network Interface Card's (NIC) name.
+6. Select **Save**. The changes aren't saved until the colored square appears next to the NIC's name.
:::image type="content" source="./media/how-to-test-replicating-virtual-machines/test-migration-subnet-selection-step-6.png" alt-text="Screenshot shows the network and compute options screen of replicating machine and highlights the save button.":::
mysql Concepts High Availability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/concepts-high-availability.md
Here are some considerations to keep in mind when you use high availability:
* Zone-redundant high availability can be set only when the flexible server is created. * High availability isn't supported in the burstable compute tier. * Restarting the primary database server to pick up static parameter changes also restarts the standby replica.
-* Read replicas aren't supported for HA servers.
* Data-in Replication isn't supported for HA servers. * GTID mode will be turned on as the HA solution uses GTID. Check whether your workload has [restrictions or limitations on replication with GTIDs](https://dev.mysql.com/doc/refman/5.7/en/replication-gtids-restrictions.html). >[!Note]
mysql Concepts Limitations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/concepts-limitations.md
The following are unsupported:
### Scale operations - Decreasing server storage provisioned is not supported.
-### Read replicas
-- Not supported with zone redundant HA configurations (both primary and standby).- ### Server version upgrades - Automated migration between major database engine versions is not supported. If you would like to upgrade the major version, take a [dump and restore](../concepts-migrate-dump-restore.md) it to a server that was created with the new engine version. ### Restoring a server - With point-in-time restore, new servers are created with the same compute and storage configurations as the source server it is based on. The newly restored server's compute can be scaled down after the server is created.-- Restoring a deleted server isn't supported. ## Features available in Single Server but not yet supported in Flexible Server Not all features available in Azure Database for MySQL - Single Server is available in Flexible Server yet. For complete list of feature comparison between single server and flexible server, refer [choosing the right MySQL Server option in Azure documentation.](../select-right-deployment-type.md#comparing-the-mysql-deployment-options-in-azure)
mysql Select Right Deployment Type https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/select-right-deployment-type.md
The main differences between these options are listed in the following table:
| Gtid support for read replicas | Supported | Supported | User Managed | | Cross-region support (Geo-replication) | Yes | Not supported | User Managed | | Hybrid scenarios | Supported with [Data-in Replication](./concepts-data-in-replication.md)| Supported with [Data-in Replication](../flexible-server/concepts-data-in-replication.md) | User Managed |
-| Gtid support for data-in replication | Supported | Supported | User Managed |
+| Gtid support for data-in replication | Supported | Not Supported | User Managed |
| Data-out replication | Not Supported | In preview | Supported | | [**Backup and Recovery**](../flexible-server/concepts-backup-restore.md) | | | | | Automated backups | Yes | Yes | No |
partner-solutions Dynatrace Create https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/partner-solutions/dynatrace/dynatrace-create.md
Use the Azure portal to find Dynatrace for Azure application.
| **Property** | **Description** | |--|-| | Subscription | Select the Azure subscription you want to use for creating the Dynatrace resource. You must have owner or contributor access.|
- | Resource group | Specify whether you want to create a new resource group or use an existing one. A [resource group](/azure-resource-manager/management/overview.md) is a container that holds related resources for an Azure solution. |
+ | Resource group | Specify whether you want to create a new resource group or use an existing one. A [resource group](/azure/azure-resource-manager/management/overview) is a container that holds related resources for an Azure solution. |
| Resource name | Specify a name for the Dynatrace resource. This name will be the friendly name of the new Dynatrace environment.| | Location | Select the region. Select the region where the Dynatrace resource in Azure and the Dynatrace environment is created.| | Pricing plan | Select from the list of available plans. |
Use the Azure portal to find Dynatrace for Azure application.
:::image type="content" source="media/dynatrace-create/dynatrace-metrics-and-logs.png" alt-text="Screenshot showing options for metrics and logs.":::
- - **Subscription activity logs** - These logs provide insight into the operations on your resources at the [control plane](/azure-resource-manager/management/control-plane-and-data-plane.md). Updates on service-health events are also included. Use the activity log to determine the what, who, and when for any write operations (PUT, POST, DELETE). There's a single activity log for each Azure subscription.
+ - **Subscription activity logs** - These logs provide insight into the operations on your resources at the [control plane](/azure/azure-resource-manager/management/control-plane-and-data-plane). Updates on service-health events are also included. Use the activity log to determine the what, who, and when for any write operations (PUT, POST, DELETE). There's a single activity log for each Azure subscription.
- - **Azure resource logs** - These logs provide insight into operations that were taken on an Azure resource at the [data plane](/azure-resource-manager/management/control-plane-and-data-plane.md). For example, getting a secret from a Key Vault is a data plane operation. Or, making a request to a database is also a data plane operation. The content of resource logs varies by the Azure service and resource type.
+ - **Azure resource logs** - These logs provide insight into operations that were taken on an Azure resource at the [data plane](/azure/azure-resource-manager/management/control-plane-and-data-plane). For example, getting a secret from a Key Vault is a data plane operation. Or, making a request to a database is also a data plane operation. The content of resource logs varies by the Azure service and resource type.
1. To send subscription level logs to Dynatrace, select **Send subscription activity logs**. If this option is left unchecked, none of the subscription level logs are sent to Dynatrace.
-1. To send Azure resource logs to Dynatrace, select **Send Azure resource logs for all defined resources**. The types of Azure resource logs are listed in [Azure Monitor Resource Log categories](/azure-monitor/essentials/resource-logs-categories.md).
+1. To send Azure resource logs to Dynatrace, select **Send Azure resource logs for all defined resources**. The types of Azure resource logs are listed in [Azure Monitor Resource Log categories](/azure/azure-monitor/essentials/resource-logs-categories).
When the checkbox for Azure resource logs is selected, by default, logs are forwarded for all resources. To filter the set of Azure resources sending logs to Dynatrace, use inclusion and exclusion rules and set the Azure resource tags:
partner-solutions Dynatrace How To Manage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/partner-solutions/dynatrace/dynatrace-how-to-manage.md
You can filter the list of resources by resource type, resource group name, regi
The column **Logs to Dynatrace** indicates whether the resource is sending logs to Dynatrace. If the resource isn't sending logs, this field indicates why logs aren't being sent. The reasons could be: -- _Resource doesn't support sending logs_ - Only resource types with monitoring log categories can be configured to send logs. See [supported categories](/azure-monitor/essentials/resource-logs-categories.md).-- _Limit of five diagnostic settings reached_ - Each Azure resource can have a maximum of five diagnostic settings. For more information, see [diagnostic settings](/azure-monitor/essentials/diagnostic-settings.md).
+- _Resource doesn't support sending logs_ - Only resource types with monitoring log categories can be configured to send logs. See [supported categories](/azure/azure-monitor/essentials/resource-logs-categories).
+- _Limit of five diagnostic settings reached_ - Each Azure resource can have a maximum of five diagnostic settings. For more information, see [diagnostic settings](/cli/azure/monitor/diagnostic-settings).
- _Error_ - The resource is configured to send logs to Dynatrace, but is blocked by an error. - _Logs not configured_ - Only Azure resources that have the appropriate resource tags are configured to send logs to Dynatrace. - _Agent not configured_ - Virtual machines without the Dynatrace OneAgent installed don't emit logs to Dynatrace.
partner-solutions Dynatrace Link To Existing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/partner-solutions/dynatrace/dynatrace-link-to-existing.md
Select **Next: Metrics and logs** to configure metrics and logs.
:::image type="content" source="media/dynatrace-link-to-existing/dynatrace-metrics-and-logs.png" alt-text="Screenshot showing options for metrics and logs.":::
- - **Subscription activity logs** - These logs provide insight into the operations on your resources at the [control plane](/azure-resource-manager/management/control-plane-and-data-plane.md). Updates on service-health events are also included. Use the activity log to determine the what, who, and when for any write operations (PUT, POST, DELETE). There\'s a single activity log for each Azure subscription.
+ - **Subscription activity logs** - These logs provide insight into the operations on your resources at the [control plane](/azure/azure-resource-manager/management/control-plane-and-data-plane). Updates on service-health events are also included. Use the activity log to determine the what, who, and when for any write operations (PUT, POST, DELETE). There\'s a single activity log for each Azure subscription.
- - **Azure resource logs** - These logs provide insight into operations that were taken on an Azure resource at the [data plane](/azure-resource-manager/management/control-plane-and-data-plane.md). For example, getting a secret from a Key Vault is a data plane operation. Or, making a request to a database is also a data plane operation. The content of resource logs varies by the Azure service and resource type.
+ - **Azure resource logs** - These logs provide insight into operations that were taken on an Azure resource at the [data plane](/azure/azure-resource-manager/management/control-plane-and-data-plane). For example, getting a secret from a Key Vault is a data plane operation. Or, making a request to a database is also a data plane operation. The content of resource logs varies by the Azure service and resource type.
1. To send Azure resource logs to Dynatrace, select **Send Azure resource logs for all defined resources**. The types of Azure resource logs are listed in [Azure Monitor Resource Log categories](../../azure-monitor/essentials/resource-logs-categories.md).
postgresql Concepts High Availability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/concepts-high-availability.md
Flexible servers that are configured with high availability, log data is replica
## Availability for non-HA servers For Flexible servers configured **without** high availability, the service still provides built-in availability, storage redundancy and resiliency to help to recover from any planned or unplanned downtime events. Uptime [SLA of 99.9%](https://azure.microsoft.com/support/legal/sla/postgresql) is offered in this non-HA configuration.
+
+During planned or unplanned failover events, if the server goes down, the service maintains high availability of the servers using the following automated procedure:
+
+1. A new compute Linux VM is provisioned.
+2. The storage with the data files is mapped to the new virtual machine.
+3. The PostgreSQL database engine is brought online on the new virtual machine.
+
+The picture below shows the transition during a VM and storage failure.
:::image type="content" source="./media/business-continuity/concepts-availability-without-zone-redundant-ha-architecture.png" alt-text="Diagram that shows availability without zone redundant ha - steady state." border="false" lightbox="./media/business-continuity/concepts-availability-without-zone-redundant-ha-architecture.png":::
postgresql Concepts Pgbouncer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/concepts-pgbouncer.md
For more details on the PgBouncer configurations, please see [pgbouncer.ini](htt
## Monitoring PgBouncer statistics
-PgBouncer also provides an **internal* database that you can connect to called `pgbouncer`. Once connected to the database you can execute `SHOW` commands that provide information on the current state of pgbouncer.
+PgBouncer also provides an **internal** database that you can connect to called `pgbouncer`. Once connected to the database you can execute `SHOW` commands that provide information on the current state of pgbouncer.
Steps to connect to `pgbouncer` database 1. Set `pgBouncer.stats_users` parameter to the name of an existing user (ex. "myUser"), and apply the changes.
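As an illustration only, the following Python sketch (using `psycopg2`, with placeholder server, user, and password values, and assuming the default PgBouncer port of 6432) connects to the internal `pgbouncer` database and runs `SHOW STATS`:

```python
# Minimal sketch: query PgBouncer's internal "pgbouncer" database for statistics.
# The server name, user, and password are placeholders; the user must match the
# pgBouncer.stats_users server parameter.
import psycopg2

conn = psycopg2.connect(
    host="<server-name>.postgres.database.azure.com",
    port=6432,                # PgBouncer port (placeholder; 6432 is the default)
    dbname="pgbouncer",       # PgBouncer's internal stats database
    user="myUser",
    password="<password>",
    sslmode="require",
)
conn.autocommit = True        # the PgBouncer admin console doesn't support transactions

with conn.cursor() as cur:
    cur.execute("SHOW STATS")  # other SHOW commands include POOLS, CLIENTS, SERVERS
    columns = [desc[0] for desc in cur.description]
    for row in cur.fetchall():
        print(dict(zip(columns, row)))

conn.close()
```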
postgresql Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/overview.md
Flexible servers are best suited for
- Zone redundant high availability - Managed maintenance windows
-## High availability
+## Architecture and high availability
The flexible server deployment model is designed to support high availability within a single availability zone and across multiple availability zones. The architecture separates compute and storage. The database engine runs on a container inside a Linux virtual machine, while data files reside on Azure storage. The storage maintains three locally redundant synchronous copies of the database files ensuring data durability.
-During planned or unplanned failover events, if the server goes down, the service maintains high availability of the servers using following automated procedure:
-
-1. A new compute Linux VM is provisioned.
-2. The storage with data files is mapped to the new Virtual Machine
-3. PostgreSQL database engine is brought online on the new Virtual Machine.
-
-Picture below shows transition for VM and storage failure.
-
- :::image type="content" source="./media/overview/overview-azure-postgres-flex-virtualmachine.png" alt-text="Flexible server - VM and storage failures":::
- If zone redundant high availability is configured, the service provisions and maintains a warm standby server across availability zone within the same Azure region. The data changes on the source server are synchronously replicated to the standby server to ensure zero data loss. With zone redundant high availability, once the planned or unplanned failover event is triggered, the standby server comes online immediately and is available to process incoming transactions. This allows the service resiliency from availability zone failure within an Azure region that supports multiple availability zones as shown in the picture below. :::image type="content" source="./media/business-continuity/concepts-zone-redundant-high-availability-architecture.png" alt-text="Zone redundant high availability":::
postgresql Concepts Certificate Rotation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/concepts-certificate-rotation.md
Azure Database for PostgreSQL Single Server planning the root certificate change
Historically, Azure database for PostgreSQL users could only use the predefined certificate to connect to their PostgreSQL server, which is located [here](https://www.digicert.com/CACerts/BaltimoreCyberTrustRoot.crt.pem). However, [Certificate Authority (CA) Browser forum](https://cabforum.org/) recently published reports of multiple certificates issued by CA vendors to be non-compliant.
-As per the industry's compliance requirements, CA vendors began revoking CA certificates for non-compliant CAs, requiring servers to use certificates issued by compliant CAs, and signed by CA certificates from those compliant CAs. Since Azure Database for MySQL used one of these non-compliant certificates, we needed to rotate the certificate to the compliant version to minimize the potential threat to your MySQL servers.
+As per the industry's compliance requirements, CA vendors began revoking CA certificates for non-compliant CAs, requiring servers to use certificates issued by compliant CAs, and signed by CA certificates from those compliant CAs. Since Azure Database for PostgreSQL used one of these non-compliant certificates, we needed to rotate the certificate to the compliant version to minimize the potential threat to your Postgres servers.
The new certificate is rolled out and in effect starting December, 2022 (12/2022).
postgresql Concepts Ssl Connection Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/concepts-ssl-connection-security.md
Azure Database for PostgreSQL prefers connecting your client applications to the
By default, the PostgreSQL database service is configured to require TLS connection. You can choose to disable requiring TLS if your client application does not support TLS connectivity. >[!NOTE]
-> Based on the feedback from customers we have extended the root certificate deprecation for our existing Baltimore Root CA till February 15, 2021 (02/15/2021).
+> Based on feedback from customers, we have extended the root certificate deprecation for our existing Baltimore Root CA until November 30, 2022 (11/30/2022).
> [!IMPORTANT]
-> SSL root certificate is set to expire starting February 15, 2021 (02/15/2021). Please update your application to use the [new certificate](https://cacerts.digicert.com/DigiCertGlobalRootG2.crt.pem). To learn more , see [planned certificate updates](concepts-certificate-rotation.md)
+> The SSL root certificate is set to expire starting December 2022 (12/2022). Please update your application to use the [new certificate](https://cacerts.digicert.com/DigiCertGlobalRootG2.crt.pem). To learn more, see [planned certificate updates](concepts-certificate-rotation.md)
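As an illustration, a client can pin the new root certificate when it connects. The following Python sketch (using `psycopg2`, with placeholder server, database, and credential values) enforces TLS with full certificate verification against the downloaded DigiCertGlobalRootG2 file:

```python
# Minimal sketch: connect to Azure Database for PostgreSQL - Single Server over TLS,
# verifying the server against the new DigiCertGlobalRootG2 root certificate.
# Host, database, user, and password values are placeholders.
import psycopg2

conn = psycopg2.connect(
    host="<server-name>.postgres.database.azure.com",
    dbname="<database>",
    user="<admin-user>@<server-name>",           # Single Server expects user@servername
    password="<password>",
    sslmode="verify-full",                       # require TLS and verify the server certificate
    sslrootcert="DigiCertGlobalRootG2.crt.pem",  # path to the downloaded root certificate
)

with conn.cursor() as cur:
    cur.execute("SELECT version();")
    print(cur.fetchone()[0])

conn.close()
```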
## Enforcing TLS connections
private-5g-core Collect Required Information For Private Mobile Network https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/collect-required-information-for-private-mobile-network.md
# Collect the required information to deploy a private mobile network
-This how-to guide takes you through the process of collecting the information you'll need to deploy a private mobile network through Azure Private 5G Core Preview.
+This how-to guide takes you through the process of collecting the information you'll need to deploy a private mobile network through Azure Private 5G Core Preview.
- You can use this information to deploy a private mobile network through the [Azure portal](how-to-guide-deploy-a-private-mobile-network-azure-portal.md). - Alternatively, you can use the information to quickly deploy a private mobile network with a single site using an [Azure Resource Manager template (ARM template)](deploy-private-mobile-network-with-site-arm-template.md). In this case, you'll also need to [collect information for the site](collect-required-information-for-a-site.md).
Collect all of the following values for the mobile network resource that will re
|The mobile country code for the private mobile network. |**Network configuration: Mobile country code (MCC)**| |The mobile network code for the private mobile network. |**Network configuration: Mobile network code (MNC)**|
-## Collect SIM values
+## Collect SIM and SIM group values
-Each SIM resource represents a physical SIM or eSIM that will be served by the private mobile network.
+Each SIM resource represents a physical SIM or eSIM that will be served by the private mobile network. Each SIM must be a member of exactly one SIM group. If you only have a small number of SIMs, you may want to add them all to the same SIM group. Alternatively, you can create multiple SIM groups to sort your SIMs. For example, you could categorize your SIMs by their purpose (such as SIMs used by specific UE types like cameras or cellphones), or by their on-site location.
-As part of creating your private mobile network, you can provision one or more SIMs that will use it. If you decide not to provision SIMs at this point, you can do so after deploying your private mobile network using the instructions in [Provision SIMs](provision-sims-azure-portal.md).
+As part of creating your private mobile network, you can provision one or more SIMs that will use it. If you decide not to provision SIMs at this point, you can do so after deploying your private mobile network using the instructions in [Provision SIMs](provision-sims-azure-portal.md). Likewise, if you need more than one SIM group, you can create additional SIM groups after you've deployed your private mobile network using the instructions in [Manage SIM groups](manage-sim-groups.md).
-If you want to provision SIMs as part of deploying your private mobile network, take the following steps.
+If you want to provision SIMs as part of deploying your private mobile network:
+
+1. Choose one of the following encryption types for the new SIM group to which all of the SIMs you provision will be added. Note that once the SIM group is created, the encryption type can't be changed:
+ - Microsoft-managed keys (MMK) that Microsoft manages internally for [Encryption at rest](/azure/security/fundamentals/encryption-atrest).
+ - Customer-managed keys (CMK) that you must manually configure.
+ You must create a Key URI in your [Azure Key Vault](/azure/key-vault/) and a [User-assigned identity](/azure/active-directory/managed-identities-azure-resources/overview) with read, wrap, and unwrap access to the key.
+ - The key must be configured to have an activation and expiration date and we recommend that you [configure cryptographic key auto-rotation in Azure Key Vault](/azure/key-vault/keys/how-to-configure-key-rotation).
+ - The SIM group accesses the key via the user-assigned identity.
+ - For additional information on configuring CMK for a SIM group, see [Configure customer-managed keys](/azure/cosmos-db/how-to-setup-cmk).
+
+1. Collect each of the values given in the following table for the SIM group you want to provision.
+
+ |Value |Field name in Azure portal |
+ |||
+ |The name for the SIM group resource. The name must only contain alphanumeric characters, dashes, and underscores. |**SIM group name**|
+ |The region that the SIM group belongs to.|**Region**|
+ |The mobile network that the SIM group belongs to.|**Mobile network**|
+ |The chosen encryption type for the SIM group. Microsoft-managed keys (MMK) by default, or customer-managed keys (CMK).|**Encryption Type**|
+ |The Azure Key Vault URI containing the customer-managed Key for the SIM group.|**Key URI**|
+ |The User-assigned identity for accessing the SIM group's customer-managed Key within the Azure Key Vault.|**User-assigned identity**|
-1. Choose a name for a new SIM group to which all of the SIMs you provision will be added. If you need more than one SIM group, you can create additional SIM groups after you've deployed your private mobile network using the instructions in [Manage SIM groups](manage-sim-groups.md).
-
1. Choose one of the following methods for provisioning your SIMs: - Manually entering values for each SIM into fields in the Azure portal. This option is best when provisioning a few SIMs.
The following example shows the file format you'll need if you want to provision
## Decide whether you want to use the default service and SIM policy
- Azure Private 5G Core offers a default service and SIM policy that allow all traffic in both directions for all the SIMs you provision. They're designed to allow you to quickly deploy a private mobile network and bring SIMs into service automatically, without the need to design your own policy control configuration.
+ Azure Private 5G Core offers a default service and SIM policy that allow all traffic in both directions for all the SIMs you provision. They're designed to allow you to quickly deploy a private mobile network and bring SIMs into service automatically, without the need to design your own policy control configuration.
-- If you're using the ARM template in [Quickstart: Deploy a private mobile network and site - ARM template](deploy-private-mobile-network-with-site-arm-template.md), the default service and SIM policy are automatically included.
+- If you're using the ARM template in [Quickstart: Deploy a private mobile network and site - ARM template](deploy-private-mobile-network-with-site-arm-template.md), the default service and SIM policy are automatically included.
- If you use the Azure portal to deploy your private mobile network, you'll be given the option of creating the default service and SIM policy. You'll need to decide whether the default service and SIM policy are suitable for the initial use of your private mobile network. You can find information on each of the specific settings for these resources in [Default service and SIM policy](default-service-sim-policy.md) if you need it.
- If they aren't suitable, you can choose to deploy the private mobile network without any services or SIM policies. In this case, any SIMs you provision won't be brought into service when you create your private mobile network. You'll need to create your own services and SIM policies later.
+- If they aren't suitable, you can choose to deploy the private mobile network without any services or SIM policies. In this case, any SIMs you provision won't be brought into service when you create your private mobile network. You'll need to create your own services and SIM policies later.
For detailed information on services and SIM policies, see [Policy control](policy-control.md).
private-5g-core Collect Required Information For Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/collect-required-information-for-service.md
You can specify a QoS for this service, or inherit the parent SIM Policy's QoS.
| The default Allocation and Retention Policy (ARP) priority level for this service. Flows with a higher ARP priority level preempt flows with a lower ARP priority level. The ARP priority level must be an integer between 1 (highest priority) and 15 (lowest priority). | **Allocation and Retention Priority level** |No. Defaults to 9.| | The default 5G QoS Indicator (5QI) or QoS class identifier (QCI) value for this service. The 5QI (for 5G networks) or QCI (for 4G networks) value identifies a set of QoS characteristics that control QoS forwarding treatment for QoS flows or EPS bearers. </br></br>We recommend you choose a 5QI or QCI value that corresponds to a non-GBR QoS flow or EPS bearer. These values are in the following ranges: 5-9; 69-70; 79-80. For more details, see 3GPP TS 23.501 for 5QI or 3GPP TS 23.203 for QCI.</br></br>You can also choose a non-standardized 5QI or QCI value.</p><p>Azure Private 5G Core doesn't support 5QI or QCI values corresponding to GBR or delay-critical GBR QoS flows or EPS bearers. Don't use a value in any of the following ranges: 1-4; 65-67; 71-76; 82-85. | **5QI/QCI** |No. Defaults to 9.| | The default preemption capability for QoS flows or EPS bearers for this service. The preemption capability of a QoS flow or EPS bearer controls whether it can preempt another QoS flow or EPS bearer with a lower priority level. You can choose from the following values: </br></br>- **May not preempt** </br>- **May preempt** | **Preemption capability** |No. Defaults to **May not preempt**.|
-| The default preemption vulnerability for QoS flows or EPS bearers for this service. The preemption vulnerability of a QoS flow or EPS bearer controls whether it can be preempted by another QoS flow or EPS bearer with a higher priority level. You can choose from the following values: </br></br>- **Preemptable** </br>- **Not preemptable** | **Preemption vulnerability** |No. Defaults to **Preemptable**.|
+| The default preemption vulnerability for QoS flows or EPS bearers for this service. The preemption vulnerability of a QoS flow or EPS bearer controls whether it can be preempted by another QoS flow or EPS bearer with a higher priority level. You can choose from the following values: </br></br>- **Preemptible** </br>- **Not Preemptible** | **Preemption vulnerability** |No. Defaults to **Preemptible**.|
## Data flow policy rule(s)
private-5g-core Collect Required Information For Sim Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/collect-required-information-for-sim-policy.md
Collect each of the values in the table below for the network scope.
|The default 5QI (for 5G) or QCI (for 4G) value for this data network. These values identify a set of QoS characteristics that control QoS forwarding treatment for QoS flows or EPS bearers.</br></br>We recommend you choose a 5QI or QCI value that corresponds to a non-GBR QoS flow or EPS bearer. These values are in the following ranges: 5-9; 69-70; 79-80. For more details, see 3GPP TS 23.501 for 5QI or 3GPP TS 23.203 for QCI.</br></br>You can also choose a non-standardized 5QI or QCI value. </br></br>Azure Private 5G Core Preview doesn't support 5QI or QCI values corresponding to GBR or delay-critical GBR QoS flows or EPS bearers. Don't use a value in any of the following ranges: 1-4; 65-67; 71-76; 82-85. | **5QI/QCI** | No. Defaults to 9. | |The default Allocation and Retention Policy (ARP) priority level for this data network. Flows with a higher ARP priority level preempt flows with a lower ARP priority level. The ARP priority level must be an integer between 1 (highest priority) and 15 (lowest priority). | **Allocation and Retention Priority level** | No. Defaults to 1. | |The default preemption capability for QoS flows or EPS bearers on this data network. The preemption capability of a QoS flow or EPS bearer controls whether it can preempt another QoS flow or EPS bearer with a lower priority level. </br></br>You can choose from the following values: </br></br>- **May preempt** </br>- **May not preempt** | **Preemption capability** | No. Defaults to **May not preempt**.|
-|The default preemption vulnerability for QoS flows or EPS bearers on this data network. The preemption vulnerability of a QoS flow or EPS bearer controls whether it can be preempted by another QoS flow or EPS bearer with a higher priority level. </br></br>You can choose from the following values: </br></br>- **Preemptable** </br>- **Not preemptable** | **Preemption vulnerability** | No. Defaults to **Preemptable**.|
+|The default preemption vulnerability for QoS flows or EPS bearers on this data network. The preemption vulnerability of a QoS flow or EPS bearer controls whether it can be preempted by another QoS flow or EPS bearer with a higher priority level. </br></br>You can choose from the following values: </br></br>- **Preemptible** </br>- **Not Preemptible** | **Preemption vulnerability** | No. Defaults to **Preemptible**.|
|The default PDU session type for SIMs using this data network. Azure Private 5G Core will use this type by default if the SIM doesn't request a specific type.| **Default session type** | No. Defaults to **IPv4**.| ## Next steps
private-5g-core Complete Private Mobile Network Prerequisites https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/complete-private-mobile-network-prerequisites.md
In this how-to guide, you'll carry out each of the tasks you need to complete be
## Get access to Azure Private 5G Core for your Azure subscription
-Contact your trials engineer and ask them to register your Azure subscription for access to Azure Private 5G Core. If you don't already have a trials engineer and are interested in trialing Azure Private 5G Core, contact your Microsoft account team, or express your interest through the [partner registration form](https://aka.ms/privateMECMSP).
+Contact your trials engineer and ask them to register your Azure subscription for access to Azure Private 5G Core. If you don't already have a trials engineer and are interested in trialing Azure Private 5G Core, contact your Microsoft account team, or express your interest through the [partner registration form](https://aka.ms/privateMECMSP).
Once your trials engineer has confirmed your access, register the Mobile Network resource provider (Microsoft.MobileNetwork) for your subscription, as described in [Azure resource providers and types](../azure-resource-manager/management/resource-providers-and-types.md).
For each site you're deploying, do the following:
DNS allows the translation between human-readable domain names and their associated machine-readable IP addresses. Depending on your requirements, you have the following options for configuring a DNS server for your data network: - If you need the UEs connected to this data network to resolve domain names, you must configure one or more DNS servers. You must use a private DNS server if you need DNS resolution of internal hostnames. If you're only providing internet access to public DNS names, you can use a public or private DNS server.-- If you don't need the UEs to perform DNS resolution, or if all UEs in the network will use their own locally configured DNS servers (instead of the DNS servers signalled to them by the packet core), you can omit this configuration.
+- If you don't need the UEs to perform DNS resolution, or if all UEs in the network will use their own locally configured DNS servers (instead of the DNS servers signaled to them by the packet core), you can omit this configuration.
## Prepare your networks
Do the following for each site you want to add to your private mobile network. D
| 6. | Configure a name, DNS name, and (optionally) time settings. | [Tutorial: Configure the device settings for Azure Stack Edge Pro with GPU](../databox-online/azure-stack-edge-gpu-deploy-set-up-device-update-time.md) | | 7. | Configure certificates for your Azure Stack Edge Pro device. | [Tutorial: Configure certificates for your Azure Stack Edge Pro with GPU](../databox-online/azure-stack-edge-gpu-deploy-configure-certificates.md) | | 8. | Activate your Azure Stack Edge Pro device. | [Tutorial: Activate Azure Stack Edge Pro with GPU](../databox-online/azure-stack-edge-gpu-deploy-activate.md) |
-| 9. | Run the diagnostics tests for the Azure Stack Edge Pro device in the local web UI, and verify they all pass. </br></br>You may see a warning about a disconnected, unused port. You should fix the issue if the warning relates to any of these ports:</br></br>- Port 5.</br>- Port 6.</br>- The port you chose to connect to the management network in Step 3.</br></br>For all other ports, you can ignore the warning.</br></br>If there are any errors, resolve them before continuing with the remaining steps. This includes any errors related to invalid gateways on unused ports. In this case, either delete the gateway IP address or set it to a valid gateway for the subnet. | [Run diagnostics, collect logs to troubleshoot Azure Stack Edge device issues](../databox-online/azure-stack-edge-gpu-troubleshoot.md) |
+| 9. | Run the diagnostics tests for the Azure Stack Edge Pro device in the local web UI, and verify they all pass. </br></br>You may see a warning about a disconnected, unused port. You should fix the issue if the warning relates to any of these ports:</br></br>- Port 5.</br>- Port 6.</br>- The port you chose to connect to the management network in Step 3.</br></br>For all other ports, you can ignore the warning. </br></br>If there are any errors, resolve them before continuing with the remaining steps. This includes any errors related to invalid gateways on unused ports. In this case, either delete the gateway IP address or set it to a valid gateway for the subnet. | [Run diagnostics, collect logs to troubleshoot Azure Stack Edge device issues](../databox-online/azure-stack-edge-gpu-troubleshoot.md) |
| 10. | Deploy an Azure Kubernetes Service on Azure Stack HCI (AKS-HCI) cluster on your Azure Stack Edge Pro device. At the end of this step, the Kubernetes cluster will be connected to Azure Arc and ready to host a packet core instance. During this step, you'll need to use the information you collected in [Allocate subnets and IP addresses](#allocate-subnets-and-ip-addresses). | Contact your trials engineer for detailed instructions. | - ## Next steps You can now collect the information you'll need to deploy your own private mobile network. -- [Collect the required information to deploy your own private mobile network](collect-required-information-for-private-mobile-network.md)
+- [Collect the required information to deploy your own private mobile network](collect-required-information-for-private-mobile-network.md)
private-5g-core Default Service Sim Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/default-service-sim-policy.md
The following tables provide the settings for the default SIM policy and its ass
|The default 5G QoS identifier (5QI) or QoS class identifier (QCI) value for this data network. The 5QI or QCI identifies a set of 5G or 4G QoS characteristics that control QoS forwarding treatment for QoS Flows, such as limits for Packet Error Rate. | *9* | |The default QoS Flow Allocation and Retention Policy (ARP) priority level for this data network. Flows with a higher ARP priority level preempt those with a lower ARP priority level. | *1* | |The default QoS Flow preemption capability for QoS Flows on this data network. The preemption capability of a QoS Flow controls whether it can preempt another QoS Flow with a lower priority level. | *May not preempt* |
-|The default QoS Flow preemption vulnerability for QoS Flows on this data network. The preemption vulnerability of a QoS Flow controls whether it can be preempted another QoS Flow with a higher priority level. | *Preemptable* |
+|The default QoS Flow preemption vulnerability for QoS Flows on this data network. The preemption vulnerability of a QoS Flow controls whether it can be preempted by another QoS Flow with a higher priority level. | *Preemptible* |
## Next steps
private-5g-core How To Guide Deploy A Private Mobile Network Azure Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/how-to-guide-deploy-a-private-mobile-network-azure-portal.md
In this step, you'll create the Mobile Network resource representing your privat
:::image type="content" source="media/how-to-guide-deploy-a-private-mobile-network-azure-portal/create-private-mobile-network-basics-tab.png" alt-text="Screenshot of the Azure portal showing the Basics configuration tab.":::
-1. On the **SIMs** configuration tab, select your chosen input method by selecting the appropriate option next to **How would you like to input the SIMs information?**. You can then input the information you collected in [Collect SIM values](collect-required-information-for-private-mobile-network.md#collect-sim-values).
-
+1. On the **SIMs** configuration tab, select your chosen input method by selecting the appropriate option next to **How would you like to input the SIMs information?**. You can then input the information you collected in [Collect SIM and SIM Group values](collect-required-information-for-private-mobile-network.md#collect-sim-and-sim-group-values).
+
- If you decided that you don't want to provision any SIMs at this point, select **Add SIMs later**. - If you select **Add manually**, a new **Add SIM** button will appear under **Enter SIM profile configurations**. Select it, fill out the fields with the correct settings for the first SIM you want to provision, and select **Add SIM**. Repeat this process for every additional SIM you want to provision.
In this step, you'll create the Mobile Network resource representing your privat
:::image type="content" source="media/how-to-guide-deploy-a-private-mobile-network-azure-portal/create-private-mobile-network-sims-tab.png" alt-text="Screenshot of the Azure portal showing the SIMs configuration tab."::: 1. If you're provisioning SIMs at this point, you'll need to take the following additional steps.
- 1. If you want to use the default service and SIM policy, set **Do you wish to create a basic, default SIM policy and assign it these SIMs?** to **Yes**, and then enter the name of the data network into the **Data network name** field that appears.
+ 1. If you want to use the default service and SIM policy, set **Do you wish to create a basic, default SIM policy and assign it to these SIMs?** to **Yes**, and then enter the name of the data network into the **Data network name** field that appears.
1. Under **Enter SIM group information**, set **SIM group name** to your chosen name for the SIM group to which your SIMs will be added.
+ 1. Under **Enter encryption details for SIM group**, set **Encryption type** to your chosen encryption type. Once the SIM group is created, you cannot change the encryption type.
+ 1. If you selected **Customer-managed keys (CMK)**, set the **Key URI** and **User-assigned identity** to those the SIM group will use for encryption.
1. Select **Review + create**. 1. Azure will now validate the configuration values you've entered. You should see a message indicating that your values have passed validation.
private-5g-core Manage Sim Groups https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/manage-sim-groups.md
You can view your existing SIM groups in the Azure portal.
## Create a SIM group
-You can create new SIM groups in the Azure portal. As part of creating a SIM group, you'll be given the option of provisioning new SIMs to add to your new SIM group. If you want to provision new SIMs, you'll need to [collect values for your SIMs](collect-required-information-for-private-mobile-network.md#collect-sim-values) before you start.
+You can create new SIM groups in the Azure portal. As part of creating a SIM group, you'll be given the option of provisioning new SIMs to add to your new SIM group. If you want to provision new SIMs, you'll need to [Collect SIM and SIM Group values](collect-required-information-for-private-mobile-network.md#collect-sim-and-sim-group-values) before you start.
To create a new SIM group:
To create a new SIM group:
- Set **Region** to **East US**. - Select your private mobile network from the **Mobile network** drop-down menu.
- :::image type="content" source="media/manage-sim-groups/create-sim-group-basics-tab.png" alt-text="Screenshot of the Azure portal showing the Basics configuration tab.":::
+ :::image type="content" source="media/manage-sim-groups/create-sim-group-basics-tab.png" alt-text="Screenshot of the Azure portal showing the Basics configuration tab.":::
+
+1. Select **Next: Encryption**.
+1. On the **Encryption** configuration tab, select your chosen encryption type next to **Encryption Type**. By default, Microsoft-managed keys (MMK) is selected. Once created, you cannot change the encryption type of a SIM group.
+
+ - If you leave **Microsoft-managed keys (MMK)** selected, you will not need to enter any more configuration information on this tab.
+ - If you select **Customer-managed Keys (CMK)**, a new set of fields will appear. You need to provide the Key URI and User-assigned identity created or collected in [Collect SIM and SIM Group values](collect-required-information-for-private-mobile-network.md#collect-sim-and-sim-group-values). These values can be updated as required after SIM group creation.
+ :::image type="content" source="media/manage-sim-groups/create-sim-group-encryption-tab.png" alt-text="Screenshot of the Azure portal showing the Encryption configuration tab.":::
1. Select **Next: SIMs**. 1. On the **SIMs** configuration tab, select your chosen input method by selecting the appropriate option next to **How would you like to input the SIMs information?**. You can then input the information you collected for your SIMs.
To create a new SIM group:
- If you select **Upload JSON file**, the **Upload SIM profile configurations** field will appear. Use this field to upload your chosen JSON file.
- :::image type="content" source="media/manage-sim-groups/create-sim-group-sims-tab.png" alt-text="Screenshot of the Azure portal showing the SIMs configuration tab.":::
+ :::image type="content" source="media/manage-sim-groups/create-sim-group-sims-tab.png" alt-text="Screenshot of the Azure portal showing the SIMs configuration tab.":::
1. Select **Review + create**.
-1. Azure will now validate the configuration values you've entered. You should see a message indicating that your values have passed validation.
+1. Azure will now validate the configuration values you've entered. You should see a message indicating that your values have passed validation.
:::image type="content" source="media/manage-sim-groups/create-sim-group-review-create-tab.png" alt-text="Screenshot of the Azure portal showing validated configuration for a SIM group."::: If the validation fails, you'll see an error message and the **Configuration** tab(s) containing the invalid configuration will be flagged with red dots. Select the flagged tab(s) and use the error messages to correct invalid configuration before returning to the **Review + create** tab.
-1. Once your configuration has been validated, you can select **Create** to create the SIM group. The Azure portal will display the following confirmation screen when the SIM group has been created.
+1. Once your configuration has been validated, you can select **Create** to create the SIM group. The Azure portal will display the following confirmation screen when the SIM group has been created.
:::image type="content" source="media/manage-sim-groups/sim-group-deployment-complete.png" alt-text="Screenshot of the Azure portal. It shows confirmation of the successful creation of a SIM group.":::
To create a new SIM group:
1. At this point, your SIMs will not have any assigned SIM policies and so will not be brought into service. If you want to begin using the SIMs, [assign a SIM policy to them](manage-existing-sims.md#assign-sim-policies). If you've configured static IP address allocation for your packet core instance(s), you may also want to [assign static IP addresses](manage-existing-sims.md#assign-static-ip-addresses) to the SIMs you've provisioned.
+## Modify a SIM group
+
+If you have configured CMK encryption for your SIM group, you can modify the key URI and user-assigned identity through the Azure portal.
+
+1. Navigate to the list of SIM groups in your private mobile network, as described in [View existing SIM groups](#view-existing-sim-groups).
+1. Select the SIM group you want to modify.
+1. Select the **Encryption** blade.
+
+ :::image type="content" source="media/manage-sim-groups/modify-sim-group-encryption.png" alt-text="Screenshot of the Azure portal showing the Encryption blade of a SIM group." lightbox="media/manage-sim-groups/modify-sim-group-encryption.png" :::
+
+1. If you want to change the key URI, enter the new value in the **Key URI** field using the values you collected in [Collect SIM and SIM group values](collect-required-information-for-private-mobile-network.md#collect-sim-and-sim-group-values).
+1. If you want to change the user-assigned identity, select the current **User-assigned identity** hyperlink. This opens a new window where you can select the new identity. Select the identity created in [Collect SIM and SIM group values](collect-required-information-for-private-mobile-network.md#collect-sim-and-sim-group-values), and then select **Add**.
+
+ :::image type="content" source="media/manage-sim-groups/modify-sim-group-identity-select.png" alt-text="Screenshot of the Azure portal showing the Select user assigned managed identity selection window for a SIM group." lightbox="media/manage-sim-groups/modify-sim-group-identity-select.png" :::
+
+1. Select **Next**.
+1. Review your changes. If they are correct, select **Create**.
+ ## Delete a SIM group
-You can delete SIM groups through the Azure portal.
+You can delete SIM groups through the Azure portal.
1. Navigate to the list of SIM groups in your private mobile network, as described in [View existing SIM groups](#view-existing-sim-groups). 1. Make sure any SIMs in the SIM group are no longer needed. When you delete the SIM group, all SIMs that it contains will be deleted.
private-5g-core Policy Control https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/policy-control.md
A *QoS profile* has two main components.
The required parameters for each 5QI value are pre-configured in the Next Generation Node B (gNB). > [!NOTE]
-> Azure Private 5G Core does not support dynamically assigned 5QI, where specific QoS characteristics are signalled to the gNB during QoS flow creation.
+> Azure Private 5G Core does not support dynamically assigned 5QI, where specific QoS characteristics are signaled to the gNB during QoS flow creation.
- An *allocation and retention priority (ARP) value*. The ARP value defines a QoS flow's importance. It controls whether a particular QoS flow should be retained or preempted when there's resource constraint in the network, based on its priority compared to other QoS flows. The QoS profile may also define whether the QoS flow can preempt or be preempted by another QoS flow.
private-5g-core Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/security.md
Azure Private 5G Core requires deployment of packet core instances onto a secure
## Encryption at rest
-The Azure Private 5G Core service stores all data securely at rest, including SIM credentials. It provides [encryption of data at rest](../security/fundamentals/encryption-overview.md) using platform-managed encryption keys, managed by Microsoft.
+The Azure Private 5G Core service stores all data securely at rest, including SIM credentials. It provides [encryption of data at rest](../security/fundamentals/encryption-overview.md) using platform-managed encryption keys, managed by Microsoft. Encryption at rest is used by default when [creating a SIM group](manage-sim-groups.md#create-a-sim-group).
-Azure Private 5G Core packet core instances are deployed on Azure Stack Edge devices, which handle [protection of data](../databox-online/azure-stack-edge-security.md#protect-your-data).
+Azure Private 5G Core packet core instances are deployed on Azure Stack Edge devices, which handle [protection of data](../databox-online/azure-stack-edge-security.md#protect-your-data).
+
+## Customer-managed key encryption at rest
+
+In addition to the default [Encryption at rest](#encryption-at-rest) using Microsoft-managed keys (MMK), you can optionally use customer-managed keys (CMK) when [creating a SIM group](manage-sim-groups.md#create-a-sim-group) or [when deploying a private mobile network](how-to-guide-deploy-a-private-mobile-network-azure-portal.md#deploy-your-private-mobile-network) to encrypt data with your own key.
+
+If you elect to use a CMK, you will need to create a Key URI in your [Azure Key Vault](/azure/key-vault/) and a [User-assigned identity](/azure/active-directory/managed-identities-azure-resources/overview) with read, wrap, and unwrap access to the key.
+
+- The key must be configured to have an activation and expiration date and we recommend that you [configure cryptographic key auto-rotation in Azure Key Vault](/azure/key-vault/keys/how-to-configure-key-rotation).
+- The SIM group accesses the key via the user-assigned identity.
+- For additional information on configuring CMK for a SIM group, see [Configure customer-managed keys](/azure/cosmos-db/how-to-setup-cmk).
+
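For illustration, the following Python sketch (assuming the `azure-identity` and `azure-keyvault-keys` packages, and placeholder vault and key names) creates an RSA key with the required activation and expiration dates. Granting the user-assigned identity read, wrap, and unwrap access to the key and configuring auto-rotation are separate Key Vault steps:

```python
# Minimal sketch: create a Key Vault key with activation and expiration dates,
# as required for customer-managed key (CMK) encryption of a SIM group.
# The vault name and key name are placeholders; access for the user-assigned
# identity and key auto-rotation must be configured separately in Key Vault.
from datetime import datetime, timedelta, timezone

from azure.identity import DefaultAzureCredential
from azure.keyvault.keys import KeyClient

credential = DefaultAzureCredential()
client = KeyClient(vault_url="https://<vault-name>.vault.azure.net", credential=credential)

now = datetime.now(timezone.utc)
key = client.create_rsa_key(
    "sim-group-cmk",
    size=2048,
    not_before=now,                       # activation date
    expires_on=now + timedelta(days=365), # expiration date
)

print(key.id)  # this key identifier is the value to use as the SIM group's Key URI
```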
+> [!IMPORTANT]
+> Once a SIM group is created, you cannot change the encryption type. However, if the SIM group uses CMK, you can update the key used for encryption.
## Write-only SIM credentials
private-5g-core Tutorial Create Example Set Of Policy Control Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/tutorial-create-example-set-of-policy-control-configuration.md
Azure Private 5G Core Preview provides flexible traffic handling. You can custom
In this tutorial, you'll learn how to: > [!div class="checklist"]
+>
> * Create a new service that filters packets based on their protocol. > * Create a new service that blocks traffic labeled with specific remote IP addresses and ports. > * Create a new service that limits the bandwidth of traffic on matching flows.
To create the service:
|**Allocation and Retention Priority level** | `2` | |**5QI/QCI** | `9` | |**Preemption capability** | Select **May not preempt**. |
- |**Preemption vulnerability** | Select **Not preemptable**. |
+ |**Preemption vulnerability** | Select **Not preemptible**. |
1. Under **Data flow policy rules**, select **Add a policy rule**.
To create the service:
|**Allocation and Retention Priority level** | `2` | |**5QI/QCI** | `9` | |**Preemption capability** | Select **May not preempt**. |
- |**Preemption vulnerability** | Select **Not preemptable**. |
+ |**Preemption vulnerability** | Select **Not preemptible**. |
1. Under **Data flow policy rules**, select **Add a policy rule**.
To create the service:
|**Allocation and Retention Priority level** | `2` | |**5QI/QCI** | `9` | |**Preemption capability** | Select **May not preempt**. |
- |**Preemption vulnerability** | Select **Preemptable**. |
+ |**Preemption vulnerability** | Select **Preemptible**. |
1. Under **Data flow policy rules**, select **Add a policy rule**.
Let's create the SIM policies.
|**5QI/QCI** | `9` | |**Allocation and Retention Priority level** | `9` | |**Preemption capability** | Select **May not preempt**. |
- |**Preemption vulnerability** | Select **Preemptable**. |
+ |**Preemption vulnerability** | Select **Preemptible**. |
|**Default session type** | Select **IPv4**. | 1. Select **Add**.
Let's create the SIM policies.
|**5QI/QCI** | `9` | |**Allocation and Retention Priority level** | `9` | |**Preemption capability** | Select **May not preempt**. |
- |**Preemption vulnerability** | Select **Preemptable**. |
+ |**Preemption vulnerability** | Select **Preemptible**. |
|**Default session type** | Select **IPv4**. | 1. Select **Add**.
In this step, we will provision two SIMs and assign a SIM policy to each one. Th
1. Select **Create** and then **Upload JSON from file**.
- :::image type="content" source="media/provision-sims-azure-portal/create-new-sim.png" alt-text="Screenshot of the Azure portal showing the Create button and its options - Upload J S O N from file and Add manually.":::
+ :::image type="content" source="media/provision-sims-azure-portal/create-new-sim.png" alt-text="Screenshot of the Azure portal showing the Create button and its options - Upload JSON from file and Add manually.":::
1. Select **Browse** and then select the JSON file you created at the start of this step.
-1. Under **SIM group name**, select **Create new** and then enter **SIMGroup1** into the field that appears.
+1. Under **SIM group name**, select **Create new** and then enter **SIMGroup1** into the field that appears.
1. Select **Add**. 1. The Azure portal will now begin deploying the SIM group and SIMs. When the deployment is complete, select **Go to resource group**.
purview Microsoft Purview Connector Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/microsoft-purview-connector-overview.md
The table below shows the supported capabilities for each data source. Select th
|| [Hive Metastore Database](register-scan-hive-metastore-source.md) | [Yes](register-scan-hive-metastore-source.md#register) | No | [Yes*](register-scan-hive-metastore-source.md#lineage) | No| No | || [MongoDB](register-scan-mongodb.md) | [Yes](register-scan-mongodb.md#register) | No | No | No | No | || [MySQL](register-scan-mysql.md) | [Yes](register-scan-mysql.md#register) | No | [Yes](register-scan-mysql.md#lineage) | No | No |
-|| [Oracle](register-scan-oracle-source.md) | [Yes](register-scan-oracle-source.md#register)| No | [Yes*](register-scan-oracle-source.md#lineage) | No| No |
+|| [Oracle](register-scan-oracle-source.md) | [Yes](register-scan-oracle-source.md#register)| [Yes](register-scan-oracle-source.md#scan) | [Yes*](register-scan-oracle-source.md#lineage) | No| No |
|| [PostgreSQL](register-scan-postgresql.md) | [Yes](register-scan-postgresql.md#register) | No | [Yes](register-scan-postgresql.md#lineage) | No | No | || [SAP Business Warehouse](register-scan-sap-bw.md) | [Yes](register-scan-sap-bw.md#register) | No | No | No | No | || [SAP HANA](register-scan-sap-hana.md) | [Yes](register-scan-sap-hana.md#register) | No | No | No | No |
purview Register Scan Oracle Source https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-oracle-source.md
This article outlines how to register Oracle, and how to authenticate and intera
|**Metadata Extraction**| **Full Scan** |**Incremental Scan**|**Scoped Scan**|**Classification**|**Access Policy**|**Lineage**|**Data Sharing**| |||||||||
-| [Yes](#register)| [Yes](#scan)| No | [Yes](#scan) | No | No| [Yes*](#lineage)| No |
+| [Yes](#register)| [Yes](#scan)| No | [Yes](#scan) | [Yes](#scan) | No| [Yes*](#lineage)| No |
\* *Besides the lineage on assets within the data source, lineage is also supported if the dataset is used as a source or sink in [Data Factory](how-to-link-azure-data-factory.md) or [Synapse pipeline](how-to-lineage-azure-synapse-analytics.md).*
role-based-access-control Conditions Prerequisites https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/conditions-prerequisites.md
Previously updated : 11/16/2021 Last updated : 10/19/2022 #Customer intent:
When using Azure CLI to add or update conditions, you must use the following ver
- [Azure CLI 2.18 or later](/cli/azure/install-azure-cli)
+## REST API
+
+When using the REST API to add or update conditions, you must use the following versions:
+
+- `2020-03-01-preview` or later
+- `2020-04-01-preview` or later if you want to utilize the `description` property for role assignments
+- `2022-04-01` is the first stable version
+
+For more information, see [API versions of Azure RBAC REST APIs](/rest/api/authorization/versions).
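For example, a role assignment request that carries a condition would pass one of the versions above on the query string; a minimal sketch with placeholder scope and assignment ID:

```http
PUT https://management.azure.com/{scope}/providers/Microsoft.Authorization/roleAssignments/{roleAssignmentId}?api-version=2022-04-01
```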
+ ## Permissions Just like role assignments, to add or update conditions, you must be signed in to Azure with a user that has the `Microsoft.Authorization/roleAssignments/write` and `Microsoft.Authorization/roleAssignments/delete` permissions, such as [User Access Administrator](built-in-roles.md#user-access-administrator) or [Owner](built-in-roles.md#owner).
role-based-access-control Conditions Role Assignments Rest https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/conditions-role-assignments-rest.md
Previously updated : 05/07/2021 Last updated : 10/19/2022
An [Azure role assignment condition](conditions-overview.md) is an additional ch
## Prerequisites
-For information about the prerequisites to add or edit role assignment conditions, see [Conditions prerequisites](conditions-prerequisites.md).
+You must use the following versions:
+
+- `2020-03-01-preview` or later
+- `2020-04-01-preview` or later if you want to utilize the `description` property for role assignments
+- `2022-04-01` is the first stable version
+
+For more information about the prerequisites to add or edit role assignment conditions, see [Conditions prerequisites](conditions-prerequisites.md).
## Add a condition
-To add a role assignment condition, use the [Role Assignments - Create](/rest/api/authorization/roleassignments/create) REST API. Set the `api-version` to `2020-03-01-preview` or later. If you want to utilize the `description` property for role assignments, use `2020-04-01-preview` or later. [Role Assignments - Create](/rest/api/authorization/roleassignments/create) includes the following parameters related to conditions.
+To add a role assignment condition, use the [Role Assignments - Create](/rest/api/authorization/role-assignments/create) REST API. [Role Assignments - Create](/rest/api/authorization/role-assignments/create) includes the following parameters related to conditions.
| Parameter | Type | Description | | | | |
To add a role assignment condition, use the [Role Assignments - Create](/rest/ap
Use the following request and body: ```http
-PUT https://management.azure.com/{scope}/providers/Microsoft.Authorization/roleAssignments/{roleAssignmentId}?api-version=2020-04-01-preview
+PUT https://management.azure.com/{scope}/providers/Microsoft.Authorization/roleAssignments/{roleAssignmentId}?api-version=2022-04-01
``` ```json
PUT https://management.azure.com/{scope}/providers/Microsoft.Authorization/roleA
The following example shows how to assign the [Storage Blob Data Reader](built-in-roles.md#storage-blob-data-reader) role with a condition. The condition checks whether container name equals 'blobs-example-container'. ```http
-PUT https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Authorization/roleAssignments/{roleAssignmentId}?api-version=2020-04-01-preview
+PUT https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Authorization/roleAssignments/{roleAssignmentId}?api-version=2022-04-01
``` ```json
The following shows an example of the output:
"scope": "/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}", "condition": "((!(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/read'})) OR (@Resource[Microsoft.Storage/storageAccounts/blobServices/containers:name] StringEquals 'blobs-example-container'))", "conditionVersion": "2.0",
- "createdOn": "2021-04-20T06:20:44.0205560Z",
- "updatedOn": "2021-04-20T06:20:44.2955371Z",
+ "createdOn": "2022-07-20T06:20:44.0205560Z",
+ "updatedOn": "2022-07-20T06:20:44.2955371Z",
"createdBy": null, "updatedBy": "{updatedById}", "delegatedManagedIdentityResourceId": null,
The following shows an example of the output:
## Edit a condition
-To edit an existing role assignment condition, use the same [Role Assignments - Create](/rest/api/authorization/roleassignments/create) REST API as you used to add the role assignment condition. The following shows an example JSON where `condition` and `description` are updated. Only the `condition`, `conditionVersion`, and `description` properties can be edited. You must specify the other properties to match the existing role assignment.
+To edit an existing role assignment condition, use the same [Role Assignments - Create](/rest/api/authorization/role-assignments/create) REST API as you used to add the role assignment condition. The following shows an example JSON where `condition` and `description` are updated. Only the `condition`, `conditionVersion`, and `description` properties can be edited. You must specify the other properties to match the existing role assignment.
```http
-PUT https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Authorization/roleAssignments/{roleAssignmentId}?api-version=2020-04-01-preview
+PUT https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Authorization/roleAssignments/{roleAssignmentId}?api-version=2022-04-01
``` ```json
PUT https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{
} ``` - ## List a condition
-To list a role assignment condition, use the [Role Assignments - List](/rest/api/authorization/roleassignments/list) API. Set the `api-version` to `2020-03-01-preview` or later. If you want to utilize the `description` property for role assignments, use `2020-04-01-preview` or later. For more information, see [List Azure role assignments using the REST API](role-assignments-list-rest.md).
+To list a role assignment condition, use the [Role Assignments](/rest/api/authorization/role-assignments) Get or List REST API. For more information, see [List Azure role assignments using the REST API](role-assignments-list-rest.md).
## Delete a condition To delete a role assignment condition, edit the role assignment condition and set both the condition and condition version to either an empty string or null.
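For example, a request body that clears a condition might keep the existing role definition and principal and null out the two condition fields; this is a sketch with placeholder IDs, not the complete body:

```json
{
  "properties": {
    "roleDefinitionId": "/subscriptions/{subscriptionId}/providers/Microsoft.Authorization/roleDefinitions/{roleDefinitionId}",
    "principalId": "{principalId}",
    "condition": null,
    "conditionVersion": null
  }
}
```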
-Alternatively, if you want to delete both the role assignment and the condition, you can use the [Role Assignments - Delete](/rest/api/authorization/roleassignments/delete) API. For more information, see [Remove Azure role assignments](role-assignments-remove.md).
+Alternatively, if you want to delete both the role assignment and the condition, you can use the [Role Assignments - Delete](/rest/api/authorization/role-assignments/delete) API. For more information, see [Remove Azure role assignments](role-assignments-remove.md).
## Next steps
role-based-access-control Conditions Role Assignments Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/conditions-role-assignments-template.md
Previously updated : 06/29/2021 Last updated : 10/19/2022
An [Azure role assignment condition](conditions-overview.md) is an additional ch
## Prerequisites
-For information about the prerequisites to add role assignment conditions, see [Conditions prerequisites](conditions-prerequisites.md).
+You must use the following versions:
+
+- `2020-03-01-preview` or later
+- `2020-04-01-preview` or later if you want to utilize the `description` property for role assignments
+- `2022-04-01` is the first stable version
+
+For more information about the prerequisites to add role assignment conditions, see [Conditions prerequisites](conditions-prerequisites.md).
## Add a condition
To use the template, you must specify the following input:
{ "name": "[parameters('roleAssignmentGuid')]", "type": "Microsoft.Authorization/roleAssignments",
- "apiVersion": "2020-04-01-preview", // API version to call the role assignment PUT.
+ "apiVersion": "2022-04-01", // API version to call the role assignment PUT.
"properties": { "roleDefinitionId": "[variables('StorageBlobDataReader')]", "principalId": "[parameters('principalId')]",
role-based-access-control Custom Roles Rest https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/custom-roles-rest.md
rest-api Previously updated : 07/28/2022 Last updated : 10/19/2022
If the [Azure built-in roles](built-in-roles.md) don't meet the specific needs of your organization, you can create your own custom roles. This article describes how to list, create, update, or delete custom roles using the REST API.
+## Prerequisites
+
+You must use the following version:
+
+- `2015-07-01` or later
+
+For more information, see [API versions of Azure RBAC REST APIs](/rest/api/authorization/versions).
+ ## List custom roles
-To list all custom roles in a directory, use the [Role Definitions - List](/rest/api/authorization/roledefinitions/list) REST API.
+To list all custom roles in a directory, use the [Role Definitions - List](/rest/api/authorization/role-definitions/list) REST API.
1. Start with the following request: ```http
- GET https://management.azure.com/providers/Microsoft.Authorization/roleDefinitions?api-version=2015-07-01&$filter={filter}
+ GET https://management.azure.com/providers/Microsoft.Authorization/roleDefinitions?api-version=2022-04-01&$filter={filter}
``` 1. Replace *{filter}* with the role type.
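For example, to return only custom roles, the filter `type eq 'CustomRole'` can be passed (URL-encoded); a sketch:

```http
GET https://management.azure.com/providers/Microsoft.Authorization/roleDefinitions?api-version=2022-04-01&$filter=type+eq+'CustomRole'
```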
To list all custom roles in a directory, use the [Role Definitions - List](/rest
## List custom roles at a scope
-To list custom roles at a scope, use the [Role Definitions - List](/rest/api/authorization/roledefinitions/list) REST API.
+To list custom roles at a scope, use the [Role Definitions - List](/rest/api/authorization/role-definitions/list) REST API.
1. Start with the following request: ```http
- GET https://management.azure.com/{scope}/providers/Microsoft.Authorization/roleDefinitions?api-version=2015-07-01&$filter={filter}
+ GET https://management.azure.com/{scope}/providers/Microsoft.Authorization/roleDefinitions?api-version=2022-04-01&$filter={filter}
``` 1. Within the URI, replace *{scope}* with the scope for which you want to list the roles.
To list custom roles at a scope, use the [Role Definitions - List](/rest/api/aut
## List a custom role definition by name
-To get information about a custom role by its display name, use the [Role Definitions - Get](/rest/api/authorization/roledefinitions/get) REST API.
+To get information about a custom role by its display name, use the [Role Definitions - Get](/rest/api/authorization/role-definitions/get) REST API.
1. Start with the following request: ```http
- GET https://management.azure.com/{scope}/providers/Microsoft.Authorization/roleDefinitions?api-version=2015-07-01&$filter={filter}
+ GET https://management.azure.com/{scope}/providers/Microsoft.Authorization/roleDefinitions?api-version=2022-04-01&$filter={filter}
``` 1. Within the URI, replace *{scope}* with the scope for which you want to list the roles.
To get information about a custom role by its display name, use the [Role Defini
## List a custom role definition by ID
-To get information about a custom role by its unique identifier, use the [Role Definitions - Get](/rest/api/authorization/roledefinitions/get) REST API.
+To get information about a custom role by its unique identifier, use the [Role Definitions - Get](/rest/api/authorization/role-definitions/get) REST API.
-1. Use the [Role Definitions - List](/rest/api/authorization/roledefinitions/list) REST API to get the GUID identifier for the role.
+1. Use the [Role Definitions - List](/rest/api/authorization/role-definitions/list) REST API to get the GUID identifier for the role.
1. Start with the following request: ```http
- GET https://management.azure.com/{scope}/providers/Microsoft.Authorization/roleDefinitions/{roleDefinitionId}?api-version=2015-07-01
+ GET https://management.azure.com/{scope}/providers/Microsoft.Authorization/roleDefinitions/{roleDefinitionId}?api-version=2022-04-01
``` 1. Within the URI, replace *{scope}* with the scope for which you want to list the roles.
To get information about a custom role by its unique identifier, use the [Role D
## Create a custom role
-To create a custom role, use the [Role Definitions - Create Or Update](/rest/api/authorization/roledefinitions/createorupdate) REST API. To call this API, you must be signed in with a user that is assigned a role that has the `Microsoft.Authorization/roleDefinitions/write` permission on all the `assignableScopes`. Of the built-in roles, only [Owner](built-in-roles.md#owner) and [User Access Administrator](built-in-roles.md#user-access-administrator) include this permission.
+To create a custom role, use the [Role Definitions - Create Or Update](/rest/api/authorization/role-definitions/create-or-update) REST API. To call this API, you must be signed in with a user that is assigned a role that has the `Microsoft.Authorization/roleDefinitions/write` permission on all the `assignableScopes`. Of the built-in roles, only [Owner](built-in-roles.md#owner) and [User Access Administrator](built-in-roles.md#user-access-administrator) include this permission.
1. Review the list of [resource provider operations](resource-provider-operations.md) that are available to create the permissions for your custom role.
To create a custom role, use the [Role Definitions - Create Or Update](/rest/api
1. Start with the following request and body: ```http
- PUT https://management.azure.com/{scope}/providers/Microsoft.Authorization/roleDefinitions/{roleDefinitionId}?api-version=2015-07-01
+ PUT https://management.azure.com/{scope}/providers/Microsoft.Authorization/roleDefinitions/{roleDefinitionId}?api-version=2022-04-01
``` ```json
To create a custom role, use the [Role Definitions - Create Or Update](/rest/api
## Update a custom role
-To update a custom role, use the [Role Definitions - Create Or Update](/rest/api/authorization/roledefinitions/createorupdate) REST API. To call this API, you must be signed in with a user that is assigned a role that has the `Microsoft.Authorization/roleDefinitions/write` permission on all the `assignableScopes`. Of the built-in roles, only [Owner](built-in-roles.md#owner) and [User Access Administrator](built-in-roles.md#user-access-administrator) include this permission.
+To update a custom role, use the [Role Definitions - Create Or Update](/rest/api/authorization/role-definitions/create-or-update) REST API. To call this API, you must be signed in with a user that is assigned a role that has the `Microsoft.Authorization/roleDefinitions/write` permission on all the `assignableScopes`. Of the built-in roles, only [Owner](built-in-roles.md#owner) and [User Access Administrator](built-in-roles.md#user-access-administrator) include this permission.
-1. Use the [Role Definitions - List](/rest/api/authorization/roledefinitions/list) or [Role Definitions - Get](/rest/api/authorization/roledefinitions/get) REST API to get information about the custom role. For more information, see the earlier [List custom roles](#list-custom-roles) section.
+1. Use the [Role Definitions - List](/rest/api/authorization/role-definitions/list) or [Role Definitions - Get](/rest/api/authorization/role-definitions/get) REST API to get information about the custom role. For more information, see the earlier [List custom roles](#list-custom-roles) section.
1. Start with the following request: ```http
- PUT https://management.azure.com/{scope}/providers/Microsoft.Authorization/roleDefinitions/{roleDefinitionId}?api-version=2015-07-01
+ PUT https://management.azure.com/{scope}/providers/Microsoft.Authorization/roleDefinitions/{roleDefinitionId}?api-version=2022-04-01
``` 1. Within the URI, replace *{scope}* with the first `assignableScopes` of the custom role.
To update a custom role, use the [Role Definitions - Create Or Update](/rest/api
## Delete a custom role
-To delete a custom role, use the [Role Definitions - Delete](/rest/api/authorization/roledefinitions/delete) REST API. To call this API, you must be signed in with a user that is assigned a role that has the `Microsoft.Authorization/roleDefinitions/delete` permission on all the `assignableScopes`. Of the built-in roles, only [Owner](built-in-roles.md#owner) and [User Access Administrator](built-in-roles.md#user-access-administrator) include this permission.
+To delete a custom role, use the [Role Definitions - Delete](/rest/api/authorization/role-definitions/delete) REST API. To call this API, you must be signed in with a user that is assigned a role that has the `Microsoft.Authorization/roleDefinitions/delete` permission on all the `assignableScopes`. Of the built-in roles, only [Owner](built-in-roles.md#owner) and [User Access Administrator](built-in-roles.md#user-access-administrator) include this permission.
1. Remove any role assignments that use the custom role. For more information, see [Find role assignments to delete a custom role](custom-roles.md#find-role-assignments-to-delete-a-custom-role).
-1. Use the [Role Definitions - List](/rest/api/authorization/roledefinitions/list) or [Role Definitions - Get](/rest/api/authorization/roledefinitions/get) REST API to get the GUID identifier of the custom role. For more information, see the earlier [List custom roles](#list-custom-roles) section.
+1. Use the [Role Definitions - List](/rest/api/authorization/role-definitions/list) or [Role Definitions - Get](/rest/api/authorization/role-definitions/get) REST API to get the GUID identifier of the custom role. For more information, see the earlier [List custom roles](#list-custom-roles) section.
1. Start with the following request: ```http
- DELETE https://management.azure.com/{scope}/providers/Microsoft.Authorization/roleDefinitions/{roleDefinitionId}?api-version=2015-07-01
+ DELETE https://management.azure.com/{scope}/providers/Microsoft.Authorization/roleDefinitions/{roleDefinitionId}?api-version=2022-04-01
    ``` 1. Within the URI, replace *{scope}* with the scope at which you want to delete the custom role.
role-based-access-control Custom Roles Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/custom-roles-template.md
Previously updated : 12/16/2020 Last updated : 10/19/2022
To create a custom role, you must have:
- Permissions to create custom roles, such as [Owner](built-in-roles.md#owner) or [User Access Administrator](built-in-roles.md#user-access-administrator).
+You must use the following version:
+
+- `2018-07-01` or later
+
+For more information, see [API versions of Azure RBAC REST APIs](/rest/api/authorization/versions).
+ ## Review the template The template used in this article is from [Azure Quickstart Templates](https://azure.microsoft.com/resources/templates/create-role-def). The template has four parameters and a resources section. The four parameters are:
Here are the changes you would need to make to the previous Quickstart template
"resources": [ { "type": "Microsoft.Authorization/roleDefinitions",
- "apiVersion": "2018-07-01",
+ "apiVersion": "2022-04-01",
"name": "[parameters('roleDefName')]", "properties": { ...
role-based-access-control Deny Assignments Rest https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/deny-assignments-rest.md
rest-api Previously updated : 01/24/2022 Last updated : 10/19/2022
To get information about a deny assignment, you must have:
- `Microsoft.Authorization/denyAssignments/read` permission, which is included in most [Azure built-in roles](built-in-roles.md).
+You must use the following versions:
+
+- `2018-07-01-preview` or later
+- `2022-04-01` is the first stable version
+
+For more information, see [API versions of Azure RBAC REST APIs](/rest/api/authorization/versions).
+ ## List a single deny assignment
+To list a single deny assignment, use the [Deny Assignments - Get](/rest/api/authorization/deny-assignments/get) REST API.
+ 1. Start with the following request: ```http
- GET https://management.azure.com/{scope}/providers/Microsoft.Authorization/denyAssignments/{deny-assignment-id}?api-version=2018-07-01-preview
+ GET https://management.azure.com/{scope}/providers/Microsoft.Authorization/denyAssignments/{deny-assignment-id}?api-version=2022-04-01
``` 1. Within the URI, replace *{scope}* with the scope for which you want to list the deny assignments.
To get information about a deny assignment, you must have:
## List multiple deny assignments
+To list multiple deny assignments, use the [Deny Assignments - List](/rest/api/authorization/deny-assignments/list) REST API.
+ 1. Start with one of the following requests: ```http
- GET https://management.azure.com/{scope}/providers/Microsoft.Authorization/denyAssignments?api-version=2018-07-01-preview
+ GET https://management.azure.com/{scope}/providers/Microsoft.Authorization/denyAssignments?api-version=2022-04-01
``` With optional parameters: ```http
- GET https://management.azure.com/{scope}/providers/Microsoft.Authorization/denyAssignments?api-version=2018-07-01-preview&$filter={filter}
+ GET https://management.azure.com/{scope}/providers/Microsoft.Authorization/denyAssignments?api-version=2022-04-01&$filter={filter}
``` 1. Within the URI, replace *{scope}* with the scope for which you want to list the deny assignments.
To get information about a deny assignment, you must have:
1. Use the following request: ```http
- GET https://management.azure.com/providers/Microsoft.Authorization/denyAssignments?api-version=2018-07-01-preview&$filter={filter}
+ GET https://management.azure.com/providers/Microsoft.Authorization/denyAssignments?api-version=2022-04-01&$filter={filter}
``` 1. Replace *{filter}* with the condition that you want to apply to filter the deny assignment list. A filter is required.
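For example, the documented `atScope()` filter limits the results to deny assignments that apply at the specified scope; a sketch with a placeholder subscription ID:

```http
GET https://management.azure.com/subscriptions/{subscriptionId}/providers/Microsoft.Authorization/denyAssignments?api-version=2022-04-01&$filter=atScope()
```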
role-based-access-control Elevate Access Global Admin https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/elevate-access-global-admin.md
Previously updated : 09/10/2021 Last updated : 10/19/2022
To remove the User Access Administrator role assignment for yourself or another
## REST API
+### Prerequisites
+
+You must use the following versions:
+
+- `2015-07-01` or later to list and remove role assignments
+- `2016-07-01` or later to elevate access
+- `2018-07-01-preview` or later to list deny assignments
+
+For more information, see [API versions of Azure RBAC REST APIs](/rest/api/authorization/versions).
+ ### Elevate access for a Global Administrator Use the following basic steps to elevate access for a Global Administrator using the REST API.
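The elevation itself is a single POST to the `elevateAccess` action at tenant scope; a minimal sketch, using `2016-07-01`, the earliest version the prerequisites above list for elevating access:

```http
POST https://management.azure.com/providers/Microsoft.Authorization/elevateAccess?api-version=2016-07-01
```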
Use the following basic steps to elevate access for a Global Administrator using
You can list all of the role assignments for a user at root scope (`/`). -- Call [GET roleAssignments](/rest/api/authorization/roleassignments/listforscope) where `{objectIdOfUser}` is the object ID of the user whose role assignments you want to retrieve.
+- Call [Role Assignments - List For Scope](/rest/api/authorization/role-assignments/list-for-scope) where `{objectIdOfUser}` is the object ID of the user whose role assignments you want to retrieve.
```http
- GET https://management.azure.com/providers/Microsoft.Authorization/roleAssignments?api-version=2015-07-01&$filter=principalId+eq+'{objectIdOfUser}'
+ GET https://management.azure.com/providers/Microsoft.Authorization/roleAssignments?api-version=2022-04-01&$filter=principalId+eq+'{objectIdOfUser}'
``` ### List deny assignments at root scope (/)
You can list all of the deny assignments for a user at root scope (`/`).
- Call GET denyAssignments where `{objectIdOfUser}` is the object ID of the user whose deny assignments you want to retrieve. ```http
- GET https://management.azure.com/providers/Microsoft.Authorization/denyAssignments?api-version=2018-07-01-preview&$filter=gdprExportPrincipalId+eq+'{objectIdOfUser}'
+ GET https://management.azure.com/providers/Microsoft.Authorization/denyAssignments?api-version=2022-04-01&$filter=gdprExportPrincipalId+eq+'{objectIdOfUser}'
``` ### Remove elevated access When you call `elevateAccess`, you create a role assignment for yourself, so to revoke those privileges you need to remove the User Access Administrator role assignment for yourself at root scope (`/`).
-1. Call [GET roleDefinitions](/rest/api/authorization/roledefinitions/get) where `roleName` equals User Access Administrator to determine the name ID of the User Access Administrator role.
+1. Call [Role Definitions - Get](/rest/api/authorization/role-definitions/get) where `roleName` equals User Access Administrator to determine the name ID of the User Access Administrator role.
```http
- GET https://management.azure.com/providers/Microsoft.Authorization/roleDefinitions?api-version=2015-07-01&$filter=roleName+eq+'User Access Administrator'
+ GET https://management.azure.com/providers/Microsoft.Authorization/roleDefinitions?api-version=2022-04-01&$filter=roleName+eq+'User Access Administrator'
``` ```json
When you call `elevateAccess`, you create a role assignment for yourself, so to
1. You also need to list the role assignment for the directory administrator at directory scope. List all assignments at directory scope for the `principalId` of the directory administrator who made the elevate access call. This will list all assignments in the directory for the objectid. ```http
- GET https://management.azure.com/providers/Microsoft.Authorization/roleAssignments?api-version=2015-07-01&$filter=principalId+eq+'{objectid}'
+ GET https://management.azure.com/providers/Microsoft.Authorization/roleAssignments?api-version=2022-04-01&$filter=principalId+eq+'{objectid}'
    ``` >[!NOTE] >A directory administrator should not have many assignments. If the previous query returns too many assignments, you can also query for all assignments at directory scope level only, and then filter the results:
- > `GET https://management.azure.com/providers/Microsoft.Authorization/roleAssignments?api-version=2015-07-01&$filter=atScope()`
+ > `GET https://management.azure.com/providers/Microsoft.Authorization/roleAssignments?api-version=2022-04-01&$filter=atScope()`
1. The previous calls return a list of role assignments. Find the role assignment where the scope is `"/"` and the `roleDefinitionId` ends with the role name ID you found in step 1 and `principalId` matches the objectId of the directory administrator.
When you call `elevateAccess`, you create a role assignment for yourself, so to
1. Finally, use the role assignment ID to remove the assignment added by `elevateAccess`:
- DELETE https://management.azure.com/providers/Microsoft.Authorization/roleAssignments/11111111-1111-1111-1111-111111111111?api-version=2015-07-01
+ DELETE https://management.azure.com/providers/Microsoft.Authorization/roleAssignments/11111111-1111-1111-1111-111111111111?api-version=2022-04-01
``` ## View elevate access logs
role-based-access-control Role Assignments List Rest https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/role-assignments-list-rest.md
rest-api Previously updated : 02/27/2021 Last updated : 10/19/2022
[!INCLUDE [gdpr-dsr-and-stp-note](../../includes/gdpr-dsr-and-stp-note.md)]
+## Prerequisites
+
+You must use the following versions:
+
+- `2015-07-01` or later
+- `2022-04-01` or later to include conditions
+
+For more information, see [API versions of Azure RBAC REST APIs](/rest/api/authorization/versions).
+ ## List role assignments
-In Azure RBAC, to list access, you list the role assignments. To list role assignments, use one of the [Role Assignments - List](/rest/api/authorization/roleassignments/list) REST APIs. To refine your results, you specify a scope and an optional filter.
+In Azure RBAC, to list access, you list the role assignments. To list role assignments, use one of the [Role Assignments](/rest/api/authorization/role-assignments) Get or List REST APIs. To refine your results, you specify a scope and an optional filter.
1. Start with the following request: ```http
- GET https://management.azure.com/{scope}/providers/Microsoft.Authorization/roleAssignments?api-version=2015-07-01&$filter={filter}
+ GET https://management.azure.com/{scope}/providers/Microsoft.Authorization/roleAssignments?api-version=2022-04-01&$filter={filter}
``` 1. Within the URI, replace *{scope}* with the scope for which you want to list the role assignments.
In Azure RBAC, to list access, you list the role assignments. To list role assig
The following request lists all role assignments for the specified user at subscription scope: ```http
-GET https://management.azure.com/subscriptions/{subscriptionId1}/providers/Microsoft.Authorization/roleAssignments?api-version=2015-07-01&$filter=atScope()+and+assignedTo('{objectId1}')
+GET https://management.azure.com/subscriptions/{subscriptionId1}/providers/Microsoft.Authorization/roleAssignments?api-version=2022-04-01&$filter=atScope()+and+assignedTo('{objectId1}')
``` The following shows an example of the output:
The following shows an example of the output:
"properties": { "roleDefinitionId": "/subscriptions/{subscriptionId1}/providers/Microsoft.Authorization/roleDefinitions/2a2b9908-6ea1-4ae2-8e65-a410df84e7d1", "principalId": "{objectId1}",
+ "principalType": "User",
"scope": "/subscriptions/{subscriptionId1}",
- "createdOn": "2019-01-15T21:08:45.4904312Z",
- "updatedOn": "2019-01-15T21:08:45.4904312Z",
+ "condition": null,
+ "conditionVersion": null,
+ "createdOn": "2022-01-15T21:08:45.4904312Z",
+ "updatedOn": "2022-01-15T21:08:45.4904312Z",
"createdBy": "{createdByObjectId1}",
- "updatedBy": "{updatedByObjectId1}"
+ "updatedBy": "{updatedByObjectId1}",
+ "delegatedManagedIdentityResourceId": null,
+ "description": null
}, "id": "/subscriptions/{subscriptionId1}/providers/Microsoft.Authorization/roleAssignments/{roleAssignmentId1}", "type": "Microsoft.Authorization/roleAssignments",
role-based-access-control Role Assignments Remove https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/role-assignments-remove.md
Previously updated : 02/15/2021 Last updated : 10/19/2022 ms.devlang: azurecli
To remove role assignments, you must have:
- `Microsoft.Authorization/roleAssignments/delete` permissions, such as [User Access Administrator](../../articles/role-based-access-control/built-in-roles.md#user-access-administrator) or [Owner](../../articles/role-based-access-control/built-in-roles.md#owner)
+For the REST API, you must use the following version:
+
+- `2015-07-01` or later
+
+For more information, see [API versions of Azure RBAC REST APIs](/rest/api/authorization/versions).
+ ## Azure portal 1. Open **Access control (IAM)** at a scope, such as management group, subscription, resource group, or resource, where you want to remove access.
az role assignment delete --assignee "alain@example.com" \
## REST API
-In the REST API, you remove a role assignment by using [Role Assignments - Delete](/rest/api/authorization/roleassignments/delete).
+In the REST API, you remove a role assignment by using [Role Assignments - Delete](/rest/api/authorization/role-assignments/delete).
1. Get the role assignment identifier (GUID). This identifier is returned when you first create the role assignment or you can get it by listing the role assignments. 1. Start with the following request: ```http
- DELETE https://management.azure.com/{scope}/providers/Microsoft.Authorization/roleAssignments/{roleAssignmentId}?api-version=2015-07-01
+ DELETE https://management.azure.com/{scope}/providers/Microsoft.Authorization/roleAssignments/{roleAssignmentId}?api-version=2022-04-01
``` 1. Within the URI, replace *{scope}* with the scope for removing the role assignment.
In the REST API, you remove a role assignment by using [Role Assignments - Delet
The following request removes the specified role assignment at subscription scope: ```http
-DELETE https://management.azure.com/subscriptions/{subscriptionId1}/providers/microsoft.authorization/roleassignments/{roleAssignmentId1}?api-version=2015-07-01
+DELETE https://management.azure.com/subscriptions/{subscriptionId1}/providers/microsoft.authorization/roleassignments/{roleAssignmentId1}?api-version=2022-04-01
``` The following shows an example of the output:
The following shows an example of the output:
"properties": { "roleDefinitionId": "/subscriptions/{subscriptionId1}/providers/Microsoft.Authorization/roleDefinitions/a795c7a0-d4a2-40c1-ae25-d81f01202912", "principalId": "{objectId1}",
+ "principalType": "User",
"scope": "/subscriptions/{subscriptionId1}",
- "createdOn": "2020-05-06T23:55:24.5379478Z",
- "updatedOn": "2020-05-06T23:55:24.5379478Z",
+ "condition": null,
+ "conditionVersion": null,
+ "createdOn": "2022-05-06T23:55:24.5379478Z",
+ "updatedOn": "2022-05-06T23:55:24.5379478Z",
"createdBy": "{createdByObjectId1}",
- "updatedBy": "{updatedByObjectId1}"
+ "updatedBy": "{updatedByObjectId1}",
+ "delegatedManagedIdentityResourceId": null,
+ "description": null
}, "id": "/subscriptions/{subscriptionId1}/providers/Microsoft.Authorization/roleAssignments/{roleAssignmentId1}", "type": "Microsoft.Authorization/roleAssignments",
role-based-access-control Role Assignments Rest https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/role-assignments-rest.md
rest-api Previously updated : 04/06/2021 Last updated : 10/19/2022
[!INCLUDE [Azure role assignment prerequisites](../../includes/role-based-access-control/prerequisites-role-assignments.md)]
+You must use the following versions:
+
+- `2015-07-01` or later to assign an Azure role
+- `2018-09-01-preview` or later to assign an Azure role to a new service principal
+
+For more information, see [API versions of Azure RBAC REST APIs](/rest/api/authorization/versions).
+ ## Assign an Azure role
-To assign a role, use the [Role Assignments - Create](/rest/api/authorization/roleassignments/create) REST API and specify the security principal, role definition, and scope. To call this API, you must have access to the `Microsoft.Authorization/roleAssignments/write` action. Of the built-in roles, only [Owner](built-in-roles.md#owner) and [User Access Administrator](built-in-roles.md#user-access-administrator) are granted access to this action.
+To assign a role, use the [Role Assignments - Create](/rest/api/authorization/role-assignments/create) REST API and specify the security principal, role definition, and scope. To call this API, you must have access to the `Microsoft.Authorization/roleAssignments/write` action. Of the built-in roles, only [Owner](built-in-roles.md#owner) and [User Access Administrator](built-in-roles.md#user-access-administrator) are granted access to this action.
-1. Use the [Role Definitions - List](/rest/api/authorization/roledefinitions/list) REST API or see [Built-in roles](built-in-roles.md) to get the identifier for the role definition you want to assign.
+1. Use the [Role Definitions - List](/rest/api/authorization/role-definitions/list) REST API or see [Built-in roles](built-in-roles.md) to get the identifier for the role definition you want to assign.
1. Use a GUID tool to generate a unique identifier that will be used for the role assignment identifier. The identifier has the format: `00000000-0000-0000-0000-000000000000` 1. Start with the following request and body: ```http
- PUT https://management.azure.com/{scope}/providers/Microsoft.Authorization/roleAssignments/{roleAssignmentId}?api-version=2015-07-01
+ PUT https://management.azure.com/{scope}/providers/Microsoft.Authorization/roleAssignments/{roleAssignmentId}?api-version=2022-04-01
``` ```json
To assign a role, use the [Role Assignments - Create](/rest/api/authorization/ro
1. Replace *{roleAssignmentId}* with the GUID identifier of the role assignment.
-1. Within the request body, replace *{scope}* with the scope for the role assignment.
-
- > [!div class="mx-tableFixed"]
- > | Scope | Type |
- > | | |
- > | `providers/Microsoft.Management/managementGroups/{groupId1}` | Management group |
- > | `subscriptions/{subscriptionId1}` | Subscription |
- > | `subscriptions/{subscriptionId1}/resourceGroups/myresourcegroup1` | Resource group |
- > | `subscriptions/{subscriptionId1}/resourceGroups/myresourcegroup1/providers/microsoft.web/sites/mysite1` | Resource |
+1. Within the request body, replace *{scope}* with the same scope as in the URI.
1. Replace *{roleDefinitionId}* with the role definition identifier.
To assign a role, use the [Role Assignments - Create](/rest/api/authorization/ro
The following request and body assigns the [Backup Reader](built-in-roles.md#backup-reader) role to a user at subscription scope: ```http
-PUT https://management.azure.com/subscriptions/{subscriptionId1}/providers/microsoft.authorization/roleassignments/{roleAssignmentId1}?api-version=2015-07-01
+PUT https://management.azure.com/subscriptions/{subscriptionId1}/providers/Microsoft.Authorization/roleAssignments/{roleAssignmentId1}?api-version=2022-04-01
``` ```json
The following shows an example of the output:
"properties": { "roleDefinitionId": "/subscriptions/{subscriptionId1}/providers/Microsoft.Authorization/roleDefinitions/a795c7a0-d4a2-40c1-ae25-d81f01202912", "principalId": "{objectId1}",
+ "principalType": "User",
"scope": "/subscriptions/{subscriptionId1}",
- "createdOn": "2020-05-06T23:55:23.7679147Z",
- "updatedOn": "2020-05-06T23:55:23.7679147Z",
+ "condition": null,
+ "conditionVersion": null,
+ "createdOn": "2022-05-06T23:55:23.7679147Z",
+ "updatedOn": "2022-05-06T23:55:23.7679147Z",
"createdBy": null,
- "updatedBy": "{updatedByObjectId1}"
+ "updatedBy": "{updatedByObjectId1}",
+ "delegatedManagedIdentityResourceId": null,
+ "description": null
}, "id": "/subscriptions/{subscriptionId1}/providers/Microsoft.Authorization/roleAssignments/{roleAssignmentId1}", "type": "Microsoft.Authorization/roleAssignments",
The following shows an example of the output:
If you create a new service principal and immediately try to assign a role to that service principal, that role assignment can fail in some cases. For example, if you create a new managed identity and then try to assign a role to that service principal, the role assignment might fail. The reason for this failure is likely a replication delay. The service principal is created in one region; however, the role assignment might occur in a different region that hasn't replicated the service principal yet.
-To address this scenario, use the [Role Assignments - Create](/rest/api/authorization/roleassignments/create) REST API and set the `principalType` property to `ServicePrincipal`. You must also set the `apiVersion` to `2018-09-01-preview` or later.
+To address this scenario, use the [Role Assignments - Create](/rest/api/authorization/role-assignments/create) REST API and set the `principalType` property to `ServicePrincipal`. You must also set the `apiVersion` to `2018-09-01-preview` or later. `2022-04-01` is the first stable version.
```http
-PUT https://management.azure.com/{scope}/providers/Microsoft.Authorization/roleAssignments/{roleAssignmentId}?api-version=2018-09-01-preview
+PUT https://management.azure.com/{scope}/providers/Microsoft.Authorization/roleAssignments/{roleAssignmentId}?api-version=2022-04-01
``` ```json
role-based-access-control Role Assignments Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/role-assignments-template.md
Previously updated : 09/07/2022 Last updated : 10/19/2022 ms.devlang: azurecli
ms.devlang: azurecli
[!INCLUDE [Azure role assignment prerequisites](../../includes/role-based-access-control/prerequisites-role-assignments.md)]
+You must use the following versions:
+
+- `2018-09-01-preview` or later to assign an Azure role to a new service principal
+- `2020-04-01-preview` or later to assign an Azure role at resource scope
+- `2022-04-01` is the first stable version
+
+For more information, see [API versions of Azure RBAC REST APIs](/rest/api/authorization/versions).
+ ## Get object IDs To assign a role, you need to specify the ID of the user, group, or application you want to assign the role to. The ID has the format: `11111111-1111-1111-1111-111111111111`. You can get the ID using the Azure portal, Azure PowerShell, or Azure CLI.
To use the template, you must do the following:
"resources": [ { "type": "Microsoft.Authorization/roleAssignments",
- "apiVersion": "2018-09-01-preview",
+ "apiVersion": "2022-04-01",
"name": "[guid(resourceGroup().id)]", "properties": { "roleDefinitionId": "[concat('/subscriptions/', subscription().subscriptionId, '/providers/Microsoft.Authorization/roleDefinitions/', 'acdd72a7-3385-48ef-bd42-f606fba81ae7')]",
To use the template, you must specify the following inputs:
"resources": [ { "type": "Microsoft.Authorization/roleAssignments",
- "apiVersion": "2018-09-01-preview",
+ "apiVersion": "2022-04-01",
"name": "[parameters('roleNameGuid')]", "properties": { "roleDefinitionId": "[variables(parameters('builtInRoleType'))]",
To use the template, you must specify the following inputs:
}, { "type": "Microsoft.Authorization/roleAssignments",
- "apiVersion": "2020-04-01-preview",
+ "apiVersion": "2022-04-01",
"name": "[parameters('roleNameGuid')]", "scope": "[concat('Microsoft.Storage/storageAccounts', '/', variables('storageName'))]", "dependsOn": [
The following shows an example of the Contributor role assignment to a user for
If you create a new service principal and immediately try to assign a role to that service principal, that role assignment can fail in some cases. For example, if you create a new managed identity and then try to assign a role to that service principal in the same Azure Resource Manager template, the role assignment might fail. The reason for this failure is likely a replication delay. The service principal is created in one region; however, the role assignment might occur in a different region that hasn't replicated the service principal yet.
-To address this scenario, you should set the `principalType` property to `ServicePrincipal` when creating the role assignment. You must also set the `apiVersion` of the role assignment to `2018-09-01-preview` or later.
+To address this scenario, you should set the `principalType` property to `ServicePrincipal` when creating the role assignment. You must also set the `apiVersion` of the role assignment to `2018-09-01-preview` or later. `2022-04-01` is the first stable version.
The following template demonstrates:
To use the template, you must specify the following inputs:
}, { "type": "Microsoft.Authorization/roleAssignments",
- "apiVersion": "2018-09-01-preview",
+ "apiVersion": "2022-04-01",
"name": "[variables('bootstrapRoleAssignmentId')]", "dependsOn": [ "[resourceId('Microsoft.ManagedIdentity/userAssignedIdentities', variables('identityName'))]"
role-based-access-control Role Definitions List https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/role-definitions-list.md
Previously updated : 10/15/2021 Last updated : 10/19/2022 ms.devlang: azurecli
az role definition list --name "Virtual Machine Contributor" --output json --que
## REST API
+### Prerequisites
+
+You must use the following version:
+
+- `2015-07-01` or later
+
+For more information, see [API versions of Azure RBAC REST APIs](/rest/api/authorization/versions).
+ ### List role definitions
-To list role definitions, use the [Role Definitions - List](/rest/api/authorization/roledefinitions/list) REST API. To refine your results, you specify a scope and an optional filter.
+To list role definitions, use the [Role Definitions - List](/rest/api/authorization/role-definitions/list) REST API. To refine your results, you specify a scope and an optional filter.
1. Start with the following request: ```http
- GET https://management.azure.com/{scope}/providers/Microsoft.Authorization/roleDefinitions?$filter={$filter}&api-version=2015-07-01
+ GET https://management.azure.com/{scope}/providers/Microsoft.Authorization/roleDefinitions?$filter={$filter}&api-version=2022-04-01
``` 1. Within the URI, replace *{scope}* with the scope for which you want to list the role definitions.
To list role definitions, use the [Role Definitions - List](/rest/api/authorizat
The following request lists custom role definitions at subscription scope: ```http
-GET https://management.azure.com/subscriptions/{subscriptionId1}/providers/Microsoft.Authorization/roleDefinitions?api-version=2015-07-01&$filter=type+eq+'CustomRole'
+GET https://management.azure.com/subscriptions/{subscriptionId1}/providers/Microsoft.Authorization/roleDefinitions?api-version=2022-04-01&$filter=type+eq+'CustomRole'
``` The following shows an example of the output:
The following shows an example of the output:
], "notActions": [ "Microsoft.CostManagement/exports/delete"
- ]
+ ],
+ "dataActions": [],
+ "notDataActions": []
} ],
- "createdOn": "2020-02-21T04:49:13.7679452Z",
- "updatedOn": "2020-02-21T04:49:13.7679452Z",
+ "createdOn": "2021-05-22T21:57:23.5764138Z",
+ "updatedOn": "2021-05-22T21:57:23.5764138Z",
"createdBy": "{createdByObjectId1}", "updatedBy": "{updatedByObjectId1}" },
The following shows an example of the output:
### List a role definition
-To list the details of a specific role, use the [Role Definitions - Get](/rest/api/authorization/roledefinitions/get) or [Role Definitions - Get By Id](/rest/api/authorization/roledefinitions/getbyid) REST API.
+To list the details of a specific role, use the [Role Definitions - Get](/rest/api/authorization/role-definitions/get) or [Role Definitions - Get By ID](/rest/api/authorization/role-definitions/get-by-id) REST API.
1. Start with the following request: ```http
- GET https://management.azure.com/{scope}/providers/Microsoft.Authorization/roleDefinitions/{roleDefinitionId}?api-version=2015-07-01
+ GET https://management.azure.com/{scope}/providers/Microsoft.Authorization/roleDefinitions/{roleDefinitionId}?api-version=2022-04-01
``` For a directory-level role definition, you can use this request: ```http
- GET https://management.azure.com/providers/Microsoft.Authorization/roleDefinitions/{roleDefinitionId}?api-version=2015-07-01
+ GET https://management.azure.com/providers/Microsoft.Authorization/roleDefinitions/{roleDefinitionId}?api-version=2022-04-01
``` 1. Within the URI, replace *{scope}* with the scope for which you want to list the role definition.
To list the details of a specific role, use the [Role Definitions - Get](/rest/a
The following request lists the [Reader](built-in-roles.md#reader) role definition: ```http
-GET https://management.azure.com/providers/Microsoft.Authorization/roleDefinitions/acdd72a7-3385-48ef-bd42-f606fba81ae7?api-version=2015-07-01
+GET https://management.azure.com/providers/Microsoft.Authorization/roleDefinitions/acdd72a7-3385-48ef-bd42-f606fba81ae7?api-version=2022-04-01
``` The following shows an example of the output:
The following shows an example of the output:
"properties": { "roleName": "Reader", "type": "BuiltInRole",
- "description": "Lets you view everything, but not make any changes.",
+ "description": "View all resources, but does not allow you to make any changes.",
"assignableScopes": [ "/" ],
The following shows an example of the output:
"actions": [ "*/read" ],
- "notActions": []
+ "notActions": [],
+ "dataActions": [],
+ "notDataActions": []
} ], "createdOn": "2015-02-02T21:55:09.8806423Z",
- "updatedOn": "2019-02-05T21:24:35.7424745Z",
+ "updatedOn": "2021-11-11T20:13:47.8628684Z",
"createdBy": null, "updatedBy": null },
search Resource Tools https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/resource-tools.md
The following tools are built by engineers at Microsoft, but aren't part of the
|--| |-| | [Azure Cognitive Search Lab readme](https://github.com/Azure-Samples/azure-search-lab/blob/main/README.md) | Connects to your search service with a Web UI that exercises the full REST API, including the ability to edit a live search index. | [https://github.com/Azure-Samples/azure-search-lab](https://github.com/Azure-Samples/azure-search-lab) | | [Knowledge Mining Accelerator readme](https://github.com/Azure-Samples/azure-search-knowledge-mining/blob/main/README.md) | Code and docs to jump start a knowledge store using your data. | [https://github.com/Azure-Samples/azure-search-knowledge-mining](https://github.com/Azure-Samples/azure-search-knowledge-mining) |
-| [Back up and Restore readme](https://github.com/liamc) | Download a populated search index to your local device and then upload the index and its content to a new search service. | [https://github.com/liamca/azure-search-backup-restore](https://github.com/liamca/azure-search-backup-restore) |
+| [Back up and Restore readme](https://github.com/liamc) | Download a populated search index to your local device and then upload the index and its content to a new search service. | [https://github.com/liamca/azure-search-backup-restore](https://github.com/liamca/azure-search-backup-restore) |
| [Performance testing readme](https://github.com/Azure-Samples/azure-search-performance-testing/blob/main/README.md) | This solution helps you load test Azure Cognitive Search. It uses Apache JMeter as an open source load and performance testing tool and Terraform to dynamically provision and destroy the required infrastructure on Azure. | [https://github.com/Azure-Samples/azure-search-performance-testing](https://github.com/Azure-Samples/azure-search-performance-testing) |
search Search Get Started Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-get-started-bicep.md
Title: 'Quickstart: Deploy using Bicep' description: You can quickly deploy an Azure Cognitive Search service instance using Bicep.--++
search Search Normalizers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-normalizers.md
Title: Text normalization for filters, facets, sort
description: Specify normalizers to text fields in an index to customize the strict keyword matching behavior in filtering, faceting and sorting. -+ -+ Last updated 07/14/2022
search Search Security Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-security-overview.md
Using alerts and the logging infrastructure in Azure, you can pick up on query v
Azure Cognitive Search participates in regular audits, and has been certified against many global, regional, and industry-specific standards for both the public cloud and Azure Government. For the complete list, download the [**Microsoft Azure Compliance Offerings** whitepaper](https://azure.microsoft.com/resources/microsoft-azure-compliance-offerings/) from the official Audit reports page.
-For compliance, you can use [Azure Policy](../governance/policy/overview.md) to implement the high-security best practices of [Microsoft cloud security benchmark](/security/benchmark/azure/introduction). The Microsoft cloud security benchmark is a collection of security recommendations, codified into security controls that map to key actions you should take to mitigate threats to services and data. There are currently 12 security controls, including [Network Security](/security/benchmark/azure/mcsb-network-security), [Logging and Monitoring](/security/benchmark/azure/mcsb-logging-monitoring), and [Data Protection](/security/benchmark/azure/mcsb-data-protection).
+For compliance, you can use [Azure Policy](../governance/policy/overview.md) to implement the high-security best practices of [Microsoft cloud security benchmark](/security/benchmark/azure/introduction). The Microsoft cloud security benchmark is a collection of security recommendations, codified into security controls that map to key actions you should take to mitigate threats to services and data. There are currently 12 security controls, including [Network Security](/security/benchmark/azure/mcsb-network-security), Logging and Monitoring, and [Data Protection](/security/benchmark/azure/mcsb-data-protection).
Azure Policy is a capability built into Azure that helps you manage compliance for multiple standards, including those of Microsoft cloud security benchmark. For well-known benchmarks, Azure Policy provides built-in definitions that provide both criteria and an actionable response that addresses non-compliance.
security Azure Domains https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/azure-domains.md
This page is a partial list of the Azure domains in use. Some of them are REST A
|[Azure Table Storage](../../storage/tables/table-storage-overview.md)|*.table.core.windows.net| |[Azure Traffic Manager](../../traffic-manager/traffic-manager-overview.md)|*.trafficmanager.net| |Azure Websites|*.azurewebsites.net|
-|[Visual Studio Codespaces](https://visualstudio.microsoft.com/services/visual-studio-codespaces/)|*.visualstudio.com|
+|[GitHub Codespaces](https://visualstudio.microsoft.com/services/github-codespaces/)|*.visualstudio.com|
sentinel Sap Solution Log Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/sap/sap-solution-log-reference.md
Last updated 02/22/2022
-# Microsoft Sentinel Solution for SAP data reference (public preview)
+# Microsoft Sentinel Solution for SAP data reference
> [!IMPORTANT] > Some components of the Microsoft Sentinel Threat Monitoring for SAP solution are currently in PREVIEW. The [Azure Preview Supplemental Terms](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
sentinel Sap Solution Security Content https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/sap/sap-solution-security-content.md
The following tables list the built-in [analytics rules](deploy-sap-security-con
| **SAP - (PREVIEW) HANA DB -User Admin actions** | Identifies user administration actions. | Create, update, or delete a database user. <br><br>**Data Sources**: Linux Agent - Syslog* |Privilege Escalation | | **SAP - New ICF Service Handlers** | Identifies creation of ICF Handlers. | Assign a new handler to a service using SICF.<br><br>**Data sources**: SAPcon - Audit Log | Command and Control, Lateral Movement, Persistence | | **SAP - New ICF Services** | Identifies creation of ICF Services. | Create a service using SICF.<br><br>**Data sources**: SAPcon - Table Data Log | Command and Control, Lateral Movement, Persistence |
-| **SAP - (PREVIEW) Execution of Obsolete or Insecure Function Module** |Identifies the execution of an obsolete or insecure ABAP function module. <br><br>Maintain obsolete functions in the [SAP - Obsolete Function Modules](#modules) watchlist. Make sure to activate table logging changes for the `EUFUNC` table in the backend. (SE13)<br><br> **Note**: Relevant for production systems only. | Run an obsolete or insecure function module directly using SE37. <br><br>**Data sources**: SAPcon - Table Data Log | Discovery, Command and Control |
+| **SAP - Execution of Obsolete or Insecure Function Module** |Identifies the execution of an obsolete or insecure ABAP function module. <br><br>Maintain obsolete functions in the [SAP - Obsolete Function Modules](#modules) watchlist. Make sure to activate table logging changes for the `EUFUNC` table in the backend. (SE13)<br><br> **Note**: Relevant for production systems only. | Run an obsolete or insecure function module directly using SE37. <br><br>**Data sources**: SAPcon - Table Data Log | Discovery, Command and Control |
| **SAP - Execution of Obsolete/Insecure Program** |Identifies the execution of an obsolete or insecure ABAP program. <br><br> Maintain obsolete programs in the [SAP - Obsolete Programs](#programs) watchlist.<br><br> **Note**: Relevant for production systems only. | Run a program directly using SE38/SA38/SE80, or by using a background job. <br><br>**Data sources**: SAPcon - Audit Log | Discovery, Command and Control | | **SAP - Multiple Password Changes by User** | Identifies multiple password changes by user. | Change user password <br><br>**Data sources**: SAPcon - Audit Log | Credential Access |
The following tables list the built-in [analytics rules](deploy-sap-security-con
| **SAP - User Creates and uses new user** | Identifies a user creating and using other users. <br><br>**Sub-use case**: [Persistency](#built-in-sap-analytics-rules-for-persistency) | Create a user using SU01, and then sign in, using the newly created user and the same IP address.<br><br>**Data sources**: SAPcon - Audit Log | Discovery, PreAttack, Initial Access | | **SAP - User Unlocks and uses other users** | Identifies a user being unlocked and used by other users. <br><br>**Sub-use case**: [Persistency](#built-in-sap-analytics-rules-for-persistency) | Unlock a user using SU01, and then sign in using the unlocked user and the same IP address.<br><br>**Data sources**: SAPcon - Audit Log, SAPcon - Change Documents Log | Discovery, PreAttack, Initial Access, Lateral Movement | | **SAP - Assignment of a sensitive profile** | Identifies new assignments of a sensitive profile to a user. <br><br>Maintain sensitive profiles in the [SAP - Sensitive Profiles](#profiles) watchlist. | Assign a profile to a user using `SU01`. <br><br>**Data sources**: SAPcon - Change Documents Log | Privilege Escalation |
-| **SAP - (PREVIEW) Assignment of a sensitive role** | Identifies new assignments for a sensitive role to a user. <br><br>Maintain sensitive roles in the [SAP - Sensitive Roles](#roles) watchlist.| Assign a role to a user using `SU01` / `PFCG`. <br><br>**Data sources**: SAPcon - Change Documents Log, Audit Log | Privilege Escalation |
+| **SAP - Assignment of a sensitive role** | Identifies new assignments for a sensitive role to a user. <br><br>Maintain sensitive roles in the [SAP - Sensitive Roles](#roles) watchlist.| Assign a role to a user using `SU01` / `PFCG`. <br><br>**Data sources**: SAPcon - Change Documents Log, Audit Log | Privilege Escalation |
| **SAP - (PREVIEW) Critical authorizations assignment - New Authorization Value** | Identifies the assignment of a critical authorization object value to a new user. <br><br>Maintain critical authorization objects in the [SAP - Critical Authorization Objects](#objects) watchlist. | Assign a new authorization object or update an existing one in a role, using `PFCG`. <br><br>**Data sources**: SAPcon - Change Documents Log | Privilege Escalation | | **SAP - Critical authorizations assignment - New User Assignment** | Identifies the assignment of a critical authorization object value to a new user. <br><br>Maintain critical authorization objects in the [SAP - Critical Authorization Objects](#objects) watchlist. | Assign a new user to a role that holds critical authorization values, using `SU01`/`PFCG`. <br><br>**Data sources**: SAPcon - Change Documents Log | Privilege Escalation | | **SAP - Sensitive Roles Changes** |Identifies changes in sensitive roles. <br><br> Maintain sensitive roles in the [SAP - Sensitive Roles](#roles) watchlist. | Change a role using PFCG. <br><br>**Data sources**: SAPcon - Change Documents Log, SAPcon ΓÇô Audit Log | Impact, Privilege Escalation, Persistence |
service-fabric Service Fabric Resource Governance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-resource-governance.md
Let's now revisit our example with a **RequestsAndLimits** specification. This t
1. First the container based service package is placed on the node. The runtime activates the container and sets the CPU limit to two cores. The container won't be able to use more than two cores. 2. Next, the process based service package is placed on the node. The runtime activates the service process and sets its CPU limit to one core.
- At this point, the sum of CPU requests of service packages that are placed on the node is equal to the CPU capacity of the node. CRM will not place any more containers or service processes with CPU requests on this node. However, on the node, the sum of limits (two cores for the container + one core for the process) exceeds the capacity of two cores. If the container and the process burst at the same time, there is possibility of contention for the CPU resource. Such contention will be manged by the underlying OS for the platform. For this example, the container could burst up to two CPU cores, resulting in the process's request of one CPU core not being guaranteed.
+ At this point, the sum of CPU requests of service packages that are placed on the node is equal to the CPU capacity of the node. CRM will not place any more containers or service processes with CPU requests on this node. However, on the node, the sum of limits (two cores for the container + one core for the process) exceeds the capacity of two cores. If the container and the process burst at the same time, there is a possibility of contention for the CPU resource. Such contention will be managed by the underlying OS of the platform. For this example, the container could burst up to two CPU cores, resulting in the process's request of one CPU core not being guaranteed.
> [!NOTE] > As illustrated in the previous example, the request values for CPU and memory **do not lead to reservation of resources on a node**. These values represent the resource consumption that the Cluster Resource Manager considers when making placement decisions. Limit values represent the actual resource limits applied when a process or a container is activated on a node.
site-recovery Azure To Azure How To Enable Replication Ade Vms https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/azure-to-azure-how-to-enable-replication-ade-vms.md
Previously updated : 09/05/2022 Last updated : 10/19/2022
site-recovery Azure To Azure How To Enable Replication Cmk Disks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/azure-to-azure-how-to-enable-replication-cmk-disks.md
Previously updated : 07/25/2021 Last updated : 10/19/2022
site-recovery Azure To Azure How To Enable Replication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/azure-to-azure-how-to-enable-replication.md
description: Learn how to configure replication to another region for Azure VMs,
Previously updated : 09/16/2022 Last updated : 10/19/2022
site-recovery Azure To Azure Tutorial Enable Replication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/azure-to-azure-tutorial-enable-replication.md
Title: Tutorial to set up Azure VM disaster recovery with Azure Site Recovery description: In this tutorial, set up disaster recovery for Azure VMs to another Azure region, using the Site Recovery service. Previously updated : 08/22/2022 Last updated : 10/19/2022 #Customer intent: As an Azure admin, I want to set up disaster recovery for my Azure VMs, so that they're available in a secondary region if the primary region becomes unavailable.
site-recovery Hyper V Azure Common Questions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/hyper-v-azure-common-questions.md
You can replicate any app or workload running a Hyper-V VM that complies with [r
### Can I replicate to Azure with a site-to-site VPN?
-Site Recovery replicates data from on-premises to Azure storage over a public endpoint, or using ExpressRoute Microsoft peering. Replication over a site-to-site VPN network isn't supported.
-
-### Can I replicate to Azure with ExpressRoute?
-
-Yes, ExpressRoute can be used to replicate VMs to Azure. Site Recovery replicates data to an Azure Storage Account over a public endpoint, and you need to set up [Microsoft peering](../expressroute/expressroute-circuit-peerings.md#microsoftpeering) for Site Recovery replication. After VMs fail over to an Azure virtual network, you can access them using [private peering](../expressroute/expressroute-circuit-peerings.md#privatepeering).
-
+Azure Site Recovery replicates data to an Azure storage account or managed disks over a public endpoint. However, replication can be performed over Site-to-Site VPN as well. Site-to-Site VPN connectivity allows organizations to connect existing networks to Azure, or Azure networks to each other. Site-to-Site VPN uses IPSec tunneling over the internet, leveraging existing on-premises edge network equipment and network appliances in Azure, either native features such as Azure Virtual Private Network (VPN) Gateway or third-party options such as Check Point CloudGuard or Palo Alto NextGen Firewall. Replicating to Azure with site-to-site VPN is only supported when using [private endpoints](../private-link/private-endpoint-overview.md).
### Why can't I replicate over VPN?
spring-apps Tutorial Custom Domain https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/tutorial-custom-domain.md
**This article applies to:** ✔️ Java ✔️ C#
-**This article applies to:** ✔️ Basic/Standard tier ✔️ Enterprise tier
+**This article applies to:** ✔️ Standard tier ✔️ Enterprise tier
Domain Name Service (DNS) is a technique for storing network node names throughout a network. This tutorial maps a domain, such as www.contoso.com, using a CNAME record. It secures the custom domain with a certificate and shows how to enforce Transport Layer Security (TLS), also known as Secure Sockets Layer (SSL).
spring-apps Tutorial Managed Identities Key Vault https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/tutorial-managed-identities-key-vault.md
spring.cloud.azure.keyvault.secret.property-sources[0].credential.client-id={Cli
You'll see the message `Successfully got the value of secret connectionString from Key Vault https://<your-keyvault-name>.vault.azure.net/: jdbc:sqlserver://SERVER.database.windows.net:1433;database=DATABASE;`.
-## Build the sample Spring Boot app with Java SDK
-
-This sample can set and get secrets from Azure Key Vault. The [Azure Key Vault Secret client library for Java](/java/api/overview/azure/security-keyvault-secrets-readme) provides Azure Active Directory token authentication support across the Azure SDK. The library provides a set of `TokenCredential` implementations that you can use to construct Azure SDK clients to support Azure AD token authentication.
-
-The Azure Key Vault Secret client library enables you to securely store and control the access to tokens, passwords, API keys, and other secrets. The library offers operations to create, retrieve, update, delete, purge, back up, restore, and list the secrets and its versions.
-
-To build the sample, use the following steps:
-
-1. Clone the sample project.
-
- ```azurecli
- git clone https://github.com/Azure-Samples/Azure-Spring-Cloud-Samples.git
- ```
-
-1. Specify your key vault in your app.
-
- ```azurecli
- cd Azure-Spring-Cloud-Samples/managed-identity-keyvault
- vim src/main/resources/application.properties
- ```
-
- To use managed identity for Azure Spring Apps apps, add properties with the following content to *src/main/resources/application.properties*.
-
- ```properties
- azure.keyvault.enabled=true
- azure.keyvault.uri=https://<your-keyvault-name>.vault.azure.net
- ```
-
-1. Include [ManagedIdentityCredentialBuilder](/java/api/com.azure.identity.managedidentitycredentialbuilder) to get a token from Azure Active Directory and [SecretClientBuilder](/java/api/com.azure.security.keyvault.secrets.secretclientbuilder) to set or get secrets from Key Vault in your code.
-
- Get the example from the [MainController.java](https://github.com/Azure-Samples/Azure-Spring-Cloud-Samples/blob/master/managed-identity-keyvault/src/main/java/com/microsoft/azure/MainController.java#L28) file of the cloned sample project.
-
- Include `azure-identity` and `azure-security-keyvault-secrets` as a dependency in your *pom.xml* file. Get the example from the [pom.xml](https://github.com/Azure-Samples/Azure-Spring-Cloud-Samples/blob/master/managed-identity-keyvault/pom.xml#L21) file of the cloned sample project.
-
-1. Use the following command to package your sample app.
-
- ```azurecli
- mvn clean package
- ```
-
-1. Now deploy the app to Azure with the following command:
-
- ```azurecli
- az spring app deploy \
- --resource-group <your-resource-group-name> \
- --name "springapp" \
- --service <your-Azure-Spring-Apps-instance-name> \
- --jar-path target/asc-managed-identity-keyvault-sample-0.1.0.jar
- ```
-
-1. Access the public endpoint or test endpoint to test your app.
-
- First, get the value of your secret that you set in Azure Key Vault.
-
- ```azurecli
- curl https://myspringcloud-springapp.azuremicroservices.io/secrets/connectionString
- ```
-
- You'll see the message `Successfully got the value of secret connectionString from Key Vault https://<your-keyvault-name>.vault.azure.net/: jdbc:sqlserver://SERVER.database.windows.net:1433;database=DATABASE;`.
-
- Now create a secret and then retrieve it using the Java SDK.
-
- ```azurecli
- curl -X PUT https://myspringcloud-springapp.azuremicroservices.io/secrets/test?value=success
-
- curl https://myspringcloud-springapp.azuremicroservices.io/secrets/test
- ```
-
- You'll see the message `Successfully got the value of secret test from Key Vault https://<your-keyvault-name>.vault.azure.net: success`.
- ## Next steps * [How to access Storage blob with managed identity in Azure Spring Apps](https://github.com/Azure-Samples/Azure-Spring-Cloud-Samples/tree/master/managed-identity-storage-blob)
storage-mover Agent Deploy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage-mover/agent-deploy.md
The image is hosted on Microsoft Download Center as a zip file. Download the fil
## Determine required resources for the VM
-Like every VM, the agent requires available compute, memory, and storage space resources on the host. Although overall data size may affect the time required to complete a migration, it's generally the number of files and folders that drives resource requirements.
+Like every VM, the agent requires available compute, memory, network, and storage space resources on the host. Although overall data size may affect the time required to complete a migration, it's generally the number of files and folders that drives resource requirements.
+
+### Network resources
+
+The agent will require unrestricted internet connectivity.
+
+There's no single network configuration option that will work for every environment. However, the simplest configuration will involve the deployment of an external virtual switch. The external switch type is connected to a physical adapter and will allow your host operating system (OS) to share its connection with all your virtual machines (VMs). This switch allows communication between your physical network, the management operating system, and the virtual adapters on your virtual machines. This approach is fine for a test environment, but may not be suitable for a production server.
+
+After the switch is created, ensure that the management and agent VMs are both on the same switch. On the WAN link firewall, outbound TCP port 443 must be open. Keep in mind that connectivity interruptions are to be expected when changing network configurations.
+
+You can get help with [creating a virtual switch for Hyper-V virtual machines](/windows-server/virtualization/hyper-v/get-started/create-a-virtual-switch-for-hyper-v-virtual-machines) in the [Windows Server](/windows-server/) documentation.
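As a rough PowerShell sketch of the external-switch configuration described above (the physical adapter name, switch name, and VM name are assumptions for illustration, not values from the article):

```powershell
# Create an external virtual switch bound to the host's physical adapter and
# share that adapter with the management OS. "Ethernet" is an assumed adapter name.
New-VMSwitch -Name "AgentExternalSwitch" -NetAdapterName "Ethernet" -AllowManagementOS $true

# Connect an existing agent VM to the new switch. "StorageMoverAgent" is an assumed VM name.
Connect-VMNetworkAdapter -VMName "StorageMoverAgent" -SwitchName "AgentExternalSwitch"
```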
### Recommended compute and memory resources
Like every VM, the agent requires available compute, memory, and storage space r
| 50 million items | 16 GiB | 8 virtual cores | |100 million items | 16 GiB | 8 virtual cores |
-**Number of items refers to the total number of files and folders in the source.*
+**Number of items** *refers to the total number of files and folders in the source.*
> [!IMPORTANT] > While agent VMs below minimal specs may work for your migration, they may not perform optimally.
At a minimum, the agent image needs 20 GiB of local storage. The amount required
:::image type="content" source="media/agent-deploy/agent-vm-generation-select-sml.png" lightbox="media/agent-deploy/agent-vm-generation-select-lrg.png" alt-text="Image showing the location of the VM Generation options within the New Virtual Machine Wizard."::: > [!IMPORTANT]
- Only *Generation 1* VMs are supported. This Linux image won't boot as a *Generation 2* VM.
+ > Only *Generation 1* VMs are supported. This Linux image won't boot as a *Generation 2* VM.
1. If you haven't already, [determine the amount of memory you'll need for your VM](#determine-required-resources-for-the-vm). Enter this amount in the **Assign Memory** pane, noting that you need to enter the value in MiB. 1 GiB = 1024 MiB. Using the **Dynamic Memory** feature is fine.
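As a hedged PowerShell equivalent of these Hyper-V Manager steps, sized from the table above (16 GiB of memory, 8 virtual cores); the VM name, switch name, and VHD path are placeholders, not values from the article:

```powershell
# Sketch: create a Generation 1 agent VM with 16 GiB of startup memory (16384 MiB)
# and 8 virtual cores. Adjust the names and the path to the extracted agent image.
New-VM -Name "StorageMoverAgent" `
       -Generation 1 `
       -MemoryStartupBytes 16GB `
       -SwitchName "AgentExternalSwitch" `
       -VHDPath "C:\VMs\StorageMoverAgent\agent.vhd"

Set-VMProcessor -VMName "StorageMoverAgent" -Count 8
```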
storage Blob Storage Monitoring Scenarios https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/blob-storage-monitoring-scenarios.md
You can find the friendly name of that security principal by taking the value of
### Auditing data plane operations
-Data plane operations are captured in [Azure resource logs for Storage](monitor-blob-storage.md#analyzing-logs). You can [configure Diagnostic setting](monitor-blob-storage.md#send-logs-to-azure-log-analytics) to export logs to Log Analytics workspace for a native query experience.
-Data plane operations are captured in [Azure resource logs for Storage](monitor-blob-storage.md#analyzing-logs). You can [configure a diagnostic setting](monitor-blob-storage.md#send-logs-to-azure-log-analytics) to export logs to a Log Analytics workspace for a native query experience.
Here's a Log Analytics query that retrieves the "when", "who", "what", and "how" information in a list of log entries.
StorageBlobLogs
For security reasons, SAS tokens don't appear in logs. However, the SHA-256 hash of the SAS token will appear in the `AuthenticationHash` field that is returned by this query.
-If you've distributed several SAS tokens, and you want to know which SAS tokens are being used, you'll have to convert each of your SAS tokens to a SHA-256 hash, and then compare that hash to the hash value that appears in logs.
+If you've distributed several SAS tokens, and you want to know which SAS tokens are being used, you'll have to convert each of your SAS tokens to an SHA-256 hash, and then compare that hash to the hash value that appears in logs.
First decode each SAS token string. The following example decodes a SAS token string by using PowerShell.
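The article's own sample isn't reproduced in this excerpt; the following is a minimal sketch of the decode-and-hash steps, assuming the token is URL-decoded and then hashed as a UTF-8 string to lowercase hex (verify the exact form against the `AuthenticationHash` values in your logs):

```powershell
# Sketch: URL-decode a SAS token string, then compute its SHA-256 hash as lowercase hex.
# The token value is a placeholder; compare $hashHex with the AuthenticationHash log field.
$sasToken = "sv=2021-04-10&ss=b&srt=sco&sp=rl&se=2023-01-01T00%3A00%3A00Z&sig=<signature>"
$decoded  = [uri]::UnescapeDataString($sasToken)

$sha256    = [System.Security.Cryptography.SHA256]::Create()
$hashBytes = $sha256.ComputeHash([System.Text.Encoding]::UTF8.GetBytes($decoded))
$hashHex   = -join ($hashBytes | ForEach-Object { $_.ToString("x2") })
$hashHex
```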
You can export logs to Log Analytics for rich native query capabilities. When yo
With Azure Synapse, you can create a serverless SQL pool to query log data when you need it. This can save costs significantly.
-1. Export logs to storage account. For more information, see [Creating a diagnostic setting](monitor-blob-storage.md#creating-a-diagnostic-setting).
+1. Export logs to storage account. For more information, see [Creating a diagnostic setting](../../azure-monitor/platform/diagnostic-settings.md).
2. Create and configure a Synapse workspace. For more information, see [Quickstart: Create a Synapse workspace](../../synapse-analytics/quickstart-create-workspace.md).
storage Monitor Blob Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/monitor-blob-storage.md
Previously updated : 05/06/2022 Last updated : 10/06/2022 - ms.devlang: csharp
When you have critical applications and business processes that rely on Azure resources, you want to monitor those resources for their availability, performance, and operation. This article describes the monitoring data that's generated by Azure Blob Storage and how you can use the features of Azure Monitor to analyze alerts on this data.
-## Monitor overview
+## Monitoring overview page in Azure portal
The **Overview** page in the Azure portal for each Blob storage resource includes a brief view of the resource usage, such as requests and hourly billing. This information is useful, but only a small amount of the monitoring data is available. Some of this data is collected automatically and is available for analysis as soon as you create the resource. You can enable additional types of data collection with some configuration. + ## What is Azure Monitor? Azure Blob Storage creates monitoring data by using [Azure Monitor](../../azure-monitor/overview.md), which is a full stack monitoring service in Azure. Azure Monitor provides a complete set of features to monitor your Azure resources and resources in other clouds and on-premises.
You can continue using classic metrics and logs if you want to. In fact, classic
## Collection and routing
-Platform metrics and the Activity log are collected automatically, but can be routed to other locations by using a diagnostic setting.
+Platform metrics and the Activity log are collected and stored automatically, but can be routed to other locations by using a diagnostic setting.
+
+Resource Logs are not collected and stored until you create a diagnostic setting and route them to one or more locations.
To collect resource logs, you must create a diagnostic setting. When you create the setting, choose **blob** as the type of storage that you want to enable logs for. Then, specify one of the following categories of operations for which you want to collect logs.
To collect resource logs, you must create a diagnostic setting. When you create
> [!NOTE] > Data Lake Storage Gen2 doesn't appear as a storage type. That's because Data Lake Storage Gen2 is a set of capabilities available to Blob storage.
-## Creating a diagnostic setting
-
-This section shows you how to create a diagnostic setting by using the Azure portal, PowerShell, and the Azure CLI. This section provides steps specific to Azure Storage. For general guidance about how to create a diagnostic setting, see [Create diagnostic setting to collect platform logs and metrics in Azure](../../azure-monitor/essentials/diagnostic-settings.md).
-
-> [!TIP]
-> You can also create a diagnostic setting by using an Azure Resource manager template or by using a policy definition. A policy definition can ensure that a diagnostic setting is created for every account that is created or updated.
->
-> This section doesn't describe templates or policy definitions.
->
-> - To view an Azure Resource Manager template that creates a diagnostic setting, see [Diagnostic setting for Azure Storage](../../azure-monitor/essentials/resource-manager-diagnostic-settings.md#diagnostic-setting-for-azure-storage).
->
-> - To learn how to create a diagnostic setting by using a policy definition, see [Azure Policy built-in definitions for Azure Storage](../common/policy-reference.md).
-
-### [Azure portal](#tab/azure-portal)
-
-1. Sign in to the Azure portal.
-
-2. Navigate to your storage account.
-
-3. In the **Monitoring** section, click **Diagnostic settings**.
-
- > [!div class="mx-imgBorder"]
- > ![portal - Diagnostics logs](media/monitor-blob-storage/diagnostic-logs-settings-pane.png)
-
-4. Choose **blob** as the type of storage that you want to enable logs for.
-
-5. Click **Add diagnostic setting**.
-
- The **Diagnostic settings** page appears.
-
- > [!div class="mx-imgBorder"]
- > ![Resource logs page](media/monitor-blob-storage/diagnostic-logs-page.png)
-
-6. In the **Name** field of the page, enter a name for this Resource log setting. Then, select which operations you want logged (read, write, and delete operations), and where you want the logs to be sent.
-
-#### Archive logs to a storage account
-
-If you choose to archive your logs to a storage account, you'll pay for the volume of logs that are sent to the storage account. For specific pricing, see the **Platform Logs** section of the [Azure Monitor pricing](https://azure.microsoft.com/pricing/details/monitor/#platform-logs) page. You can't send logs to the same storage account that you are monitoring with this setting. This would lead to recursive logs in which a log entry describes the writing of another log entry. You must create an account or use another existing account to store log information.
-
-1. Select the **Archive to a storage account** checkbox, and then select the **Configure** button.
-
-2. In the **Storage account** drop-down list, select the storage account that you want to archive your logs to, and then select the **Save** button.
-
- [!INCLUDE [no retention policy](../../../includes/azure-storage-logs-retention-policy.md)]
-
- > [!NOTE]
- > Before you choose a storage account as the export destination, see [Archive Azure resource logs](../../azure-monitor/essentials/resource-logs.md#send-to-azure-storage) to understand prerequisites on the storage account.
-
-#### Stream logs to Azure Event Hubs
-
-If you choose to stream your logs to an event hub, you'll pay for the volume of logs that are sent to the event hub. For specific pricing, see the **Platform Logs** section of the [Azure Monitor pricing](https://azure.microsoft.com/pricing/details/monitor/#platform-logs) page. You'll need access to an existing event hub, or you'll need to create one before you complete this step.
-
-1. Select the **Stream to an event hub** checkbox, and then select the **Configure** button.
-
-2. In the **Select an event hub** pane, choose the namespace, name, and policy name of the event hub that you want to stream your logs to.
-
-3. Select the **Save** button.
-
-#### Send logs to Azure Log Analytics
-
-1. Select the **Send to Log Analytics** checkbox, select a log analytics workspace, and then select the **Save** button. You'll need access to an existing log analytics workspace, or you'll need to create one before you complete this step.
--
-#### Send to a partner solution
-
-You can also send platform metrics and logs to certain Azure Monitor partners. You must first install a partner integration into your subscription. Configuration options will vary by partner. Check the [Azure Monitor partner integrations documentation](../../partner-solutions/overview.md) for details.
-
-### [PowerShell](#tab/azure-powershell)
-
-1. Open a Windows PowerShell command window, and sign in to your Azure subscription by using the `Connect-AzAccount` command. Then, follow the on-screen directions.
-
- ```powershell
- Connect-AzAccount
- ```
-
-2. Set your active subscription to subscription of the storage account that you want to enable logging for.
-
- ```powershell
- Set-AzContext -SubscriptionId <subscription-id>
- ```
-
-#### Archive logs to a storage account
-
-If you choose to archive your logs to a storage account, you'll pay for the volume of logs that are sent to the storage account. For specific pricing, see the **Platform Logs** section of the [Azure Monitor pricing](https://azure.microsoft.com/pricing/details/monitor/#platform-logs) page. You can't send logs to the same storage account that you are monitoring with this setting. This would lead to recursive logs in which a log entry describes the writing of another log entry. You must create an account or use another existing account to store log information.
-
-Enable logs by using the [Set-AzDiagnosticSetting](/powershell/module/az.monitor/set-azdiagnosticsetting) PowerShell cmdlet along with the `StorageAccountId` parameter.
-
-```powershell
-Set-AzDiagnosticSetting -ResourceId <storage-service-resource-id> -StorageAccountId <storage-account-resource-id> -Enabled $true -Category <operations-to-log>
-```
-
-Replace the `<storage-service-resource--id>` placeholder in this snippet with the resource ID of the blob service. You can find the resource ID in the Azure portal by opening the **Endpoints** page of your storage account.
-
-You can use `StorageRead`, `StorageWrite`, and `StorageDelete` for the value of the **Category** parameter.
--
-Here's an example:
-
-`Set-AzDiagnosticSetting -ResourceId /subscriptions/208841be-a4v3-4234-9450-08b90c09f4/resourceGroups/myresourcegroup/providers/Microsoft.Storage/storageAccounts/mystorageaccount/blobServices/default -StorageAccountId /subscriptions/208841be-a4v3-4234-9450-08b90c09f4/resourceGroups/myresourcegroup/providers/Microsoft.Storage/storageAccounts/myloggingstorageaccount -Enabled $true -Category StorageWrite,StorageDelete`
-
-For a description of each parameter, see the [Archive Azure Resource logs via Azure PowerShell](../../azure-monitor/essentials/resource-logs.md#send-to-azure-storage).
-
-#### Stream logs to an event hub
-
-If you choose to stream your logs to an event hub, you'll pay for the volume of logs that are sent to the event hub. For specific pricing, see the **Platform Logs** section of the [Azure Monitor pricing](https://azure.microsoft.com/pricing/details/monitor/#platform-logs) page. You'll need access to an existing event hub, or you'll need to create one before you complete this step.
-
-Enable logs by using the [Set-AzDiagnosticSetting](/powershell/module/az.monitor/set-azdiagnosticsetting) PowerShell cmdlet with the `EventHubAuthorizationRuleId` parameter.
-
-```powershell
-Set-AzDiagnosticSetting -ResourceId <storage-service-resource-id> -EventHubAuthorizationRuleId <event-hub-namespace-and-key-name> -Enabled $true -Category <operations-to-log>
-```
-
-Here's an example:
-
-`Set-AzDiagnosticSetting -ResourceId /subscriptions/208841be-a4v3-4234-9450-08b90c09f4/resourceGroups/myresourcegroup/providers/Microsoft.Storage/storageAccounts/mystorageaccount/blobServices/default -EventHubAuthorizationRuleId /subscriptions/20884142-a14v3-4234-5450-08b10c09f4/resourceGroups/myresourcegroup/providers/Microsoft.EventHub/namespaces/myeventhubnamespace/authorizationrules/RootManageSharedAccessKey -Enabled $true -Category StorageDelete`
-
-For a description of each parameter, see the [Stream Data to Event Hubs via PowerShell cmdlets](../../azure-monitor/essentials/resource-logs.md#send-to-azure-event-hubs).
-
-#### Send logs to Log Analytics
-
-Enable logs by using the [Set-AzDiagnosticSetting](/powershell/module/az.monitor/set-azdiagnosticsetting) PowerShell cmdlet with the `WorkspaceId` parameter. You'll need access to an existing log analytics workspace, or you'll need to create one before you complete this step.
+See [Create diagnostic setting to collect platform logs and metrics in Azure](../../azure-monitor/platform/diagnostic-settings.md) for the detailed process for creating a diagnostic setting using the Azure portal, CLI, and PowerShell. You can also find links to information about how to create a diagnostic setting by using an Azure Resource Manager template or an Azure Policy definition.
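As a hedged sketch of that process with the current Az.Monitor cmdlets (the resource IDs and setting name are placeholders; confirm parameter names against your installed module version):

```powershell
# Enable blob read/write/delete logs and route them to a Log Analytics workspace.
$blobServiceId = "/subscriptions/<subscription-id>/resourceGroups/<rg>/providers/Microsoft.Storage/storageAccounts/<account>/blobServices/default"
$workspaceId   = "/subscriptions/<subscription-id>/resourceGroups/<rg>/providers/Microsoft.OperationalInsights/workspaces/<workspace>"

$logs = @(
    New-AzDiagnosticSettingLogSettingsObject -Enabled $true -Category StorageRead
    New-AzDiagnosticSettingLogSettingsObject -Enabled $true -Category StorageWrite
    New-AzDiagnosticSettingLogSettingsObject -Enabled $true -Category StorageDelete
)

New-AzDiagnosticSetting -Name "blob-logs-to-workspace" `
    -ResourceId $blobServiceId `
    -WorkspaceId $workspaceId `
    -Log $logs
```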
-```powershell
-Set-AzDiagnosticSetting -ResourceId <storage-service-resource-id> -WorkspaceId <log-analytics-workspace-resource-id> -Enabled $true -Category <operations-to-log>
-```
--
-Here's an example:
-
-`Set-AzDiagnosticSetting -ResourceId /subscriptions/208841be-a4v3-4234-9450-08b90c09f4/resourceGroups/myresourcegroup/providers/Microsoft.Storage/storageAccounts/mystorageaccount/blobServices/default -WorkspaceId /subscriptions/208841be-a4v3-4234-9450-08b90c09f4/resourceGroups/myresourcegroup/providers/Microsoft.OperationalInsights/workspaces/my-analytic-workspace -Enabled $true -Category StorageDelete`
-
-For more information, see [Stream Azure Resource Logs to Log Analytics workspace in Azure Monitor](../../azure-monitor/essentials/resource-logs.md#send-to-log-analytics-workspace).
-
-#### Send to a partner solution
-
-You can also send platform metrics and logs to certain Azure Monitor partners. You must first install a partner integration into your subscription. Configuration options will vary by partner. Check the [Azure Monitor partner integrations documentation](../../partner-solutions/overview.md) for details.
-
-### [Azure CLI](#tab/azure-cli)
-
-1. First, open the [Azure Cloud Shell](../../cloud-shell/overview.md), or if you've [installed](/cli/azure/install-azure-cli) the Azure CLI locally, open a command console application such as Windows PowerShell.
-
-2. If your identity is associated with more than one subscription, then set your active subscription to subscription of the storage account that you want to enable logs for.
-
- ```azurecli
- az account set --subscription <subscription-id>
- ```
-
- Replace the `<subscription-id>` placeholder value with the ID of your subscription.
-
-#### Archive logs to a storage account
-
-If you choose to archive your logs to a storage account, you'll pay for the volume of logs that are sent to the storage account. For specific pricing, see the **Platform Logs** section of the [Azure Monitor pricing](https://azure.microsoft.com/pricing/details/monitor/#platform-logs) page. You can't send logs to the same storage account that you are monitoring with this setting. This would lead to recursive logs in which a log entry describes the writing of another log entry. You must create an account or use another existing account to store log information.
-
-Enable logs by using the [az monitor diagnostic-settings create](/cli/azure/monitor/diagnostic-settings#az-monitor-diagnostic-settings-create) command.
-
-```azurecli-interactive
-az monitor diagnostic-settings create --name <setting-name> --storage-account <storage-account-name> --resource <storage-service-resource-id> --resource-group <resource-group> --logs '[{"category": <operations>, "enabled": true }]'
-```
-
-Replace the `<storage-service-resource--id>` placeholder in this snippet with the resource ID Blob storage service. You can find the resource ID in the Azure portal by opening the **Endpoints** page of your storage account.
-
-You can use `StorageRead`, `StorageWrite`, and `StorageDelete` for the value of the **category** parameter.
--
-Here's an example:
-
-`az monitor diagnostic-settings create --name setting1 --storage-account mystorageaccount --resource /subscriptions/938841be-a40c-4bf4-9210-08bcf06c09f9/resourceGroups/myresourcegroup/providers/Microsoft.Storage/storageAccounts/myloggingstorageaccount/blobServices/default --resource-group myresourcegroup --logs '[{"category": StorageWrite}]'`
-
-For a description of each parameter, see the [Archive Resource logs via the Azure CLI](../../azure-monitor/essentials/resource-logs.md#send-to-azure-storage).
-
-#### Stream logs to an event hub
-
-If you choose to stream your logs to an event hub, you'll pay for the volume of logs that are sent to the event hub. For specific pricing, see the **Platform Logs** section of the [Azure Monitor pricing](https://azure.microsoft.com/pricing/details/monitor/#platform-logs) page. You'll need access to an existing event hub, or you'll need to create one before you complete this step.
-
-Enable logs by using the [az monitor diagnostic-settings create](/cli/azure/monitor/diagnostic-settings#az-monitor-diagnostic-settings-create) command.
-
-```azurecli-interactive
-az monitor diagnostic-settings create --name <setting-name> --event-hub <event-hub-name> --event-hub-rule <event-hub-namespace-and-key-name> --resource <storage-account-resource-id> --logs '[{"category": <operations>, "enabled": true}]'
-```
-
-Here's an example:
+## Destination limitations
-`az monitor diagnostic-settings create --name setting1 --event-hub myeventhub --event-hub-rule /subscriptions/938841be-a40c-4bf4-9210-08bcf06c09f9/resourceGroups/myresourcegroup/providers/Microsoft.EventHub/namespaces/myeventhubnamespace/authorizationrules/RootManageSharedAccessKey --resource /subscriptions/938841be-a40c-4bf4-9210-08bcf06c09f9/resourceGroups/myresourcegroup/providers/Microsoft.Storage/storageAccounts/myloggingstorageaccount/blobServices/default --logs '[{"category": StorageDelete, "enabled": true }]'`
+For general destination limitations, see [Destination limitations](../../azure-monitor/essentials/diagnostic-settings.md#destination-limitations). The following limitations apply only to monitoring Azure Storage accounts.
-For a description of each parameter, see the [Stream data to Event Hubs via Azure CLI](../../azure-monitor/essentials/resource-logs.md#send-to-azure-event-hubs).
+- You can't send logs to the same storage account that you are monitoring with this setting.
-#### Send logs to Log Analytics
+ This would lead to recursive logs in which a log entry describes the writing of another log entry. You must create an account or use another existing account to store log information.
-Enable logs by using the [az monitor diagnostic-settings create](/cli/azure/monitor/diagnostic-settings#az-monitor-diagnostic-settings-create) command. You'll need access to an existing log analytics workspace, or you'll need to create one before you complete this step.
+- You can't set a retention policy.
-```azurecli-interactive
-az monitor diagnostic-settings create --name <setting-name> --workspace <log-analytics-workspace-resource-id> --resource <storage-account-resource-id> --logs '[{"category": <category name>, "enabled": true}]'
-```
--
-Here's an example:
-
-`az monitor diagnostic-settings create --name setting1 --workspace /subscriptions/208841be-a4v3-4234-9450-08b90c09f4/resourceGroups/myresourcegroup/providers/Microsoft.OperationalInsights/workspaces/my-analytic-workspace --resource /subscriptions/938841be-a40c-4bf4-9210-08bcf06c09f9/resourceGroups/myresourcegroup/providers/Microsoft.Storage/storageAccounts/myloggingstorageaccount/blobServices/default --logs '[{"category": StorageDelete, "enabled": true ]'`
+ If you archive logs to a storage account, you can manage the retention policy of a log container by defining a lifecycle management policy. To learn how, see [Optimize costs by automating Azure Blob Storage access tiers](lifecycle-management-overview.md).
- For more information, see [Stream Azure Resource Logs to Log Analytics workspace in Azure Monitor](../../azure-monitor/essentials/resource-logs.md#send-to-log-analytics-workspace).
-
-#### Send to a partner solution
-
-You can also send platform metrics and logs to certain Azure Monitor partners. You must first install a partner integration into your subscription. Configuration options will vary by partner. Check the [Azure Monitor partner integrations documentation](../../partner-solutions/overview.md) for details.
--
+ If you send logs to Log Analytics, you can manage the data retention period of Log Analytics at the workspace level or even specify different retention settings by data type. To learn how, see [Change the data retention period](/azure/azure-monitor/logs/data-retention-archive).
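  For example, a minimal sketch of setting the workspace-level default retention with PowerShell (the resource group and workspace names, and the 90-day value, are assumptions):

  ```powershell
  # Set the default data retention for a Log Analytics workspace to 90 days.
  Set-AzOperationalInsightsWorkspace `
      -ResourceGroupName "<resource-group>" `
      -Name "<workspace-name>" `
      -RetentionInDays 90
  ```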
## Analyzing metrics
When a metric supports dimensions, you can read metric values and filter them by
az monitor metrics list --resource <resource-ID> --metric "Transactions" --interval PT1H --filter "ApiName eq 'GetBlob' " --aggregation "Total" ``` --
-## Analyze metrics by using code
+### [.NET](#tab/dotnet)
Azure Monitor provides the [.NET SDK](https://www.nuget.org/packages/Microsoft.Azure.Management.Monitor/) to read metric definition and values. The [sample code](https://azure.microsoft.com/resources/samples/monitor-dotnet-metrics-api/) shows how to use the SDK with different parameters. You need to use `0.18.0-preview` or a later version for storage metrics.
The following example shows how to read metric data on the metric supporting mul
``` ++ ## Analyzing logs
-You can access resource logs either as a blob in a storage account, as event data, or through Log Analytic queries.
+You can access resource logs either as a blob in a storage account, as event data, or through Log Analytics queries. For information about how to find those logs, see [Azure resource logs](/azure/azure-monitor/essentials/resource-logs).
-For a detailed reference of the fields that appear in these logs, see [Azure Blob Storage monitoring data reference](monitor-blob-storage-reference.md).
+All resource logs in Azure Monitor have the same fields followed by service-specific fields. The common schema is outlined in [Azure Monitor resource log schema](/azure/azure-monitor/essentials/resource-logs-schema). The schema for Azure Blob Storage resource logs is found in [Azure Blob Storage monitoring data reference](monitor-blob-storage-reference.md).
-Log entries are created only if there are requests made against the service endpoint. For example, if a storage account has activity in its blob endpoint but not in its table or queue endpoints, only logs that pertain to the blob service are created. Azure Storage logs contain detailed information about successful and failed requests to a storage service. This information can be used to monitor individual requests and to diagnose issues with a storage service. Requests are logged on a best-effort basis.
+To get the list of REST operations that are logged, see [Storage logged operations and status messages](/rest/api/storageservices/storage-analytics-logged-operations-and-status-messages).
+
+Log entries are created only if there are requests made against the service endpoint. For example, if a storage account has activity in its blob endpoint but not in its table or queue endpoints, only logs that pertain to the Azure Blob Storage service are created. Azure Storage logs contain detailed information about successful and failed requests to a storage service. This information can be used to monitor individual requests and to diagnose issues with a storage service. Requests are logged on a best-effort basis.
+
+The [Activity log](/azure/azure-monitor/essentials/activity-log) is a type of platform log located in Azure that provides insight into subscription-level events. You can view it independently or route it to Azure Monitor Logs, where you can do much more complex queries using Log Analytics.
### Log authenticated requests The following types of authenticated requests are logged: - Successful requests-- Failed requests, including timeout, throttling, network, authorization, and other errors
+- Failed requests, including time-out, throttling, network, authorization, and other errors
- Requests that use a shared access signature (SAS) or OAuth, including failed and successful requests - Requests to analytics data (classic log data in the **$logs** container and class metric data in the **$metric** tables)
Requests made by the Blob storage service itself, such as log creation or deleti
- Successful requests - Server errors-- Timeout errors for both client and server
+- Time out errors for both client and server
- Failed GET requests with the error code 304 (Not Modified) All other failed anonymous requests aren't logged. For a full list of the logged data, see [Storage logged operations and status messages](/rest/api/storageservices/storage-analytics-logged-operations-and-status-messages) and [Storage log format](monitor-blob-storage-reference.md).
-### Accessing logs in a storage account
-
-Logs appear as blobs stored to a container in the target storage account. Data is collected and stored inside a single blob as a line-delimited JSON payload. The name of the blob follows this naming convention:
+### Sample Kusto queries
-`https://<destination-storage-account>.blob.core.windows.net/insights-logs-<storage-operation>/resourceId=/subscriptions/<subscription-ID>/resourceGroups/<resource-group-name>/providers/Microsoft.Storage/storageAccounts/<source-storage-account>/blobServices/default/y=<year>/m=<month>/d=<day>/h=<hour>/m=<minute>/PT1H.json`
-
-Here's an example:
-
-`https://mylogstorageaccount.blob.core.windows.net/insights-logs-storagewrite/resourceId=/subscriptions/`<br>`208841be-a4v3-4234-9450-08b90c09f4/resourceGroups/myresourcegroup/providers/Microsoft.Storage/storageAccounts/mystorageaccount/blobServices/default/y=2019/m=07/d=30/h=23/m=12/PT1H.json`
-
-### Accessing logs in an event hub
-
-Logs sent to an event hub aren't stored as a file, but you can verify that the event hub received the log information. In the Azure portal, go to your event hub and verify that the **incoming messages** count is greater than zero.
-
-![Audit logs](media/monitor-blob-storage/event-hub-log.png)
-
-You can access and read log data that's sent to your event hub by using security information and event management and monitoring tools. For more information, see [What can I do with the monitoring data being sent to my event hub?](../../azure-monitor/essentials/stream-monitoring-data-event-hubs.md).
-
-### Accessing logs in a Log Analytics workspace
-
-You can access logs sent to a Log Analytics workspace by using Azure Monitor log queries.
-
-For more information, see [Get started with Log Analytics in Azure Monitor](../../azure-monitor/logs/log-analytics-tutorial.md).
-
-Data is stored in the **StorageBlobLog** table. Logs for Data Lake Storage Gen2 do not appear in a dedicated table. That's because Data Lake Storage Gen2 is not service. It's a set of capabilities that you can enable in your storage account. If you've enabled those capabilities, logs will continue to appear in the StorageBlobLog table.
-
-#### Sample Kusto queries
+If you send logs to Log Analytics, you can access those logs by using Azure Monitor log queries. For more information, see [Log Analytics tutorial](/azure/azure-monitor/logs/log-analytics-tutorial).
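If you prefer to query from a script rather than the portal, here is a hedged sketch using the Az.OperationalInsights module (the workspace ID and query are placeholders; the query uses the `StorageBlobLogs` table referenced elsewhere in this article):

```powershell
# Run a Kusto query against the Log Analytics workspace that receives the logs.
# The workspace ID is the workspace (customer) GUID, not the ARM resource ID.
$response = Invoke-AzOperationalInsightsQuery `
    -WorkspaceId "<workspace-customer-id>" `
    -Query "StorageBlobLogs | where TimeGenerated > ago(1d) | summarize count() by OperationName"

$response.Results | Format-Table
```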
Here are some queries that you can enter in the **Log search** bar to help you monitor your Blob storage. These queries work with the [new language](../../azure-monitor/logs/log-query-overview.md).
Use these queries to help you monitor your Azure Storage accounts:
| render piechart ```
+## Alerts
+
+Azure Monitor alerts proactively notify you when important conditions are found in your monitoring data. They allow you to identify and address issues in your system before your customers notice them. You can set alerts on [metrics](../../azure-monitor/alerts/alerts-metric-overview.md), [logs](../../azure-monitor/alerts/alerts-unified-log.md), and the [activity log](../../azure-monitor/alerts/activity-log-alerts.md).
+
+The following table lists some example scenarios to monitor and the proper metric to use for the alert:
+
+| Scenario | Metric to use for alert |
+|-|-|
+| Blob Storage service is throttled. | Metric: Transactions<br>Dimension name: Response type |
+| Blob Storage requests are successful 99% of the time. | Metric: Availability<br>Dimension names: Geo type, API name, Authentication |
+| Blob Storage egress has exceeded 500 GiB in one day. | Metric: Egress<br>Dimension names: Geo type, API name, Authentication |
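As a hedged sketch of how the first scenario in the table could be turned into a metric alert with PowerShell (the rule name, resource IDs, threshold, window, and the response-type values used to represent throttling are assumptions for illustration):

```powershell
# Alert when throttled transactions are observed on the blob service.
$blobServiceId = "/subscriptions/<subscription-id>/resourceGroups/<rg>/providers/Microsoft.Storage/storageAccounts/<account>/blobServices/default"

$dimension = New-AzMetricAlertRuleV2DimensionSelection `
    -DimensionName "ResponseType" `
    -ValuesToInclude "ServerBusyError", "ClientThrottlingError"

$criteria = New-AzMetricAlertRuleV2Criteria `
    -MetricName "Transactions" `
    -MetricNamespace "Microsoft.Storage/storageAccounts/blobServices" `
    -DimensionSelection $dimension `
    -TimeAggregation Total `
    -Operator GreaterThan `
    -Threshold 0

Add-AzMetricAlertRuleV2 `
    -Name "blob-throttling-alert" `
    -ResourceGroupName "<resource-group>" `
    -TargetResourceId $blobServiceId `
    -Condition $criteria `
    -WindowSize (New-TimeSpan -Hours 1) `
    -Frequency (New-TimeSpan -Minutes 5) `
    -Severity 2
```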
+ ## Feature support [!INCLUDE [Blob Storage feature support in Azure Storage accounts](../../../includes/azure-storage-feature-support.md)]
storage Secure File Transfer Protocol Known Issues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/secure-file-transfer-protocol-known-issues.md
The unsupported client list above is not exhaustive and may change over time.
## Client settings
-To transfer files to or from Azure storage via client applications, see the following recommended client settings.
+To transfer files to or from Azure Blob Storage via SFTP clients, see the following recommended settings.
- WinSCP - Under the **Preferences** dialog, under **Transfer** - **Endurance**, select **Disable** to disable the **Enable transfer resume/transfer to temporary filename** option.
- > [!CAUTION]
- > Leaving this option enabled can cause failures or degraded performance during large file uploads.
+> [!CAUTION]
+> Leaving this option enabled can cause failures or degraded performance during large file uploads.
## Unsupported operations
To learn more, see [SFTP permission model](secure-file-transfer-protocol-support
- For performance issues and considerations, see [SSH File Transfer Protocol (SFTP) performance considerations in Azure Blob storage](secure-file-transfer-protocol-performance.md).
+- Maximum file upload size via the SFTP endpoint is 100 GB.
+
- Special containers such as $logs, $blobchangefeed, $root, $web aren't accessible via the SFTP endpoint. - Symbolic links aren't supported.
storage Infrastructure Encryption Enable https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/infrastructure-encryption-enable.md
Previously updated : 02/15/2022 Last updated : 10/19/2022
To use the Azure portal to create a storage account with infrastructure encrypti
1. In the Azure portal, navigate to the **Storage accounts** page. 1. Choose the **Add** button to add a new general-purpose v2 or premium block blob storage account.
-1. On the **Advanced** tab, locate **Infrastructure** encryption, and select **Enabled**.
+1. On the **Encryption** tab, locate **Enable infrastructure encryption**, and select **Enabled**.
1. Select **Review + create** to finish creating the storage account. :::image type="content" source="media/infrastructure-encryption-enable/create-account-infrastructure-encryption-portal.png" alt-text="Screenshot showing how to enable infrastructure encryption when creating account":::
storage Sas Expiration Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/sas-expiration-policy.md
The SAS expiration period appears in the console output.
## Query logs for policy violations
-To log the creation of a SAS that is valid over a longer interval than the SAS expiration policy recommends, first create a diagnostic setting that sends logs to an Azure Log Analytics workspace. For more information, see [Send logs to Azure Log Analytics](../blobs/monitor-blob-storage.md#send-logs-to-azure-log-analytics).
+To log the creation of a SAS that is valid over a longer interval than the SAS expiration policy recommends, first create a diagnostic setting that sends logs to an Azure Log Analytics workspace. For more information, see [Send logs to Azure Log Analytics](../../azure-monitor/platform/diagnostic-settings.md).
Next, use an Azure Monitor log query to monitor whether policy has been violated. Create a new query in your Log Analytics workspace, add the following query text, and press **Run**.
Follow these steps to assign the built-in policy to the appropriate scope in the
To monitor your storage accounts for compliance with the key expiration policy, follow these steps:
-1. On the Azure Policy dashboard, locate the built-in policy definition for the scope that you specified in the policy assignment. You can search for *Storage accounts should have shared access signature (SAS) policies configured* in the **Search** box to filter for the built-in policy.
+1. On the Azure Policy dashboard, locate the built-in policy definition for the scope that you specified in the policy assignment. You can search for `Storage accounts should have shared access signature (SAS) policies configured` in the **Search** box to filter for the built-in policy.
1. Select the policy name with the desired scope. 1. On the **Policy assignment** page for the built-in policy, select **View compliance**. Any storage accounts in the specified subscription and resource group that do not meet the policy requirements appear in the compliance report.
storage Storage Files Monitoring https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-monitoring.md
Previously updated : 05/06/2022 Last updated : 10/06/2022 -
To collect resource logs, you must create a diagnostic setting. When you create
To get the list of SMB and REST operations that are logged, see [Storage logged operations and status messages](/rest/api/storageservices/storage-analytics-logged-operations-and-status-messages) and [Azure Files monitoring data reference](storage-files-monitoring-reference.md).
-## Creating a diagnostic setting
+See [Create diagnostic setting to collect platform logs and metrics in Azure](../../azure-monitor/platform/diagnostic-settings.md) for the detailed process for creating a diagnostic setting using the Azure portal, CLI, and PowerShell. You can also find links to information about how to create a diagnostic setting by using an Azure Resource Manager template or an Azure Policy definition.
-This section shows you how to create a diagnostic setting by using the Azure portal, PowerShell, and the Azure CLI. This section provides steps specific to Azure Storage. For general guidance about how to create a diagnostic setting, see [Create diagnostic setting to collect platform logs and metrics in Azure](../../azure-monitor/essentials/diagnostic-settings.md).
+## Destination limitations
-> [!TIP]
-> You can also create a diagnostic setting by using an Azure Resource manager template or by using a policy definition. A policy definition can ensure that a diagnostic setting is created for every account that is created or updated.
->
-> This section doesn't describe templates or policy definitions.
->
-> - To view an Azure Resource Manager template that creates a diagnostic setting, see [Diagnostic setting for Azure Storage](../../azure-monitor/essentials/resource-manager-diagnostic-settings.md#diagnostic-setting-for-azure-storage).
->
-> - To learn how to create a diagnostic setting by using a policy definition, see [Azure Policy built-in definitions for Azure Storage](../common/policy-reference.md).
+For general destination limitations, see [Destination limitations](../../azure-monitor/essentials/diagnostic-settings.md#destination-limitations). The following limitations apply only to monitoring Azure Storage accounts.
-### [Azure portal](#tab/azure-portal)
-
-1. Sign in to the Azure portal.
-
-2. Navigate to your storage account.
-
-3. In the **Monitoring** section, click **Diagnostic settings**.
-
- > [!div class="mx-imgBorder"]
- > ![portal - Diagnostics logs](media/storage-files-monitoring/diagnostic-logs-settings-pane.png)
-
-4. Choose **file** as the type of storage that you want to enable logs for.
-
-5. Click **Add diagnostic setting**.
-
- The **Diagnostic settings** page appears.
-
- > [!div class="mx-imgBorder"]
- > ![Resource logs page](media/storage-files-monitoring/diagnostic-logs-page.png)
-
-6. In the **Name** field of the page, enter a name for this Resource log setting. Then, select which operations you want logged (read, write, and delete operations), and where you want the logs to be sent.
-
-#### Archive logs to a storage account
-
-If you choose to archive your logs to a storage account, you'll pay for the volume of logs that are sent to the storage account. For specific pricing, see the **Platform Logs** section of the [Azure Monitor pricing](https://azure.microsoft.com/pricing/details/monitor/#platform-logs) page. You can't send logs to the same storage account that you are monitoring with this setting. This would lead to recursive logs in which a log entry describes the writing of another log entry. You must create an account or use another existing account to store log information.
-
-1. Select the **Archive to a storage account** checkbox, and then click the **Configure** button.
-
-2. In the **Storage account** drop-down list, select the storage account that you want to archive your logs to, click the **OK** button, and then click the **Save** button.
-
- [!INCLUDE [no retention policy](../../../includes/azure-storage-logs-retention-policy.md)]
-
- > [!NOTE]
- > Before you choose a storage account as the export destination, see [Archive Azure resource logs](../../azure-monitor/essentials/resource-logs.md#send-to-azure-storage) to understand prerequisites on the storage account.
-
-#### Stream logs to Azure Event Hubs
-
-If you choose to stream your logs to an event hub, you'll pay for the volume of logs that are sent to the event hub. For specific pricing, see the **Platform Logs** section of the [Azure Monitor pricing](https://azure.microsoft.com/pricing/details/monitor/#platform-logs) page. You'll need access to an existing event hub, or you'll need to create one before you complete this step.
-
-1. Select the **Stream to an event hub** checkbox, and then click the **Configure** button.
-
-2. In the **Select an event hub** pane, choose the namespace, name, and policy name of the event hub that you want to stream your logs to.
-
-3. Click the **OK** button, and then click the **Save** button.
-
-#### Send logs to Azure Log Analytics
-
-1. Select the **Send to Log Analytics** checkbox, select a log analytics workspace, and then click the **Save** button. You'll need access to an existing log analytics workspace, or you'll need to create one before you complete this step.
--
-#### Send to a partner solution
-
-You can also send platform metrics and logs to certain Azure Monitor partners. You must first install a partner integration into your subscription. Configuration options will vary by partner. Check the [Azure Monitor partner integrations documentation](../../partner-solutions/overview.md) for details.
-
-### [PowerShell](#tab/azure-powershell)
-
-1. Open a Windows PowerShell command window, and sign in to your Azure subscription by using the `Connect-AzAccount` command. Then, follow the on-screen directions.
-
- ```powershell
- Connect-AzAccount
- ```
-
-2. Set your active subscription to subscription of the storage account that you want to enable logging for.
-
- ```powershell
- Set-AzContext -SubscriptionId <subscription-id>
- ```
-
-#### Archive logs to a storage account
-
-If you choose to archive your logs to a storage account, you'll pay for the volume of logs that are sent to the storage account. For specific pricing, see the **Platform Logs** section of the [Azure Monitor pricing](https://azure.microsoft.com/pricing/details/monitor/#platform-logs) page. You can't send logs to the same storage account that you are monitoring with this setting. This would lead to recursive logs in which a log entry describes the writing of another log entry. You must create an account or use another existing account to store log information.
-
-Enable logs by using the [Set-AzDiagnosticSetting](/powershell/module/az.monitor/set-azdiagnosticsetting) PowerShell cmdlet along with the `StorageAccountId` parameter.
-
-```powershell
-Set-AzDiagnosticSetting -ResourceId <storage-service-resource-id> -StorageAccountId <storage-account-resource-id> -Enabled $true -Category <operations-to-log>
-```
-
-Replace the `<storage-service-resource--id>` placeholder in this snippet with the resource ID of the Azure File service. You can find the resource ID in the Azure portal by opening the **Properties** page of your storage account.
-
-You can use `StorageRead`, `StorageWrite`, and `StorageDelete` for the value of the **Category** parameter.
--
-Here's an example:
-
-`Set-AzDiagnosticSetting -ResourceId /subscriptions/208841be-a4v3-4234-9450-08b90c09f4/resourceGroups/myresourcegroup/providers/Microsoft.Storage/storageAccounts/mystorageaccount/fileServices/default -StorageAccountId /subscriptions/208841be-a4v3-4234-9450-08b90c09f4/resourceGroups/myresourcegroup/providers/Microsoft.Storage/storageAccounts/myloggingstorageaccount -Enabled $true -Category StorageWrite,StorageDelete`
-
-For a description of each parameter, see the [Archive Azure Resource logs via Azure PowerShell](../../azure-monitor/essentials/resource-logs.md#send-to-azure-storage).
-
-#### Stream logs to an event hub
-
-If you choose to stream your logs to an event hub, you'll pay for the volume of logs that are sent to the event hub. For specific pricing, see the **Platform Logs** section of the [Azure Monitor pricing](https://azure.microsoft.com/pricing/details/monitor/#platform-logs) page. You'll need access to an existing event hub, or you'll need to create one before you complete this step.
-
-Enable logs by using the [Set-AzDiagnosticSetting](/powershell/module/az.monitor/set-azdiagnosticsetting) PowerShell cmdlet with the `EventHubAuthorizationRuleId` parameter.
-
-```powershell
-Set-AzDiagnosticSetting -ResourceId <storage-service-resource-id> -EventHubAuthorizationRuleId <event-hub-namespace-and-key-name> -Enabled $true -Category <operations-to-log>
-```
-
-Here's an example:
-
-`Set-AzDiagnosticSetting -ResourceId /subscriptions/208841be-a4v3-4234-9450-08b90c09f4/resourceGroups/myresourcegroup/providers/Microsoft.Storage/storageAccounts/mystorageaccount/fileServices/default -EventHubAuthorizationRuleId /subscriptions/20884142-a14v3-4234-5450-08b10c09f4/resourceGroups/myresourcegroup/providers/Microsoft.EventHub/namespaces/myeventhubnamespace/authorizationrules/RootManageSharedAccessKey -Enabled $true -Category StorageDelete`
-
-For a description of each parameter, see the [Stream Data to Event Hubs via PowerShell cmdlets](../../azure-monitor/essentials/resource-logs.md#send-to-azure-event-hubs).
-
-#### Send logs to Log Analytics
-
-Enable logs by using the [Set-AzDiagnosticSetting](/powershell/module/az.monitor/set-azdiagnosticsetting) PowerShell cmdlet with the `WorkspaceId` parameter. You'll need access to an existing log analytics workspace, or you'll need to create one before you complete this step.
-
-```powershell
-Set-AzDiagnosticSetting -ResourceId <storage-service-resource-id> -WorkspaceId <log-analytics-workspace-resource-id> -Enabled $true -Category <operations-to-log>
-```
-
+- You can't send logs to the same storage account that you're monitoring with this setting.
-Here's an example:
+ This would lead to recursive logs in which a log entry describes the writing of another log entry. You must create an account or use another existing account to store log information.
-`Set-AzDiagnosticSetting -ResourceId /subscriptions/208841be-a4v3-4234-9450-08b90c09f4/resourceGroups/myresourcegroup/providers/Microsoft.Storage/storageAccounts/mystorageaccount/fileServices/default -WorkspaceId /subscriptions/208841be-a4v3-4234-9450-08b90c09f4/resourceGroups/myresourcegroup/providers/Microsoft.OperationalInsights/workspaces/my-analytic-workspace -Enabled $true -Category StorageDelete`
+- You can't set a retention policy.
-For more information, see [Stream Azure Resource Logs to Log Analytics workspace in Azure Monitor](../../azure-monitor/essentials/resource-logs.md#send-to-log-analytics-workspace).
+ If you archive logs to a storage account, you can manage the retention policy of a log container by defining a lifecycle management policy. To learn how, see [Optimize costs by automating Azure Blob Storage access tiers](../blobs/lifecycle-management-overview.md).
-#### Send to a partner solution
-
-You can also send platform metrics and logs to certain Azure Monitor partners. You must first install a partner integration into your subscription. Configuration options will vary by partner. Check the [Azure Monitor partner integrations documentation](../../partner-solutions/overview.md) for details.
-
-### [Azure CLI](#tab/azure-cli)
-
-1. First, open the [Azure Cloud Shell](../../cloud-shell/overview.md), or if you've [installed](/cli/azure/install-azure-cli) the Azure CLI locally, open a command console application such as Windows PowerShell.
-
-2. If your identity is associated with more than one subscription, then set your active subscription to subscription of the storage account that you want to enable logs for.
-
- ```azurecli-interactive
- az account set --subscription <subscription-id>
- ```
-
- Replace the `<subscription-id>` placeholder value with the ID of your subscription.
-
-#### Archive logs to a storage account
-
-If you choose to archive your logs to a storage account, you'll pay for the volume of logs that are sent to the storage account. For specific pricing, see the **Platform Logs** section of the [Azure Monitor pricing](https://azure.microsoft.com/pricing/details/monitor/#platform-logs) page. You can't send logs to the same storage account that you are monitoring with this setting. This would lead to recursive logs in which a log entry describes the writing of another log entry. You must create an account or use another existing account to store log information.
-
-Enable logs by using the [az monitor diagnostic-settings create](/cli/azure/monitor/diagnostic-settings#az-monitor-diagnostic-settings-create) command.
-
-```azurecli-interactive
-az monitor diagnostic-settings create --name <setting-name> --storage-account <storage-account-name> --resource <storage-service-resource-id> --resource-group <resource-group> --logs '[{"category": <operations>, "enabled": true}]'
-```
-
-Replace the `<storage-service-resource--id>` placeholder in this snippet with the resource ID Blob storage service. You can find the resource ID in the Azure portal by opening the **Properties** page of your storage account.
-
-You can use `StorageRead`, `StorageWrite`, and `StorageDelete` for the value of the **category** parameter.
--
-Here's an example:
-
-`az monitor diagnostic-settings create --name setting1 --storage-account mystorageaccount --resource /subscriptions/938841be-a40c-4bf4-9210-08bcf06c09f9/resourceGroups/myresourcegroup/providers/Microsoft.Storage/storageAccounts/myloggingstorageaccount/fileServices/default --resource-group myresourcegroup --logs '[{"category": StorageWrite, "enabled": true}]'`
-
-For a description of each parameter, see the [Archive Resource logs via the Azure CLI](../../azure-monitor/essentials/resource-logs.md#send-to-azure-storage).
-
-#### Stream logs to an event hub
-
-If you choose to stream your logs to an event hub, you'll pay for the volume of logs that are sent to the event hub. For specific pricing, see the **Platform Logs** section of the [Azure Monitor pricing](https://azure.microsoft.com/pricing/details/monitor/#platform-logs) page. You'll need access to an existing event hub, or you'll need to create one before you complete this step.
-
-Enable logs by using the [az monitor diagnostic-settings create](/cli/azure/monitor/diagnostic-settings#az-monitor-diagnostic-settings-create) command.
-
-```azurecli-interactive
-az monitor diagnostic-settings create --name <setting-name> --event-hub <event-hub-name> --event-hub-rule <event-hub-namespace-and-key-name> --resource <storage-account-resource-id> --logs '[{"category": <operations>, "enabled": true}]'
-```
-
-Here's an example:
-
-`az monitor diagnostic-settings create --name setting1 --event-hub myeventhub --event-hub-rule /subscriptions/938841be-a40c-4bf4-9210-08bcf06c09f9/resourceGroups/myresourcegroup/providers/Microsoft.EventHub/namespaces/myeventhubnamespace/authorizationrules/RootManageSharedAccessKey --resource /subscriptions/938841be-a40c-4bf4-9210-08bcf06c09f9/resourceGroups/myresourcegroup/providers/Microsoft.Storage/storageAccounts/myloggingstorageaccount/fileServices/default --logs '[{"category": StorageDelete, "enabled": true }]'`
-
-For a description of each parameter, see the [Stream data to Event Hubs via Azure CLI](../../azure-monitor/essentials/resource-logs.md#send-to-azure-event-hubs).
-
-#### Send logs to Log Analytics
-
-Enable logs by using the [az monitor diagnostic-settings create](/cli/azure/monitor/diagnostic-settings#az-monitor-diagnostic-settings-create) command. You'll need access to an existing log analytics workspace, or you'll need to create one before you complete this step.
-
-```azurecli-interactive
-az monitor diagnostic-settings create --name <setting-name> --workspace <log-analytics-workspace-resource-id> --resource <storage-account-resource-id> --logs '[{"category": <category name>, "enabled": true}]'
-```
--
-Here's an example:
-
-`az monitor diagnostic-settings create --name setting1 --workspace /subscriptions/208841be-a4v3-4234-9450-08b90c09f4/resourceGroups/myresourcegroup/providers/Microsoft.OperationalInsights/workspaces/my-analytic-workspace --resource /subscriptions/938841be-a40c-4bf4-9210-08bcf06c09f9/resourceGroups/myresourcegroup/providers/Microsoft.Storage/storageAccounts/myloggingstorageaccount/fileServices/default --logs '[{"category": StorageDelete, "enabled": true ]'`
-
- For more information, see [Stream Azure Resource Logs to Log Analytics workspace in Azure Monitor](../../azure-monitor/essentials/resource-logs.md#send-to-log-analytics-workspace).
-
-#### Send to a partner solution
-
-You can also send platform metrics and logs to certain Azure Monitor partners. You must first install a partner integration into your subscription. Configuration options will vary by partner. Check the [Azure Monitor partner integrations documentation](../../partner-solutions/overview.md) for details.
--
+ If you send logs to Log Analytics, you can manage the data retention period of Log Analytics at the workspace level or even specify different retention settings by data type. To learn how, see [Change the data retention period](/azure/azure-monitor/logs/data-retention-archive).
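As an illustration of the lifecycle management approach, here is a minimal Azure CLI sketch that applies a policy deleting archived log blobs 90 days after they were last modified. The account name, resource group, `insights-logs-` container prefix, and 90-day window are assumptions; adjust them to match where your diagnostic setting actually writes its logs.

```azurecli-interactive
# Hypothetical names: mylogstorageaccount and myresourcegroup are placeholders.
# Resource logs are written as append blobs under containers that start with insights-logs-.
az storage account management-policy create \
    --account-name mylogstorageaccount \
    --resource-group myresourcegroup \
    --policy '{
      "rules": [
        {
          "enabled": true,
          "name": "expire-resource-logs",
          "type": "Lifecycle",
          "definition": {
            "filters": { "blobTypes": [ "appendBlob" ], "prefixMatch": [ "insights-logs-" ] },
            "actions": { "baseBlob": { "delete": { "daysAfterModificationGreaterThan": 90 } } }
          }
        }
      ]
    }'
```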
## Analyzing metrics
When a metric supports dimensions, you can read metric values and filter them by
az monitor metrics list --resource <resource-ID> --metric "Transactions" --interval PT1H --filter "ApiName eq 'GetFile' " --aggregation "Total" ``` --
-## Analyze metrics by using code
+### [.NET](#tab/dotnet)
Azure Monitor provides the [.NET SDK](https://www.nuget.org/packages/Microsoft.Azure.Management.Monitor/) to read metric definition and values. The [sample code](https://azure.microsoft.com/resources/samples/monitor-dotnet-metrics-api/) shows how to use the SDK with different parameters. You need to use `0.18.0-preview` or a later version for storage metrics.
The following example shows how to read metric data on the metric supporting mul
``` ++ ## Analyzing logs
-You can access resource logs either as a blob in a storage account, as event data, or through Log Analytic queries.
+You can access resource logs either as a blob in a storage account, as event data, or through Log Analytics queries. For information about how to find those logs, see [Azure resource logs](/azure/azure-monitor/essentials/resource-logs).
-To get the list of SMB and REST operations that are logged, see [Storage logged operations and status messages](/rest/api/storageservices/storage-analytics-logged-operations-and-status-messages) and [Azure Files monitoring data reference](storage-files-monitoring-reference.md).
+All resource logs in Azure Monitor have the same fields followed by service-specific fields. The common schema is outlined in [Azure Monitor resource log schema](/azure/azure-monitor/essentials/resource-logs-schema). The schema for Azure Files resource logs is found in [Azure Files monitoring data reference](storage-files-monitoring-reference.md).
+
+To get the list of SMB and REST operations that are logged, see [Storage logged operations and status messages](/rest/api/storageservices/storage-analytics-logged-operations-and-status-messages).
Log entries are created only if there are requests made against the service endpoint. For example, if a storage account has activity in its file endpoint but not in its table or queue endpoints, only logs that pertain to the Azure File service are created. Azure Storage logs contain detailed information about successful and failed requests to a storage service. This information can be used to monitor individual requests and to diagnose issues with a storage service. Requests are logged on a best-effort basis.
+The [Activity log](/azure/azure-monitor/essentials/activity-log) is a type of platform log that provides insight into subscription-level events. You can view it independently or route it to Azure Monitor Logs, where you can run much more complex queries by using Log Analytics.
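If you want to skim recent subscription-level events without opening the portal, the Azure CLI can list Activity log entries. This is a minimal sketch; the resource group name is a placeholder and the 7-day offset is an arbitrary example.

```azurecli-interactive
# List Activity log events from the last 7 days for a hypothetical resource group.
az monitor activity-log list \
    --resource-group myresourcegroup \
    --offset 7d \
    --max-events 25 \
    --query "[].{time:eventTimestamp, operation:operationName.value, status:status.value}" \
    --output table
```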
++ ### Log authenticated requests The following types of authenticated requests are logged:
Log entries are created only if there are requests made against the service endp
Requests made by the Azure Files service itself, such as log creation or deletion, aren't logged. For a full list of the SMB and REST requests that are logged, see [Storage logged operations and status messages](/rest/api/storageservices/storage-analytics-logged-operations-and-status-messages) and [Azure Files monitoring data reference](storage-files-monitoring-reference.md).
-### Accessing logs in a storage account
-
-Logs appear as blobs stored to a container in the target storage account. Data is collected and stored inside a single blob as a line-delimited JSON payload. The name of the blob follows this naming convention:
-
-`https://<destination-storage-account>.blob.core.windows.net/insights-logs-<storage-operation>/resourceId=/subscriptions/<subscription-ID>/resourceGroups/<resource-group-name>/providers/Microsoft.Storage/storageAccounts/<source-storage-account>/fileServices/default/y=<year>/m=<month>/d=<day>/h=<hour>/m=<minute>/PT1H.json`
-
-Here's an example:
-
-`https://mylogstorageaccount.blob.core.windows.net/insights-logs-storagewrite/resourceId=/subscriptions/`<br>`208841be-a4v3-4234-9450-08b90c09f4/resourceGroups/myresourcegroup/providers/Microsoft.Storage/storageAccounts/mystorageaccount/fileServices/default/y=2019/m=07/d=30/h=23/m=12/PT1H.json`
-
-### Accessing logs in an event hub
-
-Logs sent to an event hub aren't stored as a file, but you can verify that the event hub received the log information. In the Azure portal, go to your event hub and verify that the **incoming messages** count is greater than zero.
-
-![Audit logs](media/storage-files-monitoring/event-hub-log.png)
-
-You can access and read log data that's sent to your event hub by using security information and event management and monitoring tools. For more information, see [What can I do with the monitoring data being sent to my event hub?](../../azure-monitor/essentials/stream-monitoring-data-event-hubs.md).
-
-### Accessing logs in a Log Analytics workspace
-
-You can access logs sent to a Log Analytics workspace by using Azure Monitor log queries. Data is stored in the **StorageFileLogs** table.
-
-For more information, see [Log Analytics tutorial](../../azure-monitor/logs/log-analytics-tutorial.md).
+### Sample Kusto queries
-#### Sample Kusto queries
+If you send logs to Log Analytics, you can access those logs by using Azure Monitor log queries. For more information, see [Log Analytics tutorial](/azure/azure-monitor/logs/log-analytics-tutorial).
Here are some queries that you can enter in the **Log search** bar to help you monitor your Azure Files. These queries work with the [new language](../../azure-monitor/logs/log-query-overview.md).
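The same Kusto queries can also be run from the command line. The following sketch assumes the Log Analytics query commands are available in your installed Azure CLI, uses a placeholder workspace GUID, and relies on Azure Files resource logs landing in the `StorageFileLogs` table.

```azurecli-interactive
# Run a Kusto query against the StorageFileLogs table in a hypothetical workspace.
# Replace the GUID with the workspace (customer) ID of your Log Analytics workspace.
az monitor log-analytics query \
    --workspace "00000000-0000-0000-0000-000000000000" \
    --analytics-query "StorageFileLogs | where StatusText != 'Success' | take 10" \
    --output table
```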
The following table lists some example scenarios to monitor and the proper metri
2. In the **Monitoring** section, click **Alerts**, and then click **+ New alert rule**. 3. Click **Edit resource**, select the **File resource type** for the storage account and then click **Done**. For example, if the storage account name is `contoso`, select the `contoso/file` resource. 4. Click **Add condition** to add a condition.
-5. You will see a list of signals supported for the storage account, select the **Transactions** metric.
+5. You'll see a list of signals supported for the storage account. Select the **Transactions** metric.
6. On the **Configure signal logic** blade, click the **Dimension name** drop-down and select **Response type**. 7. Click the **Dimension values** drop-down and select the appropriate response types for your file share.
The following table lists some example scenarios to monitor and the proper metri
10. Define the **alert parameters** (threshold value, operator, aggregation granularity and frequency of evaluation) and click **Done**. > [!TIP]
- > If you are using a static threshold, the metric chart can help determine a reasonable threshold value if the file share is currently being throttled. If you are using a dynamic threshold, the metric chart will display the calculated thresholds based on recent data.
+ > If you're using a static threshold, the metric chart can help determine a reasonable threshold value if the file share is currently being throttled. If you're using a dynamic threshold, the metric chart will display the calculated thresholds based on recent data.
11. Click **Add action groups** to add an **action group** (email, SMS, etc.) to the alert either by selecting an existing action group or creating a new action group. 12. Fill in the **Alert details** like **Alert rule name**, **Description**, and **Severity**.
The following table lists some example scenarios to monitor and the proper metri
2. In the **Monitoring** section, click **Alerts** and then click **+ New alert rule**. 3. Click **Edit resource**, select the **File resource type** for the storage account and then click **Done**. For example, if the storage account name is `contoso`, select the `contoso/file` resource. 4. Click **Add condition** to add a condition.
-5. You will see a list of signals supported for the storage account, select the **File Capacity** metric.
+5. You'll see a list of signals supported for the storage account. Select the **File Capacity** metric.
6. For **premium file shares**, click the **Dimension name** drop-down and select **File Share**. For **standard file shares**, skip to **step #8**. > [!NOTE]
The following table lists some example scenarios to monitor and the proper metri
2. In the Monitoring section, click **Alerts** and then click **+ New alert rule**. 3. Click **Edit resource**, select the **File resource type** for the storage account and then click **Done**. For example, if the storage account name is contoso, select the contoso/file resource. 4. Click **Add condition** to add a condition.
-5. You will see a list of signals supported for the storage account, select the **Egress** metric.
+5. You'll see a list of signals supported for the storage account. Select the **Egress** metric.
6. For **premium file shares**, click the **Dimension name** drop-down and select **File Share**. For **standard file shares**, skip to **step #8**. > [!NOTE]
storage Monitor Queue Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/queues/monitor-queue-storage.md
Previously updated : 05/23/2022 Last updated : 10/06/2022
You can continue using classic metrics and logs if you want to. In fact, classic
Platform metrics and the activity log are collected automatically, but can be routed to other locations by using a diagnostic setting.
+Resource logs aren't collected and stored until you create a diagnostic setting and route them to one or more locations.
+ To collect resource logs, you must create a diagnostic setting. When you create the setting, choose **queue** as the type of storage that you want to enable logs for. Then, specify one of the following categories of operations for which you want to collect logs. | Category | Description |
To collect resource logs, you must create a diagnostic setting. When you create
| **StorageWrite** | Write operations on objects. | | **StorageDelete** | Delete operations on objects. |
-## Creating a diagnostic setting
-
-This section shows you how to create a diagnostic setting by using the Azure portal, PowerShell, and the Azure CLI. This section provides steps specific to Azure Storage. For general guidance about how to create a diagnostic setting, see [Create diagnostic setting to collect platform logs and metrics in Azure](../../azure-monitor/essentials/diagnostic-settings.md).
-
-> [!TIP]
-> You can also create a diagnostic setting by using an Azure Resource manager template or by using a policy definition. A policy definition can ensure that a diagnostic setting is created for every account that is created or updated.
->
-> This section doesn't describe templates or policy definitions.
->
-> - To view an Azure Resource Manager template that creates a diagnostic setting, see [Diagnostic setting for Azure Storage](../../azure-monitor/essentials/resource-manager-diagnostic-settings.md#diagnostic-setting-for-azure-storage).
->
-> - To learn how to create a diagnostic setting by using a policy definition, see [Azure Policy built-in definitions for Azure Storage](../common/policy-reference.md).
-
-### [Azure portal](#tab/azure-portal)
-
-1. Sign in to the Azure portal.
-
-2. Navigate to your storage account.
-
-3. In the **Monitoring** section, click **Diagnostic settings**.
-
- > [!div class="mx-imgBorder"]
- > ![portal - Diagnostics logs](media/monitor-queue-storage/diagnostic-logs-settings-pane.png)
-
-4. Choose **queue** as the type of storage that you want to enable logs for.
-
-5. Click **Add diagnostic setting**.
-
- The **Diagnostic settings** page appears.
-
- > [!div class="mx-imgBorder"]
- > ![Resource logs page](media/monitor-queue-storage/diagnostic-logs-page.png)
-
-6. In the **Name** field of the page, enter a name for this resource log setting. Then, select which operations you want logged (read, write, and delete operations), and where you want the logs to be sent.
-
-#### Archive logs to a storage account
-
-If you choose to archive your logs to a storage account, you'll pay for the volume of logs that are sent to the storage account. For specific pricing, see the **Platform Logs** section of the [Azure Monitor pricing](https://azure.microsoft.com/pricing/details/monitor/#platform-logs) page. You can't send logs to the same storage account that you are monitoring with this setting. This would lead to recursive logs in which a log entry describes the writing of another log entry. You must create an account or use another existing account to store log information.
-
-1. Select the **Archive to a storage account** check box, and then select the **Configure** button.
-
-2. In the **Storage account** drop-down list, select the storage account that you want to archive your logs to, click the **OK** button, and then select the **Save** button.
-
- [!INCLUDE [no retention policy](../../../includes/azure-storage-logs-retention-policy.md)]
-
- > [!NOTE]
- > Before you choose a storage account as the export destination, see [Archive Azure resource logs](../../azure-monitor/essentials/resource-logs.md#send-to-azure-storage) to understand prerequisites on the storage account.
-
-#### Stream logs to Azure Event Hubs
-
-If you choose to stream your logs to an event hub, you'll pay for the volume of logs that are sent to the event hub. For specific pricing, see the **Platform Logs** section of the [Azure Monitor pricing](https://azure.microsoft.com/pricing/details/monitor/#platform-logs) page. You'll need access to an existing event hub, or you'll need to create one before you complete this step.
-
-1. Select the **Stream to an event hub** check box, and then select the **Configure** button.
-
-2. In the **Select an event hub** pane, choose the namespace, name, and policy name of the event hub that you want to stream your logs to.
-
-3. Click the **OK** button, and then select the **Save** button.
-
-#### Send logs to Azure Log Analytics
-
-1. Select the **Send to Log Analytics** check box, select a Log Analytics workspace, and then select the **Save** button. You'll need access to an existing log analytics workspace, or you'll need to create one before you complete this step.
--
-#### Send to a partner solution
-
-You can also send platform metrics and logs to certain Azure Monitor partners. You must first install a partner integration into your subscription. Configuration options will vary by partner. Check the [Azure Monitor partner integrations documentation](../../partner-solutions/overview.md) for details.
-
-### [PowerShell](#tab/azure-powershell)
-
-1. Open a Windows PowerShell command window, and sign in to your Azure subscription by using the `Connect-AzAccount` command. Then, follow the on-screen directions.
-
- ```powershell
- Connect-AzAccount
- ```
-
-2. Set your active subscription to subscription of the storage account that you want to enable logging for.
-
- ```powershell
- Set-AzContext -SubscriptionId <subscription-id>
- ```
-
-#### Archive logs to a storage account
-
-If you choose to archive your logs to a storage account, you'll pay for the volume of logs that are sent to the storage account. For specific pricing, see the **Platform Logs** section of the [Azure Monitor pricing](https://azure.microsoft.com/pricing/details/monitor/#platform-logs) page. You can't send logs to the same storage account that you are monitoring with this setting. This would lead to recursive logs in which a log entry describes the writing of another log entry. You must create an account or use another existing account to store log information.
-
-Enable logs by using the [Set-AzDiagnosticSetting](/powershell/module/az.monitor/set-azdiagnosticsetting) PowerShell cmdlet along with the `StorageAccountId` parameter.
-
-```powershell
-Set-AzDiagnosticSetting -ResourceId <storage-service-resource-id> -StorageAccountId <storage-account-resource-id> -Enabled $true -Category <operations-to-log>
-```
-
-Replace the `<storage-service-resource--id>` placeholder in this snippet with the resource ID of the queue. You can find the resource ID in the Azure portal by opening the **Properties** page of your storage account.
-
-You can use `StorageRead`, `StorageWrite`, and `StorageDelete` for the value of the **Category** parameter.
--
-Here's an example:
-
-`Set-AzDiagnosticSetting -ResourceId /subscriptions/208841be-a4v3-4234-9450-08b90c09f4/resourceGroups/myresourcegroup/providers/Microsoft.Storage/storageAccounts/mystorageaccount/queueServices/default -StorageAccountId /subscriptions/208841be-a4v3-4234-9450-08b90c09f4/resourceGroups/myresourcegroup/providers/Microsoft.Storage/storageAccounts/myloggingstorageaccount -Enabled $true -Category StorageWrite,StorageDelete`
-
-For a description of each parameter, see [Archive Azure resource logs via Azure PowerShell](../../azure-monitor/essentials/resource-logs.md#send-to-azure-storage).
+See [Create diagnostic setting to collect platform logs and metrics in Azure](../../azure-monitor/essentials/diagnostic-settings.md) for the detailed process for creating a diagnostic setting by using the Azure portal, the Azure CLI, or PowerShell. You can also find links to information about how to create a diagnostic setting by using an Azure Resource Manager template or an Azure Policy definition.
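As a concrete example, the following Azure CLI sketch creates a diagnostic setting on the queue service endpoint and routes all three log categories to a Log Analytics workspace. The setting name and the bracketed IDs are placeholders; substitute your own values.

```azurecli-interactive
# Hypothetical IDs; substitute your own subscription, resource group, account, and workspace.
az monitor diagnostic-settings create \
    --name queue-logs-to-workspace \
    --resource "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Storage/storageAccounts/<storage-account>/queueServices/default" \
    --workspace "<log-analytics-workspace-resource-id>" \
    --logs '[{"category":"StorageRead","enabled":true},{"category":"StorageWrite","enabled":true},{"category":"StorageDelete","enabled":true}]'
```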
-#### Stream logs to an event hub
+## Destination limitations
-If you choose to stream your logs to an event hub, you'll pay for the volume of logs that are sent to the event hub. For specific pricing, see the **Platform Logs** section of the [Azure Monitor pricing](https://azure.microsoft.com/pricing/details/monitor/#platform-logs) page. You'll need access to an existing event hub, or you'll need to create one before you complete this step.
+For general destination limitations, see [Destination limitations](../../azure-monitor/essentials/diagnostic-settings.md#destination-limitations). The following limitations apply only to monitoring Azure Storage accounts.
-Enable logs by using the [Set-AzDiagnosticSetting](/powershell/module/az.monitor/set-azdiagnosticsetting) PowerShell cmdlet with the `EventHubAuthorizationRuleId` parameter.
+- You can't send logs to the same storage account that you are monitoring with this setting.
-```powershell
-Set-AzDiagnosticSetting -ResourceId <storage-service-resource-id> -EventHubAuthorizationRuleId <event-hub-namespace-and-key-name> -Enabled $true -Category <operations-to-log>
-```
-
-Here's an example:
-
-`Set-AzDiagnosticSetting -ResourceId /subscriptions/208841be-a4v3-4234-9450-08b90c09f4/resourceGroups/myresourcegroup/providers/Microsoft.Storage/storageAccounts/mystorageaccount/queueServices/default -EventHubAuthorizationRuleId /subscriptions/20884142-a14v3-4234-5450-08b10c09f4/resourceGroups/myresourcegroup/providers/Microsoft.EventHub/namespaces/myeventhubnamespace/authorizationrules/RootManageSharedAccessKey -Enabled $true -Category StorageDelete`
-
-For a description of each parameter, see [Stream data to Event Hubs via PowerShell cmdlets](../../azure-monitor/essentials/resource-logs.md#send-to-azure-event-hubs).
-
-#### Send logs to Log Analytics
-
-Enable logs by using the [Set-AzDiagnosticSetting](/powershell/module/az.monitor/set-azdiagnosticsetting) PowerShell cmdlet with the `WorkspaceId` parameter. You'll need access to an existing log analytics workspace, or you'll need to create one before you complete this step.
-
-```powershell
-Set-AzDiagnosticSetting -ResourceId <storage-service-resource-id> -WorkspaceId <log-analytics-workspace-resource-id> -Enabled $true -Category <operations-to-log>
-```
--
-Here's an example:
+ This would lead to recursive logs in which a log entry describes the writing of another log entry. You must create an account or use another existing account to store log information.
-`Set-AzDiagnosticSetting -ResourceId /subscriptions/208841be-a4v3-4234-9450-08b90c09f4/resourceGroups/myresourcegroup/providers/Microsoft.Storage/storageAccounts/mystorageaccount/queueServices/default -WorkspaceId /subscriptions/208841be-a4v3-4234-9450-08b90c09f4/resourceGroups/myresourcegroup/providers/Microsoft.OperationalInsights/workspaces/my-analytic-workspace -Enabled $true -Category StorageDelete`
+- You can't set a retention policy.
-For more information, see [Stream Azure resource logs to Log Analytics workspace in Azure Monitor](../../azure-monitor/essentials/resource-logs.md#send-to-log-analytics-workspace).
-
-#### Send to a partner solution
-
-You can also send platform metrics and logs to certain Azure Monitor partners. You must first install a partner integration into your subscription. Configuration options will vary by partner. Check the [Azure Monitor partner integrations documentation](../../partner-solutions/overview.md) for details.
-
-### [Azure CLI](#tab/azure-cli)
+ If you archive logs to a storage account, you can manage the retention policy of a log container by defining a lifecycle management policy. To learn how, see [Optimize costs by automating Azure Blob Storage access tiers](../blobs/lifecycle-management-overview.md).
-1. First, open the [Azure Cloud Shell](../../cloud-shell/overview.md), or if you've [installed the Azure CLI](/cli/azure/install-azure-cli) locally, open a command console application such as PowerShell.
-
-2. If your identity is associated with more than one subscription, then set your active subscription to subscription of the storage account that you want to enable logs for.
-
- ```azurecli-interactive
- az account set --subscription <subscription-id>
- ```
-
- Replace the `<subscription-id>` placeholder value with the ID of your subscription.
-
-#### Archive logs to a storage account
-
-If you choose to archive your logs to a storage account, you'll pay for the volume of logs that are sent to the storage account. For specific pricing, see the **Platform Logs** section of the [Azure Monitor pricing](https://azure.microsoft.com/pricing/details/monitor/#platform-logs) page. You can't send logs to the same storage account that you are monitoring with this setting. This would lead to recursive logs in which a log entry describes the writing of another log entry. You must create an account or use another existing account to store log information.
-
-Enable logs by using the [`az monitor diagnostic-settings create`](/cli/azure/monitor/diagnostic-settings#az-monitor-diagnostic-settings-create) command.
-
-```azurecli-interactive
-az monitor diagnostic-settings create --name <setting-name> --storage-account <storage-account-name> --resource <storage-service-resource-id> --resource-group <resource-group> --logs '[{"category": <operations>, "enabled": true}]'
-```
-
-Replace the `<storage-service-resource--id>` placeholder in this snippet with the resource ID of the queue. You can find the resource ID in the Azure portal by opening the **Properties** page of your storage account.
-
-You can use `StorageRead`, `StorageWrite`, and `StorageDelete` for the value of the `category` parameter.
--
-Here's an example:
-
-`az monitor diagnostic-settings create --name setting1 --storage-account mystorageaccount --resource /subscriptions/938841be-a40c-4bf4-9210-08bcf06c09f9/resourceGroups/myresourcegroup/providers/Microsoft.Storage/storageAccounts/myloggingstorageaccount/queueServices/default --resource-group myresourcegroup --logs '[{"category": StorageWrite, "enabled": true}]'`
-
-For a description of each parameter, see [Archive resource logs via the Azure CLI](../../azure-monitor/essentials/resource-logs.md#send-to-azure-storage).
-
-#### Stream logs to an event hub
-
-If you choose to stream your logs to an event hub, you'll pay for the volume of logs that are sent to the event hub. For specific pricing, see the **Platform Logs** section of the [Azure Monitor pricing](https://azure.microsoft.com/pricing/details/monitor/#platform-logs) page. You'll need access to an existing event hub, or you'll need to create one before you complete this step.
-
-Enable logs by using the [`az monitor diagnostic-settings create`](/cli/azure/monitor/diagnostic-settings#az-monitor-diagnostic-settings-create) command.
-
-```azurecli-interactive
-az monitor diagnostic-settings create --name <setting-name> --event-hub <event-hub-name> --event-hub-rule <event-hub-namespace-and-key-name> --resource <storage-account-resource-id> --logs '[{"category": <operations>, "enabled": true}]'
-```
-
-Here's an example:
-
-`az monitor diagnostic-settings create --name setting1 --event-hub myeventhub --event-hub-rule /subscriptions/938841be-a40c-4bf4-9210-08bcf06c09f9/resourceGroups/myresourcegroup/providers/Microsoft.EventHub/namespaces/myeventhubnamespace/authorizationrules/RootManageSharedAccessKey --resource /subscriptions/938841be-a40c-4bf4-9210-08bcf06c09f9/resourceGroups/myresourcegroup/providers/Microsoft.Storage/storageAccounts/myloggingstorageaccount/queueServices/default --logs '[{"category": StorageDelete, "enabled": true }]'`
-
-For a description of each parameter, see [Stream data to Event Hubs via Azure CLI](../../azure-monitor/essentials/resource-logs.md#send-to-azure-event-hubs).
-
-#### Send logs to Log Analytics
-
-Enable logs by using the [`az monitor diagnostic-settings create`](/cli/azure/monitor/diagnostic-settings#az-monitor-diagnostic-settings-create) command. You'll need access to an existing log analytics workspace, or you'll need to create one before you complete this step.
-
-```azurecli-interactive
-az monitor diagnostic-settings create --name <setting-name> --workspace <log-analytics-workspace-resource-id> --resource <storage-account-resource-id> --logs '[{"category": <category name>, "enabled": true}]'
-```
--
-Here's an example:
-
-`az monitor diagnostic-settings create --name setting1 --workspace /subscriptions/208841be-a4v3-4234-9450-08b90c09f4/resourceGroups/myresourcegroup/providers/Microsoft.OperationalInsights/workspaces/my-analytic-workspace --resource /subscriptions/938841be-a40c-4bf4-9210-08bcf06c09f9/resourceGroups/myresourcegroup/providers/Microsoft.Storage/storageAccounts/myloggingstorageaccount/queueServices/default --logs '[{"category": StorageDelete, "enabled": true ]'`
-
- For more information, see [Stream Azure resource logs to Log Analytics workspace in Azure Monitor](../../azure-monitor/essentials/resource-logs.md#send-to-log-analytics-workspace).
-
-#### Send to a partner solution
-
-You can also send platform metrics and logs to certain Azure Monitor partners. You must first install a partner integration into your subscription. Configuration options will vary by partner. Check the [Azure Monitor partner integrations documentation](../../partner-solutions/overview.md) for details.
--
+ If you send logs to Log Analytics, you can manage the data retention period of Log Analytics at the workspace level or even specify different retention settings by data type. To learn how, see [Change the data retention period](/azure/azure-monitor/logs/data-retention-archive).
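For example, to change the workspace-level retention from the command line, you might use something like the following sketch; the resource group, workspace name, and 90-day value are placeholders.

```azurecli-interactive
# Set the default data retention of a hypothetical Log Analytics workspace to 90 days.
az monitor log-analytics workspace update \
    --resource-group myresourcegroup \
    --workspace-name my-analytics-workspace \
    --retention-time 90
```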
## Analyzing metrics
$dimFilter = [String](New-AzMetricFilter -Dimension ApiName -Operator eq -Value
Get-AzMetric -ResourceId $resourceId -MetricName Transactions -TimeGrain 01:00:00 -MetricFilter $dimFilter -AggregationType "Total" ``` - ### [Azure CLI](#tab/azure-cli) #### List the account-level metric definition
When a metric supports dimensions, you can read metric values and filter them by
az monitor metrics list --resource <resource-ID> --metric "Transactions" --interval PT1H --filter "ApiName eq 'GetMessages' " --aggregation "Total" ``` --
-## Analyzing metrics by using code
+### [.NET](#tab/dotnet)
Azure Monitor provides the [.NET SDK](https://www.nuget.org/packages/microsoft.azure.management.monitor/) to read metric definition and values. The [sample code](https://azure.microsoft.com/resources/samples/monitor-dotnet-metrics-api/) shows how to use the SDK with different parameters. You need to use `0.18.0-preview` or a later version for storage metrics.
The following example shows how to read metric data on the metric supporting mul
``` ++ ## Analyzing logs
-You can access resource logs either as a queue in a storage account, as event data, or through Log Analytics queries.
+
+You can access resource logs either as a blob in a storage account, as event data, or through Log Analytics queries. For information about how to find those logs, see [Azure resource logs](/azure/azure-monitor/essentials/resource-logs).
+
+All resource logs in Azure Monitor have the same fields followed by service-specific fields. The common schema is outlined in [Azure Monitor resource log schema](/azure/azure-monitor/essentials/resource-logs-schema). The schema for Azure Queue Storage resource logs is found in [Azure Queue Storage monitoring data reference](monitor-queue-storage-reference.md).
-For a detailed reference of the fields that appear in these logs, see [Azure Queue Storage monitoring data reference](monitor-queue-storage-reference.md).
+To get the list of REST operations that are logged, see [Storage logged operations and status messages](/rest/api/storageservices/storage-analytics-logged-operations-and-status-messages).
+
Log entries are created only if there are requests made against the service endpoint. For example, if a storage account has activity in its queue endpoint but not in its table or blob endpoints, only logs that pertain to Queue Storage are created. Azure Storage logs contain detailed information about successful and failed requests to a storage service. This information can be used to monitor individual requests and to diagnose issues with a storage service. Requests are logged on a best-effort basis.
+The [Activity log](/azure/azure-monitor/essentials/activity-log) is a type of platform log that provides insight into subscription-level events. You can view it independently or route it to Azure Monitor Logs, where you can run much more complex queries by using Log Analytics.
++ ### Log authenticated requests The following types of authenticated requests are logged:
The following types of anonymous requests are logged:
- Successful requests - Server errors-- Time-out errors for both client and server
+- Time-out errors for both client and server
- Failed `GET` requests with the error code 304 (`Not Modified`) All other failed anonymous requests aren't logged. For a full list of the logged data, see [Storage logged operations and status messages](/rest/api/storageservices/storage-analytics-logged-operations-and-status-messages) and [Storage log format](monitor-queue-storage-reference.md).
-### Accessing logs in a storage account
-
-Logs appear as blobs stored to a container in the target storage account. Data is collected and stored inside a single blob as a line-delimited JSON payload. The name of the blob follows this naming convention:
+### Sample Kusto queries
-`https://<destination-storage-account>.blob.core.windows.net/insights-logs-<storage-operation>/resourceId=/subscriptions/<subscription-ID>/resourceGroups/<resource-group-name>/providers/Microsoft.Storage/storageAccounts/<source-storage-account>/queueServices/default/y=<year>/m=<month>/d=<day>/h=<hour>/m=<minute>/PT1H.json`
-
-Here's an example:
-
-`https://mylogstorageaccount.blob.core.windows.net/insights-logs-storagewrite/resourceId=/subscriptions/`<br>`208841be-a4v3-4234-9450-08b90c09f4/resourceGroups/myresourcegroup/providers/Microsoft.Storage/storageAccounts/mystorageaccount/queueServices/default/y=2019/m=07/d=30/h=23/m=12/PT1H.json`
-
-### Accessing logs in an event hub
-
-Logs sent to an event hub aren't stored as a file, but you can verify that the event hub received the log information. In the Azure portal, go to your event hub and verify that the `incoming requests` count is greater than zero.
-
-![Audit logs](media/monitor-queue-storage/event-hub-log.png)
-
-You can access and read log data that's sent to your event hub by using security information and event management and monitoring tools. For more information, see [What can I do with the monitoring data being sent to my event hub?](../../azure-monitor/essentials/stream-monitoring-data-event-hubs.md#partner-tools-with-azure-monitor-integration).
-
-### Accessing logs in a Log Analytics workspace
-
-You can access logs sent to a Log Analytics workspace by using Azure Monitor log queries.
-
-For more information, see [Get started with Log Analytics in Azure Monitor](../../azure-monitor/logs/log-analytics-tutorial.md).
-
-Data is stored in the `StorageQueueLogs` table.
-
-#### Sample Kusto queries
+If you send logs to Log Analytics, you can access those logs by using Azure Monitor log queries. For more information, see [Log Analytics tutorial](/azure/azure-monitor/logs/log-analytics-tutorial).
Here are some queries that you can enter in the **Log search** bar to help you monitor your queues. These queries work with the [new language](../../azure-monitor/logs/log-query-overview.md).
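These queries can also be run outside the portal. Here is a minimal sketch, assuming a placeholder workspace GUID and that queue resource logs are flowing into the `StorageQueueLogs` table.

```azurecli-interactive
# Summarize queue operations from the last day by operation name.
az monitor log-analytics query \
    --workspace "00000000-0000-0000-0000-000000000000" \
    --analytics-query "StorageQueueLogs | where TimeGenerated > ago(1d) | summarize count() by OperationName" \
    --output table
```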
Use these queries to help you monitor your Azure Storage accounts:
| render piechart ```
+## Alerts
+
+Azure Monitor alerts proactively notify you when important conditions are found in your monitoring data. They allow you to identify and address issues in your system before your customers notice them. You can set alerts on [metrics](../../azure-monitor/alerts/alerts-metric-overview.md), [logs](../../azure-monitor/alerts/alerts-unified-log.md), and the [activity log](../../azure-monitor/alerts/activity-log-alerts.md).
+
+The following table lists some example scenarios to monitor and the proper metric to use for the alert:
+
+| Scenario | Metric to use for alert |
+|-|-|
+| Queue Storage service is throttled. | Metric: Transactions<br>Dimension name: Response type |
+| Queue Storage requests are successful 99% of the time. | Metric: Availability<br>Dimension names: Geo type, API name, Authentication |
+| Queue Storage egress has exceeded 500 GiB in one day. | Metric: Egress<br>Dimension names: Geo type, API name, Authentication |
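Taking the throttling scenario in the preceding table as an example, the following Azure CLI sketch creates a metric alert on the **Transactions** metric filtered by the **Response type** dimension. The scope, action group, and `ClientThrottlingError` value are placeholders; check the response types your queue service actually emits before relying on them.

```azurecli-interactive
# Alert when any throttled queue transactions are observed in a 5-minute window.
# The resource IDs and the ClientThrottlingError value are illustrative.
az monitor metrics alert create \
    --name queue-throttling-alert \
    --resource-group myresourcegroup \
    --scopes "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Storage/storageAccounts/<storage-account>/queueServices/default" \
    --condition "total Transactions > 0 where ResponseType includes ClientThrottlingError" \
    --window-size 5m \
    --evaluation-frequency 5m \
    --action "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/microsoft.insights/actionGroups/<action-group>"
```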
+ ## FAQ **Does Azure Storage support metrics for managed disks or unmanaged disks?**
storage Queues Storage Monitoring Scenarios https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/queues/queues-storage-monitoring-scenarios.md
You can find the friendly name of that security principal by taking the value of
### Auditing data plane operations
-Data plane operations are captured in [Azure resource logs for Storage](monitor-queue-storage.md#analyzing-logs). You can [configure Diagnostic setting](monitor-queue-storage.md#send-logs-to-azure-log-analytics) to export logs to Log Analytics workspace for a native query experience.
+Data plane operations are captured in [Azure resource logs for Storage](monitor-queue-storage.md#analyzing-logs). You can [configure a diagnostic setting](../../azure-monitor/essentials/diagnostic-settings.md) to export logs to a Log Analytics workspace for a native query experience.
Here's a Log Analytics query that retrieves the "when", "who", "what", and "how" information in a list of log entries.
You can export logs to Log Analytics for rich native query capabilities. When yo
With Azure Synapse, you can create server-less SQL pool to query log data when you need. This could save costs significantly.
-1. Export logs to storage account. See [Creating a diagnostic setting](monitor-queue-storage.md#creating-a-diagnostic-setting).
+1. Export logs to a storage account. See [Create a diagnostic setting](../../azure-monitor/essentials/diagnostic-settings.md).
2. Create and configure a Synapse workspace. See [Quickstart: Create a Synapse workspace](../../synapse-analytics/quickstart-create-workspace.md).
storage Monitor Table Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/tables/monitor-table-storage.md
Previously updated : 05/23/2022 Last updated : 10/06/2022 ms.devlang: csharp
You can continue using classic metrics and logs if you want to. In fact, classic
Platform metrics and the Activity log are collected automatically, but can be routed to other locations by using a diagnostic setting.
+Resource logs aren't collected and stored until you create a diagnostic setting and route them to one or more locations.
+ To collect resource logs, you must create a diagnostic setting. When you create the setting, choose **table** as the type of storage that you want to enable logs for. Then, specify one of the following categories of operations for which you want to collect logs. | Category | Description |
To collect resource logs, you must create a diagnostic setting. When you create
| StorageWrite | Write operations on objects. | | StorageDelete | Delete operations on objects. |
-## Creating a diagnostic setting
-
-This section shows you how to create a diagnostic setting by using the Azure portal, PowerShell, and the Azure CLI. This section provides steps specific to Azure Storage. For general guidance about how to create a diagnostic setting, see [Create diagnostic setting to collect platform logs and metrics in Azure](../../azure-monitor/essentials/diagnostic-settings.md).
-
-> [!TIP]
-> You can also create a diagnostic setting by using an Azure Resource manager template or by using a policy definition. A policy definition can ensure that a diagnostic setting is created for every account that is created or updated.
->
-> This section doesn't describe templates or policy definitions.
->
-> - To view an Azure Resource Manager template that creates a diagnostic setting, see [Diagnostic setting for Azure Storage](../../azure-monitor/essentials/resource-manager-diagnostic-settings.md#diagnostic-setting-for-azure-storage).
->
-> - To learn how to create a diagnostic setting by using a policy definition, see [Azure Policy built-in definitions for Azure Storage](../common/policy-reference.md).
-
-### [Azure portal](#tab/azure-portal)
-
-1. Sign in to the Azure portal.
-
-2. Navigate to your storage account.
-
-3. In the **Monitoring** section, click **Diagnostic settings**.
-
- > [!div class="mx-imgBorder"]
- > ![portal - Diagnostics logs](media/monitor-table-storage/diagnostic-logs-settings-pane.png)
-
-4. Choose **table** as the type of storage that you want to enable logs for.
-
-5. Click **Add diagnostic setting**.
-
- The **Diagnostic settings** page appears.
-
- > [!div class="mx-imgBorder"]
- > ![Resource logs page](media/monitor-table-storage/diagnostic-logs-page.png)
-
-6. In the **Name** field of the page, enter a name for this Resource log setting. Then, select which operations you want logged (read, write, and delete operations), and where you want the logs to be sent.
-
-#### Archive logs to a storage account
-
-If you choose to archive your logs to a storage account, you'll pay for the volume of logs that are sent to the storage account. For specific pricing, see the **Platform Logs** section of the [Azure Monitor pricing](https://azure.microsoft.com/pricing/details/monitor/#platform-logs) page. You can't send logs to the same storage account that you are monitoring with this setting. This would lead to recursive logs in which a log entry describes the writing of another log entry. You must create an account or use another existing account to store log information.
-
-1. Select the **Archive to a storage account** checkbox, and then click the **Configure** button.
-
-2. In the **Storage account** drop-down list, select the storage account that you want to archive your logs to, click the **OK** button, and then click the **Save** button.
-
- [!INCLUDE [no retention policy](../../../includes/azure-storage-logs-retention-policy.md)]
-
- > [!NOTE]
- > Before you choose a storage account as the export destination, see [Archive Azure resource logs](../../azure-monitor/essentials/resource-logs.md#send-to-azure-storage) to understand prerequisites on the storage account.
-
-#### Stream logs to Azure Event Hubs
-
-If you choose to stream your logs to an event hub, you'll pay for the volume of logs that are sent to the event hub. For specific pricing, see the **Platform Logs** section of the [Azure Monitor pricing](https://azure.microsoft.com/pricing/details/monitor/#platform-logs) page. You'll need access to an existing event hub, or you'll need to create one before you complete this step.
-
-1. Select the **Stream to an event hub** checkbox, and then click the **Configure** button.
-
-2. In the **Select an event hub** pane, choose the namespace, name, and policy name of the event hub that you want to stream your logs to.
-
-3. Click the **OK** button, and then click the **Save** button.
-
-#### Send logs to Azure Log Analytics
-
-1. Select the **Send to Log Analytics** checkbox, select a log analytics workspace, and then click the **Save** button. You'll need access to an existing log analytics workspace, or you'll need to create one before you complete this step.
--
-#### Send to a partner solution
-
-You can also send platform metrics and logs to certain Azure Monitor partners. You must first install a partner integration into your subscription. Configuration options will vary by partner. Check the [Azure Monitor partner integrations documentation](../../partner-solutions/overview.md) for details.
-
-### [PowerShell](#tab/azure-powershell)
-
-1. Open a Windows PowerShell command window, and sign in to your Azure subscription by using the `Connect-AzAccount` command. Then, follow the on-screen directions.
-
- ```powershell
- Connect-AzAccount
- ```
-
-2. Set your active subscription to subscription of the storage account that you want to enable logging for.
-
- ```powershell
- Set-AzContext -SubscriptionId <subscription-id>
- ```
-
-#### Archive logs to a storage account
-
-If you choose to archive your logs to a storage account, you'll pay for the volume of logs that are sent to the storage account. For specific pricing, see the **Platform Logs** section of the [Azure Monitor pricing](https://azure.microsoft.com/pricing/details/monitor/#platform-logs) page. You can't send logs to the same storage account that you are monitoring with this setting. This would lead to recursive logs in which a log entry describes the writing of another log entry. You must create an account or use another existing account to store log information.
-
-Enable logs by using the [Set-AzDiagnosticSetting](/powershell/module/az.monitor/set-azdiagnosticsetting) PowerShell cmdlet along with the `StorageAccountId` parameter.
+See [Create diagnostic setting to collect platform logs and metrics in Azure](../../azure-monitor/essentials/diagnostic-settings.md) for the detailed process for creating a diagnostic setting by using the Azure portal, the Azure CLI, or PowerShell. You can also find links to information about how to create a diagnostic setting by using an Azure Resource Manager template or an Azure Policy definition.
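For instance, the following Azure CLI sketch creates a diagnostic setting on the table service endpoint and archives write and delete logs to a second, hypothetical storage account. Remember that the destination can't be the same account you're monitoring.

```azurecli-interactive
# Hypothetical names; the destination account must differ from the monitored account.
az monitor diagnostic-settings create \
    --name table-logs-to-archive \
    --resource "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Storage/storageAccounts/<storage-account>/tableServices/default" \
    --resource-group myresourcegroup \
    --storage-account myloggingstorageaccount \
    --logs '[{"category":"StorageWrite","enabled":true},{"category":"StorageDelete","enabled":true}]'
```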
-```powershell
-Set-AzDiagnosticSetting -ResourceId <storage-service-resource-id> -StorageAccountId <storage-account-resource-id> -Enabled $true -Category <operations-to-log>
-```
-
-Replace the `<storage-service-resource--id>` placeholder in this snippet with the resource ID of the table service. You can find the resource ID in the Azure portal by opening the **Properties** page of your storage account.
-
-You can use `StorageRead`, `StorageWrite`, and `StorageDelete` for the value of the **Category** parameter.
+## Destination limitations
+For general destination limitations, see [Destination limitations](../../azure-monitor/essentials/diagnostic-settings.md#destination-limitations). The following limitations apply only to monitoring Azure Storage accounts.
-Here's an example:
+- You can't send logs to the same storage account that you are monitoring with this setting.
-`Set-AzDiagnosticSetting -ResourceId /subscriptions/208841be-a4v3-4234-9450-08b90c09f4/resourceGroups/myresourcegroup/providers/Microsoft.Storage/storageAccounts/mystorageaccount/tableServices/default -StorageAccountId /subscriptions/208841be-a4v3-4234-9450-08b90c09f4/resourceGroups/myresourcegroup/providers/Microsoft.Storage/storageAccounts/myloggingstorageaccount -Enabled $true -Category StorageWrite,StorageDelete`
+ This would lead to recursive logs in which a log entry describes the writing of another log entry. You must create an account or use another existing account to store log information.
-For more information about archiving resource logs to Azure Storage, see [Azure resource logs](../../azure-monitor/essentials/resource-logs.md#send-to-azure-storage).
+- You can't set a retention policy.
-#### Stream logs to an event hub
+ If you archive logs to a storage account, you can manage the retention policy of a log container by defining a lifecycle management policy. To learn how, see [Optimize costs by automating Azure Blob Storage access tiers](../blobs/lifecycle-management-overview.md).
-If you choose to stream your logs to an event hub, you'll pay for the volume of logs that are sent to the event hub. For specific pricing, see the **Platform Logs** section of the [Azure Monitor pricing](https://azure.microsoft.com/pricing/details/monitor/#platform-logs) page. You'll need access to an existing event hub, or you'll need to create one before you complete this step.
-
-Enable logs by using the [Set-AzDiagnosticSetting](/powershell/module/az.monitor/set-azdiagnosticsetting) PowerShell cmdlet with the `EventHubAuthorizationRuleId` parameter.
-
-```powershell
-Set-AzDiagnosticSetting -ResourceId <storage-service-resource-id> -EventHubAuthorizationRuleId <event-hub-namespace-and-key-name> -Enabled $true -Category <operations-to-log>
-```
-
-Here's an example:
-
-`Set-AzDiagnosticSetting -ResourceId /subscriptions/208841be-a4v3-4234-9450-08b90c09f4/resourceGroups/myresourcegroup/providers/Microsoft.Storage/storageAccounts/mystorageaccount/tableServices/default -EventHubAuthorizationRuleId /subscriptions/20884142-a14v3-4234-5450-08b10c09f4/resourceGroups/myresourcegroup/providers/Microsoft.EventHub/namespaces/myeventhubnamespace/authorizationrules/RootManageSharedAccessKey -Enabled $true -Category StorageDelete`
-
-For more information about sending resource logs to event hubs, see [Azure Resource Logs](../../azure-monitor/essentials/resource-logs.md#send-to-azure-event-hubs).
-
-#### Send logs to Log Analytics
-
-Enable logs by using the [Set-AzDiagnosticSetting](/powershell/module/az.monitor/set-azdiagnosticsetting) PowerShell cmdlet with the `WorkspaceId` parameter. You'll need access to an existing log analytics workspace, or you'll need to create one before you complete this step.
-
-```powershell
-Set-AzDiagnosticSetting -ResourceId <storage-service-resource-id> -WorkspaceId <log-analytics-workspace-resource-id> -Enabled $true -Category <operations-to-log>
-```
--
-Here's an example:
-
-`Set-AzDiagnosticSetting -ResourceId /subscriptions/208841be-a4v3-4234-9450-08b90c09f4/resourceGroups/myresourcegroup/providers/Microsoft.Storage/storageAccounts/mystorageaccount/tableServices/default -WorkspaceId /subscriptions/208841be-a4v3-4234-9450-08b90c09f4/resourceGroups/myresourcegroup/providers/Microsoft.OperationalInsights/workspaces/my-analytic-workspace -Enabled $true -Category StorageDelete`
-
-For more information, see [Stream Azure Resource Logs to Log Analytics workspace in Azure Monitor](../../azure-monitor/essentials/resource-logs.md#send-to-log-analytics-workspace).
-
-#### Send to a partner solution
-
-You can also send platform metrics and logs to certain Azure Monitor partners. You must first install a partner integration into your subscription. Configuration options will vary by partner. Check the [Azure Monitor partner integrations documentation](../../partner-solutions/overview.md) for details.
-
-### [Azure CLI](#tab/azure-cli)
-
-1. First, open the [Azure Cloud Shell](../../cloud-shell/overview.md), or if you've [installed](/cli/azure/install-azure-cli) the Azure CLI locally, open a command console application such as Windows PowerShell.
-
-2. If your identity is associated with more than one subscription, then set your active subscription to subscription of the storage account that you want to enable logs for.
-
- ```azurecli-interactive
- az account set --subscription <subscription-id>
- ```
-
- Replace the `<subscription-id>` placeholder value with the ID of your subscription.
-
-#### Archive logs to a storage account
-
-If you choose to archive your logs to a storage account, you'll pay for the volume of logs that are sent to the storage account. For specific pricing, see the **Platform Logs** section of the [Azure Monitor pricing](https://azure.microsoft.com/pricing/details/monitor/#platform-logs) page. You can't send logs to the same storage account that you are monitoring with this setting. This would lead to recursive logs in which a log entry describes the writing of another log entry. You must create an account or use another existing account to store log information.
-
-Enable logs by using the [az monitor diagnostic-settings create](/cli/azure/monitor/diagnostic-settings#az-monitor-diagnostic-settings-create) command.
-
-```azurecli-interactive
-az monitor diagnostic-settings create --name <setting-name> --storage-account <storage-account-name> --resource <storage-service-resource-id> --resource-group <resource-group> --logs '[{"category": <operations>, "enabled": true}]'
-```
-
-Replace the `<storage-service-resource--id>` placeholder in this snippet with the resource ID Table storage service. You can find the resource ID in the Azure portal by opening the **Properties** page of your storage account.
-
-You can use `StorageRead`, `StorageWrite`, and `StorageDelete` for the value of the **category** parameter.
--
-Here's an example:
-
-`az monitor diagnostic-settings create --name setting1 --storage-account mystorageaccount --resource /subscriptions/938841be-a40c-4bf4-9210-08bcf06c09f9/resourceGroups/myresourcegroup/providers/Microsoft.Storage/storageAccounts/myloggingstorageaccount/tableServices/default --resource-group myresourcegroup --logs '[{"category": StorageWrite, "enabled": true}]'`
-
-#### Stream logs to an event hub
-
-If you choose to stream your logs to an event hub, you'll pay for the volume of logs that are sent to the event hub. For specific pricing, see the **Platform Logs** section of the [Azure Monitor pricing](https://azure.microsoft.com/pricing/details/monitor/#platform-logs) page. You'll need access to an existing event hub, or you'll need to create one before you complete this step.
-
-Enable logs by using the [az monitor diagnostic-settings create](/cli/azure/monitor/diagnostic-settings#az-monitor-diagnostic-settings-create) command.
-
-```azurecli-interactive
-az monitor diagnostic-settings create --name <setting-name> --event-hub <event-hub-name> --event-hub-rule <event-hub-namespace-and-key-name> --resource <storage-account-resource-id> --logs '[{"category": <operations>, "enabled": true}]'
-```
-
-Here's an example:
-
-`az monitor diagnostic-settings create --name setting1 --event-hub myeventhub --event-hub-rule /subscriptions/938841be-a40c-4bf4-9210-08bcf06c09f9/resourceGroups/myresourcegroup/providers/Microsoft.EventHub/namespaces/myeventhubnamespace/authorizationrules/RootManageSharedAccessKey --resource /subscriptions/938841be-a40c-4bf4-9210-08bcf06c09f9/resourceGroups/myresourcegroup/providers/Microsoft.Storage/storageAccounts/myloggingstorageaccount/tableServices/default --logs '[{"category": StorageDelete, "enabled": true }]'`
-
-#### Send logs to Log Analytics
-
-Enable logs by using the [az monitor diagnostic-settings create](/cli/azure/monitor/diagnostic-settings#az-monitor-diagnostic-settings-create) command. You'll need access to an existing log analytics workspace, or you'll need to create one before you complete this step.
-
-```azurecli-interactive
-az monitor diagnostic-settings create --name <setting-name> --workspace <log-analytics-workspace-resource-id> --resource <storage-account-resource-id> --logs '[{"category": <category name>, "enabled": true}]'
-```
--
-Here's an example:
-
-`az monitor diagnostic-settings create --name setting1 --workspace /subscriptions/208841be-a4v3-4234-9450-08b90c09f4/resourceGroups/myresourcegroup/providers/Microsoft.OperationalInsights/workspaces/my-analytic-workspace --resource /subscriptions/938841be-a40c-4bf4-9210-08bcf06c09f9/resourceGroups/myresourcegroup/providers/Microsoft.Storage/storageAccounts/myloggingstorageaccount/tableServices/default --logs '[{"category": StorageDelete, "enabled": true ]'`
-
- For more information, see [Stream Azure Resource Logs to Log Analytics workspace in Azure Monitor](../../azure-monitor/essentials/resource-logs.md#send-to-log-analytics-workspace).
-
-#### Send to a partner solution
-
-You can also send platform metrics and logs to certain Azure Monitor partners. You must first install a partner integration into your subscription. Configuration options will vary by partner. Check the [Azure Monitor partner integrations documentation](../../partner-solutions/overview.md) for details.
--
+ If you send logs to Log Analytics, you can manage the data retention period of Log Analytics at the workspace level or even specify different retention settings by data type. To learn how, see [Change the data retention period](/azure/azure-monitor/logs/data-retention-archive).
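As a rough illustration of the lifecycle management option mentioned in the limitations above, the following PowerShell sketch deletes archived log blobs after 90 days. The rule name, prefix, and account names are assumptions; the `insights-logs-` prefix matches the containers that diagnostic settings create.

```powershell
# Sketch: delete archived resource log blobs older than 90 days with a lifecycle management rule.
# Resource group, account, and rule names are placeholders - replace them with your own values.
$action = Add-AzStorageAccountManagementPolicyAction -BaseBlobAction Delete -DaysAfterModificationGreaterThan 90
$filter = New-AzStorageAccountManagementPolicyFilter -PrefixMatch "insights-logs-" -BlobType blockBlob
$rule   = New-AzStorageAccountManagementPolicyRule -Name "DeleteOldResourceLogs" -Action $action -Filter $filter

Set-AzStorageAccountManagementPolicy -ResourceGroupName "myresourcegroup" `
    -StorageAccountName "myloggingstorageaccount" `
    -Rule $rule
```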
## Analyzing metrics
When a metric supports dimensions, you can read metric values and filter them by
az monitor metrics list --resource <resource-ID> --metric "Transactions" --interval PT1H --filter "ApiName eq 'QueryEntities' " --aggregation "Total"
```
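The same read is possible from PowerShell. Here's a minimal sketch (the resource ID is a placeholder) that retrieves the same *Transactions* metric filtered on the *ApiName* dimension with `Get-AzMetric`:

```powershell
# Sketch: read the Transactions metric for the table service, filtered on the ApiName dimension.
# The resource ID is a placeholder - replace it with your table service resource ID.
$resourceId = "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Storage/storageAccounts/<storage-account>/tableServices/default"

Get-AzMetric -ResourceId $resourceId `
    -MetricName "Transactions" `
    -TimeGrain 01:00:00 `
    -AggregationType Total `
    -MetricFilter "ApiName eq 'QueryEntities'"
```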
-## Analyzing metrics by using code
+### [.NET](#tab/dotnet)
Azure Monitor provides the [.NET SDK](https://www.nuget.org/packages/Microsoft.Azure.Management.Monitor/) to read metric definition and values. The [sample code](https://azure.microsoft.com/resources/samples/monitor-dotnet-metrics-api/) shows how to use the SDK with different parameters. You need to use `0.18.0-preview` or a later version for storage metrics.
The following example shows how to read metric data on the metric supporting mul
```

## Analyzing logs
-You can access resource logs either as a blob in a storage account, as event data, or through Log Analytic queries.
+You can access resource logs either as a blob in a storage account, as event data, or through Log Analytics queries. For information about how to find those logs, see [Azure resource logs](/azure/azure-monitor/essentials/resource-logs).
+
+All resource logs in Azure Monitor have the same fields followed by service-specific fields. The common schema is outlined in [Azure Monitor resource log schema](/azure/azure-monitor/essentials/resource-logs-schema). The schema for Azure Table Storage resource logs is found in [Azure Table storage monitoring data reference](monitor-table-storage-reference.md).
+
+To get the list of SMB and REST operations that are logged, see [Storage logged operations and status messages](/rest/api/storageservices/storage-analytics-logged-operations-and-status-messages).
-For a detailed reference of the fields that appear in these logs, see [Azure Table storage monitoring data reference](monitor-table-storage-reference.md).
+Log entries are created only if there are requests made against the service endpoint. For example, if a storage account has activity in its file endpoint but not in its table or queue endpoints, only logs that pertain to Azure Files are created. Azure Storage logs contain detailed information about successful and failed requests to a storage service. This information can be used to monitor individual requests and to diagnose issues with a storage service. Requests are logged on a best-effort basis.
-Log entries are created only if there are requests made against the service endpoint. For example, if a storage account has activity in its table endpoint but not in its blob or queue endpoints, only logs that pertain to the table service are created. Azure Storage logs contain detailed information about successful and failed requests to a storage service. This information can be used to monitor individual requests and to diagnose issues with a storage service. Requests are logged on a best-effort basis.
+The [Activity log](/azure/azure-monitor/essentials/activity-log) is a platform log in Azure that provides insight into subscription-level events. You can view it independently or route it to Azure Monitor Logs, where you can run much more complex queries by using Log Analytics.
### Log authenticated requests
Requests made by the Table storage service itself, such as log creation or delet
- Successful requests
- Server errors
-- Time-out errors for both client and server
+- Timeout errors for both client and server
- Failed GET requests with the error code 304 (Not Modified)

All other failed anonymous requests aren't logged. For a full list of the logged data, see [Storage logged operations and status messages](/rest/api/storageservices/storage-analytics-logged-operations-and-status-messages) and [Storage log format](monitor-table-storage-reference.md).
-### Accessing logs in a storage account
-
-Logs appear as blobs stored to a container in the target storage account. Data is collected and stored inside a single blob as a line-delimited JSON payload. The name of the blob follows this naming convention:
-
-`https://<destination-storage-account>.blob.core.windows.net/insights-logs-<storage-operation>/resourceId=/subscriptions/<subscription-ID>/resourceGroups/<resource-group-name>/providers/Microsoft.Storage/storageAccounts/<source-storage-account>/tableServices/default/y=<year>/m=<month>/d=<day>/h=<hour>/m=<minute>/PT1H.json`
-
-Here's an example:
+### Sample Kusto queries
-`https://mylogstorageaccount.blob.core.windows.net/insights-logs-storagewrite/resourceId=/subscriptions/`<br>`208841be-a4v3-4234-9450-08b90c09f4/resourceGroups/myresourcegroup/providers/Microsoft.Storage/storageAccounts/mystorageaccount/tableServices/default/y=2019/m=07/d=30/h=23/m=12/PT1H.json`
+If you send logs to Log Analytics, you can access those logs by using Azure Monitor log queries. For more information, see [Log Analytics tutorial](/azure/azure-monitor/logs/log-analytics-tutorial).
-### Accessing logs in an event hub
-
-Logs sent to an event hub aren't stored as a file, but you can verify that the event hub received the log information. In the Azure portal, go to your event hub and verify that the **incoming messages** count is greater than zero.
-
-![Audit logs](media/monitor-table-storage/event-hub-log.png)
-
-You can access and read log data that's sent to your event hub by using security information and event management and monitoring tools. For more information, see [Azure resource logs](../../azure-monitor/essentials/resource-logs.md#send-to-azure-event-hubs).
-
-### Accessing logs in a Log Analytics workspace
-
-You can access logs sent to a Log Analytics workspace by using Azure Monitor log queries.
-
-For more information, see [Stream Azure monitoring data to event hub and external partners](../../azure-monitor/essentials/stream-monitoring-data-event-hubs.md).
-
-Data is stored in the **StorageTableLogs** table.
-
-#### Sample Kusto queries
-
-Here are some queries that you can enter in the **Log search** bar to help you monitor your Table storage. These queries work with the [new language](../../azure-monitor/logs/log-query-overview.md).
+Here are some queries that you can enter in the **Log search** bar to help you monitor your Table storage. These queries work with the [new language](../../azure-monitor/logs/log-query-overview.md).
> [!IMPORTANT]
> When you select **Logs** from the storage account resource group menu, Log Analytics is opened with the query scope set to the current resource group. This means that log queries will only include data from that resource group. If you want to run a query that includes data from other resources or data from other Azure services, select **Logs** from the **Azure Monitor** menu. See [Log query scope and time range in Azure Monitor Log Analytics](../../azure-monitor/logs/scope.md) for details.
Use these queries to help you monitor your Azure Storage accounts:
| summarize count() by OperationName | sort by count_ desc | render piechart
```
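If you prefer to run such a query from a script, here's a minimal PowerShell sketch. It assumes a placeholder Log Analytics workspace ID and queries the **StorageTableLogs** table for the most common operations over the last day:

```powershell
# Sketch: run a Kusto query against the StorageTableLogs table from PowerShell.
# The workspace ID is a placeholder - replace it with your Log Analytics workspace ID (the GUID, not the resource ID).
$workspaceId = "<log-analytics-workspace-guid>"

$query = @"
StorageTableLogs
| where TimeGenerated > ago(1d)
| summarize count() by OperationName
| sort by count_ desc
"@

$result = Invoke-AzOperationalInsightsQuery -WorkspaceId $workspaceId -Query $query
$result.Results | Format-Table
```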
+## Alerts
+
+Azure Monitor alerts proactively notify you when important conditions are found in your monitoring data. They allow you to identify and address issues in your system before your customers notice them. You can set alerts on [metrics](../../azure-monitor/alerts/alerts-metric-overview.md), [logs](../../azure-monitor/alerts/alerts-unified-log.md), and the [activity log](../../azure-monitor/alerts/activity-log-alerts.md).
+
+The following table lists some example scenarios to monitor and the proper metric to use for the alert:
+
+| Scenario | Metric to use for alert |
+|-|-|
+| Table Storage service is throttled. | Metric: Transactions<br>Dimension name: Response type |
+| Table Storage requests are successful 99% of the time. | Metric: Availability<br>Dimension names: Geo type, API name, Authentication |
+| Table Storage egress has exceeded 500 GiB in one day. | Metric: Egress<br>Dimension names: Geo type, API name, Authentication |
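For example, here's a minimal PowerShell sketch that creates a metric alert for the second scenario in the preceding table (availability dropping below 99 percent). The resource IDs, names, window size, and severity are assumptions; adjust them for your environment.

```powershell
# Sketch: alert when Table Storage availability drops below 99 percent.
# Resource IDs and names are placeholders - replace them with your own values.
$tableServiceId = "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Storage/storageAccounts/<storage-account>/tableServices/default"
$actionGroupId  = "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/microsoft.insights/actionGroups/<action-group>"

# Define the alert condition on the Availability metric.
$condition = New-AzMetricAlertRuleV2Criteria -MetricName "Availability" `
    -TimeAggregation Average -Operator LessThan -Threshold 99

# Create the metric alert rule on the table service.
Add-AzMetricAlertRuleV2 -Name "TableAvailabilityBelow99" `
    -ResourceGroupName "<resource-group>" `
    -TargetResourceId $tableServiceId `
    -Condition $condition `
    -WindowSize 00:05:00 `
    -Frequency 00:05:00 `
    -Severity 2 `
    -ActionGroupId $actionGroupId
```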
## FAQ

**Does Azure Storage support metrics for Managed Disks or Unmanaged Disks?**
update-center Scheduled Patching https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-center/scheduled-patching.md
You can create a new Guest OS update maintenance configuration or modify an exis
The update management center (preview) allows you to target a dynamic group of Azure or non-Azure VMs for update deployment. Using a dynamic group keeps you from having to edit your deployment to update machines. You can use subscription, resource group, tags or regions to define the scope and use dynamic scoping by using built-in policies which you can customize as per your use-case.

> [!NOTE]
-> This policy also ensures that the patch orchestration property for Azure machines is set to **Automatic by Platform** or **Azure-orchestrated (preview)** as it is a prerequisite for scheduled patching.
+> This policy also ensures that the patch orchestration property for Azure machines is set to **Azure-orchestrated (Automatic by Platform)** as it is a prerequisite for scheduled patching.
+ ### Assign a policy
virtual-desktop Multimedia Redirection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/multimedia-redirection.md
You can install the multimedia redirection extension using Group Policy, either
# [Edge](#tab/edge)
-1. Download and install the Microsoft Edge administrative template by following the directions in [Configure Microsoft Edge policy settings on Windows devices](/deployedge/configure-microsoft-edge.md#1-download-and-install-the-microsoft-edge-administrative-template)
+1. Download and install the Microsoft Edge administrative template by following the directions in [Configure Microsoft Edge policy settings on Windows devices](/deployedge/configure-microsoft-edge#1-download-and-install-the-microsoft-edge-administrative-template)
1. Next, decide whether you want to configure Group Policy centrally from your domain or locally for each session host:
virtual-desktop Troubleshoot Statuses Checks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/troubleshoot-statuses-checks.md
+
+ Title: Azure Virtual Desktop session host statuses and health checks
+description: How to troubleshoot the failed session host statuses and failed health checks
++ Last updated : 10/18/2022+++
+# Azure Virtual Desktop session host statuses and health checks
+
+The Azure Virtual Desktop Agent regularly runs health checks on the session host. The agent assigns these health checks various statuses that include descriptions of how to fix common issues. This article will tell you what each status means and how to act on them during a health check.
+
+## Session host statuses
+
+The following table lists all statuses for session hosts in the Azure portal and describes each potential status. *Available* is considered the ideal default status. Any other statuses represent potential issues that you need to take care of to ensure the service works properly.
+
+>[!NOTE]
+>If an issue is listed as "non-fatal," the service can still run with the issue active. However, we recommend you resolve the issue as soon as possible to prevent future issues. If an issue is listed as "fatal," then it will prevent the service from running. You must resolve all fatal issues to make sure your users can access the session host.
+
+| Session host status | Description | How to resolve related issues |
+||::|::|
+|Available| This status means that the session host passed all health checks and is available to accept user connections. If a session host has reached its maximum session limit but has passed health checks, it will still be listed as "Available." |N/A|
+|Needs Assistance|The session host didn't pass one or more of the following non-fatal health checks: the Geneva Monitoring Agent health check, the Azure Instance Metadata Service (IMDS) health check, or the URL health check. You can find which health checks have failed in the session host's detailed view in the Azure portal. |Follow the directions in [Error: VMs are stuck in "Needs Assistance" state](#error-vms-are-stuck-in-the-needs-assistance-state) to resolve the issue.|
+|Shutdown| The session host has been shut down. If the agent enters a shutdown state before connecting to the broker, its status will change to *Unavailable*. If you've shut down your session host and see an *Unavailable* status, that means the session host shut down before it could update the status, and doesn't indicate an issue. You should use this status with the [VM instance view API](/rest/api/compute/virtual-machines/instance-view?tabs=HTTP#virtualmachineinstanceview) to determine the power state of the VM. |Turn on the session host. |
+|Unavailable| The session host is either turned off or hasn't passed fatal health checks, which prevents user sessions from connecting to this session host. |If the session host is off, turn it back on. If the session host didn't pass the domain join check or side-by-side stack listener health checks, refer to the table in [Health check](#health-check) for ways to resolve the issue. If the status is still "Unavailable" after following those directions, open a support case.|
+|Upgrade Failed| This status means that the Azure Virtual Desktop Agent couldn't update or upgrade. This doesn't affect new or existing user sessions. |Follow the instructions in the [Azure Virtual Desktop Agent troubleshooting article](troubleshoot-agent.md).|
+|Upgrading| This status means that the agent upgrade is in progress. This status will be updated to "Available" once the upgrade is done and the session host can accept connections again.|If your session host has been stuck in the "Upgrading" state, then [reinstall the agent](troubleshoot-agent.md#error-session-host-vms-are-stuck-in-unavailable-or-upgrading-state).|
+
+## Health check
+
+The health check is a test run by the agent on the session host. The following table lists each type of health check and describes what it does.
+
+| Health check name | Description | What happens if the session host doesn't pass the check |
+||::|::|
+| Domain joined | Verifies that the session host is joined to a domain controller. | If this check fails, users won't be able to connect to the session host. To solve this issue, join your session host to a domain. |
+| Geneva Monitoring Agent | Verifies that the session host has a healthy monitoring agent by checking if the monitoring agent is installed and running in the expected registry location. | If this check fails, it's semi-fatal. There may be successful connections, but they'll contain no logging information. To resolve this, make sure a monitoring agent is installed. If it's already installed, contact Microsoft support. |
+| Instance Metadata Service (IMDS) reachable | Verifies that the session host can access the IMDS endpoint. | If this check fails, it's semi-fatal. There may be successful connections, but they won't contain logging information. To resolve this issue, you'll need to reconfigure your networking, firewall, or proxy settings. |
+| Side-by-side (SxS) Stack Listener | Verifies that the side-by-side stack is up and running, listening, and ready to receive connections. | If this check fails, it's fatal, and users won't be able to connect to the session host. Try restarting your virtual machine (VM). If this doesn't work, contact Microsoft support. |
+| UrlsAccessibleCheck | Verifies that the required Azure Virtual Desktop service and Geneva URLs are reachable from the session host, including the RdTokenUri, RdBrokerURI, RdDiagnosticsUri, and storage blob URLs for Geneva agent monitoring. | If this check fails, it isn't always fatal. Connections may succeed, but if certain URLs are inaccessible, the agent can't apply updates or log diagnostic information. To resolve this, follow the directions in [Error: VMs are stuck in the Needs Assistance state](#error-vms-are-stuck-in-the-needs-assistance-state). |
+
+## Error: VMs are stuck in the "Needs Assistance" state
+
+If the session host doesn't pass the *UrlsAccessibleCheck* health check, you'll need to identify which [required URL](safe-url-list.md) your deployment is currently blocking. Once you know which URL is blocked, identify which setting is blocking that URL and remove it.
+
+There are two reasons why the service is blocking a required URL:
+
+- You have an active firewall that's blocking most outbound traffic and access to the required URLs.
+- Your local hosts file is blocking the required websites.
+
+To resolve a firewall-related issue, add a rule that allows outbound connections over TCP ports 80 and 443 to the blocked URLs.
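If the block comes from Windows Defender Firewall on the session host itself (rather than from a network appliance), a sketch like the following can open the outbound ports. The rule name is an assumption; narrow the rule to the specific endpoints your environment blocks where possible.

```powershell
# Sketch: allow outbound TCP 80/443 from the session host (local Windows Defender Firewall only).
# The display name is a placeholder; consider scoping the rule further, for example with -RemoteAddress.
New-NetFirewallRule -DisplayName "Allow AVD required URLs" `
    -Direction Outbound `
    -Protocol TCP `
    -RemotePort 80,443 `
    -Action Allow
```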
+
+If your local hosts file is blocking the required URLs, make sure none of the required URLs are in the **Hosts** file on your device. You can find the Hosts file location at the following registry key and value:
+
+**Key:** HKLM\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters
+
+**Type:** REG_EXPAND_SZ
+
+**Name:** DataBasePath
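For example, a quick PowerShell check (a minimal sketch) reads that registry value and lists the active entries in the hosts file so you can confirm none of the required URLs appear there:

```powershell
# Sketch: locate the hosts file from the registry and show its active (non-comment) entries.
$params    = Get-ItemProperty -Path "HKLM:\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters" -Name DataBasePath
$hostsFile = Join-Path ([Environment]::ExpandEnvironmentVariables($params.DataBasePath)) "hosts"

Get-Content $hostsFile | Where-Object { $_.Trim() -and $_ -notmatch '^\s*#' }
```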
+
+If the session host doesn't pass the *MetaDataServiceCheck* health check, then the service can't access the IMDS endpoint. To resolve this issue, you'll need to do the following things:
+
+- Reconfigure your networking, firewall, or proxy settings to unblock the IP address 169.254.169.254.
+- Make sure your HTTP clients bypass web proxies within the VM when querying IMDS. We recommend that you allow the required IP address in any firewall policies within the VM that deal with outbound network traffic direction.
+
+If your issue is caused by a web proxy, add an exception for 169.254.169.254 in the web proxy's configuration. To add this exception, open an elevated Command Prompt or PowerShell session and run the following command:
+
+```cmd
+netsh winhttp set proxy proxy-server="http=<customerwebproxyhere>" bypass-list="169.254.169.254"
+```
+
+## Next steps
+
+- For an overview on troubleshooting Azure Virtual Desktop and the escalation tracks, see [Troubleshooting overview, feedback, and support](troubleshoot-set-up-overview.md).
+- To troubleshoot issues while creating an Azure Virtual Desktop environment and host pool in an Azure Virtual Desktop environment, see [Environment and host pool creation](troubleshoot-set-up-issues.md).
+- To troubleshoot issues while configuring a virtual machine (VM) in Azure Virtual Desktop, see [Session host virtual machine configuration](troubleshoot-vm-configuration.md).
+- To troubleshoot issues related to the Azure Virtual Desktop agent or session connectivity, see [Troubleshoot common Azure Virtual Desktop Agent issues](troubleshoot-agent.md).
+- To troubleshoot issues when using PowerShell with Azure Virtual Desktop, see [Azure Virtual Desktop PowerShell](troubleshoot-powershell.md).
+- To go through a troubleshooting tutorial, see [Tutorial: Troubleshoot Resource Manager template deployments](../azure-resource-manager/templates/template-tutorial-troubleshoot.md).
virtual-machine-scale-sets Virtual Machine Scale Sets Automatic Upgrade https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/virtual-machine-scale-sets-automatic-upgrade.md
The following platform SKUs are currently supported (and more are added periodic
| Publisher | OS Offer | Sku |
|-|-|-|
| Canonical | UbuntuServer | 18.04-LTS |
-| Canonical | UbuntuServer | 18.04-LTS-Gen2 |
-| Canonical | 0001-com-ubuntu-server-focal | 20.04-LTS |
-| Canonical | 0001-com-ubuntu-server-focal | 20.04-LTS-Gen2 |
+| Canonical | UbuntuServer | 18_04-LTS-Gen2 |
+| Canonical | 0001-com-ubuntu-server-focal | 20_04-LTS |
+| Canonical | 0001-com-ubuntu-server-focal | 20_04-LTS-Gen2 |
+| Canonical | 0001-com-ubuntu-server-jammy | 22_04-LTS |
| MicrosoftCblMariner | Cbl-Mariner | cbl-mariner-1 |
| MicrosoftCblMariner | Cbl-Mariner | 1-Gen2 |
| MicrosoftCblMariner | Cbl-Mariner | cbl-mariner-2 |
-| MicrosoftCblMariner | Cbl-Mariner | cbl-mariner-2-Gen2
+| MicrosoftCblMariner | Cbl-Mariner | cbl-mariner-2-Gen2 |
+| MicrosoftSqlServer | Sql2017-ws2019| enterprise |
| MicrosoftWindowsServer | WindowsServer | 2012-R2-Datacenter |
| MicrosoftWindowsServer | WindowsServer | 2016-Datacenter |
| MicrosoftWindowsServer | WindowsServer | 2016-Datacenter-gensecond |
The following platform SKUs are currently supported (and more are added periodic
| MicrosoftWindowsServer | WindowsServer | 2019-Datacenter-gs |
| MicrosoftWindowsServer | WindowsServer | 2019-Datacenter-smalldisk |
| MicrosoftWindowsServer | WindowsServer | 2019-Datacenter-with-Containers |
-| MicrosoftWindowsServer | WindowsServer | 2019-Datacenter-with-containers-gs |
+| MicrosoftWindowsServer | WindowsServer | 2019-Datacenter-with-Containers-gs |
| MicrosoftWindowsServer | WindowsServer | 2022-Datacenter |
| MicrosoftWindowsServer | WindowsServer | 2022-Datacenter-smalldisk |
| MicrosoftWindowsServer | WindowsServer | 2022-Datacenter-smalldisk-g2 |
The following platform SKUs are currently supported (and more are added periodic
| MicrosoftWindowsServer | WindowsServer | 2022-Datacenter-core |
| MicrosoftWindowsServer | WindowsServer | 2022-Datacenter-core-smalldisk |
| MicrosoftWindowsServer | WindowsServer | 2022-Datacenter-g2 |
+| MicrosoftWindowsServer | WindowsServer | Datacenter-core-20h2-with-containers-smalldisk-gs |
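To check which SKUs a publisher currently offers in your region before enabling automatic upgrade, a minimal PowerShell sketch like the following can help. The location and offer values are assumptions; substitute the publisher and offer values from the table above.

```powershell
# Sketch: list the image SKUs available for an offer in a region.
# Location, publisher, and offer are placeholders - use the values from the supported SKU table.
Get-AzVMImageSku -Location "westeurope" `
    -PublisherName "Canonical" `
    -Offer "0001-com-ubuntu-server-focal" |
    Select-Object Skus
```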
## Requirements for configuring automatic OS image upgrade
virtual-machines Automation Configure Devops https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/automation-configure-devops.md
Title: Configure Azure DevOps Services for SAP Deployment Automation Framework
-description: Configure your Azure DevOps Services for the SAP Deployment Automation Framework on Azure.
+ Title: Configure Azure DevOps Services for SAP deployment automation framework
+description: Configure your Azure DevOps Services for the SAP deployment automation framework on Azure.
Previously updated : 09/25/2022 Last updated : 10/19/2022
-# Use SAP Deployment Automation Framework from Azure DevOps Services
+# Use SAP deployment automation framework from Azure DevOps Services
Using Azure DevOps will streamline the deployment process by providing pipelines that can be executed to perform both the infrastructure deployment and the configuration and SAP installation activities. You can use Azure Repos to store your configuration files and Azure Pipelines to deploy and configure the infrastructure and the SAP application.
You can use Azure Repos to store your configuration files and Azure Pipelines to
To use Azure DevOps Services, you'll need an Azure DevOps organization. An organization is used to connect groups of related projects. Use your work or school account to automatically connect your organization to your Azure Active Directory (Azure AD). To create an account, open [Azure DevOps](https://azure.microsoft.com/services/devops/) and either _sign-in_ or create a new account.
-## Configure Azure DevOps Services for the SAP Deployment Automation Framework
+## Configure Azure DevOps Services for the SAP deployment automation framework
-You can use the following script to do a basic installation of Azure Devops Services for the SAP Deployment Automation Framework.
+You can use the following script to do a basic installation of Azure DevOps Services for the SAP deployment automation framework.
Log in to Azure Cloud Shell

```bash
Invoke-WebRequest -Uri https://raw.githubusercontent.com/Azure/sap-automation/ma
You can run the 'Create Sample Deployer Configuration' pipeline to create a sample configuration for the Control Plane. When running the pipeline, choose the appropriate Azure region.
-## Manual configuration of Azure DevOps Services for the SAP Deployment Automation Framework
+## Manual configuration of Azure DevOps Services for the SAP deployment automation framework
### Create a new project
Open (https://dev.azure.com) and create a new project by clicking on the _New Pr
Record the URL of the project. ### Import the repository
-Start by importing the SAP Deployment Automation Framework GitHub repository into Azure Repos.
+Start by importing the SAP deployment automation framework GitHub repository into Azure Repos.
Navigate to the Repositories section and choose Import a repository, import the 'https://github.com/Azure/sap-automation.git' repository into Azure DevOps. For more info, see [Import a repository](/azure/devops/repos/git/import-git-repository?view=azure-devops&preserve-view=true)
-If you're unable to import a repository, you can create the 'sap-automation' repository, and manually import the content from the SAP Deployment Automation Framework GitHub repository to it.
+If you're unable to import a repository, you can create the 'sap-automation' repository, and manually import the content from the SAP deployment automation framework GitHub repository to it.
### Create the repository for manual import
Clone the repository to a local folder by clicking the _Clone_ button in the Fi
### Manually importing the repository content using a local clone
-You can also download the content from the SAP Deployment Automation Framework repository manually and add it to your local clone of the Azure DevOps repository.
+You can also download the content from the SAP deployment automation framework repository manually and add it to your local clone of the Azure DevOps repository.
Navigate to 'https://github.com/Azure/SAP-automation' repository and download the repository content as a ZIP file by clicking the _Code_ button and choosing _Download ZIP_.
Select the source control icon and provide a message about the change, for examp
### Create configuration root folder > [!IMPORTANT]
- > In order to ensure that your configuration files are not overwritten by changes in the SAP Deployment Automation Framework, store them in a separate folder hierarchy.
+ > In order to ensure that your configuration files are not overwritten by changes in the SAP deployment automation framework, store them in a separate folder hierarchy.
-Create a top level folder called 'WORKSPACES', this folder will be the root folder for all the SAP deployment configuration files. Create the following folders in the 'WORKSPACES' folder: 'DEPLOYER', 'LIBRARY', 'LANDSCAPE' and 'SYSTEM'. These will contain the configuration files for the different components of the SAP Deployment Automation Framework.
+Create a top-level folder called 'WORKSPACES'; this folder will be the root folder for all the SAP deployment configuration files. Create the following folders in the 'WORKSPACES' folder: 'DEPLOYER', 'LIBRARY', 'LANDSCAPE' and 'SYSTEM'. These will contain the configuration files for the different components of the SAP deployment automation framework.
Optionally you may copy the sample configuration files from the 'samples/WORKSPACES' folders to the WORKSPACES folder you created, this will allow you to experiment with sample deployments.
virtual-machines Automation Configure Sap Parameters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/automation-configure-sap-parameters.md
description: Define SAP parameters for Ansible
Previously updated : 02/14/2022 Last updated : 10/19/2022
disks:
From the v3.4 release, it is possible to deploy SAP on Azure systems in a Shared Home configuration using an Oracle database backend. For more information on running SAP on Oracle in Azure, see [Azure Virtual Machines Oracle DBMS deployment for SAP workload](dbms_guide_oracle.md).
-In order to install the Oracle backend using the SAP Deployment Automation Framework, you need to provide the following parameters
+In order to install the Oracle backend using the SAP deployment automation framework, you need to provide the following parameters
> [!div class="mx-tdCol2BreakAll "] > | Parameter | Description | Type |
virtual-machines Automation Configure Webapp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/automation-configure-webapp.md
Title: Configure a Deployer Web Application for SAP Deployment Automation Framework
+ Title: Configure a Deployer Web Application for SAP deployment automation framework
description: Configure a web app as a part of the control plane to help creating and deploying SAP workload zones and systems on Azure. Previously updated : 08/1/2022 Last updated : 10/19/2022
rm ./manifest.json
## Deploy via Azure Pipelines
-For full instructions on setting up the web app using Azure DevOps, see [Use SAP Deployment Automation Framework from Azure DevOps Services](automation-configure-devops.md)
+For full instructions on setting up the web app using Azure DevOps, see [Use SAP deployment automation framework from Azure DevOps Services](automation-configure-devops.md)
### Summary of steps required to set up the web app before deploying the control plane: 1. Add the web app deployment pipeline (deploy/pipelines/21-deploy-web-app.yaml).
virtual-machines Automation Devops Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/automation-devops-tutorial.md
Title: SAP deployment automation framework DevOps hands-on lab
-description: DevOps Hands-on lab for the SAP Deployment Automation Framework on Azure.
+description: DevOps Hands-on lab for the SAP deployment automation framework on Azure.
Previously updated : 12/14/2021 Last updated : 10/19/2022
-# SAP Deployment Automation Framework DevOps - Hands-on lab
+# SAP deployment automation framework DevOps - Hands-on lab
This tutorial shows how to perform the deployment activities of the [SAP deployment automation framework on Azure](automation-deployment-framework.md) using Azure DevOps Services.
virtual-machines Automation Naming Module https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/automation-naming-module.md
description: Explanation of how to implement custom naming conventions for the S
Previously updated : 11/17/2021 Last updated : 10/19/2022
The names for the key vaults are defined in the "keyvault_names" structure. The
``` > [!NOTE]
-> This key vault names need to be unique across Azure, SDAF appends 3 random characters (ABC in the example) at the end of the key vault name to reduce the likelihood for name conflicts.
+> These key vault names need to be unique across Azure; the SAP deployment automation framework appends 3 random characters (ABC in the example) at the end of the key vault name to reduce the likelihood of name conflicts.
The "private_access" names are currently not used.
The names for the storage accounts are defined in the "storageaccount_names" str
``` > [!NOTE]
-> This key vault names need to be unique across Azure, SDAF appends 3 random characters (abc in the example) at the end of the key vault name to reduce the likelihood for name conflicts.
+> These storage account names need to be unique across Azure; the SAP deployment automation framework appends 3 random characters (abc in the example) at the end of the storage account name to reduce the likelihood of name conflicts.
### Virtual Machine names
virtual-machines Automation Tools Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/automation-tools-configuration.md
Title: Configuring external tools for the SAP Deployment Automation Framework
-description: Describes how to configure external tools for using SAP Deployment Automation Framework.
+ Title: Configuring external tools for the SAP deployment automation framework
+description: Describes how to configure external tools for using SAP deployment automation framework.
Previously updated : 12/14/2021 Last updated : 10/19/2022
-# Configuring external tools to use with the SAP Deployment Automation Framework
+# Configuring external tools to use with the SAP deployment automation framework
-This document describes how to configure external tools to use the SAP Deployment Automation Framework.
+This document describes how to configure external tools to use the SAP deployment automation framework.
## Configuring Visual Studio Code
virtual-machines Azure Monitor Alerts Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/azure-monitor-alerts-portal.md
Previously updated : 07/28/2022 Last updated : 10/19/2022 #Customer intent: As a developer, I want to configure alerts in Azure Monitor for SAP solutions so that I can receive alerts and notifications about my SAP systems.
Last updated 07/28/2022
[!INCLUDE [Azure Monitor for SAP solutions public preview notice](./includes/preview-azure-monitor.md)]
-In this how-to guide, you'll learn how to configure alerts in Azure Monitor for SAP solutions (AMS). You can configure alerts and notifications from the [Azure portal](https://azure.microsoft.com/features/azure-portal) using its browser-based interface.
+In this how-to guide, you'll learn how to configure alerts in Azure Monitor for SAP solutions. You can configure alerts and notifications from the [Azure portal](https://azure.microsoft.com/features/azure-portal) using its browser-based interface.
-This content applies to both versions of the service, AMS and AMS (classic).
+This content applies to both versions of the service, Azure Monitor for SAP solutions and Azure Monitor for SAP solutions (classic).
## Prerequisites - An Azure subscription.-- A deployment of an AMS resource with at least one provider. You can configure providers for:
+- A deployment of an Azure Monitor for SAP solutions resource with at least one provider. You can configure providers for:
- The SAP application (NetWeaver) - SAP HANA - Microsoft SQL Server
virtual-machines Azure Monitor Providers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/azure-monitor-providers.md
Title: What are providers in Azure Monitor for SAP solutions? (preview)
-description: This article provides answers to frequently asked questions about Azure monitor for SAP solutions providers.
+description: This article provides answers to frequently asked questions about Azure Monitor for SAP solutions providers.
Previously updated : 07/28/2022 Last updated : 10/19/2022 #Customer intent: As a developer, I want to learn what providers are available for Azure Monitor for SAP solutions so that I can connect to these providers.
[!INCLUDE [Azure Monitor for SAP solutions public preview notice](./includes/preview-azure-monitor.md)]
-In the context of *Azure Monitor for SAP solutions (AMS)*, a *provider* contains the connection information for a corresponding component and helps to collect data from there. There are multiple provider types. For example, an SAP HANA provider is configured for a specific component within the SAP landscape, like an SAP HANA database. You can configure an AMS resource (also known as SAP monitor resource) with multiple providers of the same type or multiple providers of multiple types.
+In the context of *Azure Monitor for SAP solutions*, a *provider* contains the connection information for a corresponding component and helps to collect data from there. There are multiple provider types. For example, an SAP HANA provider is configured for a specific component within the SAP landscape, like an SAP HANA database. You can configure an Azure Monitor for SAP solutions resource (also known as SAP monitor resource) with multiple providers of the same type or multiple providers of multiple types.
-This content applies to both versions of the service, *AMS* and *AMS (classic)*.
+This content applies to both versions of the service, *Azure Monitor for SAP solutions* and *Azure Monitor for SAP solutions (classic)*.
You can choose to configure different provider types for data collection from the corresponding component in their SAP landscape. For example, you can configure one provider for the SAP HANA provider type, another provider for high availability cluster provider type, and so on. You can also configure multiple providers of a specific provider type to reuse the same SAP monitor resource and associated managed group. For more information, see [Manage Azure Resource Manager resource groups by using the Azure portal](../../../azure-resource-manager/management/manage-resource-groups-portal.md).
-![Diagram showing AMS connection to available providers.](./media/azure-monitor-providers/providers.png)
+![Diagram showing Azure Monitor for SAP solutions connection to available providers.](./media/azure-monitor-providers/providers.png)
-It's recommended to configure at least one provider when you deploy an AMS resource. By configuring a provider, you start data collection from the corresponding component for which the provider is configured.
+It's recommended to configure at least one provider when you deploy an Azure Monitor for SAP solutions resource. By configuring a provider, you start data collection from the corresponding component for which the provider is configured.
-If you don't configure any providers at the time of deployment, the AMS resource is still deployed, but no data is collected. You can add providers after deployment through the SAP monitor resource within the Azure portal. You can add or delete providers from the SAP monitor resource at any time.
+If you don't configure any providers at the time of deployment, the Azure Monitor for SAP solutions resource is still deployed, but no data is collected. You can add providers after deployment through the SAP monitor resource within the Azure portal. You can add or delete providers from the SAP monitor resource at any time.
## Provider type: SAP NetWeaver
-You can configure one or more providers of provider type SAP NetWeaver to enable data collection from SAP NetWeaver layer. AMS NetWeaver provider uses the existing [**SAPControl** Web service](https://www.sap.com/documents/2016/09/0a40e60d-8b7c-0010-82c7-eda71af511fa.html) interface to retrieve the appropriate information.
+You can configure one or more providers of provider type SAP NetWeaver to enable data collection from SAP NetWeaver layer. Azure Monitor for SAP solutions NetWeaver provider uses the existing [**SAPControl** Web service](https://www.sap.com/documents/2016/09/0a40e60d-8b7c-0010-82c7-eda71af511fa.html) interface to retrieve the appropriate information.
-For the current release, the following SOAP web methods are the standard, out-of-box methods invoked by AMS.
+For the current release, the following SOAP web methods are the standard, out-of-box methods invoked by Azure Monitor for SAP solutions.
| Web method | ABAP support | Java support | Metrics | | - | | | - |
Configuring Microsoft SQL Server provider requires:
## Provider type: High-availability cluster
-You can configure one or more providers of provider type *High-availability cluster* to enable data collection from Pacemaker cluster within the SAP landscape. The High-availability cluster provider connects to Pacemaker using the [ha_cluster_exporter](https://github.com/ClusterLabs/ha_cluster_exporter) for **SUSE** based clusters and by using [Performance co-pilot](https://access.redhat.com/articles/6139852) for **RHEL** based clusters. AMS then pulls data from the database and pushes it to Log Analytics workspace in your subscription. The High-availability cluster provider collects data every 60 seconds from Pacemaker.
+You can configure one or more providers of provider type *High-availability cluster* to enable data collection from Pacemaker cluster within the SAP landscape. The High-availability cluster provider connects to Pacemaker using the [ha_cluster_exporter](https://github.com/ClusterLabs/ha_cluster_exporter) for **SUSE** based clusters and by using [Performance co-pilot](https://access.redhat.com/articles/6139852) for **RHEL** based clusters. Azure Monitor for SAP solutions then pulls data from the database and pushes it to Log Analytics workspace in your subscription. The High-availability cluster provider collects data every 60 seconds from Pacemaker.
In public preview, you can expect to see the following data with the High-availability cluster provider: - Cluster status represented as a roll-up of node and resource status
To configure an OS (Linux) provider, two primary steps are involved:
1. Configure an OS (Linux) provider for each BareMetal or VM node instance in your environment. To configure the OS (Linux) provider, the following information is required:
- - **Name**: a name for this provider, unique to the AMS instance.
+ - **Name**: a name for this provider, unique to the Azure Monitor for SAP solutions instance.
- **Node Exporter endpoint**: usually `http://<servername or ip address>:9100/metrics`. Port 9100 is exposed for the **Node_Exporter** endpoint.
virtual-machines Azure Monitor Sap Quickstart Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/azure-monitor-sap-quickstart-powershell.md
Previously updated : 07/21/2022 Last updated : 10/19/2022 ms.devlang: azurepowershell # Customer intent: As a developer, I want to deploy Azure Monitor for SAP solutions with PowerShell so that I can create resources with PowerShell.
[!INCLUDE [Azure Monitor for SAP solutions public preview notice](./includes/preview-azure-monitor.md)]
-Get started with Azure Monitor for SAP solutions (AMS) by using the
-[Az.HanaOnAzure](/powershell/module/az.hanaonazure/#sap-hana-on-azure) PowerShell module to create AMS resources. You'll create a resource group, set up monitoring, and create a provider instance.
+Get started with Azure Monitor for SAP solutions by using the
+[Az.HanaOnAzure](/powershell/module/az.hanaonazure/#sap-hana-on-azure) PowerShell module to create Azure Monitor for SAP solutions resources. You'll create a resource group, set up monitoring, and create a provider instance.
-This content only applies to the AMS (classic) version of the service.
+This content only applies to the Azure Monitor for SAP solutions (classic) version of the service.
## Prerequisites
virtual-machines Azure Monitor Sap Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/azure-monitor-sap-quickstart.md
Previously updated : 07/21/2022 Last updated : 10/19/2022 # Customer intent: As a developer, I want to deploy Azure Monitor for SAP solutions in the Azure portal so that I can configure providers.
Last updated 07/21/2022
[!INCLUDE [Azure Monitor for SAP solutions public preview notice](./includes/preview-azure-monitor.md)]
-Get started with Azure Monitor for SAP solutions (AMS) by using the [Azure portal](https://azure.microsoft.com/features/azure-portal) to deploy AMS resources and configure providers.
+Get started with Azure Monitor for SAP solutions by using the [Azure portal](https://azure.microsoft.com/features/azure-portal) to deploy Azure Monitor for SAP solutions resources and configure providers.
-This content applies to both versions of the service, AMS and AMS (classic).
+This content applies to both versions of the service, Azure Monitor for SAP solutions and Azure Monitor for SAP solutions (classic).
## Prerequisites If you don't have an Azure subscription, create a [free](https://azure.microsoft.com/free/) account before you begin.
-## Create AMS monitoring resource
+## Create Azure Monitor for SAP solutions monitoring resource
1. Sign in to the [Azure portal](https://portal.azure.com).
If you don't have an Azure subscription, create a [free](https://azure.microsoft
![Diagram that shows Azure Monitor for SAP solutions Quick Start 2.](./media/azure-monitor-sap/azure-monitor-quickstart-2-new.png)
-## Create AMS (classic) monitoring resource
+## Create Azure Monitor for SAP solutions (classic) monitoring resource
1. Sign in to the [Azure portal](https://portal.azure.com).
If you don't have an Azure subscription, create a [free](https://azure.microsoft
Learn more about Azure Monitor for SAP solutions. > [!div class="nextstepaction"]
-> [Configure AMS Providers](configure-netweaver-azure-monitor-sap-solutions.md)
+> [Configure Azure Monitor for SAP solutions Providers](configure-netweaver-azure-monitor-sap-solutions.md)
virtual-machines Configure Db 2 Azure Monitor Sap Solutions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/configure-db-2-azure-monitor-sap-solutions.md
Title: Create IBM Db2 provider for Azure Monitor for SAP solutions (preview)
-description: This article provides details to configure an IBM DB2 provider for Azure Monitor for SAP solutions (AMS).
+description: This article provides details to configure an IBM DB2 provider for Azure Monitor for SAP solutions.
Previously updated : 07/28/2022 Last updated : 10/19/2022 #Customer intent: As a developer, I want to create an IBM Db2 provider so that I can monitor the resource through Azure Monitor for SAP solutions.
[!INCLUDE [Azure Monitor for SAP solutions public preview notice](./includes/preview-azure-monitor.md)]
-In this how-to guide, you'll learn how to create an IBM Db2 provider for Azure Monitor for SAP solutions (AMS) through the Azure portal. This content applies only to AMS, not the AMS (classic) version.
+In this how-to guide, you'll learn how to create an IBM Db2 provider for Azure Monitor for SAP solutions through the Azure portal. This content applies only to Azure Monitor for SAP solutions, not the Azure Monitor for SAP solutions (classic) version.
## Prerequisites - An Azure subscription. -- An existing AMS resource. To create an AMS resource, see the [quickstart for the Azure portal](azure-monitor-sap-quickstart.md) or the [quickstart for PowerShell](azure-monitor-sap-quickstart-powershell.md).
+- An existing Azure Monitor for SAP solutions resource. To create an Azure Monitor for SAP solutions resource, see the [quickstart for the Azure portal](azure-monitor-sap-quickstart.md) or the [quickstart for PowerShell](azure-monitor-sap-quickstart-powershell.md).
## Create IBM Db2 provider
-To create the IBM Db2 provider for AMS:
+To create the IBM Db2 provider for Azure Monitor for SAP solutions:
1. Sign in to the [Azure portal](https://portal.azure.com).
-1. Go to the AMS service.
-1. Open the AMS resource you want to modify.
+1. Go to the Azure Monitor for SAP solutions service.
+1. Open the Azure Monitor for SAP solutions resource you want to modify.
1. On the resource's menu, under **Settings**, select **Providers**. 1. Select **Add** to add a new provider. 1. For **Type**, select **IBM Db2**.
To create the IBM Db2 provider for AMS:
## Next steps > [!div class="nextstepaction"]
-> [Learn about AMS provider types](azure-monitor-providers.md)
+> [Learn about Azure Monitor for SAP solutions provider types](azure-monitor-providers.md)
virtual-machines Configure Ha Cluster Azure Monitor Sap Solutions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/configure-ha-cluster-azure-monitor-sap-solutions.md
Title: Create a High Availability Pacemaker cluster provider for Azure Monitor for SAP solutions (preview)
-description: Learn how to configure High Availability (HA) Pacemaker cluster providers for Azure Monitor for SAP solutions (AMS).
+description: Learn how to configure High Availability (HA) Pacemaker cluster providers for Azure Monitor for SAP solutions.
Previously updated : 07/28/2022 Last updated : 10/19/2022 #Customer intent: As a developer, I want to create a High Availability Pacemaker cluster so I can use the resource with Azure Monitor for SAP solutions.
[!INCLUDE [Azure Monitor for SAP solutions public preview notice](./includes/preview-azure-monitor.md)]
-In this how-to guide, you'll learn to create a High Availability (HA) Pacemaker cluster provider for Azure Monitor for SAP solutions (AMS). You'll install the HA agent, then create the provider for AMS.
+In this how-to guide, you'll learn to create a High Availability (HA) Pacemaker cluster provider for Azure Monitor for SAP solutions. You'll install the HA agent, then create the provider for Azure Monitor for SAP solutions.
-This content applies to both AMS and AMS (classic) versions.
+This content applies to both Azure Monitor for SAP solutions and Azure Monitor for SAP solutions (classic) versions.
## Prerequisites - An Azure subscription. -- An existing AMS resource. To create an AMS resource, see the [quickstart for the Azure portal](azure-monitor-sap-quickstart.md) or the [quickstart for PowerShell](azure-monitor-sap-quickstart-powershell.md).
+- An existing Azure Monitor for SAP solutions resource. To create an Azure Monitor for SAP solutions resource, see the [quickstart for the Azure portal](azure-monitor-sap-quickstart.md) or the [quickstart for PowerShell](azure-monitor-sap-quickstart-powershell.md).
## Install HA agent
For RHEL-based clusters, install **performance co-pilot (PCP)** and the **pcp-pm
For RHEL-based pacemaker clusters, also install [PMProxy](https://access.redhat.com/articles/6139852) in each node.
-## Create provider for AMS
+## Create provider for Azure Monitor for SAP solutions
1. Sign in to the [Azure portal](https://portal.azure.com).
-1. Go to the AMS service.
-1. Open your AMS resource.
+1. Go to the Azure Monitor for SAP solutions service.
+1. Open your Azure Monitor for SAP solutions resource.
1. In the resource's menu, under **Settings**, select **Providers**. 1. Select **Add** to add a new provider.
For RHEL-based pacemaker clusters, also install [PMProxy](https://access.redhat.
## Next steps > [!div class="nextstepaction"]
-> [Learn about AMS provider types](azure-monitor-providers.md)
+> [Learn about Azure Monitor for SAP solutions provider types](azure-monitor-providers.md)
virtual-machines Configure Hana Azure Monitor Sap Solutions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/configure-hana-azure-monitor-sap-solutions.md
Title: Configure SAP HANA provider for Azure Monitor for SAP solutions (preview)
-description: Learn how to configure the SAP HANA provider for Azure Monitor for SAP solutions (AMS) through the Azure portal.
+description: Learn how to configure the SAP HANA provider for Azure Monitor for SAP solutions through the Azure portal.
Previously updated : 07/28/2022 Last updated : 10/19/2022 #Customer intent: As a developer, I want to create an SAP HANA provider so that I can use the resource with Azure Monitor for SAP solutions.
[!INCLUDE [Azure Monitor for SAP solutions public preview notice](./includes/preview-azure-monitor.md)]
-In this how-to guide, you'll learn to configure an SAP HANA provider for Azure Monitor for SAP solutions (AMS) through the Azure portal. There are instructions to set up the [current version](#configure-ams) and the [classic version](#configure-ams-classic) of AMS.
+In this how-to guide, you'll learn to configure an SAP HANA provider for Azure Monitor for SAP solutions through the Azure portal. There are instructions to set up the [current version](#configure-azure-monitor-for-sap-solutions) and the [classic version](#configure-azure-monitor-for-sap-solutions-classic) of Azure Monitor for SAP solutions.
## Prerequisites - An Azure subscription. -- An existing AMS resource. To create an AMS resource, see the [quickstart for the Azure portal](azure-monitor-sap-quickstart.md) or the [quickstart for PowerShell](azure-monitor-sap-quickstart-powershell.md).
+- An existing Azure Monitor for SAP solutions resource. To create an Azure Monitor for SAP solutions resource, see the [quickstart for the Azure portal](azure-monitor-sap-quickstart.md) or the [quickstart for PowerShell](azure-monitor-sap-quickstart-powershell.md).
-## Configure AMS
+## Configure Azure Monitor for SAP solutions
1. Sign in to the [Azure portal](https://portal.azure.com). 1. Search for and select **Azure Monitors for SAP solutions** in the search bar.
-1. On the AMS service page, select **Create**.
-1. On the AMS creation page, enter your basic resource information on the **Basics** tab.
+1. On the Azure Monitor for SAP solutions service page, select **Create**.
+1. On the Azure Monitor for SAP solutions creation page, enter your basic resource information on the **Basics** tab.
1. On the **Providers** tab: 1. Select **Add provider**. 1. On the creation pane, for **Type**, select **SAP HANA**.
- ![Diagram of the AMS resource creation page in the Azure portal, showing all required form fields.](./media/azure-monitor-sap/azure-monitor-providers-hana-setup.png)
+ ![Diagram of the Azure Monitor for SAP solutions resource creation page in the Azure portal, showing all required form fields.](./media/azure-monitor-sap/azure-monitor-providers-hana-setup.png)
1. For **IP address**, enter the IP address or hostname of the server that runs the SAP HANA instance that you want to monitor. If you're using a hostname, make sure there is connectivity within the virtual network. 1. For **Database tenant**, enter the HANA database that you want to connect to. It's recommended to use **SYSTEMDB**, because tenant databases don't have all monitoring views. For legacy single-container HANA 1.0 instances, leave this field blank. 1. For **Instance number**, enter the instance number of the database (0-99). The SQL port is automatically determined based on the instance number. 1. For **Database username**, enter the dedicated SAP HANA database user. This user needs the **MONITORING** or **BACKUP CATALOG READ** role assignment. For non-production SAP HANA instances, use **SYSTEM** instead. 1. For **Database password**, enter the password for the database username. You can either enter the password directly or use a secret inside Azure Key Vault.
-1. Save your changes to the AMS resource.
+1. Save your changes to the Azure Monitor for SAP solutions resource.
-## Configure AMS (classic)
+## Configure Azure Monitor for SAP solutions (classic)
-To configure the SAP HANA provider for AMS (classic):
+To configure the SAP HANA provider for Azure Monitor for SAP solutions (classic):
1. Sign in to the [Azure portal](https://portal.azure.com). 1. Search for and select the **Azure Monitors for SAP Solutions (classic)** service in the search bar.
-1. On the AMS (classic) service page, select **Create**.
-1. On the creation page's **Basics** tab, enter the basic information for your AMS instance.
-1. On the **Providers** tab, add the providers that you want to configure. You can add multiple providers during creation. You can also add more providers after you deploy the AMS resource. For each provider:
+1. On the Azure Monitor for SAP solutions (classic) service page, select **Create**.
+1. On the creation page's **Basics** tab, enter the basic information for your Azure Monitor for SAP solutions instance.
+1. On the **Providers** tab, add the providers that you want to configure. You can add multiple providers during creation. You can also add more providers after you deploy the Azure Monitor for SAP solutions resource. For each provider:
1. Select **Add provider**. 1. For **Type**, select **SAP HANA**. Make sure that you configure an SAP HANA provider for the main node. 1. For **IP address**, enter the private IP address for the HANA server.
To configure the SAP HANA provider for AMS (classic):
1. For **Database username**, enter the username that you want to use. Make sure the database user has **monitoring** and **catalog read** role assignments. 1. Select **Add provider** to finish adding the provider. 1. Select **Review + create** to review and validate your configuration.
-1. Select **Create** to finish creating the AMS resource.
+1. Select **Create** to finish creating the Azure Monitor for SAP solutions resource.
## Next steps > [!div class="nextstepaction"]
-> [Learn about AMS provider types](azure-monitor-providers.md)
+> [Learn about Azure Monitor for SAP solutions provider types](azure-monitor-providers.md)
virtual-machines Configure Linux Os Azure Monitor Sap Solutions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/configure-linux-os-azure-monitor-sap-solutions.md
Title: Configure Linux provider for Azure Monitor for SAP solutions (preview)
-description: This article explains how to configure a Linux OS provider for Azure Monitor for SAP solutions (AMS).
+description: This article explains how to configure a Linux OS provider for Azure Monitor for SAP solutions.
Previously updated : 07/28/2022 Last updated : 10/19/2022 #Customer intent: As a developer, I want to configure a Linux provider so that I can use Azure Monitor for SAP solutions for monitoring.
[!INCLUDE [Azure Monitor for SAP solutions public preview notice](./includes/preview-azure-monitor.md)]
-In this how-to guide, you'll learn to create a Linux OS provider for *Azure Monitor for SAP solutions (AMS)* resources.
+In this how-to guide, you'll learn to create a Linux OS provider for *Azure Monitor for SAP solutions* resources.
-This content applies to both versions of the service, *AMS* and *AMS (classic)*.
+This content applies to both versions of the service, *Azure Monitor for SAP solutions* and *Azure Monitor for SAP solutions (classic)*.
## Prerequisites - An Azure subscription. -- An existing AMS resource. To create an AMS resource, see the [quickstart for the Azure portal](azure-monitor-sap-quickstart.md) or the [quickstart for PowerShell](azure-monitor-sap-quickstart-powershell.md).
+- An existing Azure Monitor for SAP solutions resource. To create an Azure Monitor for SAP solutions resource, see the [quickstart for the Azure portal](azure-monitor-sap-quickstart.md) or the [quickstart for PowerShell](azure-monitor-sap-quickstart-powershell.md).
- Install [node exporter version 1.3.0](https://prometheus.io/download/#node_exporter) in each SAP host that you want to monitor, either BareMetal or Azure virtual machine (Azure VM). For more information, see [the node exporter GitHub repository](https://github.com/prometheus/node_exporter). ## Create Linux provider 1. Sign in to the [Azure portal](https://portal.azure.com).
-1. Go to the AMS or AMS (classic) service.
-1. Select **Create** to make a new AMS resource.
+1. Go to the Azure Monitor for SAP solutions or Azure Monitor for SAP solutions (classic) service.
+1. Select **Create** to make a new Azure Monitor for SAP solutions resource.
1. Select **Add provider**. 1. Configure the following settings for the new provider: 1. For **Type**, select **OS (Linux)**. 1. For **Name**, enter a name that will be the identifier for the BareMetal instance. 1. For **Node Exporter Endpoint**, enter `http://IP:9100/metrics`.
- 1. For the IP address, use the private IP address of the Linux host. Make sure the host and AMS resource are in the same virtual network.
+ 1. For the IP address, use the private IP address of the Linux host. Make sure the host and Azure Monitor for SAP solutions resource are in the same virtual network.
1. Open firewall port 9100 on the Linux host. 1. If you're using `firewall-cmd`, run `firewall-cmd --permanent --add-port=9100/tcp`, then `firewall-cmd --reload`. 1. If you're using `ufw`, run `ufw allow 9100/tcp`, then `ufw reload`.
This content applies to both versions of the service, *AMS* and *AMS (classic)*.
## Next steps > [!div class="nextstepaction"]
-> [Learn about AMS provider types](azure-monitor-providers.md)
+> [Learn about Azure Monitor for SAP solutions provider types](azure-monitor-providers.md)
virtual-machines Configure Netweaver Azure Monitor Sap Solutions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/configure-netweaver-azure-monitor-sap-solutions.md
Title: Configure SAP NetWeaver for Azure Monitor for SAP solutions (preview)
-description: Learn how to configure SAP NetWeaver for use with Azure Monitor for SAP solutions (AMS).
+description: Learn how to configure SAP NetWeaver for use with Azure Monitor for SAP solutions.
Previously updated : 07/28/2022 Last updated : 10/19/2022 #Customer intent: As a developer, I want to configure a SAP NetWeaver provider so that I can use Azure Monitor for SAP solutions.
[!INCLUDE [Azure Monitor for SAP solutions public preview notice](./includes/preview-azure-monitor.md)]
-In this how-to guide, you'll learn to configure the SAP NetWeaver provider for use with *Azure Monitor for SAP solutions (AMS)*. You can use SAP NetWeaver with both versions of the service, *AMS* and *AMS (classic)*.
+In this how-to guide, you'll learn to configure the SAP NetWeaver provider for use with *Azure Monitor for SAP solutions*. You can use SAP NetWeaver with both versions of the service, *Azure Monitor for SAP solutions* and *Azure Monitor for SAP solutions (classic)*.
-The SAP start service provides multiple services, including monitoring the SAP system. Both versions of AMS use **SAPControl**, which is a SOAP web service interface that exposes these capabilities. The **SAPControl** interface [differentiates between protected and unprotected web service methods](https://wiki.scn.sap.com/wiki/display/SI/Protected+web+methods+of+sapstartsrv). It's necessary to unprotect some methods to use AMS with NetWeaver.
+The SAP start service provides multiple services, including monitoring the SAP system. Both versions of Azure Monitor for SAP solutions use **SAPControl**, which is a SOAP web service interface that exposes these capabilities. The **SAPControl** interface [differentiates between protected and unprotected web service methods](https://wiki.scn.sap.com/wiki/display/SI/Protected+web+methods+of+sapstartsrv). It's necessary to unprotect some methods to use Azure Monitor for SAP solutions with NetWeaver.
## Prerequisites - An Azure subscription. -- An existing AMS resource. To create an AMS resource, see the [quickstart for the Azure portal](azure-monitor-sap-quickstart.md) or the [quickstart for PowerShell](azure-monitor-sap-quickstart-powershell.md).
+- An existing Azure Monitor for SAP solutions resource. To create an Azure Monitor for SAP solutions resource, see the [quickstart for the Azure portal](azure-monitor-sap-quickstart.md) or the [quickstart for PowerShell](azure-monitor-sap-quickstart-powershell.md).
-## Configure NetWeaver for AMS
+## Configure NetWeaver for Azure Monitor for SAP solutions
-To configure the NetWeaver provider for the current AMS version, you'll need to:
+To configure the NetWeaver provider for the current Azure Monitor for SAP solutions version, you'll need to:
1. [Unprotect methods for metrics](#unprotect-methods-for-metrics) 1. [Check that the rules have updated properly](#check-updated-rules)
To validate the rules, run a test query against the web methods. Replace the `<h
For AS ABAP applications only, you can set up the NetWeaver RFC metrics.
-Create or upload the following role in the SAP NW ABAP system. AMS requires this role to connect to SAP. The role uses least privilege access.
+Create or upload the following role in the SAP NW ABAP system. Azure Monitor for SAP solutions requires this role to connect to SAP. The role uses least privilege access.
1. Log in to your SAP system. 1. Download and unzip [Z_AMS_NETWEAVER_MONITORING.zip](https://github.com/Azure/Azure-Monitor-for-SAP-solutions-preview/files/8710130/Z_AMS_NETWEAVER_MONITORING.zip).
It's also recommended to check that you enabled the ICF ports.
To add the NetWeaver provider: 1. Sign in to the [Azure portal](https://portal.azure.com).
-1. Go to the AMS service page.
+1. Go to the Azure Monitor for SAP solutions service page.
1. Select **Create** to open the resource creation page. 1. Enter information for the **Basics** tab. 1. Select the **Providers** tab. Then, select **Add provider**.
To add the NetWeaver provider:
1. For **Application Server**, enter the IP address or the fully qualified domain name (FQDN) of the SAP NetWeaver system to monitor. For example, `sapservername.contoso.com` where `sapservername` is the hostname and `contoso.com` is the domain. 1. Save your changes.
-If you're using a hostname, make sure there's connectivity from the virtual network that you used to create the AMS resource.
+If you're using a hostname, make sure there's connectivity from the virtual network that you used to create the Azure Monitor for SAP solutions resource.
- For **Instance number**, specify the instance number of SAP NetWeaver (00-99) - For **Host file entries**, provide the DNS mappings for all SAP VMs associated with the SID.
Make sure that host file entries are provided for all hostnames that the command
- For **SAP password**, enter the password for the user.
-## Configure NetWeaver for AMS (classic)
+## Configure NetWeaver for Azure Monitor for SAP solutions (classic)
-To configure the NetWeaver provider for the AMS (classic) version:
+To configure the NetWeaver provider for the Azure Monitor for SAP solutions (classic) version:
1. [Unprotect some methods](#unprotect-methods) 1. [Restart the SAP start service](#restart-sap-start-service)
To install the NetWeaver provider in the Azure portal:
1. Go to the **Azure Monitor for SAP solutions** service.
-1. Select **Create** to add a new AMS resource.
+1. Select **Create** to add a new Azure Monitor for SAP solutions resource.
1. Select **Add provider**.
To install the NetWeaver provider in the Azure portal:
1. Select **Create** to finish creating the resource.
-If the SAP application servers (VMs) are part of a network domain, such as an Azure Active Directory (Azure AD) managed domain, you must provide the corresponding subdomain. The AMS collector VM exists inside the virtual network, and isn't joined to the domain. AMS can't resolve the hostname of instances inside the SAP system unless the hostname is an FQDN. If you don't provide the subdomain, there can be missing or incomplete visualizations in the NetWeaver workbook.
+If the SAP application servers (VMs) are part of a network domain, such as an Azure Active Directory (Azure AD) managed domain, you must provide the corresponding subdomain. The Azure Monitor for SAP solutions collector VM exists inside the virtual network, and isn't joined to the domain. Azure Monitor for SAP solutions can't resolve the hostname of instances inside the SAP system unless the hostname is an FQDN. If you don't provide the subdomain, there can be missing or incomplete visualizations in the NetWeaver workbook.
For example, if the hostname of the SAP system has an FQDN of `myhost.mycompany.contoso.com`:
Don't specify an IP address for the hostname if your SAP system is part of netwo
## Next steps > [!div class="nextstepaction"]
-> [Learn about AMS provider types](azure-monitor-providers.md)
+> [Learn about Azure Monitor for SAP solutions provider types](azure-monitor-providers.md)
virtual-machines Configure Sql Server Azure Monitor Sap Solutions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/configure-sql-server-azure-monitor-sap-solutions.md
Title: Configure Microsoft SQL Server provider for Azure Monitor for SAP solutions (preview)
-description: Learn how to configure a Microsoft SQL Server provider for use with Azure Monitor for SAP solutions (AMS).
+description: Learn how to configure a Microsoft SQL Server provider for use with Azure Monitor for SAP solutions.
Previously updated : 07/28/2022 Last updated : 10/19/2022 #Customer intent: As a developer, I want to configure a Microsoft SQL Server provider so that I can use Azure Monitor for SAP solutions for monitoring.
[!INCLUDE [Azure Monitor for SAP solutions public preview notice](./includes/preview-azure-monitor.md)]
-In this how-to guide, you'll learn to configure a Microsoft SQL Server provider for Azure Monitor for SAP solutions (AMS) through the Azure portal.
+In this how-to guide, you'll learn to configure a Microsoft SQL Server provider for Azure Monitor for SAP solutions through the Azure portal.
## Prerequisites - An Azure subscription. -- An existing AMS resource. To create an AMS resource, see the [quickstart for the Azure portal](azure-monitor-sap-quickstart.md) or the [quickstart for PowerShell](azure-monitor-sap-quickstart-powershell.md).
+- An existing Azure Monitor for SAP solutions resource. To create an Azure Monitor for SAP solutions resource, see the [quickstart for the Azure portal](azure-monitor-sap-quickstart.md) or the [quickstart for PowerShell](azure-monitor-sap-quickstart-powershell.md).
## Open Windows port
-Open the Windows port in the local firewall of SQL Server and the network security group (NSG) where SQL Server and Azure Monitor for SAP solutions (AMS) exist. The default port is 1433.
+Open the Windows port in the local firewall of SQL Server and the network security group (NSG) where SQL Server and Azure Monitor for SAP solutions exist. The default port is 1433.
## Configure SQL server
Configure SQL Server to accept logins from Windows and SQL Server:
1. Restart SQL Server to complete the changes.
-## Create AMS user for SQL Server
+## Create Azure Monitor for SAP solutions user for SQL Server
-Create a user for AMS to log in to SQL Server using the following script. Make sure to replace:
+Create a user for Azure Monitor for SAP solutions to log in to SQL Server using the following script. Make sure to replace:
- `<Database to monitor>` with your SAP database's name - `<password>` with the password for your user
-You can replace the example information for the AMS user with any other SQL username.
+You can replace the example information for the Azure Monitor for SAP solutions user with any other SQL username.
```sql USE [<Database to monitor>]
ALTER ROLE [db_denydatawriter] ADD MEMBER [AMS]
GO ```
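For reference, a fuller version of that script might look like the following sketch. It assumes the monitoring user is named `AMS` (matching the role assignment shown above); the `master` login step, the `db_datareader` role, and the placeholder values are assumptions for illustration only, so adjust them to your environment and least-privilege policy before running.

```sql
-- Illustrative sketch only: create a SQL login and database user for the
-- monitoring provider, then grant read-only access to the monitored database.
-- Replace <Database to monitor> and <password> with your own values.
USE [master]
GO
CREATE LOGIN [AMS] WITH PASSWORD = N'<password>';
GO

USE [<Database to monitor>]
GO
CREATE USER [AMS] FOR LOGIN [AMS];
GO
ALTER ROLE [db_datareader] ADD MEMBER [AMS];
GO
ALTER ROLE [db_denydatawriter] ADD MEMBER [AMS];
GO
```

After the user exists, you can verify it by signing in to SQL Server with that username before you register the provider.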
-## Install AMS provider
+## Install Azure Monitor for SAP solutions provider
-To install the provider from AMS:
+To install the provider from Azure Monitor for SAP solutions:
-1. Open the AMS resource in the Azure portal.
+1. Open the Azure Monitor for SAP solutions resource in the Azure portal.
1. In the resource menu, under **Settings**, select **Providers**. 1. On the provider page, select **Add** to add a new provider. 1. On the **Add provider** page, enter all required information:
To install the provider from AMS:
## Next steps > [!div class="nextstepaction"]
-> [Learn about AMS provider types](azure-monitor-providers.md)
+> [Learn about Azure Monitor for SAP solutions provider types](azure-monitor-providers.md)
virtual-machines Create Network Azure Monitor Sap Solutions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/create-network-azure-monitor-sap-solutions.md
Previously updated : 07/28/2022 Last updated : 10/19/2022 #Customer intent: As a developer, I want to set up an Azure virtual network so that I can use Azure Monitor for SAP solutions.
-# Set up network for Azure monitor for SAP solutions (preview)
+# Set up network for Azure Monitor for SAP solutions (preview)
[!INCLUDE [Azure Monitor for SAP solutions public preview notice](./includes/preview-azure-monitor.md)]
-In this how-to guide, you'll learn how to configure an Azure virtual network so that you can deploy *Azure Monitor for SAP solutions (AMS)*. You'll learn to [create a new subnet](#create-new-subnet) for use with Azure Functions for both versions of the product, *AMS* and *AMS (classic)*. Then, if you're using the current version of AMS, you'll learn to [set up outbound internet access](#configure-outbound-internet-access) to the SAP environment that you want to monitor.
+In this how-to guide, you'll learn how to configure an Azure virtual network so that you can deploy *Azure Monitor for SAP solutions*. You'll learn to [create a new subnet](#create-new-subnet) for use with Azure Functions for both versions of the product, *Azure Monitor for SAP solutions* and *Azure Monitor for SAP solutions (classic)*. Then, if you're using the current version of Azure Monitor for SAP solutions, you'll learn to [set up outbound internet access](#configure-outbound-internet-access) to the SAP environment that you want to monitor.
## Create new subnet > [!NOTE]
-> This section applies to both AMS and AMS (classic).
+> This section applies to both Azure Monitor for SAP solutions and Azure Monitor for SAP solutions (classic).
-Azure Functions is the data collection engine for AMS. You'll need to create a new subnet to host Azure Functions.
+Azure Functions is the data collection engine for Azure Monitor for SAP solutions. You'll need to create a new subnet to host Azure Functions.
[Create a new subnet](../../../azure-functions/functions-networking-options.md#subnets) with an **IPv4/28** block or larger.
For more information, see how to [integrate your app with an Azure virtual netwo
## Configure outbound internet access > [!IMPORTANT]
-> This section only applies to the current version of AMS. If you're using AMS (classic), skip this section.
+> This section only applies to the current version of Azure Monitor for SAP solutions. If you're using Azure Monitor for SAP solutions (classic), skip this section.
-In many use cases, you might choose to restrict or block outbound internet access to your SAP network environment. However, AMS requires network connectivity between the [subnet that you configured](#create-new-subnet) and the systems that you want to monitor. Before you deploy an AMS resource, you need to configure outbound internet access, or the deployment will fail.
+In many use cases, you might choose to restrict or block outbound internet access to your SAP network environment. However, Azure Monitor for SAP solutions requires network connectivity between the [subnet that you configured](#create-new-subnet) and the systems that you want to monitor. Before you deploy an Azure Monitor for SAP solutions resource, you need to configure outbound internet access, or the deployment will fail.
There are multiple methods to address restricted or blocked outbound internet access. Choose the method that works best for your use case:
There are multiple methods to address restricted or blocked outbound internet ac
### Use Route All
-**Route All** is a [standard feature of virtual network integration](../../../azure-functions/functions-networking-options.md#virtual-network-integration) in Azure Functions, which is deployed as part of AMS. Enabling or disabling this setting only affects traffic from Azure Functions. This setting doesn't affect any other incoming or outgoing traffic within your virtual network.
+**Route All** is a [standard feature of virtual network integration](../../../azure-functions/functions-networking-options.md#virtual-network-integration) in Azure Functions, which is deployed as part of Azure Monitor for SAP solutions. Enabling or disabling this setting only affects traffic from Azure Functions. This setting doesn't affect any other incoming or outgoing traffic within your virtual network.
-You can configure the **Route All** setting when you create an AMS resource through the Azure portal. If your SAP environment doesn't allow outbound internet access, disable **Route All**. If your SAP environment allows outbound internet access, keep the default setting to enable **Route All**.
+You can configure the **Route All** setting when you create an Azure Monitor for SAP solutions resource through the Azure portal. If your SAP environment doesn't allow outbound internet access, disable **Route All**. If your SAP environment allows outbound internet access, keep the default setting to enable **Route All**.
-You can only use this option before you deploy an AMS resource. It's not possible to change the **Route All** setting after you create the AMS resource.
+You can only use this option before you deploy an Azure Monitor for SAP solutions resource. It's not possible to change the **Route All** setting after you create the Azure Monitor for SAP solutions resource.
### Use service tags
-If you use NSGs, you can create AMS-related [virtual network service tags](../../../virtual-network/service-tags-overview.md) to allow appropriate traffic flow for your deployment. A service tag represents a group of IP address prefixes from a given Azure service.
+If you use NSGs, you can create Azure Monitor for SAP solutions-related [virtual network service tags](../../../virtual-network/service-tags-overview.md) to allow appropriate traffic flow for your deployment. A service tag represents a group of IP address prefixes from a given Azure service.
-You can use this option after you've deployed an AMS resource.
+You can use this option after you've deployed an Azure Monitor for SAP solutions resource.
-1. Find the subnet associated with your AMS managed resource group:
+1. Find the subnet associated with your Azure Monitor for SAP solutions managed resource group:
1. Sign in to the [Azure portal](https://portal.azure.com).
- 1. Search for or select the AMS service.
- 1. On the **Overview** page for AMS, select your AMS resource.
+ 1. Search for or select the Azure Monitor for SAP solutions service.
+ 1. On the **Overview** page for Azure Monitor for SAP solutions, select your Azure Monitor for SAP solutions resource.
1. On the managed resource group's page, select the Azure Functions app. 1. On the app's page, select the **Networking** tab. Then, select **VNET Integration**. 1. Review and note the subnet details. You'll need the subnet's IP address to create rules in the next step.
You can use this option after you've deployed an AMS resource.
| 660 | deny_internet | Any | Any | Any | Internet | Deny |
-The AMS subnet IP address refers to the IP of the subnet associated with your AMS resource. To find the subnet, go to the AMS resource in the Azure portal. On the **Overview** page, review the **vNet/subnet** value.
+The Azure Monitor for SAP solutions subnet IP address refers to the IP of the subnet associated with your Azure Monitor for SAP solutions resource. To find the subnet, go to the Azure Monitor for SAP solutions resource in the Azure portal. On the **Overview** page, review the **vNet/subnet** value.
For the rules that you create, **allow_vnet** must have a lower priority than **deny_internet**. All other rules also need to have a lower priority than **allow_vnet**. However, the remaining order of these other rules is interchangeable.
For the rules that you create, **allow_vnet** must have a lower priority than **
You can enable a private endpoint by creating a new subnet in the same virtual network as the system that you want to monitor. No other resources can use this subnet. It's not possible to use the same subnet as Azure Functions for your private endpoint.
-To create a private endpoint for AMS:
+To create a private endpoint for Azure Monitor for SAP solutions:
1. [Create a new subnet](../../../virtual-network/virtual-network-manage-subnet.md#add-a-subnet) in the same virtual network as the SAP system that you're monitoring.
-1. In the Azure portal, go to your AMS resource.
-1. On the **Overview** page for the AMS resource, select the **Managed resource group**.
+1. In the Azure portal, go to your Azure Monitor for SAP solutions resource.
+1. On the **Overview** page for the Azure Monitor for SAP solutions resource, select the **Managed resource group**.
1. Create a private endpoint connection for the following resources inside the managed resource group. 1. [Azure Key Vault resources](#create-key-vault-endpoint) 2. [Azure Storage resources](#create-storage-endpoint)
Configure the scope:
If you enable a private endpoint after any system accessed the Log Analytics workspace through a public endpoint, restart the Function App before moving forward. Otherwise, you can't access the Log Analytics workspace through the private endpoint.
-1. Go to the AMS resource in the Azure portal.
+1. Go to the Azure Monitor for SAP solutions resource in the Azure portal.
1. On the **Overview** page, select the name of the **Managed resource group**. 1. On the managed resource group's page, select the **Function App**. 1. On the Function App's **Overview** page, select **Restart**. Find and note important IP address ranges:
-1. Find the AMS resource's IP address range.
- 1. Go to the AMS resource in the Azure portal.
+1. Find the Azure Monitor for SAP solutions resource's IP address range.
+ 1. Go to the Azure Monitor for SAP solutions resource in the Azure portal.
1. On the resource's **Overview** page, select the **vNet/subnet** to go to the virtual network. 1. Note the IPv4 address range, which belongs to the source system. 1. Find the IP address range for the key vault and storage account.
- 1. Go to the resource group that contains the AMS resource in the Azure portal.
+ 1. Go to the resource group that contains the Azure Monitor for SAP solutions resource in the Azure portal.
1. On the **Overview** page, note the **Private endpoint** in the resource group. 1. In the resource group's menu, under **Settings**, select **DNS configuration**. 1. On the **DNS configuration** page, note the **IP addresses** for the private endpoint.
Find and note important IP address ranges:
1. Go to the private endpoint created for the AMPLS resource. 2. On the private endpoint's menu, under **Settings**, select **DNS configuration**. 3. On the **DNS configuration** page, note the associated IP addresses.
- 4. Go to the AMS resource in the Azure portal.
+ 4. Go to the Azure Monitor for SAP solutions resource in the Azure portal.
5. On the **Overview** page, select the **vNet/subnet** to go to that resource.
- 6. On the virtual network page, select the subnet that you used to create the AMS resource.
+ 6. On the virtual network page, select the subnet that you used to create the Azure Monitor for SAP solutions resource.
Add outbound security rules:
Add outbound security rules:
## Next steps -- [Quickstart: set up AMS through the Azure portal](azure-monitor-sap-quickstart.md)-- [Quickstart: set up AMS with PowerShell](azure-monitor-sap-quickstart-powershell.md)
+- [Quickstart: set up Azure Monitor for SAP solutions through the Azure portal](azure-monitor-sap-quickstart.md)
+- [Quickstart: set up Azure Monitor for SAP solutions with PowerShell](azure-monitor-sap-quickstart-powershell.md)
virtual-machines Expose Sap Odata To Power Query https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/expose-sap-odata-to-power-query.md
honoring the SAP named user mapping.
The above example shows the flow for Excel Desktop, but the approach is applicable to **any** Power Query enabled Microsoft product. For more information about which products support Power Query, see [the Power Query documentation](/power-query/power-query-what-is-power-query#where-can-you-use-power-query). Popular consumers are [Power BI](/power-bi/connect-dat), [Power Automate](/flow/), and [Dynamics 365](/power-query/power-query-what-is-power-query#where-can-you-use-power-query).
+## Tackle SAP write-back scenarios with Power Automate
+
+The described approach is also applicable to write-back scenarios. For example, you can use [Power Automate](/flow/) to update a business partner in SAP using OData with the [http-enabled connectors](/training/modules/http-connectors/) (alternatively, use [RFCs or BAPIs](/connectors/saperp/)). The example below shows a [Power BI service](/power-bi/fundamentals/power-bi-service-overview) dashboard that is connected to Power Automate through [value-based alerts](/power-bi/create-reports/service-set-data-alerts) and a [button](/power-bi/create-reports/power-bi-automate-visual?tabs=powerbi-desktop) (highlighted on the screenshot). Learn more about triggering flows from Power BI reports in the [Power Automate documentation](/power-automate/trigger-flow-powerbi-report).
++
+The highlighted button triggers a flow that forwards the OData PATCH request to the SAP Gateway to change the business partner role.
+
+> [!NOTE]
+> Use the Azure API Management [policy for SAP](https://github.com/Azure/api-management-policy-snippets/blob/master/examples/Request%20OAuth2%20access%20token%20from%20SAP%20using%20AAD%20JWT%20token.xml) to handle the authentication, refresh tokens, [CSRF tokens](https://blogs.sap.com/2021/06/04/how-does-csrf-token-work-sap-gateway/) and overall caching of tokens outside of the flow.
++ ## Next steps [Learn where you can use Power Query](/power-query/power-query-what-is-power-query#where-can-you-use-power-query)
virtual-machines Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/get-started.md
ms.assetid: ad8e5c75-0cf6-4564-ae62-ea1246b4e5f2
vm-linux Previously updated : 10/18/2022 Last updated : 10/19/2022
When you use Microsoft Azure, you can reliably run your mission-critical SAP wor
Besides hosting SAP NetWeaver and S/4HANA scenarios with the different DBMS on Azure, you can host other SAP workload scenarios, like SAP BI on Azure.
-We just announced our new services of Azure Center for SAP solutions and Azure Monitor for SAP 2.0 entering the public preview stage. These services will give you the possibility to deploy SAP workload on Azure in a highly automated manner in an optimal architecture and configuration. And monitor your Azure infrastructure, OS, DBMS, and ABAP stack deployments on one single pane of glass.
+We just announced that our new services, Azure Center for SAP solutions and Azure Monitor for SAP solutions 2.0, have entered the public preview stage. These services let you deploy SAP workloads on Azure in a highly automated manner, in an optimal architecture and configuration, and monitor your Azure infrastructure, OS, DBMS, and ABAP stack deployments in a single pane of glass.
-For customers and partners who are focussed on deploying and operating their assets in public cloud through Terraform and Ansible, leverage our SAP Deployment Automation Framework (SDAF) to jump start your SAP deployments into Azure using our public Terraform and Ansible modules on [github](https://github.com/Azure/sap-automation).
+For customers and partners who are focused on deploying and operating their assets in the public cloud through Terraform and Ansible, our SAP deployment automation framework helps jump-start your SAP deployments into Azure using our public Terraform and Ansible modules on [GitHub](https://github.com/Azure/sap-automation).
Hosting SAP workload scenarios in Azure also can create requirements of identity integration and single sign-on. This situation can occur when you use Azure Active Directory (Azure AD) to connect different SAP components and SAP software-as-a-service (SaaS) or platform-as-a-service (PaaS) offers. A list of such integration and single sign-on scenarios with Azure AD and SAP entities is described and documented in the section "Azure AD SAP identity integration and single sign-on."
virtual-machines Monitor Sap On Azure Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/monitor-sap-on-azure-reference.md
Previously updated : 07/27/2022 Last updated : 10/19/2022 # Data reference for Azure Monitor for SAP solutions (preview)
virtual-machines Monitor Sap On Azure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/monitor-sap-on-azure.md
Previously updated : 07/28/2022 Last updated : 10/19/2022 #Customer intent: As a developer, I want to learn how to monitor my SAP resources on Azure so that I can better understand their availability, performance, and operation.
[!INCLUDE [Azure Monitor for SAP solutions public preview notice](./includes/preview-azure-monitor.md)]
-When you have critical SAP applications and business processes that rely on Azure resources, you might want to monitor those resources for availability, performance, and operation. *Azure Monitor for SAP solutions (AMS)* is an Azure-native monitoring product for SAP landscapes that run on Azure. AMS uses specific parts of the [Azure Monitor](../../../azure-monitor/overview.md) infrastructure. You can use AMS with both [SAP on Azure Virtual Machines (Azure VMs)](./hana-get-started.md) and [SAP on Azure Large Instances](./hana-overview-architecture.md).
+When you have critical SAP applications and business processes that rely on Azure resources, you might want to monitor those resources for availability, performance, and operation. *Azure Monitor for SAP solutions* is an Azure-native monitoring product for SAP landscapes that run on Azure. Azure Monitor for SAP solutions uses specific parts of the [Azure Monitor](../../../azure-monitor/overview.md) infrastructure. You can use Azure Monitor for SAP solutions with both [SAP on Azure Virtual Machines (Azure VMs)](./hana-get-started.md) and [SAP on Azure Large Instances](./hana-overview-architecture.md).
-There are currently two versions of this product, *AMS* and *AMS (classic)*.
+There are currently two versions of this product, *Azure Monitor for SAP solutions* and *Azure Monitor for SAP solutions (classic)*.
## What can you monitor?
-You can use AMS to collect data from Azure infrastructure and databases in one central location. Then, you can visually correlate the data for faster troubleshooting.
+You can use Azure Monitor for SAP solutions to collect data from Azure infrastructure and databases in one central location. Then, you can visually correlate the data for faster troubleshooting.
-To monitor different components of an SAP landscape (such as Azure VMs, high-availability clusters, SAP HANA databases, SAP NetWeaver, etc.), add the corresponding *[provider](./azure-monitor-providers.md)*. For more information, see [how to deploy AMS through the Azure portal](azure-monitor-sap-quickstart.md).
+To monitor different components of an SAP landscape (such as Azure VMs, high-availability clusters, SAP HANA databases, SAP NetWeaver, etc.), add the corresponding *[provider](./azure-monitor-providers.md)*. For more information, see [how to deploy Azure Monitor for SAP solutions through the Azure portal](azure-monitor-sap-quickstart.md).
-The following table provides a quick comparison of the AMS (classic) and AMS.
+The following table provides a quick comparison of the Azure Monitor for SAP solutions (classic) and Azure Monitor for SAP solutions.
-| AMS | AMS (classic) |
+| Azure Monitor for SAP solutions | Azure Monitor for SAP solutions (classic) |
| - | -- | | Azure Functions-based collector architecture | VM-based collector architecture | | Support for Microsoft SQL Server, SAP HANA, and IBM Db2 databases | Support for Microsoft SQL Server, and SAP HANA databases |
-AMS uses the [Azure Monitor](../../../azure-monitor/overview.md) capabilities of [Log Analytics](../../../azure-monitor/logs/log-analytics-overview.md) and [Workbooks](../../../azure-monitor/visualize/workbooks-overview.md). With it, you can:
+Azure Monitor for SAP solutions uses the [Azure Monitor](../../../azure-monitor/overview.md) capabilities of [Log Analytics](../../../azure-monitor/logs/log-analytics-overview.md) and [Workbooks](../../../azure-monitor/visualize/workbooks-overview.md). With it, you can:
-- Create [custom visualizations](../../../azure-monitor/visualize/workbooks-overview.md) by editing the default workbooks provided by AMS.
+- Create [custom visualizations](../../../azure-monitor/visualize/workbooks-overview.md) by editing the default workbooks provided by Azure Monitor for SAP solutions.
- Write [custom queries](../../../azure-monitor/logs/log-analytics-tutorial.md). - Create [custom alerts](../../../azure-monitor/alerts/alerts-log.md) by using Azure Log Analytics workspace. - Take advantage of the [flexible retention period](../../../azure-monitor/logs/data-retention-archive.md) in Azure Monitor Logs/Log Analytics.
AMS uses the [Azure Monitor](../../../azure-monitor/overview.md) capabilities of
## What data is collected?
-AMS doesn't collect Azure Monitor metrics or resource log data, like some other Azure resources do. Instead, AMS sends custom logs directly to the Azure Monitor Logs system. There, you can then use the built-in features of Log Analytics.
+Azure Monitor for SAP solutions doesn't collect Azure Monitor metrics or resource log data, like some other Azure resources do. Instead, Azure Monitor for SAP solutions sends custom logs directly to the Azure Monitor Logs system. There, you can then use the built-in features of Log Analytics.
-Data collection in AMS depends on the providers that you configure. During public preview, the following data is collected.
+Data collection in Azure Monitor for SAP solutions depends on the providers that you configure. During public preview, the following data is collected.
### Pacemaker cluster data
IBM Db2 data includes:
## What is the architecture?
-There are separate explanations for the [AMS architecture](#ams-architecture) and the [AMS (classic) architecture](#ams-classic-architecture).
+There are separate explanations for the [Azure Monitor for SAP solutions architecture](#azure-monitor-for-sap-solutions-architecture) and the [Azure Monitor for SAP solutions (classic) architecture](#azure-monitor-for-sap-solutions-classic-architecture).
Some important points about the architecture include: -- The architecture is **multi-instance**. You can monitor multiple instances of a given component type across multiple SAP systems (SID) within a virtual network with a single resource of AMS. For example, you can monitor HANA databases, high availability (HA) clusters, Microsoft SQL server, SAP NetWeaver, etc.
+- The architecture is **multi-instance**. You can monitor multiple instances of a given component type across multiple SAP systems (SID) within a virtual network with a single resource of Azure Monitor for SAP solutions. For example, you can monitor HANA databases, high availability (HA) clusters, Microsoft SQL server, SAP NetWeaver, etc.
- The architecture is **multi-provider**. The architecture diagram shows the SAP HANA provider as an example. Similarly, you can configure more providers for corresponding components to collect data from those components. For example, HANA DB, HA cluster, Microsoft SQL server, and SAP NetWeaver. - The architecture has an **extensible query framework**. Write [SQL queries to collect data in JSON](https://github.com/Azure/AzureMonitorForSAPSolutions/blob/master/sapmon/content/SapHana.json). Easily add more SQL queries to collect other data.
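To make that last point concrete, each query the framework runs is ordinary SQL against SAP HANA monitoring views, wrapped in the JSON definition linked above. The following is only an illustrative sketch, not a query taken from that file; `M_HOST_RESOURCE_UTILIZATION` is a standard HANA monitoring view, and the column selection here is an assumption.

```sql
-- Illustrative only: the kind of host-utilization probe a collector query could
-- run against SAP HANA. Verify the view and column names on your HANA release.
SELECT HOST,
       USED_PHYSICAL_MEMORY,
       FREE_PHYSICAL_MEMORY
FROM   M_HOST_RESOURCE_UTILIZATION;
```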
-### AMS architecture
+### Azure Monitor for SAP solutions architecture
-The following diagram shows, at a high level, how AMS collects data from the SAP HANA database. The architecture is the same if SAP HANA is deployed on Azure VMs or Azure Large Instances.
+The following diagram shows, at a high level, how Azure Monitor for SAP solutions collects data from the SAP HANA database. The architecture is the same if SAP HANA is deployed on Azure VMs or Azure Large Instances.
- Diagram of the new AMS architecture. The customer connects to the AMS resource through the Azure portal. There's a managed resource group containing Log Analytics, Azure Functions, Key Vault, and Storage queue. The Azure function connects to the providers. Providers include SAP NetWeaver (ABAP and JAVA), SAP HANA, Microsoft SQL Server, IBM Db2, Pacemaker clusters, and Linux OS.
+ Diagram of the new Azure Monitor for SAP solutions architecture. The customer connects to the Azure Monitor for SAP solutions resource through the Azure portal. There's a managed resource group containing Log Analytics, Azure Functions, Key Vault, and Storage queue. The Azure function connects to the providers. Providers include SAP NetWeaver (ABAP and JAVA), SAP HANA, Microsoft SQL Server, IBM Db2, Pacemaker clusters, and Linux OS.
:::image-end::: The key components of the architecture are: -- The **Azure portal**, where you access the AMS service.-- The **AMS resource**, where you view monitoring data.-- The **managed resource group**, which is deployed automatically as part of the AMS resource's deployment. The resources inside the managed resource group help to collect data. Key resources include:
+- The **Azure portal**, where you access the Azure Monitor for SAP solutions service.
+- The **Azure Monitor for SAP solutions resource**, where you view monitoring data.
+- The **managed resource group**, which is deployed automatically as part of the Azure Monitor for SAP solutions resource's deployment. The resources inside the managed resource group help to collect data. Key resources include:
- An **Azure Functions resource** that hosts the monitoring code. This logic collects data from the source systems and transfers the data to the monitoring framework. - An **[Azure Key Vault resource](../../../key-vault/general/basic-concepts.md)**, which securely holds the SAP HANA database credentials and stores information about providers.
- - The **Log Analytics workspace**, which is the destination for storing data. Optionally, you can choose to use an existing workspace in the same subscription as your AMS resource at deployment.
+ - The **Log Analytics workspace**, which is the destination for storing data. Optionally, you can choose to use an existing workspace in the same subscription as your Azure Monitor for SAP solutions resource at deployment.
[Azure Workbooks](../../../azure-monitor/visualize/workbooks-overview.md) provides customizable visualization of the data in Log Analytics. To automatically refresh your workbooks or visualizations, pin the items to the Azure dashboard. The maximum refresh frequency is every 30 minutes. You can also use Kusto Query Language (KQL) to [run log queries](../../../azure-monitor/logs/log-query-overview.md) against the raw tables inside the Log Analytics workspace.
-### AMS (classic) architecture
+### Azure Monitor for SAP solutions (classic) architecture
-The following diagram shows, at a high level, how AMS (classic) collects data from the SAP HANA database. The architecture is the same if SAP HANA is deployed on Azure VMs or Azure Large Instances.
+The following diagram shows, at a high level, how Azure Monitor for SAP solutions (classic) collects data from the SAP HANA database. The architecture is the same if SAP HANA is deployed on Azure VMs or Azure Large Instances.
- Diagram of the AMS (classic) architecture. The customer connects to the AMS resource through the Azure portal. There's a managed resource group containing Log Analytics, Azure Functions, Key Vault, and Storage queue. The Azure function connects to the providers. Providers include SAP NetWeaver (ABAP and JAVA), SAP HANA, Microsoft SQL Server, Pacemaker clusters, and Linux OS.
+ Diagram of the Azure Monitor for SAP solutions (classic) architecture. The customer connects to the Azure Monitor for SAP solutions resource through the Azure portal. There's a managed resource group containing Log Analytics, Azure Functions, Key Vault, and Storage queue. The Azure function connects to the providers. Providers include SAP NetWeaver (ABAP and JAVA), SAP HANA, Microsoft SQL Server, Pacemaker clusters, and Linux OS.
:::image-end::: The key components of the architecture are: -- The **Azure portal**, which is your starting point. You can navigate to marketplace within the Azure portal and discover AMS.-- The **AMS resource**, which is the landing place for you to view monitoring data.-- **Managed resource group**, which is deployed automatically as part of the AMS resource's deployment. The resources deployed within the managed resource group help with the collection of data. Key resources deployed and their purposes are:
+- The **Azure portal**, which is your starting point. You can navigate to marketplace within the Azure portal and discover Azure Monitor for SAP solutions.
+- The **Azure Monitor for SAP solutions resource**, which is the landing place for you to view monitoring data.
+- **Managed resource group**, which is deployed automatically as part of the Azure Monitor for SAP solutions resource's deployment. The resources deployed within the managed resource group help with the collection of data. Key resources deployed and their purposes are:
- **Azure VM**, also known as the *collector VM*, which is a **Standard_B2ms** VM. The main purpose of this VM is to host the *monitoring payload*. The monitoring payload is the logic of collecting data from the source systems and transferring the data to the monitoring framework. In the architecture diagram, the monitoring payload contains the logic to connect to the SAP HANA database over the SQL port. You're responsible for patching and maintaining the VM. - **[Azure Key Vault](../../../key-vault/general/basic-concepts.md)**: which is deployed to securely hold SAP HANA database credentials and to store information about providers. - **Log Analytics Workspace**, which is the destination where the data is stored. - Visualization is built on top of data in Log Analytics using [Azure Workbooks](../../../azure-monitor/visualize/workbooks-overview.md). You can customize visualization. You can also pin your Workbooks or specific visualization within Workbooks to Azure dashboard for auto-refresh. The maximum frequency for refresh is every 30 minutes. - You can use your existing workspace within the same subscription as SAP monitor resource by choosing this option at deployment. - You can use KQL to run [queries](../../../azure-monitor/logs/log-query-overview.md) against the raw tables inside the Log Analytics workspace. Look at **Custom Logs**.
- - You can use an existing Log Analytics workspace for data collection if it's deployed within the same Azure subscription as the resource for AMS.
+ - You can use an existing Log Analytics workspace for data collection if it's deployed within the same Azure subscription as the resource for Azure Monitor for SAP solutions.
## Can you analyze metrics?
-AMS doesn't support metrics.
+Azure Monitor for SAP solutions doesn't support metrics.
### Analyze logs
-AMS doesn't support resource logs or activity logs. For a list of the tables used by Azure Monitor Logs that can be queried by Log Analytics, see [the data reference for monitoring SAP on Azure](monitor-sap-on-azure-reference.md#azure-monitor-logs-tables).
+Azure Monitor for SAP solutions doesn't support resource logs or activity logs. For a list of the tables used by Azure Monitor Logs that can be queried by Log Analytics, see [the data reference for monitoring SAP on Azure](monitor-sap-on-azure-reference.md#azure-monitor-logs-tables).
### Make Kusto queries
-When you select **Logs** from the AMS menu, Log Analytics is opened with the query scope set to the current AMS. Log queries only include data from that resource. To run a query that includes data from other accounts or data from other Azure services, select **Logs** from the **Azure Monitor** menu. For more information, see [Log query scope and time range in Azure Monitor Log Analytics](../../../azure-monitor/logs/scope.md) for details.
+When you select **Logs** from the Azure Monitor for SAP solutions menu, Log Analytics is opened with the query scope set to the current Azure Monitor for SAP solutions. Log queries only include data from that resource. To run a query that includes data from other accounts or data from other Azure services, select **Logs** from the **Azure Monitor** menu. For more information, see [Log query scope and time range in Azure Monitor Log Analytics](../../../azure-monitor/logs/scope.md) for details.
-You can use Kusto queries to help you monitor your AMS resources. The following sample query gives you data from a custom log for a specified time range. You can specify the time range and the number of rows. In this example, you'll get five rows of data for your selected time range.
+You can use Kusto queries to help you monitor your Azure Monitor for SAP solutions resources. The following sample query gives you data from a custom log for a specified time range. You can specify the time range and the number of rows. In this example, you'll get five rows of data for your selected time range.
```kusto custom log name
custom log name
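As a concrete illustration of that pattern, a query along the following lines returns the five most recent rows from a custom log table over the last hour. The table name used here is a placeholder, not necessarily a table in your workspace; substitute one of the custom log tables listed in the data reference.

```kusto
// Placeholder table name; replace it with the custom log table you want to query.
SapHana_HostConfig_CL
| where TimeGenerated > ago(1h)    // adjust the time range as needed
| sort by TimeGenerated desc
| take 5                           // return five rows, as in the example above
```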
Azure Monitor alerts proactively notify you when important conditions are found in your monitoring data. You can then identify and address issues in your system before your customers notice them.
-You can configure alerts in AMS from the Azure portal. For more information, see [how to configure alerts in AMS with the Azure portal](azure-monitor-alerts-portal.md).
+You can configure alerts in Azure Monitor for SAP solutions from the Azure portal. For more information, see [how to configure alerts in Azure Monitor for SAP solutions with the Azure portal](azure-monitor-alerts-portal.md).
-### How can you create AMS resources?
+### How can you create Azure Monitor for SAP solutions resources?
-You have several options to deploy AMS and configure providers:
+You have several options to deploy Azure Monitor for SAP solutions and configure providers:
-- [Deploy AMS directly from the Azure portal](azure-monitor-sap-quickstart.md)-- [Deploy AMS with Azure PowerShell](azure-monitor-sap-quickstart-powershell.md)-- [Deploy AMS (classic) using the Azure Command-Line Interface (Azure CLI)](https://github.com/Azure/azure-hanaonazure-cli-extension#sapmonitor).
+- [Deploy Azure Monitor for SAP solutions directly from the Azure portal](azure-monitor-sap-quickstart.md)
+- [Deploy Azure Monitor for SAP solutions with Azure PowerShell](azure-monitor-sap-quickstart-powershell.md)
+- [Deploy Azure Monitor for SAP solutions (classic) using the Azure Command-Line Interface (Azure CLI)](https://github.com/Azure/azure-hanaonazure-cli-extension#sapmonitor).
## What is the pricing?
-AMS is a free product (no license fee). You're responsible for paying the cost of the underlying components in the managed resource group. You're also responsible for consumption costs associated with data use and retention. For more information, see standard Azure pricing documents:
+Azure Monitor for SAP solutions is a free product (no license fee). You're responsible for paying the cost of the underlying components in the managed resource group. You're also responsible for consumption costs associated with data use and retention. For more information, see standard Azure pricing documents:
- [Azure Functions Pricing](https://azure.microsoft.com/pricing/details/functions/#pricing)
-- [Azure VM pricing (applicable to AMS (classic))](https://azure.microsoft.com/pricing/details/virtual-machines/linux/)
+- [Azure VM pricing (applicable to Azure Monitor for SAP solutions (classic))](https://azure.microsoft.com/pricing/details/virtual-machines/linux/)
- [Azure Key vault pricing](https://azure.microsoft.com/pricing/details/key-vault/)
- [Azure storage account pricing](https://azure.microsoft.com/pricing/details/storage/queues/)
- [Azure Log Analytics and alerts pricing](https://azure.microsoft.com/pricing/details/monitor/)
AMS is a free product (no license fee). You're responsible for paying the cost o
## How do you enable data sharing with Microsoft?

> [!NOTE]
-> The following content only applies to the AMS (classic) version.
+> The following content only applies to the Azure Monitor for SAP solutions (classic) version.
-AMS collects system metadata to provide improved support for SAP on Azure. No PII/EUII is collected.
+Azure Monitor for SAP solutions collects system metadata to provide improved support for SAP on Azure. No PII/EUII is collected.
-You can enable data sharing with Microsoft when you create AMS resource by choosing *Share* from the drop-down. We recommend that you enable data sharing. Data sharing gives Microsoft support and engineering teams information about your environment, which helps us provide better support for your mission-critical SAP on Azure solution.
+You can enable data sharing with Microsoft when you create an Azure Monitor for SAP solutions resource by choosing *Share* from the drop-down. We recommend that you enable data sharing. Data sharing gives Microsoft support and engineering teams information about your environment, which helps us provide better support for your mission-critical SAP on Azure solution.
## Next steps

-- For a list of custom logs relevant to AMS and information on related data types, see [Monitor SAP on Azure data reference](monitor-sap-on-azure-reference.md).
-- For information on providers available for AMS, see [AMS providers](azure-monitor-providers.md).
+- For a list of custom logs relevant to Azure Monitor for SAP solutions and information on related data types, see [Monitor SAP on Azure data reference](monitor-sap-on-azure-reference.md).
+- For information on providers available for Azure Monitor for SAP solutions, see [Azure Monitor for SAP solutions providers](azure-monitor-providers.md).
virtual-machines Troubleshooting Monitoring https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/troubleshooting-monitoring.md
vm-linux Previously updated : 06/23/2021 Last updated : 10/19/2022
virtual-wan Virtual Wan Point To Site Azure Ad https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/virtual-wan-point-to-site-azure-ad.md
Previously updated : 07/13/2022 Last updated : 10/11/2022
In this section, you create a connection between your virtual hub and your VNet.
## <a name="download-profile"></a>Download User VPN profile
-All of the necessary configuration settings for the VPN clients are contained in a VPN client configuration zip file. The settings in the zip file help you easily configure the VPN clients. The VPN client configuration files that you generate are specific to the User VPN configuration for your gateway. In this section, you generate and download the files used to configure your VPN clients.
+All of the necessary configuration settings for the VPN clients are contained in a VPN client configuration zip file. The settings in the zip file help you easily configure the VPN clients. The VPN client configuration files that you generate are specific to the User VPN configuration for your gateway. You can download global (WAN-level) profiles, or a profile for a specific hub. For information and additional instructions, see [Download global and hub profiles](global-hub-profile.md). The following steps walk you through downloading a global WAN-level profile.
[!INCLUDE [Download profile](../../includes/virtual-wan-p2s-download-profile-include.md)]
web-application-firewall Application Gateway Crs Rulegroups Rules https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/web-application-firewall/ag/application-gateway-crs-rulegroups-rules.md
CRS 3.2 includes 14 rule groups, as shown in the following table. Each group con
|**[REQUEST-941-APPLICATION-ATTACK-XSS](#crs941-32)**|Protect against cross-site scripting attacks|
|**[REQUEST-942-APPLICATION-ATTACK-SQLI](#crs942-32)**|Protect against SQL-injection attacks|
|**[REQUEST-943-APPLICATION-ATTACK-SESSION-FIXATION](#crs943-32)**|Protect against session-fixation attacks|
-|**[REQUEST-944-APPLICATION-ATTACK-SESSION-JAVA](#crs944-32)**|Protect against JAVA attacks|
+|**[REQUEST-944-APPLICATION-ATTACK-JAVA](#crs944-32)**|Protect against JAVA attacks|
### OWASP CRS 3.1