Updates from: 07/31/2021 03:09:26
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-b2c Azure Monitor https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/azure-monitor.md
Azure AD B2C leverages [Azure Active Directory monitoring](../active-directory/r
To enable _Diagnostic settings_ in Azure Active Directory within your Azure AD B2C tenant, you use [Azure Lighthouse](../lighthouse/overview.md) to [delegate a resource](../lighthouse/concepts/architecture.md), which allows your Azure AD B2C (the **Service Provider**) to manage an Azure AD (the **Customer**) resource.

> [!TIP]
-> Azure Lighthouse is typically used to manage resources for multiple customers. However, it can also be used to manage resources **within an enterprise which has multiple Azure AD tenants of its own**, which is what we are doing here, except that we are only delegating the management of single resource group.
+> Azure Lighthouse is typically used to manage resources for multiple customers. However, it can also be used to manage resources **within an enterprise that has multiple Azure AD tenants of its own**, which is what we are doing here, except that we are only delegating the management of a single resource group.
After you complete the steps in this article, you'll have created a new resource group (here called _azure-ad-b2c-monitor_) and have access to that same resource group that contains the [Log Analytics workspace](../azure-monitor/logs/quick-create-workspace.md) in your **Azure AD B2C** portal. You'll also be able to transfer the logs from Azure AD B2C to your Log Analytics workspace.
active-directory-b2c Configure Authentication Sample Spa App https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/configure-authentication-sample-spa-app.md
Title: Configure authentication in a sample spa application using Azure Active Directory B2C
-description: Using Azure Active Directory B2C to sign in and sign up users in an SPA application.
+ Title: Configure authentication in a sample single-page application by using Azure Active Directory B2C
+description: This article discusses how to use Azure Active Directory B2C to sign in and sign up users in a single-page application.
-# Configure authentication in a sample Single Page application using Azure Active Directory B2C
+# Configure authentication in a sample single-page application by using Azure AD B2C
-This article uses a sample JavaScript Single Page application to illustrate how to add Azure Active Directory B2C (Azure AD B2C) authentication to your SPA apps.
+This article uses a sample JavaScript single-page application (SPA) to illustrate how to add Azure Active Directory B2C (Azure AD B2C) authentication to your SPAs.
## Overview
-OpenID Connect (OIDC) is an authentication protocol built on OAuth 2.0 that you can use to securely sign a user in to an application. This Single Page Application sample uses [MSAL.js](https://github.com/AzureAD/microsoft-authentication-library-for-js/tree/dev/lib/msal-browser) and the OIDC PKCE flow. MSAL.js is a Microsoft provided library that simplifies adding authentication and authorization support to SPA apps.
+OpenID Connect (OIDC) is an authentication protocol that's built on OAuth 2.0. You can use it to securely sign a user in to an application. This single-page application sample uses [MSAL.js](https://github.com/AzureAD/microsoft-authentication-library-for-js/tree/dev/lib/msal-browser) and the OIDC PKCE flow. MSAL.js is a Microsoft-provided library that simplifies adding authentication and authorization support to SPAs.
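For orientation only, initializing MSAL.js in an SPA and starting an interactive sign-in looks roughly like the following sketch. The client ID, tenant name, and policy name are placeholders, not values from this article.

```javascript
// Minimal MSAL.js (msal-browser) setup sketch; all values below are placeholders
const msalConfig = {
  auth: {
    clientId: "<your-spa-application-id>",
    authority: "https://<tenant>.b2clogin.com/<tenant>.onmicrosoft.com/<sign-up-sign-in-policy>",
    knownAuthorities: ["<tenant>.b2clogin.com"],
  },
};

const myMSALObj = new msal.PublicClientApplication(msalConfig);

// Start an interactive sign-in with the OIDC authorization code + PKCE flow
myMSALObj.loginPopup({ scopes: ["openid", "profile"] })
  .then((result) => console.log(`Signed in: ${result.account.username}`))
  .catch((error) => console.error(error));
```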
-### Sign in flow
-The sign-in flow involves following steps:
+### Sign-in flow
-1. The user navigates to the web app and selects **Sign-in**.
-1. The app initiates an authentication request, and redirects the user to Azure AD B2C.
-1. The user [signs-up or signs-in](add-sign-up-and-sign-in-policy.md), [resets the password](add-password-reset-policy.md), or signs-in with a [social account](add-identity-provider.md).
-1. Upon successful sign-in, Azure AD B2C returns an ID token to the app.
-1. The Single Page Application validates the ID token, reads the claims, and in turn allows the user to call protected resources/API's.
+The sign-in flow involves the following steps:
+
+1. Users go to the web app and select **Sign-in**.
+1. The app initiates an authentication request and redirects users to Azure AD B2C.
+1. Users [sign up or sign in](add-sign-up-and-sign-in-policy.md), [reset the password](add-password-reset-policy.md), or sign in with a [social account](add-identity-provider.md).
+1. After users sign in, Azure AD B2C returns an ID token to the app.
+1. The single-page application validates the ID token, reads the claims, and in turn allows users to call protected resources and APIs.
### App registration overview
-To enable your app to sign in with Azure AD B2C and call a web API, you must register two applications in the Azure AD B2C directory.
+To enable your app to sign in with Azure AD B2C and call a web API, you register two applications in the Azure AD B2C directory.
-- The **web application** registration enables your app to sign in with Azure AD B2C. During app registration, you specify the *Redirect URI*. The redirect URI is the endpoint to which the user is redirected to after they authenticate with Azure AD B2C. The app registration process generates an *Application ID*, also known as the *client ID*, that uniquely identifies your app.
+- The **web application** registration enables your app to sign in with Azure AD B2C. During the registration, you specify the *redirect URI*. The redirect URI is the endpoint to which users are redirected by Azure AD B2C after their authentication with Azure AD B2C is completed. The app registration process generates an *application ID*, also known as the *client ID*, which uniquely identifies your app.
-- The **web API** registration enables your app to call a secure web API. The registration includes the web API *scopes*. The scopes provide a way to manage permissions to protected resources such as your web API. You grant the web application permissions to the web API's scopes. When an access token is requested, your app specifies the desired permissions in the scope parameter of the request.
+- The **web API** registration enables your app to call a secure web API. The registration includes the web API *scopes*. The scopes provide a way to manage permissions to protected resources, such as your web API. You grant the web application permissions to the web API scopes. When an access token is requested, your app specifies the desired permissions in the scope parameter of the request.
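To illustrate how the scope parameter is used, the following sketch shows an SPA acquiring an access token for a web API scope and sending it as a bearer token. The scope URI and API address are placeholders, not values defined by this article, and the sketch assumes the user has already signed in.

```javascript
// Sketch: request an access token for a web API scope granted to the app
const tokenRequest = {
  scopes: ["https://<tenant>.onmicrosoft.com/tasks-api/tasks.read"], // placeholder scope
  account: myMSALObj.getAllAccounts()[0], // assumes a signed-in account exists
};

myMSALObj.acquireTokenSilent(tokenRequest)
  .then((result) => {
    // The access token carries the requested permissions and is sent to the web API
    return fetch("http://localhost:5000/tasks", {
      headers: { Authorization: `Bearer ${result.accessToken}` },
    });
  })
  .catch((error) => console.error(error));
```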
-The following diagrams describe the app registrations and the application architecture.
+The app architecture and registrations are illustrated in the following diagram:
-![Web app with web API call registrations and tokens](./media/configure-authentication-sample-spa-app/spa-app-with-api-architecture.png)
+![Diagram of a web app with web API call registrations and tokens.](./media/configure-authentication-sample-spa-app/spa-app-with-api-architecture.png)
### Call to a web API

[!INCLUDE [active-directory-b2c-app-integration-call-api](../../includes/active-directory-b2c-app-integration-call-api.md)]
-### Sign out flow
+### Sign-out flow
[!INCLUDE [active-directory-b2c-app-integration-sign-out-flow](../../includes/active-directory-b2c-app-integration-sign-out-flow.md)]
A computer that's running:
## Step 2: Register your SPA and API
-In this step, you create the SPA app and the web API application registrations, and specify the scopes of your web API.
+In this step, you create the SPA and the web API application registrations, and you specify the scopes of your web API.
-### 2.1 Register the web API application
+### Step 2.1: Register the web API application
[!INCLUDE [active-directory-b2c-app-integration-register-api](../../includes/active-directory-b2c-app-integration-register-api.md)]
-### 2.2 Configure scopes
+### Step 2.2: Configure scopes
[!INCLUDE [active-directory-b2c-app-integration-api-scopes](../../includes/active-directory-b2c-app-integration-api-scopes.md)]
-### 2.3 Register the SPA app
+### Step 2.3: Register the SPA
-Follow these steps to create the SPA app registration:
+To create the SPA registration, do the following:
1. Sign in to the [Azure portal](https://portal.azure.com).
1. Select the **Directory + Subscription** icon in the portal toolbar, and then select the directory that contains your Azure AD B2C tenant.
-1. In the Azure portal, search for and select **Azure AD B2C**.
+1. Search for and select **Azure AD B2C**.
1. Select **App registrations**, and then select **New registration**.
-1. Enter a **Name** for the application. For example, *MyApp*.
+1. Enter a **Name** for the application (for example, *MyApp*).
1. Under **Supported account types**, select **Accounts in any identity provider or organizational directory (for authenticating users with user flows)**.
-1. Under **Redirect URI**, select **Single-page application (SPA)**, and then enter `http://localhost:6420` in the URL text box.
-1. Under **Permissions**, select the **Grant admin consent to openid and offline access permissions** check box.
+1. Under **Redirect URI**, select **Single-page application (SPA)** and then, in the URL box, enter `http://localhost:6420`.
+1. Under **Permissions**, select the **Grant admin consent to openid and offline access permissions** checkbox.
1. Select **Register**.
+### Step 2.4: Enable the implicit grant flow
+ Next, enable the implicit grant flow:
-1. Under Manage, select Authentication.
-1. Select Try out the new experience (if shown).
-1. Under Implicit grant, select the ID tokens check box.
-1. Select Save.
+1. Under **Manage**, select **Authentication**.
+
+1. Select **Try out the new experience** (if shown).
+
+1. Under **Implicit grant**, select the **ID tokens** checkbox.
+
+1. Select **Save**.
+
+ Record the **Application (client) ID** to use later, when you configure the web application.
-Record the **Application (client) ID** for use in a later step when you configure the web application.
- ![Get your application ID](./media/configure-authentication-sample-web-app/get-azure-ad-b2c-app-id.png)
+ ![Screenshot of the web app Overview page for recording your web application ID.](./media/configure-authentication-sample-web-app/get-azure-ad-b2c-app-id.png)
-### 2.5 Grant permissions
+### Step 2.5: Grant permissions
[!INCLUDE [active-directory-b2c-app-integration-grant-permissions](../../includes/active-directory-b2c-app-integration-grant-permissions.md)]

## Step 3: Get the SPA sample code
-This sample demonstrates how a single-page application can use Azure AD B2C for user sign-up and sign-in. Then the app acquires an access token and calls a protected web API. Download the sample below:
+This sample demonstrates how a single-page application can use Azure AD B2C for user sign-up and sign-in. Then the app acquires an access token and calls a protected web API.
- [Download a zip file](https://github.com/Azure-Samples/ms-identity-b2c-javascript-spa/archive/main.zip) or clone the sample from GitHub:
+To get the SPA sample code, you can do either of the following:
- ```
- git clone https://github.com/Azure-Samples/ms-identity-b2c-javascript-spa.git
- ```
+* [Download a zip file](https://github.com/Azure-Samples/ms-identity-b2c-javascript-spa/archive/main.zip).
+* Clone the sample from GitHub by running the following command:
-### 3.1 Update the SPA sample
+ ```bash
+ git clone https://github.com/Azure-Samples/ms-identity-b2c-javascript-spa.git
+ ```
+
+### Step 3.1: Update the SPA sample
-Now that you've obtained the SPA app sample, update the code with your Azure AD B2C and web API values. In the sample folder, under the `App` folder, open the following JavaScript files, and update with the corresponding value:
+Now that you've obtained the SPA sample, update the code with your Azure AD B2C and web API values. In the sample folder, under the `App` folder, open the JavaScript files that are listed in the following table, and then update them with their corresponding values.
|File |Key |Value |
||||
-|authConfig.js|clientId| The SPA application ID from [step 2.3](#23-register-the-spa-app).|
+|authConfig.js|clientId| The SPA ID from [step 2.3](#step-23-register-the-spa).|
|policies.js| names| The user flows, or custom policy you created in [step 1](#step-1-configure-your-user-flow).|
-|policies.js|authorities|Your Azure AD B2C [tenant name](tenant-management.md#get-your-tenant-name). For example, `contoso.onmicrosoft.com`. Then, replace with the user flows, or custom policy you created in [step 1](#step-1-configure-your-user-flow). For example, `https://<your-tenant-name>.b2clogin.com/<your-tenant-name>.onmicrosoft.com/<your-sign-in-sign-up-policy>`|
-|policies.js|authorityDomain|Your Azure AD B2C [tenant name](tenant-management.md#get-your-tenant-name). For example, `contoso.onmicrosoft.com`.|
-|apiConfig.js|b2cScopes|The web API scopes you created in [step 2.2](#22-configure-scopes). For example, `b2cScopes: ["https://<your-tenant-name>.onmicrosoft.com/tasks-api/tasks.read"]`.|
+|policies.js|authorities|Your Azure AD B2C [tenant name](tenant-management.md#get-your-tenant-name) (for example, `contoso.onmicrosoft.com`), followed by the user flow or custom policy that you created in [step 1](#step-1-configure-your-user-flow) (for example, `https://<your-tenant-name>.b2clogin.com/<your-tenant-name>.onmicrosoft.com/<your-sign-in-sign-up-policy>`).|
+|policies.js|authorityDomain|Your Azure AD B2C [tenant name](tenant-management.md#get-your-tenant-name) (for example, `contoso.onmicrosoft.com`).|
+|apiConfig.js|b2cScopes|The web API scopes you created in [step 2.2](#step-22-configure-scopes) (for example, `b2cScopes: ["https://<your-tenant-name>.onmicrosoft.com/tasks-api/tasks.read"]`).|
|apiConfig.js|webApi|The URL of the web API, `http://localhost:5000/tasks`.|
+| | | |
Your resulting code should look similar to the following sample:
const apiConfig = {
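For illustration only, a filled-in *apiConfig.js* might look like the following sketch; the tenant name, scope, and port are placeholders taken from the table above rather than values specific to your tenant.

```javascript
// apiConfig.js (sketch; placeholder tenant name, scope, and web API address)
const apiConfig = {
  b2cScopes: ["https://<your-tenant-name>.onmicrosoft.com/tasks-api/tasks.read"],
  webApi: "http://localhost:5000/tasks",
};
```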
## Step 4: Get the web API sample code
-Now that the web API is registered and you've defined its scopes, configure the web API code to work with your Azure AD B2C tenant. Download the sample below:
+Now that the web API is registered and you've defined its scopes, configure the web API code to work with your Azure AD B2C tenant.
-[Download a \*.zip archive](https://github.com/Azure-Samples/active-directory-b2c-javascript-nodejs-webapi/archive/master.zip) or clone the sample web API project from GitHub. You can also browse directly to the [Azure-Samples/active-directory-b2c-javascript-nodejs-webapi](https://github.com/Azure-Samples/active-directory-b2c-javascript-nodejs-webapi) project on GitHub.
+To get the web API sample code, do one of the following:
-```console
-git clone https://github.com/Azure-Samples/active-directory-b2c-javascript-nodejs-webapi.git
-```
+* [Download a \*.zip archive](https://github.com/Azure-Samples/active-directory-b2c-javascript-nodejs-webapi/archive/master.zip).
+
+* Clone the sample web API project from GitHub by running the following command:
+
+ ```bash
+ git clone https://github.com/Azure-Samples/active-directory-b2c-javascript-nodejs-webapi.git
+ ```
+
+* You can also go directly to the [Azure-Samples/active-directory-b2c-javascript-nodejs-webapi](https://github.com/Azure-Samples/active-directory-b2c-javascript-nodejs-webapi) project on GitHub.
-### 4.1 Update the web API
+### Step 4.1: Update the web API
1. Open the *config.json* file in your code editor.
-1. Modify the variable values with the application registration you created earlier. Also update the `policyName` with the user flow you created as part of the prerequisites. For example, *b2c_1_susi*.
+1. Modify the variable values with the application registration you created earlier. And update the `policyName` with the user flow you created as part of the prerequisites (for example, *b2c_1_susi*).
```json "credentials": {
git clone https://github.com/Azure-Samples/active-directory-b2c-javascript-nodej
}, ```
-### 4.2 Enable CORS
+### Step 4.2: Enable CORS
-To allow your single-page application to call the Node.js web API, you need to enable [CORS](https://expressjs.com/en/resources/middleware/cors.html) in the web API. In a production application, you should be careful about which domain is making the request. In this example, allow requests from any domain.
+To allow your single-page application to call the Node.js web API, you need to enable [cross-origin resource sharing (CORS)](https://expressjs.com/en/resources/middleware/cors.html) in the web API. In a production application, be careful about which domain is making the request. In this example, allow requests from any domain.
-To enable CORS, use the following middleware. In the Node.js web API code sample you downloaded, it's already been added to the *index.js* file.
+To enable CORS, use the following middleware. In the Node.js web API code sample you downloaded, it has already been added to the *index.js* file.
```javascript app.use((req, res, next) => {
You're now ready to test the single-page application's scoped access to the API.
### Run the Node.js web API
-1. Open a console window and change to the directory containing the Node.js web API sample. For example:
+1. Open a console window, and change to the directory that contains the Node.js web API sample. For example:
```console cd active-directory-b2c-javascript-nodejs-webapi
You're now ready to test the single-page application's scoped access to the API.
### Run the single-page app
-1. Open another console window and change to the directory containing the JavaScript SPA sample. For example:
+1. Open another console window, and change to the directory that contains the JavaScript SPA sample. For example:
```console cd ms-identity-b2c-javascript-spa
You're now ready to test the single-page application's scoped access to the API.
Listening on port 6420... ```
-1. Navigate to `http://localhost:6420` in your browser to view the application.
+1. To view the application, go to `http://localhost:6420` in your browser.
- ![Single-page application sample app shown in browser](./media/configure-authentication-sample-spa-app/sample-app-sign-in.png)
+ ![Screenshot of the SPA sample app displayed in the browser window.](./media/configure-authentication-sample-spa-app/sample-app-sign-in.png)
-1. Sign in using the email address and password you used in the [previous tutorial](tutorial-single-page-app.md). Upon successful login, you should see the `User 'Your Username' logged-in` message.
-1. Select the **Call API** button. The SPA sends the access token in a request to the protected web API, which returns the display name of the logged-in user:
+1. Sign in by using the email address and password you used in the [previous tutorial](tutorial-single-page-app.md). After you've logged in successfully, you should see the "User \<your username> logged in" message.
+1. Select the **Call API** button. The SPA sends the access token in a request to the protected web API, which returns the display name of the logged-in user:
- ![Single-page application in browser showing username JSON result returned by API](./media/configure-authentication-sample-spa-app/sample-app-result.png)
+ ![Screenshot of the SPA in a browser window, showing the username JSON result that's returned by the API.](./media/configure-authentication-sample-spa-app/sample-app-result.png)
## Deploy your application
-In a production application, the app registration redirect URI is typically a publicly accessible endpoint where your app is running, like `https://contoso.com/signin-oidc`.
+In a production application, the app registration redirect URI is ordinarily a publicly accessible endpoint where your app is running, such as `https://contoso.com/signin-oidc`.
You can add and modify redirect URIs in your registered applications at any time. The following restrictions apply to redirect URIs:
You can add and modify redirect URIs in your registered applications at any time
## Next steps
-* Learn more [about the code sample](https://github.com/Azure-Samples/ms-identity-b2c-javascript-spa)
-* [Enable authentication in your own SPA application](enable-authentication-spa-app.md)
-* Configure [authentication options in your SPA application](enable-authentication-spa-app-options.md)
-* [Enable authentication in your own web API](enable-authentication-web-api.md)
+For more information about the concepts discussed in this article:
+* [Learn more about the code sample](https://github.com/Azure-Samples/ms-identity-b2c-javascript-spa).
+* [Enable authentication in your own SPA](enable-authentication-spa-app.md).
+* [Configure authentication options in your SPA](enable-authentication-spa-app-options.md).
+* [Enable authentication in your own web API](enable-authentication-web-api.md).
active-directory-b2c Configure Authentication Sample Web App With Api https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/configure-authentication-sample-web-app-with-api.md
Title: Configure authentication in a sample web application that calls a web API using Azure Active Directory B2C
-description: Using Azure Active Directory B2C to sign in and sign up users in an ASP.NET web application that calls a web API.
+ Title: Configure authentication in a sample web application that calls a web API by using Azure Active Directory B2C
+description: This article discusses using Azure Active Directory B2C to sign in and sign up users in an ASP.NET web application that calls a web API.
-# Configure authentication in a sample web application that calls a web API using Azure Active Directory B2C
+# Configure authentication in a sample web app that calls a web API by using Azure AD B2C
This article uses a sample ASP.NET web application that calls a web API to illustrate how to add Azure Active Directory B2C (Azure AD B2C) authentication to your web applications.

> [!IMPORTANT]
-> The sample ASP.NET web application referenced in this article is used to call a web API with a bearer token. For a web application that doesn't call a web API, see [Configure authentication in a sample web application using Azure Active Directory B2C](configure-authentication-sample-web-app.md).
+> The sample ASP.NET web app that's referenced in this article is used to call a web API with a bearer token. For a web app that doesn't call a web API, see [Configure authentication in a sample web application by using Azure AD B2C](configure-authentication-sample-web-app.md).
## Overview
-OpenID Connect (OIDC) is an authentication protocol built on OAuth 2.0 that you can use to securely sign a user in to an application. This web app sample uses [Microsoft Identity Web](https://www.nuget.org/packages/Microsoft.Identity.Web). Microsoft Identity Web is a set of ASP.NET Core libraries that simplifies adding authentication and authorization support to web apps that can call a secure web API.
+OpenID Connect (OIDC) is an authentication protocol that's built on OAuth 2.0. You can use OIDC to securely sign a user in to an application. This web app sample uses [Microsoft Identity Web](https://www.nuget.org/packages/Microsoft.Identity.Web). Microsoft Identity Web is a set of ASP.NET Core libraries that simplify adding authentication and authorization support to web apps that can call a secure web API.
-The sign-in flow involves following steps:
+The sign-in flow involves the following steps:
-1. The user navigates to the web app and select **Sign-in**.
-1. The app initiates authentication request, and redirects the user to Azure AD B2C.
-1. The user [signs-up or signs-in](add-sign-up-and-sign-in-policy.md), [resets the password](add-password-reset-policy.md), or signs-in with a [social account](add-identity-provider.md).
-1. Upon successful sign-in, Azure AD B2C returns an authorization code to the app.
-1. The app takes the following actions
- 1. Exchanges the authorization code to an ID token, access token and refresh token.
- 1. Reads the ID token claims, and persists an application authorization cookie.
- 1. Stores the refresh token in an in-memory cache for later use.
+1. Users go to the web app and select **Sign-in**.
+1. The app initiates an authentication request and redirects users to Azure AD B2C.
+1. Users [sign up or sign in](add-sign-up-and-sign-in-policy.md), [reset the password](add-password-reset-policy.md), or sign in with a [social account](add-identity-provider.md).
+1. After users sign in, Azure AD B2C returns an authorization code to the app.
+1. The app then does the following:
+
+   a. It exchanges the authorization code for an ID token, access token, and refresh token.
+ b. It reads the ID token claims, and persists an application authorization cookie.
+ c. It stores the refresh token in an in-memory cache for later use.
### App registration overview

To enable your app to sign in with Azure AD B2C and call a web API, you register two applications in the Azure AD B2C directory.

-- The **web application** registration enables your app to sign in with Azure AD B2C. During app registration, you specify the *Redirect URI*. The redirect URI is the endpoint to which the user is redirected by Azure AD B2C after they authenticate with Azure AD B2C is completed. The app registration process generates an *Application ID*, also known as the *client ID*, that uniquely identifies your app. You also create a *client secret*, which is used by your application to securely acquire the tokens.
+- The *web application* registration enables your app to sign in with Azure AD B2C. During the registration, you specify the *redirect URI*. The redirect URI is the endpoint to which users are redirected by Azure AD B2C after their authentication with Azure AD B2C is completed. The app registration process generates an *application ID*, also known as the *client ID*, which uniquely identifies your app. You also create a *client secret*, which your app uses to securely acquire the tokens.
-- The **web API** registration enables your app to call a secure web API. The registration includes the web API *scopes*. The scopes provide a way to manage permissions to protected resources, such as your web API. You grant the web application permissions to the web API scopes. When an access token is requested, your app specifies the desired permissions in the scope parameter of the request.
+- The *web API* registration enables your app to call a secure web API. The registration includes the web API *scopes*. The scopes provide a way to manage permissions to protected resources, such as your web API. You grant the web application permissions to the web API scopes. When an access token is requested, your app specifies the desired permissions in the scope parameter of the request.
-The following diagrams describe the apps registration and the application architecture.
+The app architecture and registrations are illustrated in the following diagram:
-![Web app with web API call registrations and tokens](./media/configure-authentication-sample-web-app-with-api/web-app-with-api-architecture.png)
+![Diagram of a web app with web API call registrations and tokens.](./media/configure-authentication-sample-web-app-with-api/web-app-with-api-architecture.png)
### Call to a web API
A computer that's running either:
In this step, you create the web app and the web API application registration, and specify the scopes of your web API.
-### 2.1 Register the web API app
+### Step 2.1: Register the web API app
[!INCLUDE [active-directory-b2c-app-integration-register-api](../../includes/active-directory-b2c-app-integration-register-api.md)]
-### 2.2 Configure web API app scopes
+### Step 2.2: Configure web API app scopes
[!INCLUDE [active-directory-b2c-app-integration-api-scopes](../../includes/active-directory-b2c-app-integration-api-scopes.md)]
-### 2.3 Register the web app
+### Step 2.3: Register the web app
-Follow these steps to create the web app registration:
+To create the web app registration, do the following:
1. Select **App registrations**, and then select **New registration**.
-1. Enter a **Name** for the application. For example, *webapp1*.
+1. Under **Name**, enter a name for the application (for example, *webapp1*).
1. Under **Supported account types**, select **Accounts in any identity provider or organizational directory (for authenticating users with user flows)**.
-1. Under **Redirect URI**, select **Web**, and then enter `https://localhost:5000/signin-oidc` in the URL text box.
-1. Under **Permissions**, select the **Grant admin consent to openid and offline access permissions** check box.
+1. Under **Redirect URI**, select **Web** and then, in the URL box, enter `https://localhost:5000/signin-oidc`.
+1. Under **Permissions**, select the **Grant admin consent to openid and offline access permissions** checkbox.
1. Select **Register**.
1. After the app registration is completed, select **Overview**.
-1. Record the **Application (client) ID** for use in a later step when you configure the web application.
+1. Record the **Application (client) ID** for later use, when you configure the web application.
- ![Get your web application ID](./media/configure-authentication-sample-web-app-with-api/get-azure-ad-b2c-app-id.png)
+ ![Screenshot of the web app Overview page for recording your web application ID.](./media/configure-authentication-sample-web-app-with-api/get-azure-ad-b2c-app-id.png)
-### 2.4 Create a web app client secret
+### Step 2.4: Create a web app client secret
[!INCLUDE [active-directory-b2c-app-integration-client-secret](../../includes/active-directory-b2c-app-integration-client-secret.md)]
-### 2.5 Grant the web app permissions for the web API
+### Step 2.5: Grant the web app permissions for the web API
[!INCLUDE [active-directory-b2c-app-integration-grant-permissions](../../includes/active-directory-b2c-app-integration-grant-permissions.md)]

## Step 3: Get the web app sample
-[Download the zip file](https://github.com/Azure-Samples/active-directory-aspnetcore-webapp-openidconnect-v2/archive/refs/heads/master.zip), or clone the sample web application from GitHub.
+[Download the zip file](https://github.com/Azure-Samples/active-directory-aspnetcore-webapp-openidconnect-v2/archive/refs/heads/master.zip), or run the following Bash command to clone the sample web application from GitHub.
```bash
git clone https://github.com/Azure-Samples/active-directory-aspnetcore-webapp-openidconnect-v2
```
-Extract the sample file to a folder where the total character length of the path is less than 260.
+Extract the sample file to a folder where the total length of the path is 260 or fewer characters.
## Step 4: Configure the sample web API
-In the sample folder, under the `4-WebApp-your-API/4-2-B2C/TodoListService` folder, open the **TodoListService.csproj** project with Visual Studio or Visual Studio Code.
+In the sample folder, in the *4-WebApp-your-API/4-2-B2C/TodoListService* folder, open the **TodoListService.csproj** project with Visual Studio or Visual Studio Code.
-Under the project root folder, open the `appsettings.json` file. This file contains information about your Azure AD B2C identity provider. The web API app uses this information to validate the access token the web app passes as a bearer token. Update the following properties of the app settings:
+Under the project root folder, open the *appsettings.json* file. This file contains information about your Azure AD B2C identity provider. The web API app uses this information to validate the access token that the web app passes as a bearer token. Update the following properties of the app settings:
-|Section |Key |Value |
-||||
+| Section | Key | Value |
+| | | |
|AzureAdB2C|Instance| The first part of your Azure AD B2C [tenant name](tenant-management.md#get-your-tenant-name). For example, `https://contoso.b2clogin.com`.|
|AzureAdB2C|Domain| The full [tenant name](tenant-management.md#get-your-tenant-name) of your Azure AD B2C tenant. For example, `contoso.onmicrosoft.com`.|
-|AzureAdB2C|ClientId| The web API application ID from [step 2.1](#21-register-the-web-api-app).|
+|AzureAdB2C|ClientId| The web API application ID from [step 2.1](#step-21-register-the-web-api-app).|
|AzureAdB2C|SignUpSignInPolicyId|The user flows, or custom policy you created in [step 1](#step-1-configure-your-user-flow).|
+| | | |
-Your final configuration file should look like the following JSON:
+Your final configuration file should look like the following JSON file:
```json {
Your final configuration file should look like the following JSON:
"SignedOutCallbackPath": "/signout/<your-sign-up-in-policy>", "SignUpSignInPolicyId": "<your-sign-up-in-policy>" },
- // More setting here
+ // More settings here
} ```
-### 4.1 Set the permission policy
+### Step 4.1: Set the permission policy
-The web API verifies that the user authenticated with the bearer token, and the bearer token has the configured accepted scopes. If the bearer token does not have any of these accepted scopes, the web API returns HTTP status code 403 (Forbidden) and writes to the response body a message telling which scopes are expected in the token.
+The web API verifies that the user authenticated with the bearer token and that the bearer token has the configured accepted scopes. If the bearer token doesn't have any of these accepted scopes, the web API returns HTTP status code 403 (Forbidden) and writes to the response body a message that indicates which scopes are expected in the token.
-To configure the accepted scopes, open the `Controller/TodoListController.cs` class, and set the scope name. The scope name, without the full URI.
+To configure the accepted scopes, open the `Controller/TodoListController.cs` class, and set the scope name, without the full URI.
```csharp
[RequiredScope("tasks.read")]
```
-### 4.2 Run the sample web API app
+### Step 4.2: Run the sample web API app
-To allow web app calling the web API sample, follow these steps to run the web API:
+To allow the web app to call the web API sample, run the web API by doing the following:
-1. If requested, restore dependencies.
+1. If you're requested to do so, restore dependencies.
1. Build and run the project.
-1. After the project is built, Visual Studio or Visual Studio Code launches the web API in the browsers with the following address https://localhost:44332.
+1. After the project is built, Visual Studio or Visual Studio Code starts the web API in the browser at the following address: https://localhost:44332.
## Step 5: Configure the sample web app

In the sample folder, under the `4-WebApp-your-API/4-2-B2C/Client` folder, open the **TodoListClient.csproj** project with Visual Studio or Visual Studio Code.
-Under the project root folder, open the `appsettings.json` file. This file contains information about your Azure AD B2C identity provider. The web app uses this information to establish a trust relationship with Azure AD B2C, sign-in the user in and out, acquire tokens, and validate them. Update the following properties of the app settings:
+Under the project root folder, open the `appsettings.json` file. This file contains information about your Azure AD B2C identity provider. The web app uses this information to establish a trust relationship with Azure AD B2C, sign users in and out, acquire tokens, and validate them. Update the following properties of the app settings:
-|Section |Key |Value |
-||||
-|AzureAdB2C|Instance| The first part of your Azure AD B2C [tenant name](tenant-management.md#get-your-tenant-name). For example, `https://contoso.b2clogin.com`.|
-|AzureAdB2C|Domain| Your Azure AD B2C tenant full [tenant name](tenant-management.md#get-your-tenant-name). For example, `contoso.onmicrosoft.com`.|
-|AzureAdB2C|ClientId| The web application ID from [step 2.3](#23-register-the-web-app).|
-|AzureAdB2C | ClientSecret | The web application secret from [step 2.4](#24-create-a-web-app-client-secret). |
+| Section | Key | Value |
+| | | |
+| AzureAdB2C | Instance | The first part of your Azure AD B2C [tenant name](tenant-management.md#get-your-tenant-name) (for example, `https://contoso.b2clogin.com`).|
+|AzureAdB2C|Domain| Your Azure AD B2C tenant full [tenant name](tenant-management.md#get-your-tenant-name) (for example, `contoso.onmicrosoft.com`).|
+|AzureAdB2C|ClientId| The web application ID from [step 2.3](#step-23-register-the-web-app).|
+|AzureAdB2C | ClientSecret | The web application secret from [step 2.4](#step-24-create-a-web-app-client-secret). |
|AzureAdB2C|SignUpSignInPolicyId|The user flows or custom policy you created in [step 1](#step-1-configure-your-user-flow).|
-| TodoList | TodoListScope | The web API scopes you created in [step 2.5](#25-grant-the-web-app-permissions-for-the-web-api).|
-| TodoList | TodoListBaseAddress | The base URI of your web API, for example `https://localhost:44332`|
+| TodoList | TodoListScope | The web API scopes you created in [step 2.5](#step-25-grant-the-web-app-permissions-for-the-web-api).|
+| TodoList | TodoListBaseAddress | The base URI of your web API (for example `https://localhost:44332`). |
+| | | |
Your final configuration file should look like the following JSON:
-```JSon
+```json
{ "AzureAdB2C": { "Instance": "https://contoso.b2clogin.com",
Your final configuration file should look like the following JSON:
## Step 6: Run the sample web app

1. Build and run the project.
-1. Browse to https://localhost:5000.
+1. Browse to [https://localhost:5000](https://localhost:5000).
1. Complete the sign-up or sign-in process. After successful authentication, you'll see your display name in the navigation bar. To view the claims that the Azure AD B2C token returns to your app, select **TodoList**.
-![Web app token's claims](./media/configure-authentication-sample-web-app-with-api/web-api-to-do-list.png)
+![Screenshot of the web app token claims.](./media/configure-authentication-sample-web-app-with-api/web-api-to-do-list.png)
## Deploy your application
-In a production application, the app registration redirect URI is typically a publicly accessible endpoint where your app is running, like `https://contoso.com/signin-oidc`.
+In a production application, the app registration redirect URI is typically a publicly accessible endpoint where your app is running, such as `https://contoso.com/signin-oidc`.
You can add and modify redirect URIs in your registered applications at any time. The following restrictions apply to redirect URIs:
You can add and modify redirect URIs in your registered applications at any time
### Token cache for a web app
-The web app sample uses in memory token cache serialization. This implementation is great in samples. It's also good in production applications provided you don't mind if the token cache is lost when the web app is restarted.
+The web app sample uses in-memory token cache serialization. This implementation is great in samples. It's also good in production applications, provided that you don't mind if the token cache is lost when the web app is restarted.
For production environments, we recommend that you use a distributed memory cache, such as Redis cache, NCache, or a SQL Server cache. For details about the distributed memory cache implementations, see [Token cache serialization](../active-directory/develop/msal-net-token-cache-serialization.md).

## Next steps
-* Learn more [about the code sample](https://github.com/Azure-Samples/active-directory-aspnetcore-webapp-openidconnect-v2/tree/master/1-WebApp-OIDC/1-5-B2C#about-the-code)
-* Learn how to [Enable authentication in your own web application using Azure AD B2C](enable-authentication-web-application.md)
-* [Enable authentication in your own web API](enable-authentication-web-api.md)
+* Learn more [about the code sample](https://github.com/Azure-Samples/active-directory-aspnetcore-webapp-openidconnect-v2/tree/master/1-WebApp-OIDC/1-5-B2C#about-the-code).
+* Learn how to [Enable authentication in your own web application by using Azure AD B2C](enable-authentication-web-application.md).
+* Learn how to [Enable authentication in your own web API](enable-authentication-web-api.md).
active-directory-b2c Enable Authentication Spa App Options https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/enable-authentication-spa-app-options.md
Title: Enable spa application options using Azure Active Directory B2C
-description: Enable the use of spa application options by using several ways.
+ Title: Enable SPA application options by using Azure Active Directory B2C
+description: This article discusses several ways to enable the use of SPA applications.
-# Configure authentication options in a Single Page application using Azure Active Directory B2C
+# Configure authentication options in a single-page application by using Azure AD B2C
-This article describes ways you can customize and enhance the Azure Active Directory B2C (Azure AD B2C) authentication experience for your Single Page Application. Before you start, familiarize yourself with the following article: [Configure authentication in a sample web application](configure-authentication-sample-spa-app.md).
+This article describes how to customize and enhance the Azure Active Directory B2C (Azure AD B2C) authentication experience for your single-page application (SPA).
+
+Before you start, familiarize yourself with the following article: [Configure authentication in a sample web application](configure-authentication-sample-spa-app.md).
[!INCLUDE [active-directory-b2c-app-integration-custom-domain](../../includes/active-directory-b2c-app-integration-custom-domain.md)]
-To use a custom domain and your tenant ID in the authentication URL, follow the guidance in [Enable custom domains](custom-domain.md). Find your MSAL configuration object and change the **authorities** and **knownAuthorities** to use your custom domain name and tenant ID.
+To use a custom domain and your tenant ID in the authentication URL, follow the guidance in [Enable custom domains](custom-domain.md). Find your Microsoft Authentication Library (MSAL) configuration object and change the *authorities* and *knownAuthorities* to use your custom domain name and tenant ID.
-The following JavaScript shows the MSAL config object before the change:
+The following JavaScript code shows the MSAL configuration object *before* the change:
```Javascript const msalConfig = {
const msalConfig = {
} ```
-The following JavaScript shows the MSAL config object after the change:
+The following JavaScript code shows the MSAL configuration object *after* the change:
```Javascript const msalConfig = {
const msalConfig = {
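For orientation, the changed values might look like the following sketch, assuming a custom domain of `login.contoso.com` and a placeholder tenant ID; neither value comes from this article.

```javascript
// Sketch of the "after" configuration with a custom domain and tenant ID (placeholders)
const msalConfig = {
  auth: {
    clientId: "<Application-ID>",
    authority: "https://login.contoso.com/00000000-0000-0000-0000-000000000000/B2C_1_susi",
    knownAuthorities: ["login.contoso.com"],
  },
};
```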
[!INCLUDE [active-directory-b2c-app-integration-login-hint](../../includes/active-directory-b2c-app-integration-login-hint.md)]
-1. If you're using a custom policy, add the required input claim as described in [Set up direct sign-in](direct-signin.md#prepopulate-the-sign-in-name).
-1. Create an object to store the **login_hint** and pass this object into the **MSAL loginPopup()** method.
+1. If you're using a custom policy, add the required input claim, as described in [Set up direct sign-in](direct-signin.md#prepopulate-the-sign-in-name).
+1. Create an object to store the **login_hint**, and pass this object into the **MSAL loginPopup()** method.
```javascript let loginRequest = {
const msalConfig = {
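As a sketch of that request object (the sign-in name is a placeholder assumption):

```javascript
// Sketch: pass a login hint so Azure AD B2C prepopulates the sign-in name
let loginRequest = {
  loginHint: "bob@contoso.com", // placeholder sign-in name
  scopes: ["openid", "profile"],
};

myMSALObj.loginPopup(loginRequest);
```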
[!INCLUDE [active-directory-b2c-app-integration-domain-hint](../../includes/active-directory-b2c-app-integration-domain-hint.md)]

1. Check the domain name of your external identity provider. For more information, see [Redirect sign-in to a social provider](direct-signin.md#redirect-sign-in-to-a-social-provider).
-1. Create an object to store **extraQueryParameters** and pass this object into the **MSAL loginPopup()** method.
+1. Create an object to store **extraQueryParameters**, and pass this object into the **MSAL loginPopup()** method.
```javascript let loginRequest = {
const msalConfig = {
[!INCLUDE [active-directory-b2c-app-integration-ui-locales](../../includes/active-directory-b2c-app-integration-ui-locales.md)]
-1. [Configure Language customization](language-customization.md).
-1. Create an object to store **extraQueryParameters** and pass this object into the **MSAL loginPopup()** method.
+1. [Configure language customization](language-customization.md).
+1. Create an object to store **extraQueryParameters**, and pass this object into the **MSAL loginPopup()** method.
```javascript let loginRequest = {
const msalConfig = {
[!INCLUDE [active-directory-b2c-app-integration-custom-parameters](../../includes/active-directory-b2c-app-integration-custom-parameters.md)]

1. Configure the [ContentDefinitionParameters](customize-ui-with-html.md#configure-dynamic-custom-page-content-uri) element.
-1. Create an object to store **extraQueryParameters** and pass this object into the **MSAL loginPopup()** method.
+1. Create an object to store **extraQueryParameters**, and pass this object into the **MSAL loginPopup()** method.
```javascript let loginRequest = {
const msalConfig = {
[!INCLUDE [active-directory-b2c-app-integration-id-token-hint](../../includes/active-directory-b2c-app-integration-id-token-hint.md)]

1. In your custom policy, define an [ID token hint technical profile](id-token-hint.md).
-1. Create an object to store **extraQueryParameters** and pass this object into the **MSAL loginPopup()** method.
+1. Create an object to store **extraQueryParameters**, and pass this object into the **MSAL loginPopup()** method.
```javascript let loginRequest = {
const msalConfig = {
myMSALObj.loginPopup(loginRequest); ```
-## Enable Single Logout
+## Enable single logout
-Single logout in Azure AD B2C uses OpenId Connect front channel logout to make logout requests to all applications the user has signed into through Azure AD B2C.
+Single logout in Azure AD B2C uses OpenID Connect front-channel logout to make logout requests to all applications the user has signed in to through Azure AD B2C.
-These logout requests are made from the Azure AD B2C logout page, in a hidden Iframe. The Iframes will make HTTP requests to all of the front channel logout endpoints registered for the apps Azure AD B2C has recorded as being logged in.
+These logout requests are made from the Azure AD B2C logout page, in a hidden iframe. The iframes make HTTP requests to all the front-channel logout endpoints registered for the apps that Azure AD B2C has recorded as being logged in.
-Your logout endpoint for each application must call the **MSAL logout()** method. MSAL must also be explicitly configured to execute within an Iframe in this scenario by setting `allowRedirectInIframe` to `true`.
+Your logout endpoint for each application must call the **MSAL logout()** method. You must also explicitly configure MSAL to run within an iframe in this scenario by setting `allowRedirectInIframe` to `true`.
-See the code sample below which sets `allowRedirectInIframe` to `true`:
+The following code sample sets `allowRedirectInIframe` to `true`:
```javascript const msalConfig = {
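A minimal configuration along those lines might look like the following sketch; the client ID is a placeholder.

```javascript
// Sketch: allow MSAL to run inside the hidden logout iframe
const msalConfig = {
  auth: {
    clientId: "<Application-ID>", // placeholder
  },
  system: {
    allowRedirectInIframe: true,
  },
};

const myMSALObj = new msal.PublicClientApplication(msalConfig);
```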
async function logoutSilent(MSAL) {
## Next steps

-- Learn more: [MSAL.js configuration options](https://github.com/AzureAD/microsoft-authentication-library-for-js/blob/dev/lib/msal-browser/docs/configuration.md)
+Learn more about [MSAL.js configuration options](https://github.com/AzureAD/microsoft-authentication-library-for-js/blob/dev/lib/msal-browser/docs/configuration.md).
active-directory-b2c Enable Authentication Spa App https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/enable-authentication-spa-app.md
Title: Enable authentication in a SPA application using Azure Active Directory B2C building blocks
-description: The building blocks of Azure Active Directory B2C to sign in and sign up users in a SPA application.
+ Title: Enable authentication in a SPA application by using Azure Active Directory B2C building blocks
+description: This article discusses the building blocks of Azure Active Directory B2C for signing in and signing up users in a SPA application.
-# Enable authentication in your own Single Page Application using Azure Active Directory B2C
+# Enable authentication in your own single-page application by using Azure AD B2C
-This article shows you how to add Azure Active Directory B2C (Azure AD B2C) authentication to your own Single Page Application (SPA). Learn how create a SPA application with [MSAL.js](https://github.com/AzureAD/microsoft-authentication-library-for-js) authentication library. Use this article with [Configure authentication in a sample SPA application](./configure-authentication-sample-spa-app.md), substituting the sample SPA app with your own SPA app.
+This article shows you how to add Azure Active Directory B2C (Azure AD B2C) authentication to your own single-page application (SPA). Learn how to create an SPA by using the [Microsoft Authentication Library for JavaScript (MSAL.js)](https://github.com/AzureAD/microsoft-authentication-library-for-js).
+
+Use this article with [Configure authentication in a sample SPA application](./configure-authentication-sample-spa-app.md), substituting the sample SPA app with your own SPA app.
## Overview
-This article uses Node.js and [Express](https://expressjs.com/), to create a basic Node.js web app. Express is a minimal and flexible Node.js web app framework that provides a set of features for web and mobile applications.
+This article uses Node.js and [Express](https://expressjs.com/) to create a basic Node.js web app. Express is a minimal and flexible Node.js web app framework that provides a set of features for web and mobile applications.
-The [MSAL.js](https://github.com/AzureAD/microsoft-authentication-library-for-js) authentication library is a Microsoft provided library that simplifies adding authentication and authorization support to SPA apps.
+The [MSAL.js](https://github.com/AzureAD/microsoft-authentication-library-for-js) authentication library is a Microsoft-provided library that simplifies adding authentication and authorization support to SPA apps.
> [!TIP]
-> The entire MSAL.js code runs on the client side. You can substitute the Node.js and Express server side code with other solutions, such as .NET core, Java, and PHP.
+> The entire MSAL.js code runs on the client side. You can substitute the Node.js and Express server side code with other solutions, such as .NET Core, Java, and Hypertext Preprocessor (PHP) scripting languages.
## Prerequisites
-Review the prerequisites and integration steps in [Configure authentication in a sample SPA application](configure-authentication-sample-spa-app.md) article.
+To review the prerequisites and integration instructions, see [Configure authentication in a sample SPA application](configure-authentication-sample-spa-app.md).
+
+## Step 1: Create an SPA app project
-## Create an SPA app project
+You can use an existing SPA app project or create a new one. To create a new project, do the following:
-You can use an existing SPA app project, or create new one. To create a new project, follow these steps:
+1. Open a command shell, and create a new directory (for example, *myApp*). This directory will contain your app code, user interface, and configuration files.
-1. Open command shell, and create a new directory. For example, *myApp*. This directory will contain your app code, user interface, and configuration files.
-1. Enter the directory your created.
-1. Use the `npm init` command to create a `package.json` file for your app. This command prompts you for information about your app. For example, the name and version of your app, and the name of the initial entry point, the `index.js` file. Run the following command, and accept the defaults:
+1. Enter the directory you created.
+
+1. Use the `npm init` command to create a `package.json` file for your app. This command prompts you for information about your app (for example, the name and version of your app, and the name of the initial entry point, the *index.js* file). Run the following command, and accept the default values:
```
npm init
```
-## Install the dependencies
+## Step 2: Install the dependencies
-To install the Express package, in your command shell run the following commands:
+To install the Express package, in your command shell, run the following command:
```
npm install express
```

To locate the app's static files, the server-side code uses the [Path](https://www.npmjs.com/package/path) package.
-To install the Path package, in your command shell run the following commands:
+
+To install the Path package, in your command shell, run the following command:
```
npm install path
```
-## Configure your web server
+## Step 3: Configure your web server
-In your *myApp* folder, create a file named `index.js` containing the following code:
+In your *myApp* folder, create a file named *index.js*, which contains the following code:
```javascript // Initialize express
app.listen(port, () => {
}); ```
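A minimal *index.js* along these lines might look like the following sketch; the port and the *App* static-files folder are assumptions based on this article's sample, not the sample's exact code.

```javascript
// index.js - serve the SPA's static files with Express (sketch)
const express = require("express");
const path = require("path");

const app = express();
const port = process.env.PORT || 6420;

// Serve https://docsupdatetracker.net/index.html from the project root and the scripts in the App folder
app.use(express.static(__dirname));
app.use(express.static(path.join(__dirname, "App")));

app.listen(port, () => {
  console.log(`Listening on port ${port}...`);
});
```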
-## Create the SPA user interface
+## Step 4: Create the SPA user interface
-In this step, add the SAP app `https://docsupdatetracker.net/index.html` file. This file implements the user interface built with Bootstrap framework, imports script files for configuration, authentication, and web API calls.
+Add the SPA app `https://docsupdatetracker.net/index.html` file. This file implements a user interface that's built with the Bootstrap framework, and it imports script files for configuration, authentication, and web API calls.
-The table below details the resources referenced by the *https://docsupdatetracker.net/index.html* file.
+The resources referenced by the *https://docsupdatetracker.net/index.html* file are detailed in the following table:
|Reference |Definition|
|||
|MSAL.js library| MSAL.js authentication JavaScript library [CDN path](https://github.com/AzureAD/microsoft-authentication-library-for-js/blob/dev/lib/msal-browser/docs/cdn-usage.md).|
-|[Bootstrap stylesheet](https://getbootstrap.com/) | A free front-end framework for faster and easier web development. The framework includes HTML and CSS based design templates. |
-|[policies.js](https://github.com/Azure-Samples/ms-identity-b2c-javascript-spa/blob/main/App/policies.js) | Contains the Azure AD B2C custom policies and user-flows. |
+|[Bootstrap stylesheet](https://getbootstrap.com/) | A free front-end framework for faster and easier web development. The framework includes HTML-based and CSS-based design templates. |
+|[policies.js](https://github.com/Azure-Samples/ms-identity-b2c-javascript-spa/blob/main/App/policies.js) | Contains the Azure AD B2C custom policies and user flows. |
|[authConfig.js](https://github.com/Azure-Samples/ms-identity-b2c-javascript-spa/blob/main/App/authConfig.js) | Contains authentication configuration parameters.|
|[authRedirect.js](https://github.com/Azure-Samples/ms-identity-b2c-javascript-spa/blob/main/App/authRedirect.js) | Contains the authentication logic. |
|[apiConfig.js](https://github.com/Azure-Samples/ms-identity-b2c-javascript-spa/blob/main/App/apiConfig.js) | Contains web API scopes and the API endpoint location. |
-|[api.js](https://github.com/Azure-Samples/ms-identity-b2c-javascript-spa/blob/main/App/api.js) | Defines the method to call your API and handle its response|
-|[ui.js](https://github.com/Azure-Samples/ms-identity-b2c-javascript-spa/blob/main/App/ui.js) | Controls UI elements |
+|[api.js](https://github.com/Azure-Samples/ms-identity-b2c-javascript-spa/blob/main/App/api.js) | Defines the method to use to call your API and handle its response.|
+|[ui.js](https://github.com/Azure-Samples/ms-identity-b2c-javascript-spa/blob/main/App/ui.js) | Controls the UI elements. |
+| | |
-To render the SPA index file, in the *myApp* folder, create a file named *https://docsupdatetracker.net/index.html* containing the following HTML snippet.
+To render the SPA index file, in the *myApp* folder, create a file named *https://docsupdatetracker.net/index.html*, which contains the following HTML snippet.
```html <!DOCTYPE html> <html> <head>
- <title>My AAD B2C test app</title>
+ <title>My Azure AD B2C test app</title>
</head> <body>
- <h2>My AAD B2C test app</h2>
+ <h2>My Azure AD B2C test app</h2>
<link rel="stylesheet" href="https://stackpath.bootstrapcdn.com/bootstrap/4.4.1/css/bootstrap.min.css" integrity="sha384-Vkoo8x4CGsO3+Hhxv8T/Q5PaXtkKtu6ug5TOeNV6gBiFeWPGFN9MuhOf23Q9Ifjh" crossorigin="anonymous" /> <button type="button" id="signIn" class="btn btn-secondary" onclick="signIn()">Sign-in</button> <button type="button" id="signOut" class="btn btn-success d-none" onclick="signOut()">Sign-out</button>
To render the SPA index file, in the *myApp* folder, create a file named *index.
</html>
```
-## Configure the authentication library
+## Step 5: Configure the authentication library
-In this section, configure how the MSAL.js library integrates with Azure AD B2C. The MSAL.js library uses a common configuration object to connect to your Azure AD B2C tenants authentication endpoints.
+Configure how the MSAL.js library integrates with Azure AD B2C. The MSAL.js library uses a common configuration object to connect to your Azure AD B2C tenant's authentication endpoints.
-To configure the authentication library, follow these steps:
+To configure the authentication library, do the following:
1. In the *myApp* folder, create a new folder called *App*.
1. Inside the *App* folder, create a new file named *authConfig.js*.
-1. Add following JavaScript code to the *authConfig.js* file:
+1. Add the following JavaScript code to the *authConfig.js* file:
```javascript
const msalConfig = {
To configure the authentication library, follow these steps:
};
```
-1. Replace `<Application-ID>` with your app registration application ID. For more information, see [Configure authentication in a sample SPA application article](./configure-authentication-sample-spa-app.md#23-register-the-spa-app).
+1. Replace `<Application-ID>` with your app registration application ID. For more information, see [Configure authentication in a sample SPA application](./configure-authentication-sample-spa-app.md#step-23-register-the-spa).
> [!TIP] > For more MSAL object configuration options, see the [Authentication options](./enable-authentication-spa-app-options.md) article.
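For reference, here's a minimal sketch of what an *authConfig.js* configuration object might look like. This is an illustration only, not the sample's exact code: the `clientId` and `redirectUri` values are placeholders, and the `b2cPolicies` object is assumed to come from the *policies.js* file that you create in the next step.

```javascript
// Hypothetical sketch of a minimal MSAL.js (msal-browser) configuration.
// Replace the placeholder values with your own app registration details.
const msalConfig = {
  auth: {
    clientId: "<Application-ID>",                              // your app registration (client) ID
    authority: b2cPolicies.authorities.signUpSignIn.authority, // assumed to be defined in policies.js
    knownAuthorities: [b2cPolicies.authorityDomain],           // for example, "contoso.b2clogin.com"
    redirectUri: "http://localhost:6420",
  },
  cache: {
    cacheLocation: "sessionStorage", // or "localStorage"
    storeAuthStateInCookie: false,
  },
};
```

Using `sessionStorage` keeps the sign-in state scoped to the current browser tab; `localStorage` shares it across tabs.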
-### Specify your Azure AD B2C user flows
+## Step 6: Specify your Azure AD B2C user flows
-In this step, create the *policies.js* file, which provides information about your Azure AD B2C environment. The MSAL.js library uses this information to create authentication requests to Azure AD B2C.
+Create the *policies.js* file, which provides information about your Azure AD B2C environment. The MSAL.js library uses this information to create authentication requests to Azure AD B2C.
-To specify your Azure AD B2C user flows, follow these steps:
+To specify your Azure AD B2C user flows, do the following:
1. Inside the *App* folder, create a new file named *policies.js*.
1. Add the following code to the *policies.js* file:
To specify your Azure AD B2C user flows, follow these steps:
1. Replace `B2C_1_EditProfile` with your edit profile Azure AD B2C policy name.
1. Replace all instances of `contoso` with your [Azure AD B2C tenant name](./tenant-management.md#get-your-tenant-name).
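For illustration, a *policies.js* file along these lines provides the values that MSAL.js needs. This is a hedged sketch rather than the sample's exact code; the policy names, tenant name, and object shape are assumptions that you should adapt to your own environment.

```javascript
// Hypothetical sketch of policies.js. Replace the policy and tenant names with your own.
const b2cPolicies = {
  names: {
    signUpSignIn: "B2C_1_SignUpSignIn",
    editProfile: "B2C_1_EditProfile",
  },
  authorities: {
    signUpSignIn: {
      authority: "https://contoso.b2clogin.com/contoso.onmicrosoft.com/B2C_1_SignUpSignIn",
    },
    editProfile: {
      authority: "https://contoso.b2clogin.com/contoso.onmicrosoft.com/B2C_1_EditProfile",
    },
  },
  authorityDomain: "contoso.b2clogin.com",
};
```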
-## Use the MSAL to sign in the user
+## Step 7: Use the MSAL to sign in the user
-In this step, implement the methods to initialize the sign-in flow, api access token acquisition, and the sign-out methods.
+In this step, implement the methods that initialize the sign-in flow, acquire the API access token, and sign out the user.
For more information, see the [MSAL PublicClientApplication class reference](https://azuread.github.io/microsoft-authentication-library-for-js/ref/classes/_azure_msal_browser.publicclientapplication.html) and [Use the Microsoft Authentication Library (MSAL) to sign in the user](../active-directory/develop/tutorial-v2-javascript-spa.md#use-the-microsoft-authentication-library-msal-to-sign-in-the-user) articles.
-To sign in the user, follow these steps:
+To sign in the user, do the following:
1. Inside the *App* folder, create a new file named *authRedirect.js*.
1. In your *authRedirect.js*, copy and paste the following code:
To sign in the user, follow these steps:
}
```
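As a rough illustration of the pattern (not necessarily the sample's exact code), the redirect-based sign-in and sign-out methods built on MSAL.js might look like the following sketch. The `msalConfig` object is assumed to come from *authConfig.js*, and the `msal` global comes from the MSAL.js CDN script referenced in *index.html*.

```javascript
// Hypothetical sketch of redirect-based sign-in and sign-out with MSAL.js.
const myMSALObj = new msal.PublicClientApplication(msalConfig);

// Handle the response that Azure AD B2C returns after a redirect.
myMSALObj.handleRedirectPromise()
  .then((response) => {
    if (response) {
      console.log("Signed in as: " + response.account.username);
    }
  })
  .catch((error) => console.error(error));

function signIn() {
  // Redirects the browser to the Azure AD B2C sign-up/sign-in page.
  myMSALObj.loginRedirect({ scopes: ["openid", "profile"] });
}

function signOut() {
  myMSALObj.logoutRedirect();
}
```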
-## Configure the web API location and scope
+## Step 8: Configure the web API location and scope
-To allow your SPA app to call a web API, provide the web API endpoint location, and the [scopes](./configure-authentication-sample-spa-app.md#app-registration-overview) used to authorize access to the web API.
+To allow your SPA app to call a web API, provide the web API endpoint location and the [scopes](./configure-authentication-sample-spa-app.md#app-registration-overview) to use to authorize access to the web API.
-To configure the web API location and scopes, follow these steps:
+To configure the web API location and scopes, do the following:
1. Inside the *App* folder, create a new file named *apiConfig.js*.
1. In your *apiConfig.js*, copy and paste the following code:
To configure the web API location and scopes, follow these steps:
};
```
-1. Replace `contoso` with your tenant name. The required scope name can be found as described in the [Configure scopes](./configure-authentication-sample-spa-app.md#22-configure-scopes) article.
+1. Replace `contoso` with your tenant name. The required scope name can be found as described in the [Configure scopes](./configure-authentication-sample-spa-app.md#step-22-configure-scopes) article.
1. Replace the value for `webApi` with your web API endpoint location.
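For illustration only, the resulting *apiConfig.js* might look something like the following sketch. The scope URI, the `b2cScopes` key, and the endpoint shown here are placeholder assumptions based on the tenant name, scope, and web API port used elsewhere in this article.

```javascript
// Hypothetical apiConfig.js values. Replace the tenant, scope, and endpoint with your own.
const apiConfig = {
  b2cScopes: ["https://contoso.onmicrosoft.com/tasks-api/tasks.read"],
  webApi: "http://localhost:6000/hello",
};
```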
-## Call your web API
+## Step 9: Call your web API
-In this step, define the HTTP request to your API endpoint. The HTTP request is configured to pass the Access Token acquired with MSAL.js into the `Authorization` HTTP header in the request.
+Define the HTTP request to your API endpoint. The HTTP request is configured to pass the access token that was acquired with *MSAL.js* into the `Authorization` HTTP header in the request.
-The code below defines the HTTP `GET` request to the API endpoint, passing the access token within the `Authorization` HTTP header. The API location is defined by the `webApi` key in *apiConfig.js*.
+The following code defines the HTTP `GET` request to the API endpoint, passing the access token within the `Authorization` HTTP header. The API location is defined by the `webApi` key in *apiConfig.js*.
-To call your web API by using the token you acquired, follow these steps:
+To call your web API by using the token you acquired, do the following:
1. Inside the *App* folder, create a new file named *api.js*.
1. Add the following code to the *api.js* file:
To call your web API by using the token you acquired, follow these steps:
}
```
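To make the pattern concrete, here's a hedged sketch of such a call using the Fetch API. The `callApi` helper name and the way the response is handled are illustrative assumptions, not the sample's exact implementation.

```javascript
// Hypothetical helper: call the web API, passing the access token as a bearer token.
function callApi(endpoint, token) {
  const headers = new Headers();
  headers.append("Authorization", `Bearer ${token}`);

  fetch(endpoint, { method: "GET", headers: headers })
    .then((response) => response.json())
    .then((data) => {
      console.log("Web API returned:", data);
    })
    .catch((error) => console.error(error));
}
```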
-## Add the UI elements reference
+## Step 10: Add the UI elements reference
-The SPA app uses JavaScript to control the UI elements. For example, display the sign-in and sign-out buttons, render the user's ID token claims to the screen.
+The SPA app uses JavaScript to control the UI elements. For example, it displays the sign-in and sign-out buttons, and renders the user's ID token claims to the screen.
-To add the UI elements reference, follow these steps:
+To add the UI elements reference, do the following:
-1. Inside the *App* folder, create a new file named *ui.js*.
+1. Inside the *App* folder, create a file named *ui.js*.
1. Add the following code to the *ui.js* file:
```javascript
To add the UI elements reference, follow these steps:
}
```
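As an example of the kind of logic *ui.js* contains (a sketch under assumptions, not the sample's exact code), the following toggles the sign-in and sign-out buttons that are defined in *index.html* after a user signs in. The `welcomeUser` helper name is a hypothetical choice.

```javascript
// Hypothetical UI helper: hide the sign-in button and reveal the sign-out button after sign-in.
const signInButton = document.getElementById("signIn");
const signOutButton = document.getElementById("signOut");

function welcomeUser(username) {
  signInButton.classList.add("d-none");
  signOutButton.classList.remove("d-none");
  console.log("Welcome, " + username);
}
```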
-## Run your SPA application
+## Step 11: Run your SPA application
In your command shell, run the following commands:
npm install
node ./index.js
```
-1. Browse to https://localhost:6420.
+1. Go to https://localhost:6420.
1. Select **Sign-in**.
1. Complete the sign-up or sign-in process.
-After you successfully authenticate, you can see the parsed ID token appear on the screen. Select `Call API`, to call your API endpoint.
+After you've authenticated successfully, the parsed ID token is displayed on the screen. Select `Call API` to call your API endpoint.
## Next steps
-* Learn more [about the code sample](https://github.com/Azure-Samples/ms-identity-b2c-javascript-spa)
-* Configure [Authentication options in your own SPA application using Azure AD B2C](enable-authentication-spa-app-options.md)
-* [Enable authentication in your own web API](enable-authentication-web-api.md)
+* Learn more about the [code sample](https://github.com/Azure-Samples/ms-identity-b2c-javascript-spa).
+* Configure [Authentication options in your own SPA application by using Azure AD B2C](enable-authentication-spa-app-options.md).
+* [Enable authentication in your own web API](enable-authentication-web-api.md).
active-directory-b2c Enable Authentication Web Api https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/enable-authentication-web-api.md
Title: Enable authentication in a web API using Azure Active Directory B2C
-description: Using Azure Active Directory B2C to protect a web API.
+ Title: Enable authentication in a web API by using Azure Active Directory B2C
+description: This article discusses how to use Azure Active Directory B2C to protect a web API.
-# Enable authentication in your own web API using Azure Active Directory B2C
+# Enable authentication in your own web API by using Azure AD B2C
-To authorize access to a web API, only serve requests that include a valid Azure AD B2C-issued access token. This article guides you on how to enable Azure AD B2C authorization to your web API. After completing the steps in this article, only users who obtain a valid access token are authorized to call your web API endpoints.
+To authorize access to a web API, serve only requests that include a valid Azure Active Directory B2C (Azure AD B2C)-issued access token. This article shows you how to enable Azure AD B2C authorization to your web API. After you complete the steps in this article, only users who obtain a valid access token will be authorized to call your web API endpoints.
## Prerequisites
-Before you start reading this article, read one of the following articles. These articles guide you how to configure authentication for apps that call web APIs. Then follow the steps in this article to replace the sample web API with your own web API.
+Before you begin, read one of the following articles, which discuss how to configure authentication for apps that call web APIs. Then, follow the steps in this article to replace the sample web API with your own web API.
-- [Configure authentication in a sample ASP.NET core](configure-authentication-sample-web-app-with-api.md)
-- [Configure authentication in a sample Single Page application](configure-authentication-sample-spa-app.md)
+- [Configure authentication in a sample ASP.NET Core application](configure-authentication-sample-web-app-with-api.md)
+- [Configure authentication in a sample single-page application (SPA)](configure-authentication-sample-spa-app.md)
## Overview
-Token-based authentication ensures that requests to a web API are accompanied by a valid access token. The app takes the following steps:
+Token-based authentication ensures that requests to a web API are accompanied by a valid access token.
+
+The app does the following:
+
+1. It authenticates users with Azure AD B2C.
+1. It acquires an access token with the required permissions (scopes) for the web API endpoint.
+1. It passes the access token as a bearer token in the authentication header of the HTTP request by using this format:
-1. Authenticates a user with Azure AD B2C.
-1. Acquires an access token with required permission (scopes) for the web API endpoint.
-1. Passes the access token as a bearer token in the authentication header of the HTTP request using this format:
```http
Authorization: Bearer <token>
```
-The web API takes the following steps:
+The web API does the following:
+
+1. It reads the bearer token from the authorization header in the HTTP request.
-1. Reads the bearer token from the authorization header in the HTTP request.
-1. Validates the token.
-1. Validates the permissions (scopes) in the token.
-1. The web API reads the claims that are encoded in the token (optional).
-1. The web API responds to the HTTP request.
+1. It validates the token.
+1. It validates the permissions (scopes) in the token.
+1. It reads the claims that are encoded in the token (optional).
+1. It responds to the HTTP request.
### App registration overview

To enable your app to sign in with Azure AD B2C and call a web API, you must register two applications in the Azure AD B2C directory.

-- The **web, mobile, or SPA application** registration enables your app to sign in with Azure AD B2C. The app registration process generates an *Application ID*, also known as the *client ID*, that uniquely identifies your application. For example, **App ID: 1**.
+- The *web, mobile, or SPA application* registration enables your app to sign in with Azure AD B2C. The app registration process generates an *Application ID*, also known as the *client ID*, which uniquely identifies your application (for example, *App ID: 1*).
-- The **web API** registration enables your app to call a protected web API. The registration exposes the web API permissions (scopes). The app registration process generates an *Application ID*, that uniquely identifies your web api. For example, **App ID: 2**. Grant your app (App ID: 1) permissions to the web API scopes (App ID: 2).
+- The *web API* registration enables your app to call a protected web API. The registration exposes the web API permissions (scopes). The app registration process generates an *Application ID*, which uniquely identifies your web API (for example, *App ID: 2*). Grant your app (App ID: 1) permissions to the web API scopes (App ID: 2).
-The following diagrams describe the app registrations and the application architecture.
+The application registrations and the application architecture are described in the following diagram:
-![App registrations and the application architecture for an app with web API.](./media/enable-authentication-web-api/app-with-api-architecture.png)
+![Diagram of the application registrations and the application architecture for an app with web API.](./media/enable-authentication-web-api/app-with-api-architecture.png)
## Prepare your development environment
-In the next steps, you are going to create a new web API project. Select your programming language, ASP.NET Core, or Node.js. Make sure you have a computer that's running either:
+In the next sections, you'll create a new web API project. Select your programming language, ASP.NET Core or Node.js. Make sure you have a computer that's running either of the following:
-#### [ASP.NET Core](#tab/csharpclient)
+# [ASP.NET Core](#tab/csharpclient)
* [Visual Studio Code](https://code.visualstudio.com/download)
-* [C# for Visual Studio Code (latest version)](https://marketplace.visualstudio.com/items?itemName=ms-dotnettools.csharp)
+* [C# for Visual Studio Code](https://marketplace.visualstudio.com/items?itemName=ms-dotnettools.csharp) (latest version)
* [.NET 5.0 SDK](https://dotnet.microsoft.com/download/dotnet)
-#### [Node.js](#tab/nodejsgeneric)
+# [Node.js](#tab/nodejsgeneric)
-* [Visual Studio Code](https://code.visualstudio.com/), or another code editor.
+* [Visual Studio Code](https://code.visualstudio.com/), or another code editor
* [Node.js runtime](https://nodejs.org/en/download/)
+## Step 1: Create a protected web API
-## Create a protected web API
+Create a new web API project. First, select the programming language you want to use, **ASP.NET Core** or **Node.js**.
-In this step, you create a new web API project. Select your desired programming language, **ASP.NET Core**, or **Node.js**.
+# [ASP.NET Core](#tab/csharpclient)
-#### [ASP.NET Core](#tab/csharpclient)
-
-Use the [dotnet new](/dotnet/core/tools/dotnet-new) command. The dotnet new command creates a new folder named **TodoList** with the web api project assets. Enter into the directory, and open [VS Code](https://code.visualstudio.com/).
+Use the [`dotnet new`](/dotnet/core/tools/dotnet-new) command. The `dotnet new` command creates a new folder named *TodoList* with the web API project assets. Open the directory, and then open [Visual Studio Code](https://code.visualstudio.com/).
```dotnetcli dotnet new webapi -o TodoList
cd TodoList
code . ```
-When prompted to **add required assets to the project**, select **Yes**.
-
+When you're prompted to "add required assets to the project," select **Yes**.
-#### [Node.js](#tab/nodejsgeneric)
+# [Node.js](#tab/nodejsgeneric)
-Use [Express](https://expressjs.com/) for [Node.js](https://nodejs.org/) to build a web API. To create a web API, follow these steps:
+Use [Express](https://expressjs.com/) for [Node.js](https://nodejs.org/) to build a web API. To create a web API, do the following:
-1. Create a new folder named **TodoList**. Then enter into the folder.
-1. Create a file **app.js**.
-1. Open the command shell, and enter `npm init -y`. This command creates a default **package.json** file for your Node.js project.
-1. In the command shell, enter `npm install express`. This command installs the Express framework.
+1. Create a new folder named *TodoList*.
+1. Under the *TodoList* folder, create a file named *app.js*.
+1. In a command shell, run `npm init -y`. This command creates a default *package.json* file for your Node.js project.
+1. In the command shell, run `npm install express`. This command installs the Express framework.
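After these steps, the *app.js* file might start out with nothing more than the following hedged skeleton; the authentication middleware, the endpoints, and the HTTP listener are added in the steps that follow.

```javascript
// Hypothetical starting point for app.js. Everything else is added in later steps.
const express = require('express');
const app = express();
```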
-## Install the dependencies
+## Step 2: Install the dependencies
-In this section, you add the authentication library to your web API project. The authentication library parses the HTTP authentication header, validates the token, and extracts claims. For more details, review the documentation for the library.
+Add the authentication library to your web API project. The authentication library parses the HTTP authentication header, validates the token, and extracts claims. For more information, review the documentation for the library.
-#### [ASP.NET Core](#tab/csharpclient)
+# [ASP.NET Core](#tab/csharpclient)
To add the authentication library, install the package by running the following command:
To add the authentication library, install the package by running the following
dotnet add package Microsoft.Identity.Web ```
-#### [Node.js](#tab/nodejsgeneric)
+# [Node.js](#tab/nodejsgeneric)
To add the authentication library, install the packages by running the following command:
npm install passport-azure-ad
npm install morgan ```
-The [morgan package](https://www.npmjs.com/package/morgan) is an HTTP request logger middleware for node.js.
+The [morgan package](https://www.npmjs.com/package/morgan) is HTTP request logger middleware for Node.js.
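For example, a single line such as the following (an optional, illustrative usage that assumes the Express `app` object from *app.js*) logs each incoming request to the console:

```javascript
// Hypothetical usage: log every incoming HTTP request in morgan's concise "dev" format.
const morgan = require('morgan');
app.use(morgan('dev'));
```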
-## Initiate the authentication library
+## Step 3: Initiate the authentication library
-In this section, you add the necessary code to initiate the authentication library.
+Add the necessary code to initiate the authentication library.
-#### [ASP.NET Core](#tab/csharpclient)
+# [ASP.NET Core](#tab/csharpclient)
-Open `Startup.cs`, at the beginning of the class add the following `using` declarations:
+Open *Startup.cs* and then, at the beginning of the class, add the following `using` declarations:
```csharp
using Microsoft.AspNetCore.Authentication.JwtBearer;
using Microsoft.Identity.Web;
```
-Find the `ConfigureServices(IServiceCollection services)` function. Then add the following code snippet before `services.AddControllers();` line of code.
+Find the `ConfigureServices(IServiceCollection services)` function. Then, before the `services.AddControllers();` line of code, add the following code snippet:
```csharp
public void ConfigureServices(IServiceCollection services)
} ```
-Find the `Configure` function. Then add the following code snippet immediately after the `app.UseRouting();` line of code.
+Find the `Configure` function. Then, immediately after the `app.UseRouting();` line of code, add the following code snippet:
```csharp
public void Configure(IApplicationBuilder app, IWebHostEnvironment env)
} ```
-#### [Node.js](#tab/nodejsgeneric)
+# [Node.js](#tab/nodejsgeneric)
Add the following JavaScript code to your *app.js* file.
app.use((req, res, next) => {
```
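As a hedged sketch of the general pattern (not necessarily the sample's exact code), initiating *passport-azure-ad* with a bearer token strategy looks something like the following. The option values are assumed to come from the *config.json* file that you create in step 6, and only a subset of the library's options is shown.

```javascript
// Hypothetical sketch: configure passport-azure-ad to validate Azure AD B2C access tokens.
const passport = require('passport');
const BearerStrategy = require('passport-azure-ad').BearerStrategy;
const config = require('./config.json'); // created in step 6

const bearerStrategy = new BearerStrategy(
  {
    identityMetadata: `https://${config.credentials.tenantName}.b2clogin.com/${config.credentials.tenantName}.onmicrosoft.com/${config.policies.policyName}/v2.0/.well-known/openid-configuration`,
    clientID: config.credentials.clientID,
    policyName: config.policies.policyName,
    issuer: config.credentials.issuer,
    isB2C: true,
    validateIssuer: true,
    passReqToCallback: false,
  },
  (token, done) => done(null, {}, token) // exposes the validated token claims as req.authInfo
);

app.use(passport.initialize());
passport.use(bearerStrategy);
```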
-## Add the endpoints
+## Step 4: Add the endpoints
-In this section you add two endpoints to your web API:
+Add two endpoints to your web API:
-- Anonymous `/public` endpoint. This endpoint returns the current date and time. Use this endpoint to debug your web api with anonymous calls.
+- Anonymous `/public` endpoint. This endpoint returns the current date and time. Use it to debug your web API with anonymous calls.
- Protected `/hello` endpoint. This endpoint returns the value of the `name` claim within the access token.
-To add the anonymous endpoint:
+**To add the anonymous endpoint:**
-#### [ASP.NET Core](#tab/csharpclient)
+# [ASP.NET Core](#tab/csharpclient)
-Under the */Controllers* folder, add a `PublicController.cs` file. Then add the following code snippet to the *PublicController.cs* file.
+Under the */Controllers* folder, add a *PublicController.cs* file, and then add to it the following code snippet:
```csharp using System;
namespace TodoList.Controllers
} ```
-#### [Node.js](#tab/nodejsgeneric)
+# [Node.js](#tab/nodejsgeneric)
-Add the following JavaScript code to the *app.js* file:
+In the *app.js* file, add the following JavaScript code:
```javascript
app.get('/public', (req, res) => res.send( {'date': new Date() } ));
-To add the protected endpoint:
+**To add the protected endpoint:**
-#### [ASP.NET Core](#tab/csharpclient)
+# [ASP.NET Core](#tab/csharpclient)
-Under the */Controllers* folder, add a `HelloController.cs` file. Then add the following code to the *HelloController.cs* file.
+Under the */Controllers* folder, add a *HelloController.cs* file, and then add to it the following code:
```csharp using Microsoft.AspNetCore.Authorization;
namespace TodoList.Controllers
} ```
-The `HelloController` controller is decorated with the [AuthorizeAttribute](/aspnet/core/security/authorization/simple). The Authorize attribute limits access to that controller authenticated users.
+The `HelloController` controller is decorated with the [AuthorizeAttribute](/aspnet/core/security/authorization/simple), which limits access to authenticated users only.
The controller is also decorated with the `[RequiredScope("tasks.read")]`. The [RequiredScopeAttribute](/dotnet/api/microsoft.identity.web.resource.requiredscopeattribute.-ctor) verifies that the web API is called with the right scopes, `tasks.read`.
-#### [Node.js](#tab/nodejsgeneric)
+# [Node.js](#tab/nodejsgeneric)
-Add the following JavaScript code to the *app.js* file.
+In the *app.js* file, add the following JavaScript code:
```javascript // API protected endpoint
app.get('/hello',
); ```
-The `/hello` endpoint first calls the `passport.authenticate()` function. The authentication function limits access to that controller authenticated users.
+The `/hello` endpoint first calls the `passport.authenticate()` function. The authentication function limits access to authenticated users only.
-The authentication function also verifies that the web API is called with the right scopes. The allowed scopes are located in the [configuration file](#configure-the-web-api).
+The authentication function also verifies that the web API is called with the right scopes. The allowed scopes are located in the [configuration file](#step-6-configure-the-web-api).
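Putting those pieces together, a hedged sketch of the protected endpoint might look like the following. The exact claim checks shown here are assumptions; adapt them to the scope you defined (`tasks.read` in this article).

```javascript
// Hypothetical sketch of the protected endpoint: passport validates the bearer token first,
// then the handler checks the granted scopes and returns the token's "name" claim.
app.get('/hello',
  passport.authenticate('oauth-bearer', { session: false }),
  (req, res) => {
    const scopes = (req.authInfo && req.authInfo.scp) ? req.authInfo.scp.split(' ') : [];
    if (scopes.includes('tasks.read')) {
      res.status(200).json({ name: req.authInfo.name });
    } else {
      res.status(403).json({ error: 'Insufficient scope' });
    }
  }
);
```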
-## Configure the web server
+## Step 5: Configure the web server
-In a development environment, set the web API to listen on incoming HTTP requests port number. In this example, use HTTP port 6000. The base URI of the web API will be: <http://localhost:6000>
+In a development environment, set the port number that the web API listens on for incoming HTTP requests. In this example, use HTTP port 6000. The base URI of the web API is `http://localhost:6000`.
-#### [ASP.NET Core](#tab/csharpclient)
+# [ASP.NET Core](#tab/csharpclient)
-Add the following json snippet to the *appsettings.json* file.
+Add the following JSON snippet to the *appsettings.json* file.
```json "Kestrel": {
Add the following json snippet to the *appsettings.json* file.
} ```
-#### [Node.js](#tab/nodejsgeneric)
+# [Node.js](#tab/nodejsgeneric)
Add the following JavaScript code to the *app.js* file.
app.listen(port, () => {
console.log('Listening on port ' + port); }); ```
-
-## Configure the web API
+
-In this section, you add configurations to a configuration file. The file contains information about your Azure AD B2C identity provider. The web API app uses this information to validate the access token the web app passes as a bearer token.
+## Step 6: Configure the web API
-#### [ASP.NET Core](#tab/csharpclient)
+Add configurations to a configuration file. The file contains information about your Azure AD B2C identity provider. The web API app uses this information to validate the access token that the web app passes as a bearer token.
-Under the project root folder, open the `appsettings.json` file. Add the following settings:
+# [ASP.NET Core](#tab/csharpclient)
+Under the project root folder, open the *appsettings.json* file, and then add the following settings:
```json {
Under the project root folder, open the `appsettings.json` file. Add the follow
"SignedOutCallbackPath": "/signout/<your-sign-up-in-policy>", "SignUpSignInPolicyId": "<your-sign-up-in-policy>" },
- // More setting here
+ // More settings here
} ```
-Update the following properties of the app settings:
+In the *appsettings.json* file, update the following properties:
|Section |Key |Value |
|---|---|---|
-|AzureAdB2C|Instance| The first part of your Azure AD B2C [tenant name](tenant-management.md#get-your-tenant-name). For example, `https://contoso.b2clogin.com`.|
-|AzureAdB2C|Domain| Your Azure AD B2C tenant full [tenant name](tenant-management.md#get-your-tenant-name). For example, `contoso.onmicrosoft.com`.|
-|AzureAdB2C|ClientId| The web API application ID. In the [diagram above](#app-registration-overview), it's the application with *App ID: 2*. For guidance how to get your web API application registration ID, see [Prerequisites](#prerequisites). |
-|AzureAdB2C|SignUpSignInPolicyId|The user flows, or custom policy. For guidance how to get your user flow or policy, see [Prerequisites](#prerequisites). |
-
+|AzureAdB2C|Instance| The first part of your Azure AD B2C [tenant name](tenant-management.md#get-your-tenant-name) (for example, `https://contoso.b2clogin.com`).|
+|AzureAdB2C|Domain| Your Azure AD B2C tenant full [tenant name](tenant-management.md#get-your-tenant-name) (for example, `contoso.onmicrosoft.com`).|
+|AzureAdB2C|ClientId| The web API application ID. In the [preceding diagram](#app-registration-overview), it's the application with *App ID: 2*. To learn how to get your web API application registration ID, see [Prerequisites](#prerequisites). |
+|AzureAdB2C|SignUpSignInPolicyId|The user flows, or custom policy. To learn how to get your user flow or policy, see [Prerequisites](#prerequisites). |
-#### [Node.js](#tab/nodejsgeneric)
+# [Node.js](#tab/nodejsgeneric)
-Under the project root folder, create a `config.json` file, and add the following JSON snippet.
+Under the project root folder, create a *config.json* file, and then add to it the following JSON snippet:
```json {
Under the project root folder, create a `config.json` file, and add the followin
} ```
-Update the following properties of the app settings:
+In the *config.json* file, update the following properties:
|Section |Key |Value |
|---|---|---|
-| credentials | tenantName | The first part of your Azure AD B2C [tenant name](tenant-management.md#get-your-tenant-name). For example, `contoso`.|
-| credentials |clientID | The web API application ID. In the [diagram above](#app-registration-overview), it's the application with *App ID: 2*. For guidance how to get your web API application registration ID, see [Prerequisites](#prerequisites). |
-| credentials | issuer| The token issuer `iss` claim value. Azure AD B2C by default returns the token in the following format: `https://<your-tenant-name>.b2clogin.com/<your-tenant-ID>/v2.0/`. Replace the `<your-tenant-name>` with the first part of your Azure AD B2C [tenant name](tenant-management.md#get-your-tenant-name). Replace the `<your-tenant-ID>` with your [Azure AD B2C tenant ID](tenant-management.md#get-your-tenant-id). |
-| policies | policyName | The user flows, or custom policy. The user flows, or custom policy. For guidance how to get your user flow or policy, see [Prerequisites](#prerequisites).|
-| resource | scope | The scopes of your web API application registration. For guidance how to get your web API scope, see [Prerequisites](#prerequisites).|
+| credentials | tenantName | The first part of your Azure AD B2C [tenant name](tenant-management.md#get-your-tenant-name) (for example, `contoso`).|
+| credentials |clientID | The web API application ID. In the [preceding diagram](#app-registration-overview), it's the application with *App ID: 2*. To learn how to get your web API application registration ID, see [Prerequisites](#prerequisites). |
+| credentials | issuer| The token issuer `iss` claim value. By default, Azure AD B2C returns the token in the following format: `https://<your-tenant-name>.b2clogin.com/<your-tenant-ID>/v2.0/`. Replace `<your-tenant-name>` with the first part of your Azure AD B2C [tenant name](tenant-management.md#get-your-tenant-name). Replace `<your-tenant-ID>` with your [Azure AD B2C tenant ID](tenant-management.md#get-your-tenant-id). |
+| policies | policyName | The user flows, or custom policy. To learn how to get your user flow or policy, see [Prerequisites](#prerequisites).|
+| resource | scope | The scopes of your web API application registration. To learn how to get your web API scope, see [Prerequisites](#prerequisites).|
-
+
-## Run and test the web API
+## Step 7: Run and test the web API
-Finally you run the web API with your Azure AD B2C environment settings.
+Finally, run the web API with your Azure AD B2C environment settings.
-#### [ASP.NET Core](#tab/csharpclient)
+# [ASP.NET Core](#tab/csharpclient)
-In the command shell, run the following command to start the web application:
+In the command shell, start the web app by running the following command:
```bash
dotnet run
```
-You should see the following output. This output means your app is up and running and ready to receive requests.
+You should see the following output, which means that your app is up and running and ready to receive requests.
``` Now listening on: http://localhost:6000 ```
-To stop the program, in the command shell press `Ctrl+C`. You can rerun the app using the `dotnet run` command.
+To stop the program, in the command shell, select Ctrl+C. You can rerun the app by using the `dotnet run` command.
> [!TIP]
-> Alternatively to run the `dotnet run` command, use [VS Code debugger](https://code.visualstudio.com/docs/editor/debugging). VS Code's built-in debugger helps accelerate your edit, compile and debug loop.
-
-Open a browser and go to `http://localhost:6000/public`. In the browser window, you should see the following text displayed the current date and time.
--
+> Instead of running the `dotnet run` command, you can use the [Visual Studio Code debugger](https://code.visualstudio.com/docs/editor/debugging). Visual Studio Code's built-in debugger helps accelerate your edit, compile, and debug loop.
+Open a browser and go to `http://localhost:6000/public`. In the browser window, you should see the following text displayed, along with the current date and time.
-#### [Node.js](#tab/nodejsgeneric)
+# [Node.js](#tab/nodejsgeneric)
-In the command shell, run the following command to start the web application:
+In the command shell, start the web app by running the following command:
```bash
node app.js
```
-You should see the following output. This output means your app is up and running and ready to receive requests.
+You should see the following output, which means that your app is up and running and ready to receive requests.
``` Example app listening on port 6000! ```
-To stop the program, in the command shell press `Ctrl+C`. You can rerun the app using the `node app.js` command.
+To stop the program, in the command shell, select Ctrl+C. You can rerun the app by using the `node app.js` command.
> [!TIP]
-> Alternatively to run the `node app.js` command, use [VS Code debugger](https://code.visualstudio.com/docs/editor/debugging). VS Code's built-in debugger helps accelerate your edit, compile and debug loop.
+> Instead of running the `node app.js` command, you can use the [Visual Studio Code debugger](https://code.visualstudio.com/docs/editor/debugging). Visual Studio Code's built-in debugger helps accelerate your edit, compile, and debug loop.
-Open a browser and go to `http://localhost:6000/public`. In the browser window, you should see the following text displayed the current date and time.
+Open a browser and go to `http://localhost:6000/public`. In the browser window, you should see the following text displayed, along with the current date and time.
-## Calling the web API from your app
+## Step 8: Call the web API from your app
-First try to call the protected web API endpoint without an access token. Open a browser and go to `http://localhost:6000/hello`. The API will return unauthorized HTTP error message, confirming that web API is protected with a bearer token.
+Try to call the protected web API endpoint without an access token. Open a browser and go to `http://localhost:6000/hello`. The API returns an unauthorized HTTP error message, confirming that the web API is protected with a bearer token.
Continue to configure your app to call the web API. For guidance, see the [Prerequisites](#prerequisites) section.
Continue to configure your app to call the web API. For guidance, see the [Prere
Get the complete example on GitHub:
-#### [ASP.NET Core](#tab/csharpclient)
+# [ASP.NET Core](#tab/csharpclient)
+* Get the web API by using the [Microsoft identity library](https://github.com/Azure-Samples/active-directory-aspnetcore-webapp-openidconnect-v2/tree/master/4-WebApp-your-API/4-2-B2C/TodoListService).
-* [.NET Core web api using Microsoft identity library](https://github.com/Azure-Samples/active-directory-aspnetcore-webapp-openidconnect-v2/tree/master/4-WebApp-your-API/4-2-B2C/TodoListService)
-
-#### [Node.js](#tab/nodejsgeneric)
-
-* [Node.js Web API using the Passport.js library](https://github.com/Azure-Samples/active-directory-b2c-javascript-nodejs-webapi)
--
+# [Node.js](#tab/nodejsgeneric)
+* Get the web API by using the [Passport.js library](https://github.com/Azure-Samples/active-directory-b2c-javascript-nodejs-webapi).
active-directory-b2c Enable Authentication Web App With Api Options https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/enable-authentication-web-app-with-api-options.md
Title: Enable web application that calls a web API options using Azure Active Directory B2C
-description: Enable the use of web application that calls a web API options by using several ways.
+ Title: Enable a web application that calls web API options by using Azure Active Directory B2C
+description: This article discusses how to enable the use of a web application that calls web API options in several ways.
-# Configure authentication options in a web application that calls a web API using Azure Active Directory B2C
+# Configure authentication options in a web app that calls a web API by using Azure AD B2C
-This article describes ways you can customize and enhance the Azure Active Directory B2C (Azure AD B2C) authentication experience for your web application that calls a web API. Before you start, familiarize yourself with the following articles: [Configure authentication in a sample web application](configure-authentication-sample-web-app-with-api.md) or [Enable authentication in your own web application](enable-authentication-web-app-with-api.md).
+This article describes ways you can customize and enhance the Azure Active Directory B2C (Azure AD B2C) authentication experience for your web application that calls a web API. Before you start, familiarize yourself with the following articles:
+* [Configure authentication in a sample web application](configure-authentication-sample-web-app-with-api.md)
+* [Enable authentication in your own web application](enable-authentication-web-app-with-api.md).
[!INCLUDE [active-directory-b2c-app-integration-custom-domain](../../includes/active-directory-b2c-app-integration-custom-domain.md)]
To use a custom domain and your tenant ID in the authentication URL, follow the
- Update the `Instance` entry with your custom domain. - Update the `Domain` entry with your [tenant ID](tenant-management.md#get-your-tenant-id). For more information, see [Use tenant ID](custom-domain.md#optional-use-tenant-id).
-The following JSON shows the app settings before the change:
+The app settings *before* the change are shown in the following JSON code:
-```JSon
+```json
"AzureAdB2C": { "Instance": "https://contoso.b2clogin.com", "Domain": "tenant-name.onmicrosoft.com",
The following JSON shows the app settings before the change:
} ```
-The following JSON shows the app settings after the change:
+The app settings *after* the change are shown in the following JSON code:
-```JSon
+```json
"AzureAdB2C": { "Instance": "https://login.contoso.com", "Domain": "00000000-0000-0000-0000-000000000000",
The following JSON shows the app settings after the change:
## Support advanced scenarios
-The `AddMicrosoftIdentityWebAppAuthentication` method in the Microsoft identity platform API lets developers add code for advanced authentication scenarios or subscribe to OpenIdConnect events. For example, you can subscribe to OnRedirectToIdentityProvider, which allows you to customize the authentication request your app sends to Azure AD B2C.
+The `AddMicrosoftIdentityWebAppAuthentication` method in the Microsoft identity platform API lets developers add code for advanced authentication scenarios or subscribe to OpenIdConnect events. For example, you can subscribe to OnRedirectToIdentityProvider, with which you can customize the authentication request your app sends to Azure AD B2C.
-To support advanced scenarios, open the `Startup.cs`, and in the `ConfigureServices` function, replace the `AddMicrosoftIdentityWebAppAuthentication` with the following code snippet:
+To support advanced scenarios, open the *Startup.cs* file and, in the `ConfigureServices` function, replace the `AddMicrosoftIdentityWebAppAuthentication` with the following code snippet:
```csharp
-// Configuration to sign-in users with Azure AD B2C
+// Configuration to sign in users with Azure AD B2C
//services.AddMicrosoftIdentityWebAppAuthentication(Configuration, "AzureAdB2C");
You can pass parameters between your controller and the *OnRedirectToIdentityPro
[!INCLUDE [active-directory-b2c-app-integration-login-hint](../../includes/active-directory-b2c-app-integration-login-hint.md)]
-1. If you're using a custom policy, add the required input claim as described in [Set up direct sign-in](direct-signin.md#prepopulate-the-sign-in-name).
+1. If you're using a custom policy, add the required input claim, as described in [Set up direct sign-in](direct-signin.md#prepopulate-the-sign-in-name).
+ 1. Complete the [Support advanced scenarios](#support-advanced-scenarios) procedure.
+ 1. Add the following line of code to the *OnRedirectToIdentityProvider* function:
```csharp
You can pass parameters between your controller and the *OnRedirectToIdentityPro
await Task.CompletedTask.ConfigureAwait(false); } ```
- ```
[!INCLUDE [active-directory-b2c-app-integration-custom-parameters](../../includes/active-directory-b2c-app-integration-custom-parameters.md)]
You can pass parameters between your controller and the *OnRedirectToIdentityPro
## Account controller
-If you want to customize the **Sign-in**, **Sign-up** or **Sign-out** actions, you are encouraged to create your own controller. Having your own controller allows you to pass parameters between your controller and the authentication library. The `AccountController` is part of `Microsoft.Identity.Web.UI` NuGet package, which handles the sign-in and sign-out actions. You can find its implementation in the [Microsoft Identity Web library](https://github.com/AzureAD/microsoft-identity-web/blob/master/src/Microsoft.Identity.Web.UI/Areas/MicrosoftIdentity/Controllers/AccountController.cs).
+If you want to customize a *sign-in*, *sign-up*, or *sign-out* action, we encourage you to create your own controller. When you have your own controller, you can pass parameters between your controller and the authentication library. The `AccountController` is part of `Microsoft.Identity.Web.UI` NuGet package, which handles the sign-in and sign-out actions. You can find its implementation in the [Microsoft Identity Web library](https://github.com/AzureAD/microsoft-identity-web/blob/master/src/Microsoft.Identity.Web.UI/Areas/MicrosoftIdentity/Controllers/AccountController.cs).
The following code snippet demonstrates a custom `MyAccountController` with the **SignIn** action. The action passes a parameter named `campaign_id` to the authentication library.
In the `_LoginPartial.cshtml` view, change the sign-in link to your controller
<form method="get" asp-area="MicrosoftIdentity" asp-controller="MyAccount" asp-action="SignIn"> ```
-In the `OnRedirectToIdentityProvider` in the `Startup.cs` calls, you can read the custom parameter:
+In the `OnRedirectToIdentityProvider`, in the `Startup.cs` calls, you can read the custom parameter:
```csharp private async Task OnRedirectToIdentityProviderFunc(RedirectContext context)
private async Task OnRedirectToIdentityProviderFunc(RedirectContext context)
## Role-based access control
-With [authorization in ASP.NET Core](/aspnet/core/security/authorization/introduction) you can use [role-based authorization](/aspnet/core/security/authorization/roles), [claims-based authorization](/aspnet/core/security/authorization/claims), or [policy-based authorization](/aspnet/core/security/authorization/policies) to check if the user is authorized to access a protected resource.
+With [authorization in ASP.NET Core](/aspnet/core/security/authorization/introduction) you can use [role-based authorization](/aspnet/core/security/authorization/roles), [claims-based authorization](/aspnet/core/security/authorization/claims), or [policy-based authorization](/aspnet/core/security/authorization/policies) to check to see whether the user is authorized to access a protected resource.
-In the *ConfigureServices* method, add the *AddAuthorization* method, which adds the authorization model. The following example creates a policy named `EmployeeOnly`. The policy checks that a claim `EmployeeNumber` exists. The value of the claim must be one of the following IDs: 1, 2, 3, 4 or 5.
+In the *ConfigureServices* method, add the *AddAuthorization* method, which adds the authorization model. The following example creates a policy named `EmployeeOnly`. The policy checks to see whether a claim `EmployeeNumber` exists. The value of the claim must be one of the following IDs: 1, 2, 3, 4 or 5.
```csharp services.AddAuthorization(options =>
services.AddAuthorization(options =>
}); ```
-Authorization in ASP.NET Core is controlled with [AuthorizeAttribute](/aspnet/core/security/authorization/simple) and its various parameters. In its most basic form, applying the `[Authorize]` attribute to a controller, action, or Razor Page, limits access to that component's authenticated users.
+You control authorization in ASP.NET Core by using [AuthorizeAttribute](/aspnet/core/security/authorization/simple) and its various parameters. When you apply the most basic form of the `[Authorize]` attribute to a controller, action, or Razor Page, you limit access to that component's authenticated users.
-Policies are applied to controllers by using the `[Authorize]` attribute with the policy name. The following code limits access to the `Claims` action to users authorized by the `EmployeeOnly` policy:
+You apply policies to controllers by using the `[Authorize]` attribute with the policy name. The following code limits access to the `Claims` action to users who are authorized by the `EmployeeOnly` policy:
```csharp [Authorize(Policy = "EmployeeOnly")]
public IActionResult Claims()
## Next steps

-- Learn more: [Introduction to authorization in ASP.NET Core](/aspnet/core/security/authorization/introduction)
+To learn more, see [Introduction to authorization in ASP.NET Core](/aspnet/core/security/authorization/introduction).
active-directory-b2c Enable Authentication Web App With Api https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/enable-authentication-web-app-with-api.md
Title: Enable authentication in a web that calls a web API using Azure Active Directory B2C building blocks
-description: The building blocks of an ASP.NET web application that calls a web API using Azure Active Directory B2C.
+ Title: Enable authentication in web apps that call a web API by using Azure Active Directory B2C building blocks
+description: This article discusses the building blocks of an ASP.NET web app that calls a web API by using Azure Active Directory B2C.
-# Enable authentication in your own web application that calls a web API using Azure Active Directory B2C
+# Enable authentication in web apps that call a web API by using Azure AD B2C
-This article shows you how to add Azure Active Directory B2C (Azure AD B2C) authentication to your own ASP.NET web application that calls a web API. Learn how create an ASP.NET Core web application with ASP.NET Core middleware that uses the [OpenID Connect](openid-connect.md) protocol. Use this article with [Configure authentication in a sample web application that calls a web API](configure-authentication-sample-web-app-with-api.md), replace the sample web app with your own web app.
+This article shows you how to add Azure Active Directory B2C (Azure AD B2C) authentication to an ASP.NET web application that calls a web API. Learn how to create an ASP.NET Core web application with ASP.NET Core middleware that uses the [OpenID Connect](openid-connect.md) protocol.
-This article focus on the web application project. For instructions how to create the web API, see the [to do list web API sample](https://github.com/Azure-Samples/active-directory-aspnetcore-webapp-openidconnect-v2/tree/master/4-WebApp-your-API/4-2-B2C).
+To use this article with [Configure authentication in a sample web app that calls a web API](configure-authentication-sample-web-app-with-api.md), replace the sample web app with your own web app.
+
+This article focuses on the web application project. For instructions on how to create the web API, see the [ToDo list web API sample](https://github.com/Azure-Samples/active-directory-aspnetcore-webapp-openidconnect-v2/tree/master/4-WebApp-your-API/4-2-B2C).
## Prerequisites
-Review the prerequisites and integration steps in [Configure authentication in a sample web application that calls a web API](configure-authentication-sample-web-app-with-api.md).
+Review the prerequisites and integration steps in [Configure authentication in a sample web app that calls a web API](configure-authentication-sample-web-app-with-api.md).
+
+The following sections step you through how to add Azure Active Directory B2C (Azure AD B2C) authentication to an ASP.NET web application.
-## Create a web app project
+## Step 1: Create a web app project
-You can use an existing ASP.NET MVC web app project or create new one. To create a new project, open a command shell, and enter the following command:
+You can use an existing ASP.NET Model View Controller (MVC) web app project or create new one. To create a new project, open a command shell, and then run the following command:
```dotnetcli dotnet new mvc -o mywebapp ```
-The preceding command:
-
-* Creates a new MVC web app.
-* The `-o mywebapp` parameter creates a directory named *mywebapp* with the source files for the app.
+The preceding command creates a new MVC web app. The `-o mywebapp` parameter creates a directory named *mywebapp* with the source files for the app.
-## Add the authentication libraries
+## Step 2: Add the authentication libraries
First, add the Microsoft Identity Web library. This is a set of ASP.NET Core libraries that simplify adding Azure AD B2C authentication and authorization support to your web app. The Microsoft Identity Web library sets up the authentication pipeline with cookie-based authentication. It takes care of sending and receiving HTTP authentication messages, token validation, claims extraction, and more.
Install-Package Microsoft.Identity.Web
Install-Package Microsoft.Identity.Web.UI ``` ---
-## Initiate the authentication libraries
+## Step 3: Initiate the authentication libraries
The Microsoft Identity Web middleware uses a startup class that runs when the hosting process starts. In this step, you add the necessary code to initiate the authentication libraries.
using Microsoft.Identity.Web.UI;
Because Microsoft Identity Web uses cookie-based authentication to protect your web app, the following code sets the *SameSite* cookie settings. Then it reads the `AzureADB2C` application settings and initiates the middleware controller with its view.
-Replace the `ConfigureServices(IServiceCollection services)` function with the following code snippet.
+Replace the `ConfigureServices(IServiceCollection services)` function with the following code snippet:
```csharp public void ConfigureServices(IServiceCollection services)
public void ConfigureServices(IServiceCollection services)
options.HandleSameSiteCookieCompatibility(); });
- // Configuration to sign-in users with Azure AD B2C
+ // Configuration to sign in users with Azure AD B2C
services.AddMicrosoftIdentityWebAppAuthentication(Configuration, "AzureAdB2C") // Enable token acquisition to call downstream web API .EnableTokenAcquisitionToCallDownstreamApi(new string[] { Configuration["TodoList:TodoListScope"] })
public void ConfigureServices(IServiceCollection services)
} ```
-The following code adds the cookie policy, and uses the authentication model. Replace the `Configure` function, with the following code snippet.
+The following code adds the cookie policy, and it uses the authentication model. Replace the `Configure` function with the following code snippet:
```csharp public void Configure(IApplicationBuilder app, IWebHostEnvironment env)
public void Configure(IApplicationBuilder app, IWebHostEnvironment env)
}; ```
-## Add the UI elements
+## Step 4: Add the UI elements
-To add user interface elements, use a partial view. The partial view contains logic for checking whether a user is signed in or not. If the user is not signed in, the partial view renders the sign-in button. If the user is signed in, it shows the user's display name and sign-out button.
+To add user interface elements, use a partial view. The partial view contains logic for checking to see whether a user is signed in. If the user is not signed in, the partial view renders the sign-in button. If the user is signed in, it shows the person's display name and sign-out button.
Create a new file `_LoginPartial.cshtml` inside the `Views/Shared` folder with the following code snippet:
Create a new file `_LoginPartial.cshtml` inside the `Views/Shared` folder with t
{ <ul class="nav navbar-nav navbar-right"> <li class="navbar-text">Hello @User.Identity.Name</li>
- <!-- The Account controller is not defined in this project. Instead, it is part of Microsoft.Identity.Web.UI nuget package and
- it defines some well known actions such as SignUp/In, SignOut and EditProfile-->
+ <!-- The Account controller is not defined in this project. Instead, it is part of Microsoft.Identity.Web.UI nuget package, and it defines some well-known actions, such as SignUp/In, SignOut, and EditProfile. -->
<li class="navbar-btn"> <form method="get" asp-area="MicrosoftIdentity" asp-controller="Account" asp-action="EditProfile"> <button type="submit" class="btn btn-primary" style="margin-right:5px">Edit Profile</button>
else
} ```
-Modify your `Views\Shared\_Layout.cshtml` to include the *_LoginPartial.cshtml* file you added. The *_Layout.cshtml* file is a common layout that provides the user with a consistent experience as they navigate from page to page. The layout includes common user interface elements such as the app header, and footer.
+Modify your `Views\Shared\_Layout.cshtml` to include the *_LoginPartial.cshtml* file you added. The *_Layout.cshtml* file is a common layout that provides the user with a consistent experience as they navigate from page to page. The layout includes common user interface elements such as the app header and footer.
> [!NOTE] > Depending on the .NET Core version and whether you're adding sign-in to an existing app, the UI elements might look different. If so, be sure to include *_LoginPartial* in the proper location within the page layout.
Replace this element with the following Razor code:
The preceding Razor code includes a link to the `Claims` and `TodoList` actions you'll create in the next steps.
-## Add the claims view
+## Step 5: Add the claims view
To view the ID token claims under the `Views/Home` folder, add the `Claims.cshtml` view.
To view the ID token claims under the `Views/Home` folder, add the `Claims.cshtm
</table> ```
-In this step, you add the `Claims` action that links the *Claims.cshtml* view to the *Home* controller. It uses the `[Authorize]` attribute, which limits access to the Claims action to authenticated users.
+In this step, you add the `Claims` action that links the *Claims.cshtml* view to the *Home* controller. It uses the `[Authorize]` attribute, which limits access to the Claims action to authenticated users.
-In the `/Controllers/HomeController.cs` controller, add the following action.
+In the */Controllers/HomeController.cs* controller, add the following action:
```csharp [Authorize]
Add the following `using` declaration at the beginning of the class:
using Microsoft.AspNetCore.Authorization; ```
-## Add the to do list view
+## Step 6: Add the TodoList.cshtml view
-To call the to do web api, you need to have an access token with the right scopes. In this step, you add an action to the `Home` controller. Under the `Views/Home` folder, add the `TodoList.cshtml` view.
+To call the to-do list web API, you need to have an access token with the right scopes. In this step, you add an action to the `Home` controller. Under the `Views/Home` folder, add the `TodoList.cshtml` view.
```razor @{
To call the to do web api, you need to have an access token with the right scope
</div> ```
-After you added the view, you add the `TodoList` action that links the *TodoList.cshtml* view to the *Home* controller. It uses the `[Authorize]` attribute, which limits access to the TodoList action to authenticated users.
+After you've added the view, you add the `TodoList` action that links the *TodoList.cshtml* view to the *Home* controller. It uses the `[Authorize]` attribute, which limits access to the TodoList action to authenticated users.
-In the `/Controllers/HomeController.cs` controller, add the following action class member with and inject the token acquisition service into your controller.
+In the */Controllers/HomeController.cs* controller, add the following action class member and inject the token acquisition service into your controller.
```csharp public class HomeController : Controller
public class HomeController : Controller
} ```
-Then add the following action. The action shows you how to call a web API along with the bearer token.
+Now, add the following action, which shows you how to call a web API along with the bearer token.
```csharp [Authorize]
public async Task<IActionResult> TodoListAsync()
} ```
-## Add the app settings
+## Step 7: Add the app settings
-Azure AD B2C identity provider settings are stored in the `appsettings.json` file. Open appsettings.json and add the app settings as described in the [Step 5: Configure the sample web app](configure-authentication-sample-web-app-with-api.md#step-5-configure-the-sample-web-app).
+Azure AD B2C identity provider settings are stored in the *appsettings.json* file. Open *appsettings.json*, and add the app settings, as described in "Step 5: Configure the sample web app" of [Configure authentication in a sample web app that calls a web API by using Azure AD B2C](configure-authentication-sample-web-app-with-api.md#step-5-configure-the-sample-web-app).
-## Run your application
+## Step 8: Run your application
1. Build and run the project.
-1. Browse to https://localhost:5001.
-1. Select **SignIn/Up**.
-1. Complete the sign-up or sign-in process.
+1. Go to https://localhost:5001, and then select **SignIn/Up**.
+1. Complete the sign-in or sign-up process.
-After you successfully authenticate, check your display name in the navigation bar.
+After you've been successfully authenticated in the app, check your display name in the navigation bar.
-* To view the claims the Azure AD B2C token return to your app, select **Claims**.
+* To view the claims that the Azure AD B2C token returns to your app, select **Claims**.
* To view the access token, select **To do list**. ## Next steps
-* Learn how to [customize and enhance the Azure AD B2C authentication experience for your web app](enable-authentication-web-application-options.md)
-* [Enable authentication in your own web API](enable-authentication-web-api.md)
+Learn how to:
+* [Customize and enhance the Azure AD B2C authentication experience in your web app](enable-authentication-web-application-options.md)
+* [Enable authentication in your own web API](enable-authentication-web-api.md)
active-directory-b2c Publish App To Azure Ad App Gallery https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/publish-app-to-azure-ad-app-gallery.md
# Publish your Azure AD B2C app to the Azure AD app gallery
-The Azure Active Directory (Azure AD) app gallery is a catalog of thousands of apps. The app gallery makes it easy to deploy and configure single sign-on (SSO) and automate user provisioning. You can find popular cloud apps in the gallery, such as Workday, ServiceNow, and Zoom.
+The Azure Active Directory (Azure AD) app gallery is a catalog of thousands of apps. The app gallery makes it easy to deploy and configure single sign-on (SSO) and automate user setup. You can find popular cloud apps in the gallery, such as Workday, ServiceNow, and Zoom.
-This article describes how to publish your Azure AD B2C app in the Azure AD app gallery. When your app is published, it's listed among the options customers can choose from when they're adding apps to their Azure AD tenant.
+This article describes how to publish your Azure Active Directory B2C (Azure AD B2C) app in the Azure AD app gallery. When your app is published, it's listed among the options that customers can choose from when they're adding apps to their Azure AD tenant.
Here are some benefits of adding your Azure AD B2C app to the app gallery:
Here are some benefits of adding your Azure AD B2C app to the app gallery:
- Customers can find your app in the gallery with a quick search. - App configuration is simple and minimal. - Customers get a step-by-step configuration tutorial.-- Customers can assign the app to different users and groups within their organization.
+- Customers can assign the app to various users and groups within their organization.
- The tenant administrator can grant tenant-wide admin consent to your app. ## Sign-in flow overview
-The sign-in flow involves following steps:
+The sign-in flow involves the following steps:
-1. The user navigates to the [My Apps portal](https://myapps.microsoft.com/) and selects your app, which opens the app sign-in URL.
-1. The app sign-in URL starts an authorization request and redirects the user to the Azure AD B2C authorization endpoint.
-1. The user chooses to sign in with Azure AD "Corporate" account. Azure AD B2C takes the user to the Azure AD authorization endpoint, where they sign in with their work account.
-1. If the Azure AD SSO session is active, Azure AD issues an access token without prompting the user to sign in again. If the Azure AD session expires or becomes invalid, the user is prompted to sign in again.
+1. Users go to the [My Apps portal](https://myapps.microsoft.com/) and select your app, which opens the app sign-in URL.
+1. The app sign-in URL starts an authorization request and redirects users to the Azure AD B2C authorization endpoint.
+1. Users choose to sign in with their Azure AD "Corporate" account. Azure AD B2C takes them to the Azure AD authorization endpoint, where they sign in with their work account.
+1. If the Azure AD SSO session is active, Azure AD issues an access token without prompting users to sign in again. If the Azure AD session expires or becomes invalid, users are prompted to sign in again.
-![The sign-in OpenID connect flow.](./media/publish-app-to-azure-ad-app-gallery/app-gallery-sign-in-flow.png)
+![Diagram of the sign-in OpenID connect flow.](./media/publish-app-to-azure-ad-app-gallery/app-gallery-sign-in-flow.png)
-Depending on the user's SSO session and Azure AD identity settings, the user might be prompted to:
+Depending on the users' SSO session and Azure AD identity settings, they might be prompted to:
- Provide their email address or phone number. - Enter their password or sign in with the [Microsoft authenticator app](https://www.microsoft.com/p/microsoft-authenticator/9nblgggzmcj6).-- Complete multi-factor authentication.-- Accept the consent page. Your customer's tenant administrator can [grant tenant-wide admin consent to an app](../active-directory/manage-apps/grant-admin-consent.md). When granted, the consent page won't be presented to the user.
+- Complete multifactor authentication.
+- Accept the consent page. Your customer's tenant administrator can [grant tenant-wide admin consent to an app](../active-directory/manage-apps/grant-admin-consent.md). When consent is granted, the consent page won't be presented to users.
Upon successful sign-in, Azure AD returns a token to Azure AD B2C. Azure AD B2C validates and reads the token claims, and then returns a token to your application.
Upon successful sign-in, Azure AD returns a token to Azure AD B2C. Azure AD B2C
[!INCLUDE [active-directory-b2c-customization-prerequisites-custom-policy](../../includes/active-directory-b2c-customization-prerequisites-custom-policy.md)]
-## Step 1. Register your application in Azure AD B2C
+## Step 1: Register your application in Azure AD B2C
To enable sign-in to your app with Azure AD B2C, register your app in the Azure AD B2C directory. Registering your app establishes a trust relationship between the app and Azure AD B2C.
-If you haven't already done so, [register a web application](tutorial-register-applications.md), and [enable ID token implicit grant](tutorial-register-applications.md#enable-id-token-implicit-grant). Later, you register this app with the Azure App gallery.
+If you haven't already done so, [register a web application](tutorial-register-applications.md), and [enable ID token implicit grant](tutorial-register-applications.md#enable-id-token-implicit-grant). Later, you'll register this app with the Azure app gallery.
-## Step 2. Set up sign-in for multi-tenant Azure AD
+## Step 2: Set up sign-in for multitenant Azure AD
-To allow employees and consumers from any Azure AD tenant to sign in using Azure AD B2C, follow the guidance for [setting up sign-in for multi-tenant Azure AD](identity-provider-azure-ad-multi-tenant.md?pivots=b2c-custom-policy).
+To allow employees and consumers from any Azure AD tenant to sign in by using Azure AD B2C, follow the guidance for [setting up sign-in for multitenant Azure AD](identity-provider-azure-ad-multi-tenant.md?pivots=b2c-custom-policy).
-## Step 3. Prepare your app
+## Step 3: Prepare your app
-In your app, copy the URL of the sign-in endpoint. If you use the [web application sample](configure-authentication-sample-web-app.md), the sign-in URL is `https://localhost:5001/MicrosoftIdentity/Account/SignIn?`. This URL is where the Azure AD app gallery takes the user to sign-in to your app.
+In your app, copy the URL of the sign-in endpoint. If you use the [web application sample](configure-authentication-sample-web-app.md), the sign-in URL is `https://localhost:5001/MicrosoftIdentity/Account/SignIn?`. This URL is where the Azure AD app gallery takes users to sign in to your app.
-In production environments, the app registration redirect URI is typically a publicly accessible endpoint where your app is running such as `https://woodgrovedemo.com/Account/SignIn`. The reply URL must begin with `https`.
+In production environments, the app registration redirect URI is ordinarily a publicly accessible endpoint where your app is running, such as `https://woodgrovedemo.com/Account/SignIn`. The reply URL must begin with `https`.
-## Step 4. Publish your Azure AD B2C app
+## Step 4: Publish your Azure AD B2C app
-Finally, add the multi-tenant app to the Azure AD app gallery. Follow the instructions in [Publish your app to the Azure AD app gallery](../active-directory/develop/v2-howto-app-gallery-listing.md). To add your app to the app gallery, follow these steps:
+Finally, add the multitenant app to the Azure AD app gallery. Follow the instructions in [Publish your app to the Azure AD app gallery](../active-directory/develop/v2-howto-app-gallery-listing.md). To add your app to the app gallery, do the following:
1. [Create and publish documentation](../active-directory/develop/v2-howto-app-gallery-listing.md#step-5create-and-publish-documentation). 1. [Submit your app](../active-directory/develop/v2-howto-app-gallery-listing.md#step-6submit-your-app) with the following information:
Finally, add the multi-tenant app to the Azure AD app gallery. Follow the instru
||| |What type of request do you want to submit?| Select **List my application in the gallery**.| |What feature would you like to enable when listing your application in the gallery? | Select **Federated SSO (SAML, WS-Fed & OpenID Connect)**. |
- | Select your application federation protocol| Select, **OpenID Connect & OAuth 2.0**. |
+ | Select your application federation protocol| Select **OpenID Connect & OAuth 2.0**. |
| Application (Client) ID | Provide the ID of [your Azure AD B2C application](#step-1-register-your-application-in-azure-ad-b2c). |
- | Application Sign-on URL|Provide the app sign-in URL as you configured in [Step 3. Prepare your app](#step-3-prepare-your-app).|
+ | Application sign-in URL|Provide the app sign-in URL as it's configured in [Step 3: Prepare your app](#step-3-prepare-your-app).|
| Multitenant| Select **Yes**. |
+ | | |
## Next steps
active-directory-b2c Secure Api Management https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/secure-api-management.md
# Secure an Azure API Management API with Azure AD B2C
-Learn how to restrict access to your Azure API Management (APIM) API to clients that have authenticated with Azure Active Directory B2C (Azure AD B2C). Follow the steps in this article to create and test an inbound policy in APIM that restricts access to only those requests that include a valid Azure AD B2C-issued access token.
+Learn how to restrict access to your Azure API Management API to clients that have authenticated with Azure Active Directory B2C (Azure AD B2C). Follow the instructions in this article to create and test an inbound policy in Azure API Management that restricts access to only those requests that include a valid Azure AD B2C-issued access token.
## Prerequisites
-You need the following resources in place before continuing with the steps in this article:
+Before you begin, make sure that you have the following resources in place:
-* [Azure AD B2C tenant](tutorial-create-tenant.md)
-* [Application registered](tutorial-register-applications.md) in your tenant
-* [User flows created](tutorial-create-user-flows.md) in your tenant
-* [Published API](../api-management/import-and-publish.md) in Azure API Management
-* [Postman](https://www.getpostman.com/) to test secured access (optional)
+* An [Azure AD B2C tenant](tutorial-create-tenant.md)
+* An [application that's registered in your tenant](tutorial-register-applications.md)
+* [User flows that are created in your tenant](tutorial-create-user-flows.md)
+* A [published API](../api-management/import-and-publish.md) in Azure API Management
+* (Optional) [Postman](https://www.getpostman.com/), to test secured access
## Get Azure AD B2C application ID
-When you secure an API in Azure API Management with Azure AD B2C, you need several values for the [inbound policy](../api-management/api-management-howto-policies.md) that you create in APIM. First, record the application ID of an application you've previously created in your Azure AD B2C tenant. If you're using the application you created in the prerequisites, use the application ID for *webbapp1*.
+When you secure an API in Azure API Management with Azure AD B2C, you need several values for the [inbound policy](../api-management/api-management-howto-policies.md) that you create in Azure API Management. First, record the application ID of an application you've previously created in your Azure AD B2C tenant. If you're using the application you created to satisfy the prerequisites, use the application ID for *webapp1*.
-To register an application in your Azure AD B2C tenant, you can use our new unified **App registrations** experience or our legacy **Applications (Legacy)** experience. [Learn more about the new experience](./app-registrations-training-guide.md).
+To register an application in your Azure AD B2C tenant, you can use our new, unified *App registrations* experience or our legacy *Applications* experience. Learn more about the [new registrations experience](./app-registrations-training-guide.md).
-#### [App registrations](#tab/app-reg-ga/)
+# [App registrations](#tab/app-reg-ga/)
1. Sign in to the [Azure portal](https://portal.azure.com). 1. Select the **Directory + subscription** filter in the top menu, and then select the directory that contains your Azure AD B2C tenant.
-1. In the left menu, select **Azure AD B2C**. Or, select **All services** and search for and select **Azure AD B2C**.
-1. Select **App registrations**, then select the **Owned applications** tab.
-1. Record the value in the **Application (client) ID** column for *webapp1* or another application you've previously created.
+1. On the left pane, select **Azure AD B2C**. Alternatively, you can select **All services** and then search for and select **Azure AD B2C**.
+1. Select **App registrations**, and then select the **Owned applications** tab.
+1. Record the value in the **Application (client) ID** column for *webapp1* or for another application you've previously created.
-#### [Applications (Legacy)](#tab/applications-legacy/)
+# [Applications (Legacy)](#tab/applications-legacy/)
1. Sign in to the [Azure portal](https://portal.azure.com). 1. Select the **Directory + subscription** filter in the top menu, and then select the directory that contains your Azure AD B2C tenant.
-1. In the left menu, select **Azure AD B2C**. Or, select **All services** and search for and select **Azure AD B2C**.
+1. On the left pane, select **Azure AD B2C**. Alternatively, you can select **All services** and then search for and select **Azure AD B2C**.
1. Under **Manage**, select **Applications (Legacy)**.
-1. Record the value in the **APPLICATION ID** column for *webapp1* or another application you've previously created.
+1. Record the value in the **Application ID** column for *webapp1* or for another application you've previously created.
* * *
-## Get token issuer endpoint
+## Get a token issuer endpoint
-Next, get the well-known config URL for one of your Azure AD B2C user flows. You also need the token issuer endpoint URI you want to support in Azure API Management.
+Next, get the well-known config URL for one of your Azure AD B2C user flows. You also need the token issuer endpoint URI that you want to support in Azure API Management.
-1. Browse to your Azure AD B2C tenant in the [Azure portal](https://portal.azure.com).
+1. In the [Azure portal](https://portal.azure.com), go to your Azure AD B2C tenant.
1. Under **Policies**, select **User flows**.
-1. Select an existing policy, for example *B2C_1_signupsignin1*, then select **Run user flow**.
-1. Record the URL in hyperlink displayed under the **Run user flow** heading near the top of the page. This URL is the OpenID Connect well-known discovery endpoint for the user flow, and you use it in the next section when you configure the inbound policy in Azure API Management.
+1. Select an existing policy (for example, *B2C_1_signupsignin1*), and then select **Run user flow**.
+1. Record the URL in the hyperlink that's displayed under the **Run user flow** heading near the top of the page. This URL is the OpenID Connect well-known discovery endpoint for the user flow, and you'll use it in the next section when you configure the inbound policy in Azure API Management.
- ![Well-known URI hyperlink in the Run now page of the Azure portal](media/secure-apim-with-b2c-token/portal-01-policy-link.png)
+ ![Screenshot of the well-known URI hyperlink on the "Run user flow" page of the Azure portal.](media/secure-apim-with-b2c-token/portal-01-policy-link.png)
-1. Select the hyperlink to browse to the OpenID Connect well-known configuration page.
-1. In the page that opens in your browser, record the `issuer` value, for example:
+1. Select the hyperlink to go to the OpenID Connect well-known configuration page.
+1. On the page that opens in your browser, record the `issuer` value. For example:
`https://<tenant-name>.b2clogin.com/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/v2.0/`
- You use this value in the next section when you configure your API in Azure API Management.
+ You'll use this value in the next section, when you configure your API in Azure API Management.
You should now have two URLs recorded for use in the next section: the OpenID Connect well-known configuration endpoint URL and the issuer URI. For example:
https://<tenant-name>.b2clogin.com/<tenant-name>.onmicrosoft.com/B2C_1_signupsig
https://<tenant-name>.b2clogin.com/99999999-0000-0000-0000-999999999999/v2.0/ ```
-## Configure inbound policy in Azure API Management
+## Configure the inbound policy in Azure API Management
-You're now ready to add the inbound policy in Azure API Management that validates API calls. By adding a [JWT validation](../api-management/api-management-access-restriction-policies.md#ValidateJWT) policy that verifies the audience and issuer in an access token, you can ensure that only API calls with a valid token are accepted.
+You're now ready to add the inbound policy in Azure API Management that validates API calls. By adding a [JSON web token (JWT) validation](../api-management/api-management-access-restriction-policies.md#ValidateJWT) policy that verifies the audience and issuer in an access token, you can ensure that only API calls with a valid token are accepted.
-1. Browse to your Azure API Management instance in the [Azure portal](https://portal.azure.com).
+1. In the [Azure portal](https://portal.azure.com), go to your Azure API Management instance.
1. Select **APIs**. 1. Select the API that you want to secure with Azure AD B2C. 1. Select the **Design** tab. 1. Under **Inbound processing**, select **\</\>** to open the policy code editor.
-1. Place the following `<validate-jwt>` tag inside the `<inbound>` policy.
+1. Place the following `<validate-jwt>` tag inside the `<inbound>` policy, and then do the following:
- 1. Update the `url` value in the `<openid-config>` element with your policy's well-known configuration URL.
- 1. Update the `<audience>` element with Application ID of the application you created previously in your B2C tenant (for example, *webapp1*).
- 1. Update the `<issuer>` element with the token issuer endpoint you recorded earlier.
+ a. Update the `url` value in the `<openid-config>` element with your policy's well-known configuration URL.
+ b. Update the `<audience>` element with the application ID of the application you created previously in your B2C tenant (for example, *webapp1*).
+ c. Update the `<issuer>` element with the token issuer endpoint you recorded earlier.
```xml <policies>
You're now ready to add the inbound policy in Azure API Management that validate
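A hedged sketch of what the completed policy can look like after those updates; the well-known configuration URL, audience ID, and issuer shown are placeholders for the values you recorded earlier:

```xml
<policies>
    <inbound>
        <validate-jwt header-name="Authorization" failed-validation-httpcode="401" failed-validation-error-message="Unauthorized. Access token is missing or invalid.">
            <!-- Well-known configuration URL of your Azure AD B2C user flow (placeholder) -->
            <openid-config url="https://<tenant-name>.b2clogin.com/<tenant-name>.onmicrosoft.com/B2C_1_signupsignin1/v2.0/.well-known/openid-configuration" />
            <audiences>
                <!-- Application (client) ID of the app registered in your B2C tenant (placeholder) -->
                <audience>00000000-0000-0000-0000-000000000000</audience>
            </audiences>
            <issuers>
                <!-- Token issuer endpoint recorded from the well-known configuration (placeholder) -->
                <issuer>https://<tenant-name>.b2clogin.com/99999999-0000-0000-0000-999999999999/v2.0/</issuer>
            </issuers>
        </validate-jwt>
        <base />
    </inbound>
    <backend> <base /> </backend>
    <outbound> <base /> </outbound>
    <on-error> <base /> </on-error>
</policies>
```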
## Validate secure API access
-To ensure only authenticated callers can access your API, you can validate your Azure API Management configuration by calling the API with [Postman](https://www.getpostman.com/).
+To ensure that only authenticated callers can access your API, you can validate your Azure API Management configuration by calling the API with [Postman](https://www.getpostman.com/).
-To call the API, you need both an access token issued by Azure AD B2C, and an APIM subscription key.
+To call the API, you need both an access token that's issued by Azure AD B2C and an Azure API Management subscription key.
### Get an access token
-You first need a token issued by Azure AD B2C to use in the `Authorization` header in Postman. You can get one by using the **Run now** feature of your sign-up/sign-in user flow you should have created as one of the prerequisites.
+You first need a token that's issued by Azure AD B2C to use in the `Authorization` header in Postman. You can get one by using the *Run now* feature of the sign-up/sign-in user flow that you created as one of the prerequisites.
-1. Browse to your Azure AD B2C tenant in the [Azure portal](https://portal.azure.com).
+1. In the [Azure portal](https://portal.azure.com), go to your Azure AD B2C tenant.
1. Under **Policies**, select **User flows**.
-1. Select an existing sign-up/sign-in user flow, for example *B2C_1_signupsignin1*.
+1. Select an existing sign-up/sign-in user flow (for example, *B2C_1_signupsignin1*).
1. For **Application**, select *webapp1*.
-1. For **Reply URL**, choose `https://jwt.ms`.
+1. For **Reply URL**, select `https://jwt.ms`.
1. Select **Run user flow**.
- ![Run user flow page for sign up sign in user flow in Azure portal](media/secure-apim-with-b2c-token/portal-03-user-flow.png)
+ ![Screenshot of the "Run user flow" pane for the sign-up/sign-in user flow in the Azure portal.](media/secure-apim-with-b2c-token/portal-03-user-flow.png)
1. Complete the sign-in process. You should be redirected to `https://jwt.ms`.
-1. Record the encoded token value displayed in your browser. You use this token value for the Authorization header in Postman.
+1. Record the encoded token value that's displayed in your browser. You use this token value for the Authorization header in Postman.
- ![Encoded token value displayed on jwt.ms](media/secure-apim-with-b2c-token/jwt-ms-01-token.png)
+ ![Screenshot of the encoded token value displayed on jwt.ms.](media/secure-apim-with-b2c-token/jwt-ms-01-token.png)
-### Get API subscription key
+### Get an API subscription key
A client application (in this case, Postman) that calls a published API must include a valid API Management subscription key in its HTTP requests to the API. To get a subscription key to include in your Postman HTTP request:
-1. Browse to your Azure API Management service instance in the [Azure portal](https://portal.azure.com).
+1. In the [Azure portal](https://portal.azure.com), go to your Azure API Management service instance.
1. Select **Subscriptions**.
-1. Select the ellipsis for **Product: Unlimited**, then select **Show/hide keys**.
-1. Record the **PRIMARY KEY** for the product. You use this key for the `Ocp-Apim-Subscription-Key` header in your HTTP request in Postman.
+1. Select the ellipsis (**...**) next to **Product: Unlimited**, and then select **Show/hide keys**.
+1. Record the **Primary Key** for the product. You use this key for the `Ocp-Apim-Subscription-Key` header in your HTTP request in Postman.
-![Subscription key page with Show/hide keys selected in Azure portal](media/secure-apim-with-b2c-token/portal-04-api-subscription-key.png)
+![Screenshot of the "Subscription key" page in the Azure portal, with "Show/hide keys" selected.](media/secure-apim-with-b2c-token/portal-04-api-subscription-key.png)
### Test a secure API call
-With the access token and APIM subscription key recorded, you're now ready to test whether you've correctly configured secure access to the API.
+With the access token and Azure API Management subscription key recorded, you're now ready to test whether you've correctly configured secure access to the API.
1. Create a new `GET` request in [Postman](https://www.getpostman.com/). For the request URL, specify the speakers list endpoint of the API you published as one of the prerequisites. For example:
With the access token and APIM subscription key recorded, you're now ready to te
| Key | Value | | | -- |
- | `Authorization` | Encoded token value you recorded earlier, prefixed with `Bearer ` (include the space after "Bearer") |
- | `Ocp-Apim-Subscription-Key` | APIM subscription key you recorded earlier |
+ | `Authorization` | The encoded token value you recorded earlier, prefixed with `Bearer ` (include the space after "Bearer") |
+ | `Ocp-Apim-Subscription-Key` | The Azure API Management subscription key you recorded earlier. |
+ | | |
- Your **GET** request URL and **Headers** should appear similar to:
+ Your **GET** request URL and **Headers** should appear similar to those shown in the following image:
- ![Postman UI showing the GET request URL and headers](media/secure-apim-with-b2c-token/postman-01-headers.png)
+ ![Screenshot of the Postman UI showing the GET request URL and headers.](media/secure-apim-with-b2c-token/postman-01-headers.png)
-1. Select the **Send** button in Postman to execute the request. If you've configured everything correctly, you should be presented with a JSON response with a collection of conference speakers (shown here truncated):
+1. In Postman, select the **Send** button to execute the request. If you've configured everything correctly, you should be given a JSON response with a collection of conference speakers (shown here, truncated):
```json {
With the access token and APIM subscription key recorded, you're now ready to te
### Test an insecure API call
-Now that you've made a successful request, test the failure case to ensure that calls to your API with an *invalid* token are rejected as expected. One way to perform the test is to add or change a few characters in the token value, then execute the same `GET` request as before.
+Now that you've made a successful request, test the failure case to ensure that calls to your API with an *invalid* token are rejected as expected. One way to perform the test is to add or change a few characters in the token value, and then run the same `GET` request as before.
-1. Add several characters to the token value to simulate an invalid token. For example, add "INVALID" to the token value:
+1. Add several characters to the token value to simulate an invalid token. For example, you could add "INVALID" to the token value, as shown here:
- ![Headers section of Postman UI showing INVALID added to token](media/secure-apim-with-b2c-token/postman-02-invalid-token.png)
+ ![Screenshot of the Headers section of Postman UI showing the string INVALID added to token.](media/secure-apim-with-b2c-token/postman-02-invalid-token.png)
1. Select the **Send** button to execute the request. With an invalid token, the expected result is a `401` unauthorized status code:
Now that you've made a successful request, test the failure case to ensure that
} ```
-If you see the `401` status code, you've verified that only callers with a valid access token issued by Azure AD B2C can make successful requests to your Azure API Management API.
+If you see a `401` status code, you've verified that only callers with a valid access token issued by Azure AD B2C can make successful requests to your Azure API Management API.
## Support multiple applications and issuers
-Several applications typically interact with a single REST API. To enable your API to accept tokens intended for multiple applications, add their application IDs to the `<audiences>` element in the APIM inbound policy.
+Several applications typically interact with a single REST API. To enable your API to accept tokens intended for multiple applications, add their application IDs to the `<audiences>` element in the Azure API Management inbound policy.
```xml <!-- Accept tokens intended for these recipient applications -->
Several applications typically interact with a single REST API. To enable your A
</audiences> ```
-Similarly, to support multiple token issuers, add their endpoint URIs to the `<issuers>` element in the APIM inbound policy.
+Similarly, to support multiple token issuers, add their endpoint URIs to the `<issuers>` element in the Azure API Management inbound policy.
```xml <!-- Accept tokens from multiple issuers -->
Similarly, to support multiple token issuers, add their endpoint URIs to the `<i
## Migrate to b2clogin.com
-If you have an APIM API that validates tokens issued by the legacy `login.microsoftonline.com` endpoint, you should migrate the API and the applications that call it to use tokens issued by [b2clogin.com](b2clogin.md).
+If you have an Azure API Management API that validates tokens issued by the legacy `login.microsoftonline.com` endpoint, you should migrate the API and the applications that call it to use tokens issued by [b2clogin.com](b2clogin.md).
You can follow this general process to perform a staged migration:
-1. Add support in your APIM inbound policy for tokens issued by both b2clogin.com and login.microsoftonline.com.
+1. Add support in your Azure API Management inbound policy for tokens issued by both b2clogin.com and login.microsoftonline.com.
1. Update your applications one at a time to obtain tokens from the b2clogin.com endpoint.
-1. Once all of your applications are correctly obtaining tokens from b2clogin.com, remove support for login.microsoftonline.com-issued tokens from the API.
+1. After all your applications are correctly obtaining tokens from b2clogin.com, remove support for login.microsoftonline.com-issued tokens from the API.
-The following example APIM inbound policy illustrates how to accept tokens issued by both b2clogin.com and login.microsoftonline.com. Additionally, it supports API requests from two applications.
+The following example Azure API Management inbound policy illustrates how to accept tokens that are issued by both b2clogin.com and login.microsoftonline.com. Additionally, the policy supports API requests from two applications.
```xml <policies>
The following example APIM inbound policy illustrates how to accept tokens issue
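A hedged sketch of a `<validate-jwt>` fragment that accepts both issuers and tokens for two applications; the application IDs and tenant GUID are placeholders:

```xml
<validate-jwt header-name="Authorization" failed-validation-httpcode="401" failed-validation-error-message="Unauthorized. Access token is missing or invalid.">
    <openid-config url="https://<tenant-name>.b2clogin.com/<tenant-name>.onmicrosoft.com/B2C_1_signupsignin1/v2.0/.well-known/openid-configuration" />
    <audiences>
        <!-- Accept tokens intended for either of these applications (placeholder IDs) -->
        <audience>11111111-1111-1111-1111-111111111111</audience>
        <audience>22222222-2222-2222-2222-222222222222</audience>
    </audiences>
    <issuers>
        <!-- Accept tokens from both the b2clogin.com and the legacy login.microsoftonline.com issuer -->
        <issuer>https://<tenant-name>.b2clogin.com/99999999-0000-0000-0000-999999999999/v2.0/</issuer>
        <issuer>https://login.microsoftonline.com/99999999-0000-0000-0000-999999999999/v2.0/</issuer>
    </issuers>
</validate-jwt>
```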
## Next steps
-For additional details on Azure API Management policies, see the [APIM policy reference index](../api-management/api-management-policies.md).
+For additional information about Azure API Management policies, see the [Azure API Management policy reference index](../api-management/api-management-policies.md).
-You can find information about migrating OWIN-based web APIs and their applications to b2clogin.com in [Migrate an OWIN-based web API to b2clogin.com](multiple-token-endpoints.md).
+For information about migrating OWIN-based web APIs and their applications to b2clogin.com, see [Migrate an OWIN-based web API to b2clogin.com](multiple-token-endpoints.md).
active-directory Concept Authentication Phone Options https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/authentication/concept-authentication-phone-options.md
For direct authentication using text message, you can [Configure and enable users for SMS-based authentication](howto-authentication-sms-signin.md). SMS-based sign-in is great for Frontline workers. With SMS-based sign-in, users don't need to know a username and password to access applications and services. The user instead enters their registered mobile phone number, receives a text message with a verification code, and enters that in the sign-in interface.
-Users can also verify themselves using a mobile phone or office phone as secondary form of authentication used during Azure AD Multi-Factor Authentication or self-service password reset (SSPR). Phone call verification is not available for Azure AD tenants with trial subscriptions.
+Users can also verify themselves by using a mobile phone or office phone as a secondary form of authentication during Azure AD Multi-Factor Authentication or self-service password reset (SSPR).
+
+> [!NOTE]
+> Phone call verification is not available for Azure AD tenants with trial subscriptions. For example, signing up for a trial EMS license will not provide the capability for phone call verification.
To work properly, phone numbers must be in the format *+CountryCode PhoneNumber*, for example, *+1 4251234567*.
active-directory How To Migrate Mfa Server To Azure Mfa With Federation https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/authentication/how-to-migrate-mfa-server-to-azure-mfa-with-federation.md
Run the following PowerShell cmdlet:
The command returns your current additional authentication rules for your relying party trust. Append the following rules to your current claim rules: ```console
-c:[Type == "[https://schemas.microsoft.com/ws/2008/06/identity/claims/groupsid](https://schemas.microsoft.com/ws/2008/06/identity/claims/groupsid)", Value ==
-
-"YourGroupSID"] => issue(Type = "[https://schemas.microsoft.com/claims/authnmethodsproviders](https://schemas.microsoft.com/claims/authnmethodsproviders)",
-
+c:[Type == "https://schemas.microsoft.com/ws/2008/06/identity/claims/groupsid", Value ==
+"YourGroupSID"] => issue(Type = "https://schemas.microsoft.com/claims/authnmethodsproviders",
Value = "AzureMfaAuthentication");-
-not exists([Type == "[https://schemas.microsoft.com/ws/2008/06/identity/claims/groupsid](https://schemas.microsoft.com/ws/2008/06/identity/claims/groupsid)",
-
+not exists([Type == "https://schemas.microsoft.com/ws/2008/06/identity/claims/groupsid",
Value=="YourGroupSid"]) => issue(Type = -
-"[https://schemas.microsoft.com/claims/authnmethodsproviders](https://schemas.microsoft.com/claims/authnmethodsproviders)", Value =
-
+"https://schemas.microsoft.com/claims/authnmethodsproviders", Value =
"AzureMfaServerAuthentication");ΓÇÖ ``` The following example assumes your current claim rules are configured to prompt for MFA when users connect from outside your network. This example includes the additional rules that you need to append. ```PowerShell- Set-AdfsAdditionalAuthenticationRule -AdditionalAuthenticationRules 'c:[type == -
-"[https://schemas.microsoft.com/ws/2012/01/insidecorporatenetwork](https://schemas.microsoft.com/ws/2012/01/insidecorporatenetwork)", value == "false"] => issue(type =
-
-"[https://schemas.microsoft.com/ws/2008/06/identity/claims/authenticationmethod](https://schemas.microsoft.com/ws/2008/06/identity/claims/authenticationmethod)", value =
-
-"[https://schemas.microsoft.com/claims/multipleauthn](https://schemas.microsoft.com/claims/multipleauthn)" );
-
- c:[Type == "[https://schemas.microsoft.com/ws/2008/06/identity/claims/groupsid](https://schemas.microsoft.com/ws/2008/06/identity/claims/groupsid)", Value ==
-
-"YourGroupSID"] => issue(Type = "[https://schemas.microsoft.com/claims/authnmethodsproviders](https://schemas.microsoft.com/claims/authnmethodsproviders)",
-
+"https://schemas.microsoft.com/ws/2012/01/insidecorporatenetwork", value == "false"] => issue(type =
+"https://schemas.microsoft.com/ws/2008/06/identity/claims/authenticationmethod", value =
+"https://schemas.microsoft.com/claims/multipleauthn" );
+ c:[Type == "https://schemas.microsoft.com/ws/2008/06/identity/claims/groupsid", Value ==
+"YourGroupSID"] => issue(Type = "https://schemas.microsoft.com/claims/authnmethodsproviders",
Value = "AzureMfaAuthentication");-
-not exists([Type == "[https://schemas.microsoft.com/ws/2008/06/identity/claims/groupsid](https://schemas.microsoft.com/ws/2008/06/identity/claims/groupsid)",
-
-Value=="YourGroupSid"]) => issue(Type =
-
-"[https://schemas.microsoft.com/claims/authnmethodsproviders](https://schemas.microsoft.com/claims/authnmethodsproviders)", Value =
-
+not exists([Type == "https://schemas.microsoft.com/ws/2008/06/identity/claims/groupsid",
+Value=="YourGroupSid"]) => issue(Type =
+"https://schemas.microsoft.com/claims/authnmethodsproviders", Value =
"AzureMfaServerAuthentication");ΓÇÖ- ``` + #### Set per-application claims rule This example modifies claim rules on a specific relying party trust (application), and includes the information you must append. ```PowerShell- Set-AdfsRelyingPartyTrust -TargetName AppA -AdditionalAuthenticationRules 'c:[type == -
-"[https://schemas.microsoft.com/ws/2012/01/insidecorporatenetwork](https://schemas.microsoft.com/ws/2012/01/insidecorporatenetwork)", value == "false"] => issue(type =
-
-"[https://schemas.microsoft.com/ws/2008/06/identity/claims/authenticationmethod](https://schemas.microsoft.com/ws/2008/06/identity/claims/authenticationmethod)", value =
-
-"[https://schemas.microsoft.com/claims/multipleauthn](https://schemas.microsoft.com/claims/multipleauthn)" );
-
-c:[Type == "[https://schemas.microsoft.com/ws/2008/06/identity/claims/groupsid](https://schemas.microsoft.com/ws/2008/06/identity/claims/groupsid)", Value ==
-
-"YourGroupSID"] => issue(Type = "[https://schemas.microsoft.com/claims/authnmethodsproviders](https://schemas.microsoft.com/claims/authnmethodsproviders)",
-
+"https://schemas.microsoft.com/ws/2012/01/insidecorporatenetwork", value == "false"] => issue(type =
+"https://schemas.microsoft.com/ws/2008/06/identity/claims/authenticationmethod", value =
+"https://schemas.microsoft.com/claims/multipleauthn" );
+c:[Type == "https://schemas.microsoft.com/ws/2008/06/identity/claims/groupsid", Value ==
+"YourGroupSID"] => issue(Type = "https://schemas.microsoft.com/claims/authnmethodsproviders",
Value = "AzureMfaAuthentication");-
-not exists([Type == "[https://schemas.microsoft.com/ws/2008/06/identity/claims/groupsid](https://schemas.microsoft.com/ws/2008/06/identity/claims/groupsid)",
-
-Value=="YourGroupSid"]) => issue(Type =
-
-"[https://schemas.microsoft.com/claims/authnmethodsproviders](https://schemas.microsoft.com/claims/authnmethodsproviders)", Value =
-
+not exists([Type == "https://schemas.microsoft.com/ws/2008/06/identity/claims/groupsid",
+Value=="YourGroupSid"]) => issue(Type =
+"https://schemas.microsoft.com/claims/authnmethodsproviders", Value =
"AzureMfaServerAuthentication");ΓÇÖ- ``` + ### Configure Azure AD MFA as an authentication provider in AD FS To configure Azure AD MFA for AD FS, you must configure each AD FS server. If you have multiple AD FS servers in your farm, you can configure them remotely using Azure AD PowerShell.
For example, remove the following from the rule(s):
```console
-c:[Type == "[https://schemas.microsoft.com/ws/2008/06/identity/claims/groupsid"](https://schemas.microsoft.com/ws/2008/06/identity/claims/groupsid"), Value ==
-
-"YourGroupSID"] => issue(Type = "[https://schemas.microsoft.com/claims/authnmethodsproviders"](https://schemas.microsoft.com/claims/authnmethodsproviders"),
-
+c:[Type == "https://schemas.microsoft.com/ws/2008/06/identity/claims/groupsid", Value ==
+"YourGroupSID"] => issue(Type = "https://schemas.microsoft.com/claims/authnmethodsproviders",
Value = "AzureMfaAuthentication");-
-not exists([Type == "[https://schemas.microsoft.com/ws/2008/06/identity/claims/groupsid"](https://schemas.microsoft.com/ws/2008/06/identity/claims/groupsid"),
-
+not exists([Type == "https://schemas.microsoft.com/ws/2008/06/identity/claims/groupsid",
Value=="YourGroupSid"]) => issue(Type =-
-"[https://schemas.microsoft.com/claims/authnmethodsproviders"](https://schemas.microsoft.com/claims/authnmethodsproviders"), Value =
-
+"https://schemas.microsoft.com/claims/authnmethodsproviders", Value =
"AzureMfaServerAuthentication");ΓÇÖ ```
active-directory Howto Conditional Access Policy Azure Management https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/conditional-access/howto-conditional-access-policy-azure-management.md
The following steps will help create a Conditional Access policy to require thos
1. Under **Exclude**, select **Users and groups** and choose your organization's emergency access or break-glass accounts. 1. Select **Done**. 1. Under **Cloud apps or actions** > **Include**, select **Select apps**, choose **Microsoft Azure Management**, and select **Select** then **Done**.
-1. Under **Conditions** > **Client apps (Preview)**, under **Select the client apps this policy will apply to** leave all defaults selected and select **Done**.
1. Under **Access controls** > **Grant**, select **Grant access**, **Require multi-factor authentication**, and select **Select**. 1. Confirm your settings and set **Enable policy** to **On**. 1. Select **Create** to create to enable your policy.
active-directory Howto Conditional Access Session Lifetime https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/conditional-access/howto-conditional-access-session-lifetime.md
Use the What-If tool to simulate a login from the user to the target application
To make sure that your policy works as expected, the recommended best practice is to test it before rolling it out into production. Ideally, use a test tenant to verify whether your new policy works as intended. For more information, see the article [Plan a Conditional Access deployment](plan-conditional-access.md).
+## Known issues
+- If you configure sign-in frequency for mobile devices, authentication after each sign-in frequency interval can be slow (it can take 30 seconds on average). Also, the delay can occur across various apps at the same time.
+- On iOS devices, if an app configures certificates as the first authentication factor and the app has both Sign-in frequency and [Intune mobile application management](/mem/intune/apps/app-lifecycle) policies applied, end users are blocked from signing in to the app when the policy is triggered.
+ ## Next steps
-* If you are ready to configure Conditional Access policies for your environment, see the article [Plan a Conditional Access deployment](plan-conditional-access.md).
+* If you are ready to configure Conditional Access policies for your environment, see the article [Plan a Conditional Access deployment](plan-conditional-access.md).
active-directory Directory Service Limits Restrictions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/enterprise-users/directory-service-limits-restrictions.md
Previously updated : 12/02/2019 Last updated : 07/29/2021
active-directory Domains Manage https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/enterprise-users/domains-manage.md
Previously updated : 03/12/2021 Last updated : 07/30/2021
# Managing custom domain names in your Azure Active Directory
-A domain name is an important part of the identifier for many Azure Active Directory (Azure AD) resources: it's part of a user name or email address for a user, part of the address for a group, and is sometimes part of the app ID URI for an application. A resource in Azure AD can include a domain name that's owned by the organization that contains the resource. Only a Global Administrator can manage domains in Azure AD.
+A domain name is an important part of the identifier for many Azure Active Directory (Azure AD) resources: it's part of a user name or email address for a user, part of the address for a group, and is sometimes part of the app ID URI for an application. A resource in Azure AD can include a domain name that's owned by the Azure AD organization (sometimes called a tenant) that contains the resource. Only a Global Administrator can manage domains in Azure AD.
## Set the primary domain name for your Azure AD organization
If you want to add a subdomain name such as 'europe.contoso.com' to your org
If you have already added a contoso.com domain to one Azure AD organization, you can also verify the subdomain europe.contoso.com in a different Azure AD organization. When adding the subdomain, you are prompted to add a TXT record in the DNS hosting provider. -- ## What to do if you change the DNS registrar for your custom domain name If you change the DNS registrars, there are no additional configuration tasks in Azure AD. You can continue using the domain name with Azure AD without interruption. If you use your custom domain name with Microsoft 365, Intune, or other services that rely on custom domain names in Azure AD, see the documentation for those services. ## Delete a custom domain name
-You can delete a custom domain name from your Azure AD if your organization no longer uses that domain name, or if you need to use that domain name with another Azure AD.
+You can delete a custom domain name from your Azure AD if your organization no longer uses that domain name, or if you need to use that domain name with another Azure AD organization.
To delete a custom domain name, you must first ensure that no resources in your organization rely on the domain name. You can't delete a domain name from your organization if:
To delete a custom domain name, you must first ensure that no resources in your
* Any group has an email address or proxy address that includes the domain name. * Any application in your Azure AD has an app ID URI that includes the domain name.
-You must change or delete any such resource in your Azure AD organization before you can delete the custom domain name.
+You must change or delete any such resource in your Azure AD organization before you can delete the custom domain name.
+
+> [!Note]
+> To delete the custom domain, use a Global Administrator account that is based on either the default domain (onmicrosoft.com) or a different custom domain (mydomainname.com).
### ForceDelete option You can **ForceDelete** a domain name in the [Azure AD Admin Center](https://aad.portal.azure.com) or using [Microsoft Graph API](/graph/api/domain-forcedelete?view=graph-rest-beta&preserve-view=true). These options use an asynchronous operation and update all references from the custom domain name like "user@contoso.com" to the initial default domain name such as "user@contoso.onmicrosoft.com."
-To call **ForceDelete** in the Azure portal, you must ensure that there are fewer than 1000 references to the domain name, and any references where Exchange is the provisioning service must be updated or removed in the [Exchange Admin Center](https://outlook.office365.com/ecp/). This includes Exchange Mail-Enabled Security Groups and distributed lists; for more information, see [Removing mail-enabled security groups](/Exchange/recipients/mail-enabled-security-groups#Remove%20mail-enabled%20security%20groups&preserve-view=true). Also, the **ForceDelete** operation won't succeed if either of the following is true:
+To call **ForceDelete** in the Azure portal, you must ensure that there are fewer than 1000 references to the domain name, and any references where Exchange is the provisioning service must be updated or removed in the [Exchange Admin Center](https://outlook.office365.com/ecp/). This includes Exchange Mail-Enabled Security Groups and distribution lists. For more information, see [Removing mail-enabled security groups](/Exchange/recipients/mail-enabled-security-groups#Remove%20mail-enabled%20security%20groups&preserve-view=true). Also, the **ForceDelete** operation won't succeed if either of the following is true:
* You purchased a domain via Microsoft 365 domain subscription services * You are a partner administering on behalf of another customer organization
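For the Microsoft Graph path mentioned above, a hedged sketch of the beta `forceDelete` call follows; the domain name is a placeholder, and the request shape follows the beta API, which can change:

```http
POST https://graph.microsoft.com/beta/domains/contoso.com/forceDelete
Content-Type: application/json

{
  "disableUserAccounts": true
}
```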
active-directory 4 Secure Access Groups https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/fundamentals/4-secure-access-groups.md
# Securing external access with groups
-Groups are an essential part of any access control strategy. Azure Active Directory (Azure AD) security groups and Microsoft 365 (M365) Groups can be used as the basis for securing access to resources.
+Groups are an essential part of any access control strategy. Azure Active Directory (Azure AD) security groups and Microsoft 365 Groups can be used as the basis for securing access to resources.
Groups are the best option to use as the basis for the following access control mechanisms:
Groups are the best option to use as the basis for the following access control
* Entitlement Management Access Packages
-* Access to M365 resources, Microsoft Teams, and SharePoint sites
+* Access to Microsoft 365 resources, Microsoft Teams, and SharePoint sites
Groups have the following roles:
As you develop your group strategy to secure external access to your resources,
* You can also [set up self-service group management in Azure Active Directory](../enterprise-users/groups-self-service-management.md).
- * *By default all users can create M365 Groups and groups are open for all (internal and external) users in your tenant to join*.
+ * *By default all users can create Microsoft 365 Groups and groups are open for all (internal and external) users in your tenant to join*.
* [You can restrict Microsoft 365 Group creation](/microsoft-365/solutions/manage-creation-of-groups) to the members of a particular security group. Use Windows PowerShell to configure this setting.
We recommend a naming convention for security groups that makes the purpose clea
### Types of groups
-Both Azure AD security groups and Microsoft 365 groups can be created from the Azure AD portal or the M365 admin portal. Both types can be used as the basis for securing external access:
+Both Azure AD security groups and Microsoft 365 groups can be created from the Azure AD portal or the Microsoft 365 admin portal. Both types can be used as the basis for securing external access:
|Considerations | Azure AD security groups (manual and dynamic)| Microsoft 365 Groups | | - | - | - | | What can the group contain?| Users<br>Groups<br>Service principals<br>Devices| Users only |
-| Where is the group created?| Azure AD portal<br>M365 portal (if to be mail enabled)<br>PowerShell<br>Microsoft Graph<br>End user portal| M365 portal<br>Azure AD portal<br>PowerShell<br>Microsoft Graph<br>In Microsoft 365 applications |
+| Where is the group created?| Azure AD portal<br>Microsoft 365 portal (if to be mail enabled)<br>PowerShell<br>Microsoft Graph<br>End user portal| Microsoft 365 portal<br>Azure AD portal<br>PowerShell<br>Microsoft Graph<br>In Microsoft 365 applications |
| Who creates by default?| Administrators <br>End-users| Administrators<br>End-users | | Who can be added by default?| Internal users (members)| Tenant members and guests from any organization |
-| What does it grant access to?| Only resources to which it's assigned.| All group-related resources:<br>(Group mailbox, site, team, chats, and other included M365 resources)<br>Any other resources to which group is added |
+| What does it grant access to?| Only resources to which it's assigned.| All group-related resources:<br>(Group mailbox, site, team, chats, and other included Microsoft 365 resources)<br>Any other resources to which group is added |
| Can be used with| Conditional Access<br>Entitlement Management<br>Group licensing| Conditional Access<br>Entitlement Management<br>Sensitivity labels |
Use Microsoft 365 groups to create and manage a set of Microsoft 365 resources,
Azure AD security groups can also be used to:
-* assign licenses for services such as M365, Dynamics 365, and Enterprise Mobility and Security. For more information, see [group-based licensing](./active-directory-licensing-whatis-azure-portal.md).
+* assign licenses for services such as Microsoft 365, Dynamics 365, and Enterprise Mobility and Security. For more information, see [group-based licensing](./active-directory-licensing-whatis-azure-portal.md).
-* assign elevated permissions. For more information, see [Use cloud groups to manage role assignments (preview](../roles/groups-concept.md)).
+* assign elevated permissions. For more information, see [Use Azure AD groups to manage role assignments](../roles/groups-concept.md).
To create a group [in the Azure portal](./active-directory-groups-create-azure-portal.md) navigate to Azure Active Directory, then to Groups. You can also create Azure AD security groups by using [PowerShell cmdlets](../enterprise-users/groups-settings-v2-cmdlets.md).
Hybrid organizations have both an on-premises infrastructure and an Azure AD clo
## Microsoft 365 Groups
-[Microsoft 365 Groups](/microsoft-365/admin/create-groups/office-365-groups) are the foundational membership service that drives all access across M365. They can be created from the [Azure portal](https://portal.azure.com/), or the [M365 portal](https://admin.microsoft.com/). When an M365 group is created, you grant access to a group of resources used to collaborate. See [Overview of Microsoft 365 Groups for administrators](/microsoft-365/admin/create-groups/office-365-groups) for a complete listing of these resources.
+[Microsoft 365 Groups](/microsoft-365/admin/create-groups/office-365-groups) are the foundational membership service that drives all access across Microsoft 365. They can be created from the [Azure portal](https://portal.azure.com/), or the [Microsoft 365 portal](https://admin.microsoft.com/). When a Microsoft 365 group is created, you grant access to a group of resources used to collaborate. See [Overview of Microsoft 365 Groups for administrators](/microsoft-365/admin/create-groups/office-365-groups) for a complete listing of these resources.
-M365 Groups have the following nuances for their roles
+Microsoft 365 Groups have the following nuances for their roles:
* **Owners** - Group owners can add or remove members and have unique permissions like the ability to delete conversations from the shared inbox or change group settings. Group owners can rename the group, update the description or picture and more.
M365 Groups have the following nuances for their roles
-### M365 Group settings
+### Microsoft 365 Group settings
You select email alias, privacy, and whether to enable the group for teams at the time of set-up.
active-directory Howto Export Risk Data https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/identity-protection/howto-export-risk-data.md
+
+ Title: Export and use Azure Active Directory Identity Protection data
+description: Learn how to investigate using long-term data in Azure Active Directory Identity Protection
+++++ Last updated : 07/30/2021++++++++
+# How To: Export risk data
+
+Azure AD stores reports and security signals for a defined period of time. When it comes to risk information, 90 days may not be long enough.
+
+| Report / Signal | Azure AD Free | Azure AD Premium P1 | Azure AD Premium P2 |
+| | | | |
+| Audit logs | 7 days | 30 days | 30 days |
+| Sign-ins | 7 days | 30 days | 30 days |
+| Azure AD MFA usage | 30 days | 30 days | 30 days |
+| Users at risk | 7 days | 30 days | 90 days |
+| Risky sign-ins | 7 days | 30 days | 90 days |
+
+Organizations can choose to store data for longer periods by changing diagnostic settings in Azure AD to send **RiskyUsers** and **UserRiskEvents** data to a Log Analytics workspace, archive data to a storage account, stream data to an Event Hub, or send data to a partner solution. Find these options in the **Azure portal** > **Azure Active Directory** > **Diagnostic settings** > **Edit setting**. If you don't have a diagnostic setting, follow the instructions in the article [Create diagnostic settings to send platform logs and metrics to different destinations](../../azure-monitor/essentials/diagnostic-settings.md) to create one.
+
+[ ![Diagnostic settings screen in Azure AD showing existing configuration](./media/howto-export-risk-data/change-diagnostic-setting-in-portal.png) ](./media/howto-export-risk-data/change-diagnostic-setting-in-portal.png#lightbox)
+
+## Log Analytics
+
+Log Analytics allows organizations to query data by using built-in queries or custom Kusto queries. For more information, see [Get started with log queries in Azure Monitor](../../azure-monitor/logs/get-started-queries.md).
+
+Once it's enabled, you'll find access to Log Analytics in the **Azure portal** > **Azure AD** > **Log Analytics**. The tables of most interest to Identity Protection administrators are **AADRiskyUsers** and **AADUserRiskEvents**.
+
+- AADRiskyUsers - Provides data like the **Risky users** report in Identity Protection.
+- AADUserRiskEvents - Provides data like the **Risk detections** report in Identity Protection.
+
+[ ![Log Analytics view showing a query against the AADUserRiskEvents table showing the top 5 events](./media/howto-export-risk-data/log-analytics-view-query-user-risk-events.png) ](./media/howto-export-risk-data/log-analytics-view-query-user-risk-events.png#lightbox)
+
+In the image above, the following query was run to return five of the risk detections that were triggered.
+
+```kusto
+AADUserRiskEvents
+| take 5
+```
+
+Another option is to query the AADRiskyUsers table to see all risky users.
+
+```kusto
+AADRiskyUsers
+```
+
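As one more example, a query along the following lines can summarize detections by type for a given risk level; it assumes the `RiskLevel` and `RiskEventType` columns that these tables expose:

```kusto
AADUserRiskEvents
| where RiskLevel == "high"
| summarize Count = count() by RiskEventType
| order by Count desc
```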
+> [!NOTE]
+> Log Analytics only has visibility into data as it is streamed. Events that occurred before you enabled the sending of events from Azure AD do not appear.
+
+## Storage account
+
+By routing logs to an Azure storage account, you can keep them for longer than the default retention period. For more information, see the article [Tutorial: Archive Azure AD logs to an Azure storage account](../reports-monitoring/quickstart-azure-monitor-route-logs-to-storage-account.md).
+
+## Azure Event Hubs
+
+Azure Event Hubs can receive incoming data from sources like Azure AD Identity Protection and provide real-time analysis and correlation. For more information, see the article [Tutorial: Stream Azure Active Directory logs to an Azure event hub](../reports-monitoring/tutorial-azure-monitor-stream-logs-to-event-hub.md).
+
+## Other options
+
+Organizations can also choose to [connect Azure AD data to Azure Sentinel](../../sentinel/connect-azure-ad-identity-protection.md) for further processing.
+
+Organizations can use the [Microsoft Graph API to programmatically interact with risk events](howto-identity-protection-graph-api.md).
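+
+As one example of that programmatic access, the Microsoft Graph PowerShell SDK exposes the same risk data. The following is a minimal sketch that assumes the Microsoft.Graph module is installed and that your account can consent to the IdentityRiskEvent.Read.All and IdentityRiskyUser.Read.All permissions.
+
+```powershell
+# Minimal sketch: read risk detections and risky users through Microsoft Graph.
+# Assumes the Microsoft.Graph PowerShell SDK is installed.
+Connect-MgGraph -Scopes "IdentityRiskEvent.Read.All", "IdentityRiskyUser.Read.All"
+
+# List risk detections; -Top limits how many are returned.
+Get-MgRiskDetection -Top 5 | Format-Table DetectedDateTime, UserPrincipalName, RiskEventType, RiskLevel
+
+# List users currently flagged as at risk.
+Get-MgRiskyUser -Filter "riskState eq 'atRisk'" | Format-Table UserPrincipalName, RiskLevel, RiskState
+```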
+
+## Next steps
+
+- [What is Azure Active Directory monitoring?](../reports-monitoring/overview-monitoring.md)
+- [Install and use the log analytics views for Azure Active Directory](../reports-monitoring/howto-install-use-log-analytics-views.md)
+- [Connect data from Azure Active Directory (Azure AD) Identity Protection](../../sentinel/connect-azure-ad-identity-protection.md)
+- [Azure Active Directory Identity Protection and the Microsoft Graph PowerShell SDK](howto-identity-protection-graph-api.md)
+- [Tutorial: Stream Azure Active Directory logs to an Azure event hub](../reports-monitoring/tutorial-azure-monitor-stream-logs-to-event-hub.md)
active-directory Howto Identity Protection Configure Mfa Policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/identity-protection/howto-identity-protection-configure-mfa-policy.md
For more information on Azure AD Multi-Factor Authentication, see [What is Azure
1. Under **Assignments** 1. **Users** - Choose **All users** or **Select individuals and groups** if limiting your rollout. 1. Optionally you can choose to exclude users from the policy.
- 1. Under **Controls**
- 1. Ensure the checkbox **Require Azure AD MFA registration** is checked and choose **Select**.
1. **Enforce Policy** - **On** 1. **Save**
active-directory Tenant Restrictions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/tenant-restrictions.md
Previously updated : 6/2/2021 Last updated : 7/30/2021
Large organizations that emphasize security want to move to cloud services like Microsoft 365, but need to know that their users can access only approved resources. Traditionally, companies restrict domain names or IP addresses when they want to manage access. This approach fails in a world where software as a service (or SaaS) apps are hosted in a public cloud, running on shared domain names like [outlook.office.com](https://outlook.office.com/) and [login.microsoftonline.com](https://login.microsoftonline.com/). Blocking these addresses would keep users from accessing Outlook on the web entirely, instead of merely restricting them to approved identities and resources.
-The Azure Active Directory (Azure AD) solution to this challenge is a feature called tenant restrictions. With tenant restrictions, organizations can control access to SaaS cloud applications, based on the Azure AD tenant the applications use for single sign-on. For example, you may want to allow access to your organization's Microsoft 365 applications, while preventing access to other organizations' instances of these same applications.  
+The Azure Active Directory (Azure AD) solution to this challenge is a feature called tenant restrictions. With tenant restrictions, organizations can control access to SaaS cloud applications, based on the Azure AD tenant the applications use for single sign-on. For example, you may want to allow access to your organization's Microsoft 365 applications, while preventing access to other organizations' instances of these same applications.
With tenant restrictions, organizations can specify the list of tenants that their users are permitted to access. Azure AD then only grants access to these permitted tenants.
For each outgoing request to login.microsoftonline.com, login.microsoft.com, and
The headers should include the following elements: -- For *Restrict-Access-To-Tenants*, use a value of \<permitted tenant list\>, which is a comma-separated list of tenants you want to allow users to access. Any domain that is registered with a tenant can be used to identify the tenant in this list, as well as the directory ID itself. For an example of all three ways of describing a tenant, the name/value pair to allow Contoso, Fabrikam, and Microsoft looks like: `Restrict-Access-To-Tenants: contoso.com,fabrikam.onmicrosoft.com,72f988bf-86f1-41af-91ab-2d7cd011db47`
+- For *Restrict-Access-To-Tenants*, use a value of \<permitted tenant list\>, which is a comma-separated list of tenants you want to allow users to access. Any domain that is registered with a tenant can be used to identify the tenant in this list, as well as the directory ID itself. For an example of all three ways of describing a tenant, the name/value pair to allow Contoso, Fabrikam, and Microsoft looks like: `Restrict-Access-To-Tenants: contoso.com,fabrikam.onmicrosoft.com,72f988bf-86f1-41af-91ab-2d7cd011db47`
-- For *Restrict-Access-Context*, use a value of a single directory ID, declaring which tenant is setting the tenant restrictions. For example, to declare Contoso as the tenant that set the tenant restrictions policy, the name/value pair looks like: `Restrict-Access-Context: 456ff232-35l2-5h23-b3b3-3236w0826f3d`. You **must** use your own directory ID in this spot in order to get logs for these authentications.
+- For *Restrict-Access-Context*, use a value of a single directory ID, declaring which tenant is setting the tenant restrictions. For example, to declare Contoso as the tenant that set the tenant restrictions policy, the name/value pair looks like: `Restrict-Access-Context: 456ff232-35l2-5h23-b3b3-3236w0826f3d`. You **must** use your own directory ID in this spot in order to get logs for these authentications.
> [!TIP] > You can find your directory ID in the [Azure Active Directory portal](https://aad.portal.azure.com/). Sign in as an administrator, select **Azure Active Directory**, then select **Properties**.
Microsoft 365 applications must meet two criteria to fully support tenant restri
1. The client used supports modern authentication. 2. Modern authentication is enabled as the default authentication protocol for the cloud service.
-Refer to [Updated Office 365 modern authentication](https://www.microsoft.com/en-us/microsoft-365/blog/2015/03/23/office-2013-modern-authentication-public-preview-announced/) for the latest information on which Office clients currently support modern authentication. That page also includes links to instructions for enabling modern authentication on specific Exchange Online and Skype for Business Online tenants. SharePoint Online already enables Modern authentication by default.
+Refer to [Updated Office 365 modern authentication](https://www.microsoft.com/en-us/microsoft-365/blog/2015/03/23/office-2013-modern-authentication-public-preview-announced/) for the latest information on which Office clients currently support modern authentication. That page also includes links to instructions for enabling modern authentication on specific Exchange Online and Skype for Business Online tenants. SharePoint Online already enables modern authentication by default. Teams supports only modern authentication and doesn't support legacy authentication, so this bypass concern doesn't apply to Teams.
Microsoft 365 browser-based applications (the Office Portal, Yammer, SharePoint sites, Outlook on the Web, and more) currently support tenant restrictions. Thick clients (Outlook, Skype for Business, Word, Excel, PowerPoint, and more) can enforce tenant restrictions only when using modern authentication.
active-directory Tutorial Windows Vm Access Sql https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/managed-identities-azure-resources/tutorial-windows-vm-access-sql.md
ms.devlang: na
na Previously updated : 01/14/2020 Last updated : 07/29/2021
if (accessToken != null) {
} ```
+>[!NOTE]
+>You can use managed identities with other programming options by using our [SDKs](qs-configure-sdk-windows-vm.md).
+ Alternatively, a quick way to test the end-to-end setup without having to write and deploy an app on the VM is to use PowerShell. 1. In the portal, navigate to **Virtual Machines**, go to your Windows virtual machine, and in the **Overview**, click **Connect**.
In this tutorial, you learned how to use a system-assigned managed identity to a
> [!div class="nextstepaction"] > [Azure SQL Database](../../azure-sql/database/sql-database-paas-overview.md)+
active-directory Groups Features https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/privileged-identity-management/groups-features.md
# Management capabilities for privileged access Azure AD groups (preview)
-In Privileged Identity Management (PIM), you can now assign eligibility for membership or ownership of privileged access groups. Starting with this preview, you can assign Azure Active Directory (Azure AD) built-in roles to cloud groups and use PIM to manage group member and owner eligibility and activation. For more information about role-assignable groups in Azure AD, see [Use cloud groups to manage role assignments in Azure Active Directory (preview)](../roles/groups-concept.md).
+In Privileged Identity Management (PIM), you can now assign eligibility for membership or ownership of privileged access groups. Starting with this preview, you can assign Azure Active Directory (Azure AD) built-in roles to cloud groups and use PIM to manage group member and owner eligibility and activation. For more information about role-assignable groups in Azure AD, see [Use Azure AD groups to manage role assignments](../roles/groups-concept.md).
>[!Important] > To assign a privileged access group to a role for administrative access to Exchange, Security and Compliance center, or SharePoint, use the Azure AD portal **Roles and Administrators** experience and not the Privileged Access Groups experience to make the user or group eligible for activation into the group.
active-directory Pim Configure https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/privileged-identity-management/pim-configure.md
Privileged Identity Management supports the following scenarios:
## Managing privileged access Azure AD groups (preview)
-In Privileged Identity Management (PIM), you can now assign eligibility for membership or ownership of privileged access groups. Starting with this preview, you can assign Azure Active Directory (Azure AD) built-in roles to cloud groups and use PIM to manage group member and owner eligibility and activation. For more information about role-assignable groups in Azure AD, see [Use cloud groups to manage role assignments in Azure Active Directory (preview)](../roles/groups-concept.md).
+In Privileged Identity Management (PIM), you can now assign eligibility for membership or ownership of privileged access groups. Starting with this preview, you can assign Azure Active Directory (Azure AD) built-in roles to cloud groups and use PIM to manage group member and owner eligibility and activation. For more information about role-assignable groups in Azure AD, see [Use Azure AD groups to manage role assignments](../roles/groups-concept.md).
>[!Important] > To assign a privileged access group to a role for administrative access to Exchange, Security and Compliance center, or SharePoint, use the Azure AD portal **Roles and Administrators** experience and not the Privileged Access Groups experience to make the user or group eligible for activation into the group.
active-directory Pim Resource Roles Start Access Review https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/privileged-identity-management/pim-resource-roles-start-access-review.md
The need for access to privileged Azure resource roles by employees changes over
[!INCLUDE [Azure AD Premium P2 license](../../../includes/active-directory-p2-license.md)] For more information about licenses for PIM, refer to [License requirements to use Privileged Identity Management](subscription-requirements.md). > [!Note]
-> Currently, you can scope an access review to service principals with access to Azure AD and Azure resource roles (Preview) with an Azure Active Directory Premium P2 edition active in your tenant. The licensing model for service principals will be finalized for general availability of this feature and additional licenses may be required.
+> Currently, you can scope an access review to service principals with access to Azure AD and Azure resource roles (Preview) with an Azure Active Directory Premium P2 edition active in your tenant. The licensing model for service principals will be finalized for general availability of this feature and additional licenses may be required.
## Prerequisite role
The need for access to privileged Azure resource roles by employees changes over
1. Sign in to [Azure portal](https://portal.azure.com/) with a user that is assigned to one of the prerequisite roles. 1. Select **Identity Governance**
-
+ 1. In the left menu, select **Azure resources** under **Azure AD Privileged Identity Management**. 1. Select the resource you want to manage, such as a subscription.
+ ![Azure resources - Select a resource to create an access review](./media/pim-resource-roles-start-access-review/access-review-select-resource.png)
+ 1. Under Manage, select **Access reviews**. ![Azure resources - Access reviews list showing the status of all reviews](./media/pim-resource-roles-start-access-review/access-reviews.png)
active-directory Reference Azure Monitor Sign Ins Log Schema https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/reports-monitoring/reference-azure-monitor-sign-ins-log-schema.md
na Previously updated : 05/21/2021 Last updated : 07/30/2021
This article describes the Azure Active Directory (Azure AD) sign-in log schema
"riskState":"none", "riskEventTypes":[], "resourceDisplayName":"windows azure service management api",
- "resourceId":"797f4846-ba00-4fd7-ba43-dac1f8f63013",
- "authenticationMethodsUsed":[]
- }
+ "resourceId":"797f4846-ba00-4fd7-ba43-dac1f8f63013"
+ }
} ```
This article describes the Azure Active Directory (Azure AD) sign-in log schema
| Location | - | Provides the location of the sign-in activity. | | Properties | - | Lists all the properties that are associated with sign-ins.| ++ ## Next steps * [Interpret audit logs schema in Azure Monitor](./overview-reports.md)
active-directory Admin Units Assign Roles https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/roles/admin-units-assign-roles.md
User Administrator | Can manage all aspects of users and groups, including res
The following security principals can be assigned to a role with an administrative unit scope: * Users
-* Role-assignable Azure AD groups (preview)
+* Role-assignable Azure AD groups
* Service Principal Name (SPN) ## Assign a scoped role
active-directory Groups Assign Role https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/roles/groups-assign-role.md
Previously updated : 05/14/2021 Last updated : 07/30/2021
This section describes how an IT admin can assign Azure Active Directory (Azure
- Azure AD Premium P1 or P2 license - Privileged Role Administrator or Global Administrator-- AzureADPreview module when using PowerShell
+- AzureAD module when using PowerShell
- Admin consent when using Graph explorer for Microsoft Graph API For more information, see [Prerequisites to use PowerShell or Graph Explorer](prerequisites.md).
active-directory Groups Concept https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/roles/groups-concept.md
Title: Use Azure AD groups to manage role assignments (preview) - Azure Active Directory
+ Title: Use Azure AD groups to manage role assignments - Azure Active Directory
description: Use Azure AD groups to simplify role assignment management in Azure Active Directory.
Previously updated : 06/24/2021 Last updated : 07/30/2021
-# Use Azure AD groups to manage role assignments (preview)
-
-> [!IMPORTANT]
-> Role-assignable groups is currently in PREVIEW.
-> See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+# Use Azure AD groups to manage role assignments
Azure Active Directory (Azure AD) lets you target Azure AD groups for role assignments. Assigning roles to groups can simplify the management of role assignments in Azure AD with minimal effort from your Global Administrators and Privileged Role Administrators.
Role-assignable groups are designed to help prevent potential breaches by having
- Only Global Administrators and Privileged Role Administrators can create a role-assignable group. - The membership type for role-assignable groups must be Assigned and can't be an Azure AD dynamic group. Automated population of dynamic groups could lead to an unwanted account being added to the group and thus assigned to the role. - By default, only Global Administrators and Privileged Role Administrators can manage the membership of a role-assignable group, but you can delegate the management of role-assignable groups by adding group owners.
+- The RoleManagement.ReadWrite.All Microsoft Graph permission is required to manage the membership of such groups; Group.ReadWrite.All won't work (see the sketch after this list).
- To prevent elevation of privilege, only a Privileged Authentication Administrator or a Global Administrator can change the credentials or reset MFA for members and owners of a role-assignable group. - Group nesting is not supported. A group can't be added as a member of a role-assignable group.
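+
+For example, the following minimal sketch uses the Microsoft Graph PowerShell SDK to add a member to a role-assignable group; the group and user object IDs are placeholders that you replace with your own values.
+
+```powershell
+# Minimal sketch: add a member to a role-assignable group through Microsoft Graph.
+# RoleManagement.ReadWrite.All is required; Group.ReadWrite.All alone is not sufficient.
+Connect-MgGraph -Scopes "RoleManagement.ReadWrite.All"
+
+# Placeholder object IDs - replace with your own group and user IDs.
+$groupId = "<role-assignable-group-object-id>"
+$userId  = "<user-object-id>"
+
+New-MgGroupMember -GroupId $groupId -DirectoryObjectId $userId
+```
+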
Using this feature requires an Azure AD Premium P1 license. To also use Privileg
## Next steps - [Create a role-assignable group](groups-create-eligible.md)-- [Assign Azure AD roles to groups](groups-assign-role.md)
+- [Assign Azure AD roles to groups](groups-assign-role.md)
active-directory Groups Create Eligible https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/roles/groups-create-eligible.md
Previously updated : 05/14/2021 Last updated : 07/30/2021
You can only assign a role to a group that was created with the 'isAssignableT
- Azure AD Premium P1 or P2 license - Privileged Role Administrator or Global Administrator-- AzureADPreview module when using PowerShell
+- AzureAD module when using PowerShell
- Admin consent when using Graph explorer for Microsoft Graph API For more information, see [Prerequisites to use PowerShell or Graph Explorer](prerequisites.md).
For this type of group, `isPublic` will always be false and `isSecurityEnabled`
```powershell
#Basic set up
-Install-Module -Name AzureADPreview
-Import-Module -Name AzureADPreview
-Get-Module -Name AzureADPreview
+Install-Module -Name AzureAD
+Import-Module -Name AzureAD
+Get-Module -Name AzureAD
#Connect to Azure AD. Sign in as Privileged Role Administrator or Global Administrator. Only these two roles can create a role-assignable group.
Connect-AzureAD
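#Illustrative sketch: create a role-assignable group with the AzureAD module.
#The display name, description, and mail nickname below are placeholder values.
New-AzureADMSGroup -DisplayName "Contoso_Helpdesk_Administrators" -Description "Role-assignable group for Helpdesk Administrators" -MailEnabled $false -SecurityEnabled $true -MailNickname "contosohelpdeskadmins" -IsAssignableToRole $true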
active-directory Groups Remove Assignment https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/roles/groups-remove-assignment.md
Title: Remove role assignments from a group in Azure Active Directory | Microsoft Docs
-description: Preview custom Azure AD roles for delegating identity management. Manage Azure roles in the Azure portal, PowerShell, or Graph API.
+ Title: Remove role assignments from a group in Azure Active Directory
+description: Remove role assignments from a group in Azure Active Directory using the Azure portal, PowerShell, or Microsoft Graph API.
Previously updated : 05/14/2021 Last updated : 07/30/2021
This article describes how an IT admin can remove Azure AD roles assigned to gro
- Azure AD Premium P1 or P2 license - Privileged Role Administrator or Global Administrator-- AzureADPreview module when using PowerShell
+- AzureAD module when using PowerShell
- Admin consent when using Graph explorer for Microsoft Graph API For more information, see [Prerequisites to use PowerShell or Graph Explorer](prerequisites.md).
active-directory Groups View Assignments https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/roles/groups-view-assignments.md
This section describes how the roles assigned to a group can be viewed using Azu
## Prerequisites -- AzureADPreview module when using PowerShell
+- AzureAD module when using PowerShell
- Admin consent when using Graph explorer for Microsoft Graph API For more information, see [Prerequisites to use PowerShell or Graph Explorer](prerequisites.md).
active-directory Permissions Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/roles/permissions-reference.md
Users in this role can create application registrations when the "Users can regi
> | Actions | Description | > | | | > | microsoft.directory/applications/createAsOwner | Create all types of applications, and creator is added as the first owner |
-> | microsoft.directory/appRoleAssignments/createAsOwner | Create application role assignments, with creator as the first owner |
> | microsoft.directory/oAuth2PermissionGrants/createAsOwner | Create OAuth 2.0 permission grants, with creator as the first owner | > | microsoft.directory/servicePrincipals/createAsOwner | Create service principals, with creator as the first owner |
The [Authentication Administrator](#authentication-administrator) and [Privilege
> [!div class="mx-tableFixed"] > | Actions | Description | > | | |
-> | microsoft.directory/organization/strongAuthentication/read | Read the strong authentication property for an organization |
-> | microsoft.directory/organization/strongAuthentication/update | Update strong auth properties of an organization |
+> | microsoft.directory/organization/strongAuthentication/allTasks | Manage all aspects of strong authentication properties of an organization |
> | microsoft.directory/userCredentialPolicies/create | Create credential policies for users | > | microsoft.directory/userCredentialPolicies/delete | Delete credential policies for users | > | microsoft.directory/userCredentialPolicies/standard/read | Read standard properties of credential policies for users |
Users with this role have all permissions in the Azure Information Protection se
> [!div class="mx-tableFixed"] > | Actions | Description | > | | |
+> | microsoft.directory/authorizationPolicy/standard/read | Read standard properties of authorization policies |
> | microsoft.azure.informationProtection/allEntities/allTasks | Manage all aspects of Azure Information Protection | > | microsoft.azure.serviceHealth/allEntities/allTasks | Read and configure Azure Service Health | > | microsoft.azure.supportTickets/allEntities/allTasks | Create and manage Azure support tickets |
Users in this role can enable, disable, and delete devices in Azure AD and read
> | Actions | Description | > | | | > | microsoft.directory/auditLogs/allProperties/read | Read all properties on audit logs, including privileged properties |
+> | microsoft.directory/authorizationPolicy/standard/read | Read standard properties of authorization policies |
> | microsoft.directory/bitlockerKeys/key/read | Read bitlocker metadata and key on devices | > | microsoft.directory/devices/delete | Delete devices from Azure AD | > | microsoft.directory/devices/disable | Disable devices in Azure AD |
In | Can do
> [!div class="mx-tableFixed"] > | Actions | Description | > | | |
+> | microsoft.directory/authorizationPolicy/standard/read | Read standard properties of authorization policies |
> | microsoft.directory/cloudAppSecurity/allProperties/allTasks | Create and delete all resources, and read and update standard properties in Microsoft Cloud App Security | > | microsoft.azure.informationProtection/allEntities/allTasks | Manage all aspects of Azure Information Protection | > | microsoft.azure.serviceHealth/allEntities/allTasks | Read and configure Azure Service Health |
Users in this role can manage the Desktop Analytics service. This includes the a
> [!div class="mx-tableFixed"] > | Actions | Description | > | | |
+> | microsoft.directory/authorizationPolicy/standard/read | Read standard properties of authorization policies |
> | microsoft.azure.serviceHealth/allEntities/allTasks | Read and configure Azure Service Health | > | microsoft.azure.supportTickets/allEntities/allTasks | Create and manage Azure support tickets | > | microsoft.office365.desktopAnalytics/allEntities/allTasks | Manage all aspects of Desktop Analytics |
Users in this role can read basic directory information. This role should be use
> | microsoft.directory/subscribedSkus/standard/read | Read basic properties on subscriptions | > | microsoft.directory/users/standard/read | Read basic properties on users | > | microsoft.directory/users/appRoleAssignments/read | Read application role assignments for users |
+> | microsoft.directory/users/deviceForResourceAccount/read | Read deviceForResourceAccount of users |
> | microsoft.directory/users/directReports/read | Read the direct reports for users |
+> | microsoft.directory/users/licenseDetails/read | Read license details of users |
> | microsoft.directory/users/manager/read | Read manager of users | > | microsoft.directory/users/memberOf/read | Read the group memberships of users | > | microsoft.directory/users/oAuth2PermissionGrants/read | Read delegated permission grants on users | > | microsoft.directory/users/ownedDevices/read | Read owned devices of users | > | microsoft.directory/users/ownedObjects/read | Read owned objects of users |
+> | microsoft.directory/users/photo/read | Read photo of users |
> | microsoft.directory/users/registeredDevices/read | Read registered devices of users |
+> | microsoft.directory/users/scopedRoleMemberOf/read | Read user's membership of an Azure AD role, that is scoped to an administrative unit |
## Directory Synchronization Accounts
Do not use. This role is automatically assigned to the Azure AD Connect service,
> | microsoft.directory/applications/owners/update | Update owners of applications | > | microsoft.directory/applications/permissions/update | Update exposed permissions and required permissions on all types of applications | > | microsoft.directory/applications/policies/update | Update policies of applications |
+> | microsoft.directory/authorizationPolicy/standard/read | Read standard properties of authorization policies |
> | microsoft.directory/organization/dirSync/update | Update the organization directory sync property | > | microsoft.directory/policies/create | Create policies in Azure AD | > | microsoft.directory/policies/delete | Delete policies in Azure AD |
Users in this role can read and update basic information of users, groups, and s
> | microsoft.directory/users/disable | Disable users | > | microsoft.directory/users/enable | Enable users | > | microsoft.directory/users/invalidateAllRefreshTokens | Force sign-out by invalidating user refresh tokens |
+> | microsoft.directory/users/inviteGuest | Invite guest users |
> | microsoft.directory/users/reprocessLicenseAssignment | Reprocess license assignments for users | > | microsoft.directory/users/basic/update | Update basic properties on users | > | microsoft.directory/users/manager/update | Update manager for users |
+> | microsoft.directory/users/photo/update | Update photo of users |
> | microsoft.directory/users/userPrincipalName/update | Update User Principal Name of users | ## Domain Name Administrator
Users with this role have access to all administrative features in Azure Active
> | microsoft.directory/applications/allProperties/allTasks | Create and delete applications, and read and update all properties | > | microsoft.directory/applications/synchronization/standard/read | Read provisioning settings associated with the application object | > | microsoft.directory/applicationTemplates/instantiate | Instantiate gallery applications from application templates |
-> | microsoft.directory/appRoleAssignments/allProperties/allTasks | Create and delete appRoleAssignments, and read and update all properties |
> | microsoft.directory/auditLogs/allProperties/read | Read all properties on audit logs, including privileged properties | > | microsoft.directory/authorizationPolicy/allProperties/allTasks | Manage all aspects of authorization policies | > | microsoft.directory/bitlockerKeys/key/read | Read bitlocker metadata and key on devices |
Users in this role can read settings and administrative information across Micro
> [!div class="mx-tableFixed"] > | Actions | Description | > | | |
-> | microsoft.directory/applications/applicationProxy/read | Read all application proxy properties |
+> | microsoft.directory/accessReviews/allProperties/read | |
+> | microsoft.directory/administrativeUnits/allProperties/read | |
+> | microsoft.directory/applications/allProperties/read | Read all properties (including privileged properties) on all types of applications |
> | microsoft.directory/applications/synchronization/standard/read | Read provisioning settings associated with the application object | > | microsoft.directory/auditLogs/allProperties/read | Read all properties on audit logs, including privileged properties |
+> | microsoft.directory/authorizationPolicy/standard/read | Read standard properties of authorization policies |
> | microsoft.directory/bitlockerKeys/key/read | Read bitlocker metadata and key on devices |
+> | microsoft.directory/cloudAppSecurity/allProperties/read | |
> | microsoft.directory/connectors/allProperties/read | Read all properties of application proxy connectors | > | microsoft.directory/connectorGroups/allProperties/read | Read all properties of application proxy connector groups |
+> | microsoft.directory/contacts/allProperties/read | |
+> | microsoft.directory/devices/allProperties/read | Read all properties on devices |
+> | microsoft.directory/directoryRoles/allProperties/read | |
+> | microsoft.directory/directoryRoleTemplates/allProperties/read | |
+> | microsoft.directory/domains/allProperties/read | Read all properties of domains |
> | microsoft.directory/entitlementManagement/allProperties/read | Read all properties in Azure AD entitlement management |
+> | microsoft.directory/groups/allProperties/read | Read all properties (including privileged properties) on Security groups and Microsoft 365 groups, including role-assignable groups |
+> | microsoft.directory/groupSettings/allProperties/read | |
+> | microsoft.directory/groupSettingTemplates/allProperties/read | |
+> | microsoft.directory/identityProtection/allProperties/read | Read all resources in Azure AD Identity Protection |
+> | microsoft.directory/loginOrganizationBranding/allProperties/read | |
+> | microsoft.directory/oAuth2PermissionGrants/allProperties/read | |
+> | microsoft.directory/organization/allProperties/read | |
+> | microsoft.directory/permissionGrantPolicies/standard/read | Read standard properties of permission grant policies |
+> | microsoft.directory/policies/allProperties/read | |
+> | microsoft.directory/conditionalAccessPolicies/allProperties/read | |
+> | microsoft.directory/crossTenantAccessPolicies/allProperties/read | |
> | microsoft.directory/deviceManagementPolicies/standard/read | Read standard properties on device management application policies | > | microsoft.directory/deviceRegistrationPolicy/standard/read | Read standard properties on device registration policies |
-> | microsoft.directory/groups/hiddenMembers/read | Read hidden members of Security groups and Microsoft 365 groups, including role-assignable groups |
-> | microsoft.directory/organization/strongAuthentication/read | Read the strong authentication property for an organization |
-> | microsoft.directory/policies/standard/read | Read basic properties on policies |
-> | microsoft.directory/policies/owners/read | Read owners of policies |
-> | microsoft.directory/policies/policyAppliedTo/read | Read policies.policyAppliedTo property |
-> | microsoft.directory/conditionalAccessPolicies/standard/read | Read conditional access for policies |
-> | microsoft.directory/conditionalAccessPolicies/owners/read | Read the owners of conditional access policies |
-> | microsoft.directory/conditionalAccessPolicies/policyAppliedTo/read | Read the "applied to" property for conditional access policies |
+> | microsoft.directory/privilegedIdentityManagement/allProperties/read | Read all resources in Privileged Identity Management |
> | microsoft.directory/provisioningLogs/allProperties/read | Read all properties of provisioning logs |
-> | microsoft.directory/servicePrincipals/authentication/read | Read authentication properties on service principals |
+> | microsoft.directory/roleAssignments/allProperties/read | |
+> | microsoft.directory/roleDefinitions/allProperties/read | |
+> | microsoft.directory/scopedRoleMemberships/allProperties/read | |
+> | microsoft.directory/serviceAction/getAvailableExtentionProperties | Can perform the getAvailableExtentionProperties service action |
+> | microsoft.directory/servicePrincipals/allProperties/read | Read all properties (including privileged properties) on servicePrincipals |
+> | microsoft.directory/servicePrincipalCreationPolicies/standard/read | Read standard properties of service principal creation policies |
> | microsoft.directory/servicePrincipals/synchronization/standard/read | Read provisioning settings associated with your service principal | > | microsoft.directory/signInReports/allProperties/read | Read all properties on sign-in reports, including privileged properties |
-> | microsoft.directory/users/strongAuthentication/read | Read the strong authentication property for users |
+> | microsoft.directory/subscribedSkus/allProperties/read | |
+> | microsoft.directory/users/allProperties/read | Read all properties of users |
> | microsoft.directory/verifiableCredentials/configuration/contracts/cards/allProperties/read | Read a verifiable credential card | > | microsoft.directory/verifiableCredentials/configuration/contracts/allProperties/read | Read a verifiable credential contract | > | microsoft.directory/verifiableCredentials/configuration/allProperties/read | Read configuration required to create and manage verifiable credentials |
Users in this role can manage Azure Active Directory B2B guest user invitations
> | microsoft.directory/users/inviteGuest | Invite guest users | > | microsoft.directory/users/standard/read | Read basic properties on users | > | microsoft.directory/users/appRoleAssignments/read | Read application role assignments for users |
+> | microsoft.directory/users/deviceForResourceAccount/read | Read deviceForResourceAccount of users |
> | microsoft.directory/users/directReports/read | Read the direct reports for users |
+> | microsoft.directory/users/licenseDetails/read | Read license details of users |
> | microsoft.directory/users/manager/read | Read manager of users | > | microsoft.directory/users/memberOf/read | Read the group memberships of users | > | microsoft.directory/users/oAuth2PermissionGrants/read | Read delegated permission grants on users | > | microsoft.directory/users/ownedDevices/read | Read owned devices of users | > | microsoft.directory/users/ownedObjects/read | Read owned objects of users |
+> | microsoft.directory/users/photo/read | Read photo of users |
> | microsoft.directory/users/registeredDevices/read | Read registered devices of users |
+> | microsoft.directory/users/scopedRoleMemberOf/read | Read user's membership of an Azure AD role, that is scoped to an administrative unit |
## Helpdesk Administrator
This role can create and manage all security groups. However, Intune Administrat
> | microsoft.directory/groups.security/visibility/update | Update the visibility property on Security groups, excluding role-assignable groups | > | microsoft.directory/users/basic/update | Update basic properties on users | > | microsoft.directory/users/manager/update | Update manager for users |
+> | microsoft.directory/users/photo/update | Update photo of users |
> | microsoft.azure.supportTickets/allEntities/allTasks | Create and manage Azure support tickets | > | microsoft.intune/allEntities/allTasks | Manage all aspects of Microsoft Intune | > | microsoft.office365.supportTickets/allEntities/allTasks | Create and manage Microsoft 365 service requests |
Users with this role have global permissions to manage settings within Microsoft
> [!div class="mx-tableFixed"] > | Actions | Description | > | | |
+> | microsoft.directory/authorizationPolicy/standard/read | Read standard properties of authorization policies |
> | microsoft.office365.serviceHealth/allEntities/allTasks | Read and configure Service Health in the Microsoft 365 admin center | > | microsoft.office365.supportTickets/allEntities/allTasks | Create and manage Microsoft 365 service requests | > | microsoft.office365.webPortal/allEntities/standard/read | Read basic properties on all resources in the Microsoft 365 admin center |
Users in this role can add, remove, and update license assignments on users, gro
> [!div class="mx-tableFixed"] > | Actions | Description | > | | |
+> | microsoft.directory/authorizationPolicy/standard/read | Read standard properties of authorization policies |
> | microsoft.directory/groups/assignLicense | Assign product licenses to groups for group-based licensing | > | microsoft.directory/groups/reprocessLicenseAssignment | Reprocess license assignments for group-based licensing | > | microsoft.directory/users/assignLicense | Manage user licenses |
Do not use. This role has been deprecated and will be removed from Azure AD in t
> | microsoft.directory/users/basic/update | Update basic properties on users | > | microsoft.directory/users/manager/update | Update manager for users | > | microsoft.directory/users/password/update | Reset passwords for all users |
+> | microsoft.directory/users/photo/update | Update photo of users |
> | microsoft.directory/users/userPrincipalName/update | Update User Principal Name of users | > | microsoft.azure.serviceHealth/allEntities/allTasks | Read and configure Azure Service Health | > | microsoft.azure.supportTickets/allEntities/allTasks | Create and manage Azure support tickets |
Do not use. This role has been deprecated and will be removed from Azure AD in t
> | microsoft.directory/users/basic/update | Update basic properties on users | > | microsoft.directory/users/manager/update | Update manager for users | > | microsoft.directory/users/password/update | Reset passwords for all users |
+> | microsoft.directory/users/photo/update | Update photo of users |
> | microsoft.directory/users/userPrincipalName/update | Update User Principal Name of users | > | microsoft.azure.serviceHealth/allEntities/allTasks | Read and configure Azure Service Health | > | microsoft.azure.supportTickets/allEntities/allTasks | Create and manage Azure support tickets |
Users with this role can manage role assignments in Azure Active Directory, as w
> | Actions | Description | > | | | > | microsoft.directory/administrativeUnits/allProperties/allTasks | Create and manage administrative units (including members) |
-> | microsoft.directory/appRoleAssignments/allProperties/allTasks | Create and delete appRoleAssignments, and read and update all properties |
> | microsoft.directory/authorizationPolicy/allProperties/allTasks | Manage all aspects of authorization policies | > | microsoft.directory/directoryRoles/allProperties/allTasks | Create and delete directory roles, and read and update all properties | > | microsoft.directory/groupsAssignableToRoles/create | Create role-assignable groups |
Users with this role can view usage reporting data and the reports dashboard in
> | microsoft.directory/signInReports/allProperties/read | Read all properties on sign-in reports, including privileged properties | > | microsoft.azure.serviceHealth/allEntities/allTasks | Read and configure Azure Service Health | > | microsoft.office365.network/performance/allProperties/read | Read all network performance properties in the Microsoft 365 admin center |
-> | microsoft.office365.serviceHealth/allEntities/allTasks | Read and configure Service Health in the Microsoft 365 admin center |
> | microsoft.office365.usageReports/allEntities/allProperties/read | Read Office 365 usage reports | > | microsoft.office365.webPortal/allEntities/standard/read | Read basic properties on all resources in the Microsoft 365 admin center |
Windows Defender ATP and EDR | Assign roles<br>Manage machine groups<br>Configur
> | | | > | microsoft.directory/applications/policies/update | Update policies of applications | > | microsoft.directory/auditLogs/allProperties/read | Read all properties on audit logs, including privileged properties |
+> | microsoft.directory/authorizationPolicy/standard/read | Read standard properties of authorization policies |
> | microsoft.directory/bitlockerKeys/key/read | Read bitlocker metadata and key on devices | > | microsoft.directory/entitlementManagement/allProperties/read | Read all properties in Azure AD entitlement management | > | microsoft.directory/identityProtection/allProperties/read | Read all resources in Azure AD Identity Protection |
Users with this role can manage alerts and have global read-only access on secur
> | Actions | Description | > | | | > | microsoft.directory/auditLogs/allProperties/read | Read all properties on audit logs, including privileged properties |
+> | microsoft.directory/authorizationPolicy/standard/read | Read standard properties of authorization policies |
> | microsoft.directory/cloudAppSecurity/allProperties/allTasks | Create and delete all resources, and read and update standard properties in Microsoft Cloud App Security | > | microsoft.directory/identityProtection/allProperties/allTasks | Create and delete all resources, and read and update standard properties in Azure AD Identity Protection | > | microsoft.directory/privilegedIdentityManagement/allProperties/read | Read all resources in Privileged Identity Management |
Windows Defender ATP and EDR | View and investigate alerts. When you turn on rol
> | Actions | Description | > | | | > | microsoft.directory/auditLogs/allProperties/read | Read all properties on audit logs, including privileged properties |
+> | microsoft.directory/authorizationPolicy/standard/read | Read standard properties of authorization policies |
> | microsoft.directory/bitlockerKeys/key/read | Read bitlocker metadata and key on devices | > | microsoft.directory/entitlementManagement/allProperties/read | Read all properties in Azure AD entitlement management | > | microsoft.directory/identityProtection/allProperties/read | Read all resources in Azure AD Identity Protection |
Users in this role can manage all aspects of the Microsoft Teams workload via th
> [!div class="mx-tableFixed"] > | Actions | Description | > | | |
+> | microsoft.directory/authorizationPolicy/standard/read | Read standard properties of authorization policies |
> | microsoft.directory/groups/hiddenMembers/read | Read hidden members of Security groups and Microsoft 365 groups, including role-assignable groups | > | microsoft.directory/groups.unified/create | Create Microsoft 365 groups, excluding role-assignable groups | > | microsoft.directory/groups.unified/delete | Delete Microsoft 365 groups, excluding role-assignable groups |
Users in this role can manage aspects of the Microsoft Teams workload related to
> [!div class="mx-tableFixed"] > | Actions | Description | > | | |
+> | microsoft.directory/authorizationPolicy/standard/read | Read standard properties of authorization policies |
> | microsoft.azure.serviceHealth/allEntities/allTasks | Read and configure Azure Service Health | > | microsoft.azure.supportTickets/allEntities/allTasks | Create and manage Azure support tickets | > | microsoft.office365.serviceHealth/allEntities/allTasks | Read and configure Service Health in the Microsoft 365 admin center |
Users in this role can troubleshoot communication issues within Microsoft Teams
> [!div class="mx-tableFixed"] > | Actions | Description | > | | |
+> | microsoft.directory/authorizationPolicy/standard/read | Read standard properties of authorization policies |
> | microsoft.azure.serviceHealth/allEntities/allTasks | Read and configure Azure Service Health | > | microsoft.office365.serviceHealth/allEntities/allTasks | Read and configure Service Health in the Microsoft 365 admin center | > | microsoft.office365.skypeForBusiness/allEntities/allTasks | Manage all aspects of Skype for Business Online |
Users in this role can troubleshoot communication issues within Microsoft Teams
> [!div class="mx-tableFixed"] > | Actions | Description | > | | |
+> | microsoft.directory/authorizationPolicy/standard/read | Read standard properties of authorization policies |
> | microsoft.azure.serviceHealth/allEntities/allTasks | Read and configure Azure Service Health | > | microsoft.office365.serviceHealth/allEntities/allTasks | Read and configure Service Health in the Microsoft 365 admin center | > | microsoft.office365.skypeForBusiness/allEntities/allTasks | Manage all aspects of Skype for Business Online |
Users with this role can create users, and manage all aspects of users with some
> [!div class="mx-tableFixed"] > | Actions | Description | > | | |
-> | microsoft.directory/appRoleAssignments/create | Create application role assignments |
-> | microsoft.directory/appRoleAssignments/delete | Delete application role assignments |
-> | microsoft.directory/appRoleAssignments/basic/update | Update basic properties of application role assignments |
> | microsoft.directory/contacts/create | Create contacts | > | microsoft.directory/contacts/delete | Delete contacts | > | microsoft.directory/contacts/basic/update | Update basic properties on contacts |
Users with this role can create users, and manage all aspects of users with some
> | microsoft.directory/users/basic/update | Update basic properties on users | > | microsoft.directory/users/manager/update | Update manager for users | > | microsoft.directory/users/password/update | Reset passwords for all users |
+> | microsoft.directory/users/photo/update | Update photo of users |
> | microsoft.directory/users/userPrincipalName/update | Update User Principal Name of users | > | microsoft.azure.serviceHealth/allEntities/allTasks | Read and configure Azure Service Health | > | microsoft.azure.supportTickets/allEntities/allTasks | Create and manage Azure support tickets |
active-directory Prerequisites https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/roles/prerequisites.md
Previously updated : 05/13/2021 Last updated : 07/30/2021
To use PowerShell commands to do the following:
You must have the following module installed: -- [AzureAD](https://www.powershellgallery.com/packages/AzureAD) version 2.0.2.130 or later
+- [AzureAD](https://www.powershellgallery.com/packages/AzureAD) version 2.0.2.137 or later
#### Check AzureAD version
You should see output similar to the following:
```powershell Version Name Repository Description - - - --
-2.0.2.130 AzureAD PSGallery Azure Active Directory V2 General Availability M...
+2.0.2.137 AzureAD PSGallery Azure Active Directory V2 General Availability M...
``` #### Install AzureAD
To use AzureAD, follow these steps to make sure it is imported into the current
```powershell ModuleType Version Name ExportedCommands - - - -
- Binary 2.0.2.130 AzureAD {Add-AzureADApplicationOwner, Add-AzureADDeviceRegisteredO...
+ Binary 2.0.2.137 AzureAD {Add-AzureADApplicationOwner, Add-AzureADDeviceRegisteredO...
``` ## AzureADPreview module
active-directory Benq Iam Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/benq-iam-tutorial.md
+
+ Title: 'Tutorial: Azure Active Directory single sign-on (SSO) integration with BenQ IAM | Microsoft Docs'
+description: Learn how to configure single sign-on between Azure Active Directory and BenQ IAM.
++++++++ Last updated : 07/30/2021++++
+# Tutorial: Azure Active Directory single sign-on (SSO) integration with BenQ IAM
+
+In this tutorial, you'll learn how to integrate BenQ IAM with Azure Active Directory (Azure AD). When you integrate BenQ IAM with Azure AD, you can:
+
+* Control in Azure AD who has access to BenQ IAM.
+* Enable your users to be automatically signed-in to BenQ IAM with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
+
+## Prerequisites
+
+To get started, you need the following items:
+
+* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+* BenQ IAM single sign-on (SSO) enabled subscription.
+
+## Scenario description
+
+In this tutorial, you configure and test Azure AD SSO in a test environment.
+
+* BenQ IAM supports **SP and IDP** initiated SSO.
+
+## Add BenQ IAM from the gallery
+
+To configure the integration of BenQ IAM into Azure AD, you need to add BenQ IAM from the gallery to your list of managed SaaS apps.
+
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
+1. On the left navigation pane, select the **Azure Active Directory** service.
+1. Navigate to **Enterprise Applications** and then select **All Applications**.
+1. To add a new application, select **New application**.
+1. In the **Add from the gallery** section, type **BenQ IAM** in the search box.
+1. Select **BenQ IAM** from the results panel and then add the app. Wait a few seconds while the app is added to your tenant.
+
+## Configure and test Azure AD SSO for BenQ IAM
+
+Configure and test Azure AD SSO with BenQ IAM using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in BenQ IAM.
+
+To configure and test Azure AD SSO with BenQ IAM, perform the following steps:
+
+1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
+ 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
+ 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
+1. **[Configure BenQ IAM SSO](#configure-benq-iam-sso)** - to configure the single sign-on settings on application side.
+ 1. **[Create BenQ IAM test user](#create-benq-iam-test-user)** - to have a counterpart of B.Simon in BenQ IAM that is linked to the Azure AD representation of the user.
+1. **[Test SSO](#test-sso)** - to verify whether the configuration works.
+
+## Configure Azure AD SSO
+
+Follow these steps to enable Azure AD SSO in the Azure portal.
+
+1. In the Azure portal, on the **BenQ IAM** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
+
+ ![Edit Basic SAML Configuration](common/edit-urls.png)
+
+1. On the **Basic SAML Configuration** section, if you wish to configure the application in **IDP** initiated mode, perform the following steps:
+
+ a. In the **Identifier** text box, type a URL using the following pattern:
+ `https://service-portaltest.benq.com/saml/init/<ID>`
+
+ b. In the **Reply URL** text box, type a URL using the following pattern:
+ `https://service-portaltest.benq.com/saml/consume/<ID>`
+
+1. Click **Set additional URLs** and perform the following step if you wish to configure the application in **SP** initiated mode:
+
+ In the **Sign-on URL** text box, type the URL:
+ `https://service-portal.benq.com/login`
+
+ > [!NOTE]
+ > These values are not real. Update these values with the actual Identifier and Reply URL. Contact [BenQ IAM Client support team](mailto:benqcare.us@benq.com) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
+
+1. The BenQ IAM application expects the SAML assertions in a specific format, which requires you to add custom attribute mappings to your SAML token attributes configuration. The following screenshot shows the list of default attributes.
+
+ ![image](common/default-attributes.png)
+
+1. In addition to the above, the BenQ IAM application expects a few more attributes to be passed back in the SAML response; these are shown below. These attributes are also prepopulated, but you can review them to meet your requirements.
+
+ | Name | Source Attribute |
+ | --- | --- |
+ | displayName | user.displayname |
+ | externalId | user.objectid |
+
+1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Certificate (Base64)** and select **Download** to download the certificate and save it on your computer.
+
+ ![The Certificate download link](common/certificatebase64.png)
+
+1. In the **Set up BenQ IAM** section, copy the appropriate URL(s) based on your requirement.
+
+ ![Copy configuration URLs](common/copy-configuration-urls.png)
+
+### Create an Azure AD test user
+
+In this section, you'll create a test user in the Azure portal called B.Simon.
+
+1. From the left pane in the Azure portal, select **Azure Active Directory**, select **Users**, and then select **All users**.
+1. Select **New user** at the top of the screen.
+1. In the **User** properties, follow these steps:
+ 1. In the **Name** field, enter `B.Simon`.
+ 1. In the **User name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`.
+ 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box.
+ 1. Click **Create**.
+
+### Assign the Azure AD test user
+
+In this section, you'll enable B.Simon to use Azure single sign-on by granting access to BenQ IAM.
+
+1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
+1. In the applications list, select **BenQ IAM**.
+1. In the app's overview page, find the **Manage** section and select **Users and groups**.
+1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
+1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
+1. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you see the "Default Access" role selected.
+1. In the **Add Assignment** dialog, click the **Assign** button.
+
+## Configure BenQ IAM SSO
+
+To configure single sign-on on the **BenQ IAM** side, you need to send the downloaded **Certificate (Base64)** and the appropriate copied URLs from the Azure portal to the [BenQ IAM support team](mailto:benqcare.us@benq.com). They configure this setting to have the SAML SSO connection set properly on both sides.
+
+### Create BenQ IAM test user
+
+In this section, you create a user called Britta Simon in BenQ IAM. Work with [BenQ IAM support team](mailto:benqcare.us@benq.com) to add the users in the BenQ IAM platform. Users must be created and activated before you use single sign-on.
+
+## Test SSO
+
+In this section, you test your Azure AD single sign-on configuration with the following options.
+
+#### SP initiated:
+
+* Click on **Test this application** in Azure portal. This will redirect to the BenQ IAM Sign-on URL, where you can initiate the login flow.
+
+* Go to the BenQ IAM Sign-on URL directly and initiate the login flow from there.
+
+#### IDP initiated:
+
+* Click on **Test this application** in Azure portal and you should be automatically signed in to the BenQ IAM for which you set up the SSO.
+
+You can also use Microsoft My Apps to test the application in any mode. When you click the BenQ IAM tile in the My Apps, if configured in SP mode you would be redirected to the application sign on page for initiating the login flow and if configured in IDP mode, you should be automatically signed in to the BenQ IAM for which you set up the SSO. For more information about the My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md).
+
+## Next steps
+
+Once you configure BenQ IAM, you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
active-directory Cirrus Identity Bridge For Azure Ad Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/cirrus-identity-bridge-for-azure-ad-tutorial.md
Previously updated : 07/23/2021 Last updated : 07/30/2021
To get started, you need the following items:
In this tutorial, you configure and test Azure AD SSO in a test environment.
-* Cirrus Identity Bridge for Azure AD supports **SP** initiated SSO.
+* Cirrus Identity Bridge for Azure AD supports **SP** and **IDP** initiated SSO.
## Add Cirrus Identity Bridge for Azure AD from the gallery
Follow these steps to enable Azure AD SSO in the Azure portal.
a. In the **Identifier (Entity ID)** text box, type a URL using the following pattern: `https://<SUBDOMAIN>.cirrusidentity.com/bridge`
- b. In the **Sign on URL** text box, type a value using the following pattern:
+ b. In the **Reply URL** text box, type a URL using the following pattern:
+ `https://<NAME>.proxy.cirrusidentity.com/module.php/saml/sp/saml2-acs.php/<NAME>_proxy`
+
+1. Click **Set additional URLs** and perform the following step if you wish to configure the application in **SP** initiated mode:
+
+ In the **Sign on URL** text box, type a value using the following pattern:
`<CUSTOMER_LOGIN_URL>` > [!NOTE]
- > These values are not real. Update these values with the actual Identifier and Sign on URL. Contact [Cirrus Identity Bridge for Azure AD Client support team](https://www.cirrusidentity.com/resources/service-desk) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
+ > These values are not real. Update these values with the actual Identifier, Reply URL and Sign on URL. Contact [Cirrus Identity Bridge for Azure AD Client support team](https://www.cirrusidentity.com/resources/service-desk) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
1. Cirrus Identity Bridge for Azure AD application expects the SAML assertions in a specific format, which requires you to add custom attribute mappings to your SAML token attributes configuration. The following screenshot shows the list of default attributes.
Follow these steps to enable Azure AD SSO in the Azure portal.
+1. In addition to the above, the Cirrus Identity Bridge for Azure AD application expects a few more attributes to be passed back in the SAML response, which are shown below. These attributes are also pre-populated, but you can review them per your requirements.
- | Name | Source attribute|
+ | Name | Source Attribute|
| | | | displayname | user.displayname |
In this section, you create a user called Britta Simon in Cirrus Identity Bridge
In this section, you test your Azure AD single sign-on configuration with following options.
-* Click on **Test this application** in Azure portal. This will redirect to Cirrus Identity Bridge for Azure AD Sign-on URL where you can initiate the login flow.
+#### SP initiated:
+
+* Click on **Test this application** in the Azure portal. This will redirect to the Cirrus Identity Bridge for Azure AD Sign-on URL, where you can initiate the login flow.
* Go to Cirrus Identity Bridge for Azure AD Sign-on URL directly and initiate the login flow from there.
-* You can use Microsoft My Apps. When you click the Cirrus Identity Bridge for Azure AD tile in the My Apps, this will redirect to Cirrus Identity Bridge for Azure AD Sign-on URL. For more information about the My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md).
+#### IDP initiated:
+
+* Click on **Test this application** in Azure portal and you should be automatically signed in to the Cirrus Identity Bridge for Azure AD for which you set up the SSO.
+
+You can also use Microsoft My Apps to test the application in any mode. When you click the Cirrus Identity Bridge for Azure AD tile in My Apps, if configured in SP mode you are redirected to the application sign-on page to initiate the login flow, and if configured in IDP mode, you should be automatically signed in to the Cirrus Identity Bridge for Azure AD for which you set up the SSO. For more information about the My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md).
## Next steps
-Once you configure Cirrus Identity Bridge for Azure AD you can enforce session control, which protects exfiltration and infiltration of your organizationΓÇÖs sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
+Once you configure Cirrus Identity Bridge for Azure AD you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
active-directory Directprint Io Cloud Print Administration Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/directprint-io-cloud-print-administration-tutorial.md
Previously updated : 07/19/2021 Last updated : 07/30/2021
In this tutorial, you configure and test Azure AD SSO in a test environment.
* directprint.io Cloud Print Administration supports **IDP** initiated SSO.
+* directprint.io Cloud Print Administration supports **Just In Time** user provisioning.
+ ## Add directprint.io Cloud Print Administration from the gallery To configure the integration of directprint.io Cloud Print Administration into Azure AD, you need to add directprint.io Cloud Print Administration from the gallery to your list of managed SaaS apps.
Follow these steps to enable Azure AD SSO in the Azure portal.
1. On the **Basic SAML Configuration** section the application is pre-configured in IDP initiated mode and the necessary URLs are already pre-populated with Azure. The user needs to save the configuration by clicking the **Save** button.
-1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Certificate (Base64)** and select **Download** to download the certificate and save it on your computer.
+1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, click the copy button to copy the **App Federation Metadata Url** and save it on your computer.
- ![The Certificate download link](common/certificatebase64.png)
+ ![The Certificate download link](common/copy-metadataurl.png)
1. On the **Set up directprint.io Cloud Print Administration** section, copy the appropriate URL(s) based on your requirement.
In this section, you'll enable B.Simon to use Azure single sign-on by granting a
## Configure directprint.io Cloud Print Administration SSO
-To configure single sign-on on **directprint.io Cloud Print Administration** side, you need to send the downloaded **Certificate (Base64)** and appropriate copied URLs from Azure portal to [directprint.io Cloud Print Administration support team](mailto:support@directprint.io). They set this setting to have the SAML SSO connection set properly on both sides.
+To configure single sign-on on the **directprint.io Cloud Print Administration** side, you need to send the **App Federation Metadata Url** to the [directprint.io Cloud Print Administration support team](mailto:support@directprint.io). They configure this setting so that the SAML SSO connection is set properly on both sides.
### Create directprint.io Cloud Print Administration test user
-In this section, you create a user called Britta Simon in directprint.io Cloud Print Administration. Work with [directprint.io Cloud Print Administration support team](mailto:support@directprint.io) to add the users in the directprint.io Cloud Print Administration platform. Users must be created and activated before you use single sign-on.
+In this section, a user called B.Simon is created in directprint.io Cloud Print Administration. directprint.io Cloud Print Administration supports just-in-time user provisioning, which is enabled by default. There is no action item for you in this section. If a user doesn't already exist in directprint.io Cloud Print Administration, a new one is created after authentication.
## Test SSO
active-directory Fresh Relevance Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/fresh-relevance-tutorial.md
+
+ Title: 'Tutorial: Azure Active Directory single sign-on (SSO) integration with Fresh Relevance | Microsoft Docs'
+description: Learn how to configure single sign-on between Azure Active Directory and Fresh Relevance.
++++++++ Last updated : 07/26/2021++++
+# Tutorial: Azure Active Directory single sign-on (SSO) integration with Fresh Relevance
+
+In this tutorial, you'll learn how to integrate Fresh Relevance with Azure Active Directory (Azure AD). When you integrate Fresh Relevance with Azure AD, you can:
+
+* Control in Azure AD who has access to Fresh Relevance.
+* Enable your users to be automatically signed-in to Fresh Relevance with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
+
+## Prerequisites
+
+To get started, you need the following items:
+
+* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+* Fresh Relevance single sign-on (SSO) enabled subscription.
+
+## Scenario description
+
+In this tutorial, you configure and test Azure AD SSO in a test environment.
+
+* Fresh Relevance supports **IDP** initiated SSO.
+
+* Fresh Relevance supports **Just In Time** user provisioning.
+
+## Add Fresh Relevance from the gallery
+
+To configure the integration of Fresh Relevance into Azure AD, you need to add Fresh Relevance from the gallery to your list of managed SaaS apps.
+
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
+1. On the left navigation pane, select the **Azure Active Directory** service.
+1. Navigate to **Enterprise Applications** and then select **All Applications**.
+1. To add new application, select **New application**.
+1. In the **Add from the gallery** section, type **Fresh Relevance** in the search box.
+1. Select **Fresh Relevance** from results panel and then add the app. Wait a few seconds while the app is added to your tenant.
+
+## Configure and test Azure AD SSO for Fresh Relevance
+
+Configure and test Azure AD SSO with Fresh Relevance using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in Fresh Relevance.
+
+To configure and test Azure AD SSO with Fresh Relevance, perform the following steps:
+
+1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
+ 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
+ 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
+1. **[Configure Fresh Relevance SSO](#configure-fresh-relevance-sso)** - to configure the single sign-on settings on application side.
+ 1. **[Create Fresh Relevance test user](#create-fresh-relevance-test-user)** - to have a counterpart of B.Simon in Fresh Relevance that is linked to the Azure AD representation of user.
+1. **[Test SSO](#test-sso)** - to verify whether the configuration works.
+
+## Configure Azure AD SSO
+
+Follow these steps to enable Azure AD SSO in the Azure portal.
+
+1. In the Azure portal, on the **Fresh Relevance** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
+
+ ![Edit Basic SAML Configuration](common/edit-urls.png)
+
+1. On the **Basic SAML Configuration** section, if you have **Service Provider metadata file**, perform the following steps:
+
+ a. Click **Upload metadata file**.
+
+ ![Metadata file](common/upload-metadata.png)
+
+ b. Click on **folder logo** to select the metadata file and click **Upload**.
+
+ ![image](common/browse-upload-metadata.png)
+
+ c. Once the metadata file is successfully uploaded, the **Identifier** and **Reply URL** values get auto populated in Basic SAML Configuration section:
+
+ > [!Note]
+ > If the **Identifier** and **Reply URL** values are not getting auto populated, then fill in the values manually according to your requirement.
+
+ d. In the **Relay State** textbox, type a value using the following pattern:
+ `<ID>`
+
+1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, click the copy button to copy the **App Federation Metadata Url** and save it on your computer.
+
+ ![The Certificate download link](common/copy-metadataurl.png)
+
+### Create an Azure AD test user
+
+In this section, you'll create a test user in the Azure portal called B.Simon.
+
+1. From the left pane in the Azure portal, select **Azure Active Directory**, select **Users**, and then select **All users**.
+1. Select **New user** at the top of the screen.
+1. In the **User** properties, follow these steps:
+ 1. In the **Name** field, enter `B.Simon`.
+ 1. In the **User name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`.
+ 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box.
+ 1. Click **Create**.
+
+### Assign the Azure AD test user
+
+In this section, you'll enable B.Simon to use Azure single sign-on by granting access to Fresh Relevance.
+
+1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
+1. In the applications list, select **Fresh Relevance**.
+1. In the app's overview page, find the **Manage** section and select **Users and groups**.
+1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
+1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
+1. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you see "Default Access" role selected.
+1. In the **Add Assignment** dialog, click the **Assign** button.
+
+## Configure Fresh Relevance SSO
+
+1. Log in to your Fresh Relevance company site as an administrator.
+
+1. Go to **Settings** > **All Settings** > **Security and Privacy** and click **SAML/Azure AD Single Sign-On**.
+
+ ![Screenshot shows settings of SAML account.](./media/fresh-relevance-tutorial/settings.png "Account")
+
+1. On the **SAML/Single Sign-On Configuration** page, select the **Enable SAML SSO for this account** checkbox and click the **Create new IdP Configuration** button.
+
+ ![Screenshot shows to create new IdP Configuration.](./media/fresh-relevance-tutorial/configuration.png "Configuration")
+
+1. In the **SAML IdP Configuration** page, perform the following steps:
+
+ ![Screenshot shows SAML IdP Configuration Page.](./media/fresh-relevance-tutorial/metadata.png "SAML Configuration")
+
+ ![Screenshot shows the IdP Metadata XML.](./media/fresh-relevance-tutorial/mapping.png "Metadata XML")
+
+ a. Copy **Entity ID** value, paste this value into the **Identifier (Entity ID)** text box in the **Basic SAML Configuration** section in the Azure portal.
+
+ b. Copy the **Assertion Consumer Service (ACS) URL** value and paste it into the **Reply URL** text box in the **Basic SAML Configuration** section in the Azure portal.
+
+ c. Copy **RelayState Value** and paste this value into the **Relay State** text box in the **Basic SAML Configuration** section in the Azure portal.
+
+ d. Click **Download SP Metadata XML** and upload the metadata file in the **Basic SAML Configuration** section in the Azure portal.
+
+ e. Copy the **App Federation Metadata Url** from the Azure portal into Notepad, paste the content into the **IdP Metadata XML** textbox, and click the **Save** button.
+
+ f. If successful, information such as the **Entity ID** of your IdP will be displayed in the **IdP Entity ID** textbox.
+
+ g. In the **Attribute Mapping** section, manually fill in the required fields with the values you copied from the Azure portal.
+
+ h. In the **General Configuration** section, enable **Allow Just In Time (JIT) Account Creation** and click **Save**.
+
+ > [!NOTE]
+ > If these parameters are not correctly mapped, login/account creation will not be successful and an error will be shown. To temporarily show enhanced attribute debugging information on sign-on failure, select the **Show Debugging Information** checkbox.
+
+### Create Fresh Relevance test user
+
+In this section, a user called Britta Simon is created in Fresh Relevance. Fresh Relevance supports just-in-time user provisioning, which is enabled by default. There is no action item for you in this section. If a user doesn't already exist in Fresh Relevance, a new one is created after authentication.
+
+## Test SSO
+
+In this section, you test your Azure AD single sign-on configuration with the following options.
+
+* Click on **Test this application** in the Azure portal and you should be automatically signed in to the Fresh Relevance for which you set up the SSO.
+
+* You can use Microsoft My Apps. When you click the Fresh Relevance tile in the My Apps, you should be automatically signed in to the Fresh Relevance for which you set up the SSO. For more information about the My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md).
+
+## Next steps
+
+Once you configure Fresh Relevance you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
active-directory Paloaltoadmin Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/paloaltoadmin-tutorial.md
Follow these steps to enable Azure AD SSO in the Azure portal.
| | | > [!NOTE]
- > The _adminrole_ value should be same as the role name which is configured in the **Palo Alto Networks** as mentioned in step 9.
+ > The **Name** value, shown above as _adminrole_, should be the same value as the _Admin role attribute_, which is configured in step 12 of the **[Configure Palo Alto Networks - Admin UI SSO](#configure-palo-alto-networksadmin-ui-sso)** section. The **Source Attribute** value, shown above as _customadmin_, should be the same value as the _Admin Role Profile Name_, which is configured in step 9 of the **[Configure Palo Alto Networks - Admin UI SSO](#configure-palo-alto-networksadmin-ui-sso)** section.
> [!NOTE] > For more information about the attributes, see the following articles:
In this section, you test your Azure AD single sign-on configuration with follow
## Next steps
-Once you configure Palo Alto Networks - Admin UI you can enforce session control, which protects exfiltration and infiltration of your organizationΓÇÖs sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-any-app).
+Once you configure Palo Alto Networks - Admin UI you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-any-app).
aks Cluster Autoscaler https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/cluster-autoscaler.md
This article showed you how to automatically scale the number of AKS nodes. You
[aks-scale-apps]: tutorial-kubernetes-scale.md [aks-support-policies]: support-policies.md [aks-upgrade]: upgrade-cluster.md
-[aks-view-master-logs]: ./view-control-plane-logs.md#enable-resource-logs
+[aks-view-master-logs]: monitor-aks.md#configure-monitoring
[autoscaler-profile-properties]: #using-the-autoscaler-profile [azure-cli-install]: /cli/azure/install-azure-cli [az-aks-show]: /cli/azure/aks#az_aks_show
aks Configure Azure Cni https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/configure-azure-cni.md
This article shows you how to use *Azure CNI* networking to create and use a vir
* `Microsoft.Network/virtualNetworks/subnets/join/action` * `Microsoft.Network/virtualNetworks/subnets/read` * The subnet assigned to the AKS node pool cannot be a [delegated subnet](../virtual-network/subnet-delegation-overview.md).
+* If you provide your own subnet, you have to manage the Network Security Groups (NSGs) associated with that subnet. AKS will not modify any of the NSGs associated with that subnet. You must also ensure the security rules in the NSGs allow traffic between the node and pod CIDR ranges.
## Plan IP addressing for your cluster
aks Configure Kubenet https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/configure-kubenet.md
With *Azure CNI*, each pod receives an IP address in the IP subnet, and can dire
* Route tables and user-defined routes are required for using kubenet, which adds complexity to operations. * Direct pod addressing isn't supported for kubenet due to kubenet design. * Unlike Azure CNI clusters, multiple kubenet clusters can't share a subnet.
+* If you provide your own subnet, you have to manage the Network Security Groups (NSGs) associated with that subnet. AKS will not modify any of the NSGs associated with that subnet. You must also ensure the security rules in the NSGs allow traffic between the node and pod CIDR.
* Features **not supported on kubenet** include: * [Azure network policies](use-network-policies.md#create-an-aks-cluster-and-enable-network-policy), but Calico network policies are supported on kubenet * [Windows node pools](./windows-faq.md)
aks Csi Secrets Store Driver https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/csi-secrets-store-driver.md
az extension update --name aks-preview
## Create an AKS cluster with Secrets Store CSI Driver support
-> [!NOTE]
-> If you plan to provide access to the cluster via a user-assigned or system-assigned managed identity, enable Azure Active Directory on your cluster with the flag `enable-managed-identity`. See [Use managed identities in Azure Kubernetes Service][aks-managed-identity] for more.
- First, create an Azure resource group: ```azurecli-interactive
-az group create -n myResourceGroup -l westus
+az group create -n myResourceGroup -l eastus2
```
-To create an AKS cluster with Secrets Store CSI Driver capability, use the [az aks create][az-aks-create] command with the addon `azure-keyvault-secrets-provider`:
+To create an AKS cluster with Secrets Store CSI Driver capability, use the [az aks create][az-aks-create] command with the addon `azure-keyvault-secrets-provider`.
```azurecli-interactive
-az aks create -n myAKSCluster -g myResourceGroup --enable-addons azure-keyvault-secrets-provider
+az aks create -n myAKSCluster -g myResourceGroup --enable-addons azure-keyvault-secrets-provider --enable-managed-identity
```
-## Upgrade an existing AKS cluster with Secrets Store CSI Driver support
+The addon creates a user-assigned managed identity, named `azurekeyvaultsecretsprovider-*`, for the purpose of accessing Azure resources. We can use this identity to connect to the Azure Key Vault where our secrets will be stored. Take note of the identity's `clientId` in the output:
+
+```json
+...,
+ "addonProfiles": {
+ "azureKeyvaultSecretsProvider": {
+ ...,
+ "identity": {
+ "clientId": "<client-id>",
+ ...
+ }
+ }
+```
-> [!NOTE]
-> If you plan to provide access to the cluster via a user-assigned or system-assigned managed identity, enable Azure Active Directory on your cluster with the flag `enable-managed-identity`. See [Use managed identities in Azure Kubernetes Service][aks-managed-identity] for more.
+## Upgrade an existing AKS cluster with Secrets Store CSI Driver support
To upgrade an existing AKS cluster with Secrets Store CSI Driver capability, use the [az aks enable-addons][az-aks-enable-addons] command with the addon `azure-keyvault-secrets-provider`:
To upgrade an existing AKS cluster with Secrets Store CSI Driver capability, use
az aks enable-addons --addons azure-keyvault-secrets-provider --name myAKSCluster --resource-group myResourceGroup ```
+As stated above, the addon creates a user-assigned managed identity that can be used to authenticate to Azure Key Vault.
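+
+If you later need to look up the `clientId` of that addon-created identity (for example, after enabling the addon on an existing cluster), one way is to query the cluster with the Azure CLI. This is a sketch using the example resource names from this article:
+
+```azurecli
+# Returns the clientId of the managed identity created by the azure-keyvault-secrets-provider addon
+az aks show -g myResourceGroup -n myAKSCluster \
+  --query addonProfiles.azureKeyvaultSecretsProvider.identity.clientId -o tsv
+```
+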
+ ## Verify Secrets Store CSI Driver installation
-These commands will install the Secrets Store CSI Driver and the Azure Key Vault provider on your nodes. Verify by listing all pods with the secrets-store-csi-driver and secrets-store-provider-azure labels in the kube-system namespace and ensuring your output looks similar to the following:
+The above will install the Secrets Store CSI Driver and the Azure Key Vault provider on your nodes. Verify completion by listing all pods with the secrets-store-csi-driver and secrets-store-provider-azure labels in the kube-system namespace, and ensure your output looks similar to the following:
```bash kubectl get pods -n kube-system -l 'app in (secrets-store-csi-driver, secrets-store-provider-azure)'
kube-system aks-secrets-store-provider-azure-6pqmv 1/1 Running 0
kube-system aks-secrets-store-provider-azure-f5qlm 1/1 Running 0 4m25s ``` - ## Enabling and disabling autorotation > [!NOTE]
az aks update -g myResourceGroup -n myAKSCluster2 --disable-secret-rotation
## Create or use an existing Azure Key Vault
-In addition to an AKS cluster, you will need an Azure Key Vault resource containing the secret content. To deploy an Azure Key Vault instance, follow these steps:
+In addition to an AKS cluster, you will need an Azure Key Vault resource containing the secret content. Keep in mind that the Key Vault's name must be globally unique.
-1. [Create a key vault][create-key-vault]
-2. [Set a secret in a key vault][set-secret-key-vault]
+```azurecli
+az keyvault create -n <keyvault-name> -g myResourceGroup -l eastus2
+```
+
+Azure Key Vault can store keys, secrets, and certificates. In this example, we'll set a plain text secret called `ExampleSecret`:
+
+```azurecli
+az keyvault secret set --vault-name <keyvault-name> -n ExampleSecret --value MyAKSExampleSecret
+```
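+
+Optionally, you can read the secret back to confirm it was stored as expected. This sketch assumes the `ExampleSecret` name used above:
+
+```azurecli
+# Display the value of the example secret
+az keyvault secret show --vault-name <keyvault-name> -n ExampleSecret --query value -o tsv
+```
+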
Take note of the following properties for use in the next section: - Name of secret object in Key Vault-- Secret content type (secret, key, cert)-- Name of Key Vault resource
+- Object type (secret, key, or certificate)
+- Name of your Azure Key Vault resource
- Azure Tenant ID the Subscription belongs to ## Provide identity to access Azure Key Vault
-The example in this article uses a Service Principal, but the Azure Key Vault provider offers four methods of access. Review them and choose the one that best fits your use case. Be aware additional steps may be required depending on the chosen method, such as granting the Service Principal permissions to get secrets from key vault.
+Use the values from the previous steps to set permissions, allowing the addon-created managed identity to access keyvault objects:
-- [Service Principal][service-principal-access]-- [Pod Identity][pod-identity-access]-- [User-assigned Managed Identity][ua-mi-access]-- [System-assigned Managed Identity][sa-mi-access]
+```azurecli
+az keyvault set-policy -n <keyvault-name> --<object-type>-permissions get --spn <client-id>
+```
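+
+For example, because the `ExampleSecret` created earlier is a secret object, the command might look like the following, with `<client-id>` being the addon identity noted above:
+
+```azurecli
+# Grant the addon-created identity permission to read secrets from the vault
+az keyvault set-policy -n <keyvault-name> --secret-permissions get --spn <client-id>
+```
+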
## Create and apply your own SecretProviderClass object
-To use and configure the Secrets Store CSI driver for your AKS cluster, create a SecretProviderClass custom resource.
-
-Here is an example making use of a Service Principal to access the key vault:
+To use and configure the Secrets Store CSI driver for your AKS cluster, create a SecretProviderClass custom resource. Ensure the `objects` array matches the objects you've stored in the Azure Key Vault instance:
```yml apiVersion: secrets-store.csi.x-k8s.io/v1alpha1 kind: SecretProviderClass metadata:
- name: azure-kvname
+ name: <keyvault-name>
spec: provider: azure parameters:
- usePodIdentity: "false" # [OPTIONAL] if not provided, will default to "false"
- keyvaultName: "kvname" # the name of the KeyVault
- cloudName: "" # [OPTIONAL for Azure] if not provided, azure environment will default to AzurePublicCloud
+ keyvaultName: "<keyvault-name>" # The name of the Azure Key Vault
+ useVMManagedIdentity: "true"
+ userAssignedIdentityID: "<client-id>" # The clientId of the addon-created managed identity
+ cloudName: "" # [OPTIONAL for Azure] if not provided, Azure environment will default to AzurePublicCloud
objects: | array: - |
- objectName: secret1
- objectType: secret # object types: secret, key or cert
- objectVersion: "" # [OPTIONAL] object versions, default to latest if empty
- - |
- objectName: key1
- objectType: key
- objectVersion: ""
- tenantId: "<tenant-id>" # the tenant ID of the KeyVault
+ objectName: <secret-name> # In this example, 'ExampleSecret'
+ objectType: secret # Object types: secret, key or cert
+ objectVersion: "" # [OPTIONAL] object versions, default to latest if empty
+ tenantId: "<tenant-id>" # the tenant ID containing the Azure Key Vault instance
``` For more information, see [Create your own SecretProviderClass Object][sample-secret-provider-class]. Be sure to use the values you took note of above.
kubectl apply -f ./new-secretproviderclass.yaml
## Update and apply your cluster's deployment YAML
-To ensure your cluster is using the new custom resource, update the deployment YAML. For a more comprehensive example, take a look at a [sample deployment][sample-deployment] using Service Principal to access Azure Key Vault. Be sure to follow any additional steps from your chosen method of key vault access.
+To ensure your cluster is using the new custom resource, update the deployment YAML. For example:
```yml kind: Pod
spec:
driver: secrets-store.csi.k8s.io readOnly: true volumeAttributes:
- secretProviderClass: "azure-kvname"
- nodePublishSecretRef: # Only required when using service principal mode
- name: secrets-store-creds # Only required when using service principal mode. The name of the Kubernetes secret that contains the service principal credentials to access keyvault.
+ secretProviderClass: "<keyvault-name>"
``` Apply the updated deployment to the cluster:
After the pod starts, the mounted content at the volume path specified in your d
## show secrets held in secrets-store kubectl exec busybox-secrets-store-inline -- ls /mnt/secrets-store/
-## print a test secret 'secret1' held in secrets-store
-kubectl exec busybox-secrets-store-inline -- cat /mnt/secrets-store/secret1
+## print a test secret 'ExampleSecret' held in secrets-store
+kubectl exec busybox-secrets-store-inline -- cat /mnt/secrets-store/ExampleSecret
``` ## Disable Secrets Store CSI Driver on an existing AKS Cluster
After learning how to use the CSI Secrets Store Driver with an AKS Cluster, see
[kube-csi]: https://kubernetes-csi.github.io/docs/ [key-vault-provider-install]: https://azure.github.io/secrets-store-csi-driver-provider-azure/getting-started/installation [sample-secret-provider-class]: https://azure.github.io/secrets-store-csi-driver-provider-azure/getting-started/usage/#create-your-own-secretproviderclass-object
-[service-principal-access]: https://azure.github.io/secrets-store-csi-driver-provider-azure/configurations/identity-access-modes/service-principal-mode/
-[pod-identity-access]: https://azure.github.io/secrets-store-csi-driver-provider-azure/configurations/identity-access-modes/pod-identity-mode/
-[ua-mi-access]: https://azure.github.io/secrets-store-csi-driver-provider-azure/configurations/identity-access-modes/user-assigned-msi-mode/
-[sa-mi-access]: https://azure.github.io/secrets-store-csi-driver-provider-azure/configurations/identity-access-modes/system-assigned-msi-mode/
-[sample-deployment]: https://raw.githubusercontent.com/Azure/secrets-store-csi-driver-provider-azure/master/examples/service-principal/pod-inline-volume-service-principal.yaml
aks Faq https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/faq.md
The issue has been resolved by Kubernetes v1.20, refer [Kubernetes 1.20: Granula
FIPS-enabled nodes are currently available in preview on Linux-based node pools. For more details, see [Add a FIPS-enabled node pool (preview)](use-multiple-node-pools.md#add-a-fips-enabled-node-pool-preview).
+## Can I configure NSGs with AKS?
+
+If you provide your own subnet, you have to manage the Network Security Groups (NSGs) associated with that subnet. AKS will only modify the NSGs at the NIC level and will not modify any of the NSGs associated with that subnet. Whether you're using Azure CNI or kubenet, you must also ensure that the security rules in the NSGs allow traffic between the node and pod CIDR ranges.
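+
+For example, if you needed an inbound rule that allows traffic between the node and pod address spaces, it could be created with the Azure CLI along the following lines. The NSG name and CIDR ranges shown here are placeholders; use the values from your own network design:
+
+```azurecli
+# Allow traffic from the node CIDR to the pod CIDR on the subnet's NSG (example ranges)
+az network nsg rule create -g myResourceGroup --nsg-name <subnet-nsg-name> -n AllowNodePodTraffic \
+  --priority 200 --direction Inbound --access Allow --protocol '*' \
+  --source-address-prefixes 10.240.0.0/16 --destination-address-prefixes 10.244.0.0/16 \
+  --destination-port-ranges '*'
+```
+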
++ <!-- LINKS - internal --> [aks-upgrade]: ./upgrade-cluster.md
aks Intro Kubernetes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/intro-kubernetes.md
Learn more about deploying and managing AKS with the Azure CLI Quickstart.
[azure-disk]: ./azure-disks-dynamic-pv.md [azure-files]: ./azure-files-dynamic-pv.md [container-health]: ../azure-monitor/containers/container-insights-overview.md
-[aks-master-logs]: ./view-control-plane-logs.md
+[aks-master-logs]: monitor-aks-reference.md#resource-logs
[aks-supported versions]: supported-kubernetes-versions.md [concepts-clusters-workloads]: concepts-clusters-workloads.md [kubernetes-rbac]: concepts-identity.md#kubernetes-rbac
aks Kubelet Logs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/kubelet-logs.md
If you need additional troubleshooting information from the Kubernetes master, s
<!-- LINKS - internal --> [aks-ssh]: ssh.md
-[aks-master-logs]: ./view-control-plane-logs.md
+[aks-master-logs]: monitor-aks-reference.md#resource-logs
[aks-quickstart-cli]: kubernetes-walkthrough.md [aks-quickstart-portal]: kubernetes-walkthrough-portal.md
-[aks-master-logs]: ./view-control-plane-logs.md
+[aks-master-logs]: monitor-aks-reference.md#resource-logs
[azure-container-logs]: ../azure-monitor/containers/container-insights-overview.md
aks Monitor Aks Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/monitor-aks-reference.md
+
+ Title: Monitoring AKS data reference
+description: Important reference material needed when you monitor AKS
++ Last updated : 07/29/2021+++
+# Monitoring AKS data reference
+
+See [Monitoring AKS](monitor-aks.md) for details on collecting and analyzing monitoring data for AKS.
+
+## Metrics
+
+The following table lists the platform metrics collected for AKS. Follow each link for a detailed list of the metrics for each particular type.
+
+|Metric Type | Resource Provider / Type Namespace<br/> and link to individual metrics |
+|-|--|
+| Managed clusters | [Microsoft.ContainerService/managedClusters](/azure/azure-monitor/essentials/metrics-supported#microsoftcontainerservicemanagedclusters)
+| Connected clusters | [microsoft.kubernetes/connectedClusters](/azure/azure-monitor/essentials/metrics-supported#microsoftkubernetesconnectedclusters)
+| Virtual machines| [Microsoft.Compute/virtualMachines](/azure/azure-monitor/essentials/metrics-supported#microsoftcomputevirtualmachines) |
+| Virtual machine scale sets | [Microsoft.Compute/virtualMachineScaleSets](/azure/azure-monitor/essentials/metrics-supported#microsoftcomputevirtualmachinescalesets)|
+| Virtual machine scale sets virtual machines | [Microsoft.Compute/virtualMachineScaleSets/virtualMachines](/azure/azure-monitor/essentials/metrics-supported#microsoftcomputevirtualmachinescalesetsvirtualmachines)|
+
+For more information, see a list of [all platform metrics supported in Azure Monitor](/azure/azure-monitor/platform/metrics-supported).
+
+## Metric dimensions
+
+The following table lists [dimensions](/azure/azure-monitor/platform/data-platform-metrics#multi-dimensional-metrics) for AKS metrics.
+
+<!-- listed here /azure/azure-monitor/essentials/metrics-supported#microsoftcontainerservicemanagedclusters-->
+
+| Dimension Name | Description |
+| - | -- |
+| requestKind | Used by metrics such as *Inflight Requests* to split by type of request. |
+| condition | Used by metrics such as *Statuses for various node conditions*, *Number of pods in Ready state* to split by condition type. |
+| status | Used by metrics such as *Statuses for various node conditions* to split by status of the condition. |
+| status2 | Used by metrics such as *Statuses for various node conditions* to split by status of the condition. |
+| node | Used by metrics such as *CPU Usage Millicores* to split by the name of the node. |
+| phase | Used by metrics such as *Number of pods by phase* to split by the phase of the pod. |
+| namespace | Used by metrics such as *Number of pods by phase* to split by the namespace of the pod. |
+| pod | Used by metrics such as *Number of pods by phase* to split by the name of the pod. |
+| nodepool | Used by metrics such as *Disk Used Bytes* to split by the name of the nodepool. |
+| device | Used by metrics such as *Disk Used Bytes* to split by the name of the device. |
+
+## Resource logs
+
+The following table lists the resource log categories you can collect for AKS. These are the logs for AKS control plane components. See [Configure monitoring](monitor-aks.md#configure-monitoring) for information on creating a diagnostic setting to collect these logs and recommendations on which to enable. See [How to query logs from Container insights](../azure-monitor/containers/container-insights-log-query.md#resource-logs) for query examples.
+
+For reference, see a list of [all resource logs category types supported in Azure Monitor](/azure/azure-monitor/platform/resource-logs-schema).
+
+| Category | Description |
+|:|:|
+| cluster-autoscale | Understand why the AKS cluster is scaling up or down, which may not be expected. This information is also useful to correlate time intervals where something interesting may have happened in the cluster. |
+| guard | Managed Azure Active Directory and Azure RBAC audits. For managed Azure AD, this includes token in and user info out. For Azure RBAC, this includes access reviews in and out. |
+| kube-apiserver | Logs from the API server. |
+| kube-audit | Audit log data for every audit event including get, list, create, update, delete, patch, and post. |
+| kube-audit-admin | Subset of the kube-audit log category. Significantly reduces the number of logs by excluding the get and list audit events from the log. |
+| kube-controller-manager | Gain deeper visibility of issues that may arise between Kubernetes and the Azure control plane. A typical example is the AKS cluster having a lack of permissions to interact with Azure. |
+| kube-scheduler | Logs from the scheduler. |
+| AllMetrics | Includes all platform metrics. Sends these values to Log Analytics workspace where it can be evaluated with other data using log queries. |
+
+## Azure Monitor Logs tables
+
+This section refers to all of the Azure Monitor Logs tables relevant to AKS and available for query by Log Analytics.
+++
+|Resource Type | Notes |
+|-|--|
+| [Kubernetes services](/azure/azure-monitor/reference/tables/tables-resourcetype#kubernetes-services) | Follow this link for a list of all tables used by AKS and a description of their structure. |
++
+For a reference of all Azure Monitor Logs / Log Analytics tables, see the [Azure Monitor Log Table Reference](/azure/azure-monitor/reference/tables/tables-resourcetype).
++
+## Activity log
+
+The following table lists a few example operations related to AKS that may be created in the [Activity log](../azure-monitor/essentials/activity-log.md). Use the Activity log to track information such as when a cluster is created or had its configuration change. You can either view this information in the portal or create an Activity log alert to be proactively notified when an event occurs.
+
+| Operation | Description |
+|:|:|
+| Microsoft.ContainerService/managedClusters/write | Create or update managed cluster |
+| Microsoft.ContainerService/managedClusters/delete | Delete Managed Cluster |
+| Microsoft.ContainerService/managedClusters/listClusterMonitoringUserCredential/action | List clusterMonitoringUser credential |
+| Microsoft.ContainerService/managedClusters/listClusterAdminCredential/action | List clusterAdmin credential |
+| Microsoft.ContainerService/managedClusters/agentpools/write | Create or Update Agent Pool |
+
+For a complete list of possible log entries, see [Microsoft.ContainerService Resource Provider options](/azure/role-based-access-control/resource-provider-operations#microsoftcontainerservice).
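+
+If you want to review recent AKS-related operations from the command line, one approach is the Azure CLI activity log command. This sketch lists entries for an example resource group over the last week and filters to the Microsoft.ContainerService provider; adjust the resource group and time window for your environment:
+
+```azurecli
+# List AKS-related activity log operations from the last 7 days
+az monitor activity-log list -g myResourceGroup --offset 7d \
+  --query "[?contains(operationName.value, 'Microsoft.ContainerService')].{Operation:operationName.value, Time:eventTimestamp, Status:status.value}" -o table
+```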
+
+For more information on the schema of Activity Log entries, see [Activity Log schema](/azure/azure-monitor/essentials/activity-log-schema).
+
+## See also
+
+- See [Monitoring Azure AKS](monitor-aks.md) for a description of monitoring Azure AKS.
+- See [Monitoring Azure resources with Azure Monitor](/azure/azure-monitor/insights/monitor-azure-resources) for details on monitoring Azure resources.
aks Monitor Aks https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/monitor-aks.md
+
+ Title: Monitor Azure Kubernetes Service (AKS) with Azure Monitor
+description: Describes how to use Azure Monitor to monitor the health and performance of AKS clusters and their workloads.
++++ Last updated : 07/29/2021+++
+# Monitoring Azure Kubernetes Service (AKS) with Azure Monitor
+This scenario describes how to use Azure Monitor to monitor the health and performance of Azure Kubernetes Service (AKS). It includes collection of telemetry critical for monitoring, analysis and visualization of collected data to identify trends, and how to configure alerting to be proactively notified of critical issues.
+
+The [Cloud Monitoring Guide](/azure/cloud-adoption-framework/manage/monitor/) defines the [primary monitoring objectives](/azure/cloud-adoption-framework/strategy/monitoring-strategy#formulate-monitoring-requirements) you should focus on for your Azure resources. This scenario focuses on Health and Status monitoring using Azure Monitor.
+
+## Scope of the scenario
+This scenario is intended for customers using Azure Monitor to monitor AKS. It does not include the following, although this content may be added in subsequent updates to the scenario.
+
+- Monitoring of Kubernetes clusters outside of Azure except for referring to existing content for Azure Arc enabled Kubernetes.
+- Monitoring of AKS with tools other than Azure Monitor except to fill gaps in Azure Monitor and Container Insights.
+
+> [!NOTE]
+> Azure Monitor was designed to monitor the availability and performance of cloud resources. While the operational data stored in Azure Monitor may be useful for investigating security incidents, other services in Azure were designed to monitor security. Security monitoring for AKS is done with [Azure Sentinel](../sentinel/overview.md) and [Azure Security Center](../security-center/security-center-introduction.md). See [Monitor virtual machines with Azure Monitor - Security monitoring](../azure-monitor/vm/monitor-virtual-machine-security.md) for a description of the security monitoring tools in Azure and their relationship to Azure Monitor.
+>
+> For information on using the security services to monitor AKS, see [Azure Defender for Kubernetes - the benefits and features](../security-center/defender-for-kubernetes-introduction.md) and [Connect Azure Kubernetes Service (AKS) diagnostics logs to Azure Sentinel](../sentinel/connect-azure-kubernetes-service.md).
+## Container insights
+AKS generates [platform metrics and resource logs](monitor-aks-reference.md), like any other Azure resource, that you can use to monitor its basic health and performance. Enable [Container insights](../azure-monitor/containers/container-insights-overview.md) to expand on this monitoring. Container insights is a feature in Azure Monitor that monitors the health and performance of managed Kubernetes clusters hosted on AKS in addition to other cluster configurations. Container insights provides interactive views and workbooks that analyze collected data for a variety of monitoring scenarios.
+
+[Prometheus](https://prometheus.io/) and [Grafana](https://www.prometheus.io/docs/visualization/grafan) have native integration with AKS, collecting critical metrics and logs, alerting on identified issues, and providing visualization with workbooks. Container insights also collects certain Prometheus metrics, and many native Azure Monitor insights are built on top of Prometheus metrics. Container insights complements and completes E2E monitoring of AKS, including log collection, which Prometheus as a stand-alone tool doesn't provide. Many customers use the Prometheus integration and Azure Monitor together for E2E monitoring.
+
+Learn more about using Container insights at [Container insights overview](../azure-monitor/containers/container-insights-overview.md). [Monitor layers of AKS with Container insights](#monitor-layers-of-aks-with-container-insights) below introduces various features of Container insights and the monitoring scenarios that they support.
+++
+## Configure monitoring
+The following sections describe the steps required to configure full monitoring of your AKS cluster using Azure Monitor.
+### Create Log Analytics workspace
+You require at least one Log Analytics workspace to support Container insights and to collect and analyze other telemetry about your AKS cluster. There is no cost for the workspace, but you do incur ingestion and retention costs when you collect data. See [Manage usage and costs with Azure Monitor Logs](../azure-monitor/logs/manage-cost-storage.md) for details.
+
+If you're just getting started with Azure Monitor, then start with a single workspace and consider creating additional workspaces as your requirements evolve. Many environments will use a single workspace for all the Azure resources they monitor. You can even share a workspace used by [Azure Security Center and Azure Sentinel](../azure-monitor/vm/monitor-virtual-machine-security.md), although many customers choose to segregate their availability and performance telemetry from security data.
+
+See [Designing your Azure Monitor Logs deployment](../azure-monitor/logs/design-logs-deployment.md) for details on logic that you should consider for designing a workspace configuration.
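+
+If you don't already have a workspace, one can be created with the Azure CLI. The resource group, workspace name, and region below are placeholders:
+
+```azurecli
+# Create a Log Analytics workspace to receive AKS monitoring data
+az monitor log-analytics workspace create -g myResourceGroup -n myAKSWorkspace -l eastus2
+```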
+
+### Enable container insights
+When you enable Container insights for your AKS cluster, it deploys a containerized version of the [Log Analytics agent](../azure-monitor/agents/log-analytics-agent.md) that sends data to Azure Monitor. There are multiple methods to enable it, depending on whether you're working with a new or existing AKS cluster. See [Enable Container insights](../azure-monitor/containers/container-insights-onboard.md) for prerequisites and configuration options.
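+
+For example, on an existing cluster the monitoring addon can be enabled with the Azure CLI similar to the following. The cluster, resource group, and workspace names are placeholders, and the workspace resource ID is retrieved inline:
+
+```azurecli
+# Enable Container insights (the monitoring addon) on an existing cluster
+az aks enable-addons -a monitoring -g myResourceGroup -n myAKSCluster \
+  --workspace-resource-id $(az monitor log-analytics workspace show -g myResourceGroup -n myAKSWorkspace --query id -o tsv)
+```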
++
+### Configure collection from Prometheus
+Container insights allows you to collect certain Prometheus metrics in your Log Analytics workspace without requiring a Prometheus server. You can analyze this data using Azure Monitor features along with other data collected by Container insights. See [Configure scraping of Prometheus metrics with Container insights](../azure-monitor/containers/container-insights-prometheus-integration.md) for details on this configuration.
++
+### Collect resource logs
+The logs for AKS control plane components are implemented in Azure as [resource logs](../azure-monitor/essentials/resource-logs.md). Container insights doesn't currently use these logs, so you do need to create your own log queries to view and analyze them. See [How to query logs from Container insights](../azure-monitor/containers/container-insights-log-query.md#resource-logs) for details on the structure of these logs and how to write queries for them.
+
+You need to create a diagnostic setting to collect resource logs. Create multiple diagnostic settings to send different sets of logs to different locations. See [Create diagnostic settings to send platform logs and metrics to different destinations](../azure-monitor/essentials/diagnostic-settings.md) to create diagnostic settings for your AKS cluster.
+
+There is a cost for sending resource logs to a workspace, so you should only collect those log categories that you intend to use. Send logs to an Azure storage account to reduce costs if you need to retain the information but don't require it to be readily available for analysis. See [Resource logs](monitor-aks-reference.md#resource-logs) for a description of the categories that are available for AKS and [Manage usage and costs with Azure Monitor Logs](../azure-monitor/logs/manage-cost-storage.md) for details on the cost of ingesting and retaining log data. Start by collecting a minimal number of categories and then modify the diagnostic setting to collect additional categories as your needs increase and as you understand your associated costs.
+
+If you're unsure about which resource logs to initially enable, use the recommendations in the following table which are based on the most common customer requirements. Enable the other categories if you later find that you require this information.
+
+| Category | Enable? | Destination |
+|:|:|:|
+| cluster-autoscale | Enable if autoscale is enabled | Log Analytics workspace |
+| guard | Enable if Azure Active Directory is enabled | Log Analytics workspace |
+| kube-apiserver | Enable | Log Analytics workspace |
+| kube-audit | Enable | Azure storage. This keeps costs to a minimum yet retains the audit logs if they're required by an auditor. |
+| kube-audit-admin | Enable | Log Analytics workspace |
+| kube-controller-manager | Enable | Log Analytics workspace |
+| kube-scheduler | Disable | |
+| AllMetrics | Enable | Log Analytics workspace |
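+
+As an example, a diagnostic setting that follows these recommendations for the workspace-bound categories could be created with the Azure CLI similar to the following sketch. The setting name, cluster, and workspace ID are placeholders, and the `kube-audit` category routed to a storage account is omitted here for brevity; adjust the categories and destinations to your requirements:
+
+```azurecli
+# Send selected AKS control plane log categories and platform metrics to a Log Analytics workspace
+az monitor diagnostic-settings create --name aks-control-plane-logs \
+  --resource $(az aks show -g myResourceGroup -n myAKSCluster --query id -o tsv) \
+  --workspace <log-analytics-workspace-resource-id> \
+  --logs '[{"category": "kube-apiserver", "enabled": true}, {"category": "kube-audit-admin", "enabled": true}, {"category": "kube-controller-manager", "enabled": true}]' \
+  --metrics '[{"category": "AllMetrics", "enabled": true}]'
+```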
+++++
+## Access Azure Monitor features
+
+Access Azure Monitor features for all AKS clusters in your subscription from the **Monitoring** menu in the Azure portal or for a single AKS cluster from the **Monitor** section of the **Kubernetes services** menu. The screenshot below shows the cluster's **Monitor** menu.
++
+| Menu option | Description |
+|:|:|
+| Insights | Opens container insights for the current cluster. Select **Containers** from the **Monitor** menu to open container insights for all clusters. |
+| Alerts | Views alerts for the current cluster. |
+| Metrics | Open metrics explorer with the scope set to the current cluster. |
+| Diagnostic settings | Create diagnostic settings for the cluster to collect resource logs. |
+| Advisor | Recommendations for the current cluster from Azure Advisor. |
+| Logs | Open Log Analytics with the scope set to the current cluster to analyze log data and access prebuilt queries. |
+| Workbooks | Open workbook gallery for Kubernetes service. |
++++
+## Monitor layers of AKS with Container insights
+Because of the wide variance in Kubernetes implementations, each customer will have unique requirements for AKS monitoring. The approach you take should be based on factors including scale, topology, organizational roles, and multi-cluster tenancy. This section presents a common strategy that is a bottom-up approach, starting from infrastructure up through applications. Each layer has distinct monitoring requirements. These layers are illustrated in the following diagram and discussed in more detail in the following sections.
++
+### Level 1 - Cluster level components
+Cluster level includes the following components.
+
+| Component | Monitoring requirements |
+|:|:|
+| Node | Understand the readiness status and performance of CPU, memory, and disk for each node and proactively monitor their usage trends before deploying any workloads. |
++
+Use existing views and reports in Container Insights to monitor cluster level components. The **Cluster** view gives you a quick view of the performance of the nodes in your cluster including their CPU and memory utilization. Use the **Nodes** view to view the health of each node in addition to the health and performance of the pods running on each. See [Monitor your Kubernetes cluster performance with Container insights](../azure-monitor/containers/container-insights-analyze.md) for details on using this view and analyzing node health and performance.
++
+Use **Node** workbooks in Container Insights to analyze disk capacity and IO in addition to GPU usage. See [Node workbooks](../azure-monitor/containers/container-insights-reports.md#node-workbooks) for a description of these workbooks.
+++
+For troubleshooting scenarios, you may need to access the AKS nodes directly for maintenance or immediate log collection. For security purposes, the AKS nodes aren't exposed to the internet, but you can use `kubectl debug` to SSH to the AKS nodes. See [Connect with SSH to Azure Kubernetes Service (AKS) cluster nodes for maintenance or troubleshooting](ssh.md) for details on this process.
+++
+### Level 2 - Managed AKS components
+Managed AKS level includes the following components.
+
+| Component | Monitoring |
+|:|:|
+| API Server | Monitor the status of API server, identifying any increase in request load and bottlenecks if the service is down. |
+| Kubelet | Monitoring the kubelet helps in troubleshooting pod management issues, such as pods not starting, nodes not being ready, or pods getting killed. |
+
+Azure Monitor and Container insights don't yet provide full monitoring for the API server. You can use metrics explorer to view the **Inflight Requests** counter, but you should refer to metrics in Prometheus for a complete view of API server performance. This includes such values as request latency and workqueue processing time. A Grafana dashboard that provides views of the critical metrics for the API server is available at [Grafana Labs](https://grafana.com/grafan).
++
+Use the **Kubelet** workbook to view the health and performance of each kubelet. See [Resource Monitoring workbooks](../azure-monitor/containers/container-insights-reports.md#resource-monitoring-workbooks) for details on these workbooks. For troubleshooting scenarios, you can access kubelet logs using the process described at [Get kubelet logs from Azure Kubernetes Service (AKS) cluster nodes](kubelet-logs.md).
++
+### Resource logs
+Use [log queries with resource logs](../azure-monitor/containers/container-insights-log-query.md#resource-logs) to analyze control plane logs generated by AKS components.
+
+### Level 3 - Kubernetes objects and workloads
+Kubernetes objects and workloads level include the following components.
+
+| Component | Monitoring requirements |
+|:|:|
+| Deployments | Monitor actual vs desired state of the deployment and the status and resource utilization of the pods running on them. |
+| Pods | Monitor status and resource utilization, including CPU and memory, of the pods running on your AKS cluster. |
+| Containers | Monitor the resource utilization, including CPU and memory, of the containers running on your AKS cluster. |
++
+Use existing views and reports in Container Insights to monitor containers and pods. Use the **Nodes** and **Controllers** views to view the health and performance of the pods running on them and drill down to the health and performance of their containers. View the health and performance for containers directly from the **Containers** view. See [Monitor your Kubernetes cluster performance with Container insights](../azure-monitor/containers/container-insights-analyze.md) for details on using this view and analyzing container health and performance.
++
+Use the **Deployment** workbook in Container insights to view metrics collected for deployments. See [Deployment & HPA metrics with Container insights](../azure-monitor/containers/container-insights-deployment-hpa-metrics.md) for details.
+
+> [!NOTE]
+> Deployments view in Container insights is currently in public preview.
++
+#### Live data
+In troubleshooting scenarios, Container insights provides access to live AKS container logs (stdout/stderror), events, and pod metrics. See [How to view Kubernetes logs, events, and pod metrics in real-time](../azure-monitor/containers/container-insights-livedata-overview.md) for details on using this feature.
++
+### Level 4- Applications
+The application level includes the application workloads running in the AKS cluster.
+
+| Component | Monitoring requirements |
+|:|:|
+| Applications | Monitor microservice application deployments to identify application failures and latency issues. Includes such information as request rates, response times, and exceptions. |
+
+Application Insights provides complete monitoring of applications running on AKS and other environments. If you have a Java application, you can enable monitoring without instrumenting your code by following [Zero instrumentation application monitoring for Kubernetes - Azure Monitor Application Insights](../azure-monitor/app/kubernetes-codeless.md). For complete monitoring, though, you should configure code-based monitoring depending on your application:
+
+- [ASP.NET Applications](../azure-monitor/app/asp-net.md)
+- [ASP.NET Core Applications](../azure-monitor/app/asp-net-core.md)
+- [.NET Console Applications](../azure-monitor/app/console.md)
+- [Java](../azure-monitor/app/java-in-process-agent.md)
+- [Node.js](../azure-monitor/app/nodejs.md)
+- [Python](../azure-monitor/app/opencensus-python.md)
+- [Other platforms](../azure-monitor/app/platforms.md)
+
+For more information, see [What is Application Insights?](../azure-monitor/app/app-insights-overview.md).
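+
+Whichever instrumentation option you choose, the application needs an Application Insights resource to send telemetry to. As a sketch, you can create one from the Azure CLI; this assumes the `application-insights` CLI extension and placeholder names:
+
+```azurecli
+# The app-insights commands ship in a CLI extension.
+az extension add --name application-insights
+
+# Create an Application Insights resource for the workload (placeholder names).
+az monitor app-insights component create \
+  --app <app-insights-name> \
+  --resource-group <resource-group> \
+  --location <region> \
+  --application-type web
+```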
+
+### Level 5 - External components
+Components external to AKS include the following.
+
+| Component | Monitoring requirements |
+|:---|:---|
+| Service Mesh, Ingress, Egress | Metrics based on component. |
+| Database and work queues | Metrics based on component. |
+
+Monitor external components such as service mesh, ingress, and egress with Prometheus and Grafana or other proprietary tools. Monitor databases and other Azure resources by using other features of Azure Monitor.
+
+## Analyze metric data with metrics explorer
+Use metrics explorer when you want to perform custom analysis of metric data collected for your containers. Metrics explorer allows you to plot charts, visually correlate trends, and investigate spikes and dips in metric values. Create a metric alert to proactively notify you when a metric value crosses a threshold, and pin charts to dashboards for use by different members of your organization.
+
+See [Getting started with Azure Metrics Explorer](../azure-monitor/essentials/metrics-getting-started.md) for details on using this feature. For a list of the platform metrics collected for AKS, see [Monitoring AKS data reference metrics](monitor-aks-reference.md#metrics). When Container insights is enabled for a cluster, [additional metric values](../azure-monitor/containers/container-insights-update-metrics.md) are available.
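+
+To see which platform metrics are available for your cluster before building charts, you can list the metric definitions with the Azure CLI. A minimal sketch with placeholder names:
+
+```azurecli
+# List the metric definitions exposed by the AKS cluster resource.
+CLUSTER_ID=$(az aks show --name <cluster-name> --resource-group <resource-group> --query id -o tsv)
+az monitor metrics list-definitions --resource $CLUSTER_ID --output table
+```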
++++
+## Analyze log data with Log Analytics
+Use Log Analytics when you want to analyze resource logs or dig deeper into the data used to create the views in Container insights. Log Analytics allows you to perform custom analysis of your log data.
+
+See [How to query logs from Container insights](../azure-monitor/containers/container-insights-log-query.md) for details on using log queries to analyze data collected by Container insights. See [Using queries in Azure Monitor Log Analytics](../azure-monitor/logs/queries.md) for information on using these queries and [Log Analytics tutorial](../azure-monitor/logs/log-analytics-tutorial.md) for a complete tutorial on using Log Analytics to run queries and work with their results.
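+
+As an illustration, the following sketch runs a query against one of the Container insights tables from the Azure CLI. It assumes the cluster is already sending data to a Log Analytics workspace and that the `KubePodInventory` table has the schema described in the Container insights documentation:
+
+```azurecli
+# Count pods by namespace and status over the last hour (placeholder workspace GUID).
+az monitor log-analytics query \
+  --workspace <workspace-customer-id> \
+  --analytics-query 'KubePodInventory | where TimeGenerated > ago(1h) | summarize PodCount = count() by Namespace, PodStatus' \
+  --timespan PT1H
+```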
+
+For a list of the tables collected for AKS that you can analyze in Log Analytics, see [Monitoring AKS data reference logs](monitor-aks-reference.md#azure-monitor-logs-tables).
++
+In addition to Container insights data, you can use log queries to analyze resource logs from AKS. For a list of the log categories available, see [AKS data reference resource logs](monitor-aks-reference.md#resource-logs). You must create a diagnostic setting to collect each category as described in [Configure monitoring](#configure-monitoring) before that data is collected.
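+
+The following sketch creates such a diagnostic setting with the Azure CLI and sends two resource log categories to a Log Analytics workspace. The category names (`kube-apiserver`, `kube-audit-admin`) are assumed from the AKS resource log reference, and the names and IDs are placeholders:
+
+```azurecli
+# Send kube-apiserver and kube-audit-admin control plane logs to a Log Analytics workspace.
+CLUSTER_ID=$(az aks show --name <cluster-name> --resource-group <resource-group> --query id -o tsv)
+
+az monitor diagnostic-settings create \
+  --name aks-control-plane-logs \
+  --resource $CLUSTER_ID \
+  --workspace <log-analytics-workspace-resource-id> \
+  --logs '[{"category": "kube-apiserver", "enabled": true}, {"category": "kube-audit-admin", "enabled": true}]'
+```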
++++
+## Alerts
+[Alerts in Azure Monitor](../azure-monitor/alerts/alerts-overview.md) proactively notify you of interesting data and patterns in your monitoring data. They allow you to identify and address issues in your system before your customers notice them. There are no preconfigured alert rules for AKS clusters, but you can create your own based on data collected by Container insights.
+
+> [!IMPORTANT]
+> Most alert rules have a cost that's dependent on the type of rule, how many dimensions it includes, and how frequently it's run. Refer to **Alert rules** in [Azure Monitor pricing](https://azure.microsoft.com/pricing/details/monitor/) before you create any alert rules.
++
+### Choosing the alert type
+The most common types of alert rules in Azure Monitor are [metric alerts](../azure-monitor/alerts/alerts-metric.md) and [log query alerts](../azure-monitor/alerts/alerts-log-query.md). The type of alert rule that you create for a particular scenario depends on where the data that you're alerting on is located. You might have cases where data for a particular alerting scenario is available in both Metrics and Logs, and you need to determine which rule type to use.
+
+It's typically best to use metric alerts instead of log alerts when possible because they're more responsive and stateful. You can create a metric alert on any value that you can analyze in metrics explorer. If the logic for your alert rule requires data in Logs, or if it requires more complex logic, you can use a log query alert rule.
+
+For example, if you want to alert when an application workload is consuming excessive CPU, you can create a metric alert using the CPU metric. If you need an alert when a particular message is found in a control plane log, you'll require a log alert.
+
+### Metric alert rules
+Metric alert rules use the same metric values as metrics explorer. In fact, you can create an alert rule directly from metrics explorer with the data you're currently analyzing. You can use any of the values in [AKS data reference metrics](monitor-aks-reference.md#metrics) for metric alert rules.
+
+Container insights includes a feature in public preview that creates a recommended set of metric alert rules for your AKS cluster. This feature creates new metric values (also in preview) used by the alert rules that you can also use in metrics explorer. See [Recommended metric alerts (preview) from Container insights](../azure-monitor/containers/container-insights-metric-alerts.md) for details on this feature and on creating metric alerts for AKS.
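+
+As a sketch, the following Azure CLI command creates a metric alert rule using the `node_cpu_usage_percentage` metric name from the AKS metrics reference. The threshold, rule name, and resource names are placeholders you'd adjust for your environment:
+
+```azurecli
+# Alert when average node CPU usage across the cluster exceeds 80 percent.
+CLUSTER_ID=$(az aks show --name <cluster-name> --resource-group <resource-group> --query id -o tsv)
+
+az monitor metrics alert create \
+  --name high-node-cpu \
+  --resource-group <resource-group> \
+  --scopes $CLUSTER_ID \
+  --condition "avg node_cpu_usage_percentage > 80" \
+  --window-size 5m \
+  --evaluation-frequency 1m \
+  --description "Average node CPU usage is above 80%"
+```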
++
+### Log alert rules
+Use log alert rules to generate an alert from the results of a log query. This may be data collected by Container insights or from AKS resource logs. See [How to create log alerts from Container insights](../azure-monitor/containers/container-insights-log-alerts.md) for details on log alert rules for AKS and a set of sample queries designed for alert rules. You can also refer to [How to query logs from Container insights](../azure-monitor/containers/container-insights-log-query.md) for details on log queries that could be modified for alert rules.
+
+### Virtual machine alerts
+AKS relies on a virtual machine scale set that must be healthy to run AKS workloads. You can alert on critical metrics such as CPU, memory, and storage for the virtual machines using the guidance at [Monitor virtual machines with Azure Monitor: Alerts](../azure-monitor/vm/monitor-virtual-machine-alerts.md).
+
+### Prometheus alerts
+For conditions where Azure Monitor either doesn't have the data required for an alerting condition, or where the alerting may not be responsive enough, you should configure alerts in Prometheus. One example is alerting for the API server. Azure Monitor doesn't collect critical information for the API server, including whether it's available or experiencing a bottleneck. You can create a log query alert using data from the kube-apiserver resource log category, but it can take several minutes before you receive an alert, which may not be sufficient for your requirements.
++
+## Next steps
+
+- See [Monitoring AKS data reference](monitor-aks-reference.md) for a reference of the metrics, logs, and other important values created by AKS.
+
aks Private Clusters https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/private-clusters.md
The Public DNS option can be leveraged to simplify routing options for your Priv
![Public DNS](https://user-images.githubusercontent.com/50749048/124776520-82629600-df0d-11eb-8f6b-71c473b6bd01.png)
-1. By specifying "None" for the Private DNS Zone when a private cluster is provisioned, a private endpoint (1) and a public DNS zone (2) are created in the cluster-managed resource group. The cluster uses an A record in the private zone to resolve the IP of the private endpoint for communication to the API server.
+1. By specifying `--enable-public-fqdn` when you provision a private cluster, you create an additional A record for the new FQDN in the AKS public DNS zone. The agent node still uses the A record in the private zone to resolve the IP address of the private endpoint for communication to the API server.
+
+2. If you use both `--enable-public-fqdn` and `--private-dns-zone none`, the cluster public FQDN and private FQDN have the same value. The value is in the AKS public DNS zone `hcp.{REGION}.azmk8s.io`. This is a breaking change for clusters that use the private DNS zone mode.
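+
+For an existing private cluster, the public FQDN can also be enabled after creation. The following is a sketch only, assuming the `aks-preview` Azure CLI extension exposes the same `--enable-public-fqdn` flag on `az aks update` as it does on `az aks create`:
+
+```azurecli
+# Enable the public FQDN on an existing private cluster (preview; placeholder names).
+az aks update --name <private-cluster-name> --resource-group <private-cluster-resource-group> --enable-public-fqdn
+```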
### Register the `EnablePrivateClusterPublicFQDN` preview feature
az provider register --namespace Microsoft.ContainerService
### Create a private AKS cluster with a Public DNS address ```azurecli-interactive
-az aks create -n <private-cluster-name> -g <private-cluster-resource-group> --load-balancer-sku standard --enable-private-cluster --enable-managed-identity --assign-identity <ResourceId> --private-dns-zone none --enable-public-fqdn
+az aks create -n <private-cluster-name> -g <private-cluster-resource-group> --load-balancer-sku standard --enable-private-cluster --enable-managed-identity --assign-identity <ResourceId> --private-dns-zone <private-dns-zone-mode> --enable-public-fqdn
``` ## Options for connecting to the private cluster
aks Security Hardened Vm Host Image https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/security-hardened-vm-host-image.md
As a secure service, Azure Kubernetes Service (AKS) complies with SOC, ISO, PCI
> [!Note] > This document is scoped to Linux agents in AKS only.
-AKS clusters are deployed on host VMs, which run a security-optimized OS used for containers running on AKS. This host OS is based on an **Ubuntu 16.04.LTS** image with more [security hardening](#security-hardening-features) and optimizations applied.
+AKS clusters are deployed on host VMs, which run a security-optimized OS used for containers running on AKS. This host OS is based on an **Ubuntu 18.04.5 LTS** image with more [security hardening](#security-hardening-features) and optimizations applied.
The goal of the security hardened host OS is to reduce the surface area of attack and optimize for the deployment of containers in a secure manner.
aks Ssh https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/ssh.md
If you need more troubleshooting data, you can [view the kubelet logs][view-kube
<!-- INTERNAL LINKS --> [view-kubelet-logs]: kubelet-logs.md
-[view-master-logs]: ./view-control-plane-logs.md
+[view-master-logs]: monitor-aks-reference.md#resource-logs
[aks-quickstart-cli]: kubernetes-walkthrough.md [aks-quickstart-portal]: kubernetes-walkthrough-portal.md [install-azure-cli]: /cli/azure/install-azure-cli
aks Troubleshooting https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/troubleshooting.md
AKS is investigating the capability to mutate active labels on a node pool to im
<!-- LINKS - internal -->
-[view-master-logs]: ./view-control-plane-logs.md
+[view-master-logs]: monitor-aks-reference.md#resource-logs
[cluster-autoscaler]: cluster-autoscaler.md
aks Use Azure Ad Pod Identity https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/use-azure-ad-pod-identity.md
Azure Active Directory pod-managed identities uses Kubernetes primitives to asso
> [!NOTE] >The feature described in this document, pod-managed identities (preview), will be replaced with pod-managed identities V2 (preview).
-> If you have an existing installation of AADPODIDENTITY, you must remove the existing installation. Enabling this feature means that the MIC component isn't needed.
+> If you have an existing installation of AADPODIDENTITY, there will be a migration option to V2. More details on the migration will follow as we get closer to Public Preview slated for Q2 2022. Enabling this feature means that the MIC component isn't needed.
[!INCLUDE [preview features callout](./includes/preview/preview-callout.md)]
aks View Control Plane Logs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/view-control-plane-logs.md
- Title: View Azure Kubernetes Service (AKS) controller logs
-description: Learn how to enable and view the logs for the Kubernetes control plane in Azure Kubernetes Service (AKS)
-- Previously updated : 01/27/2020---
-# Enable and review Kubernetes control plane logs in Azure Kubernetes Service (AKS)
-
-With Azure Kubernetes Service (AKS), the control plane components such as the *kube-apiserver* and *kube-controller-manager* are provided as a managed service. You create and manage the nodes that run the *kubelet* and container runtime, and deploy your applications through the managed Kubernetes API server. To help troubleshoot your application and services, you may need to view the logs generated by these control plane components. This article shows you how to use Azure Monitor logs to enable and query the logs from the Kubernetes control plane components.
-
-## Before you begin
-
-This article requires an existing AKS cluster running in your Azure account. If you do not already have an AKS cluster, create one using the [Azure CLI][cli-quickstart] or [Azure portal][portal-quickstart]. Azure Monitor logs works with both Kubernetes RBAC, Azure RBAC, and non-RBAC enabled AKS clusters.
-
-## Enable resource logs
-
-To help collect and review data from multiple sources, Azure Monitor logs provides a query language and analytics engine that provides insights to your environment. A workspace is used to collate and analyze the data, and can integrate with other Azure services such as Application Insights and Security Center. To use a different platform to analyze the logs, you can instead choose to send resource logs to an Azure storage account or event hub. For more information, see [What is Azure Monitor logs?][log-analytics-overview].
-
-Azure Monitor logs are enabled and managed in the Azure portal. To enable log collection for the Kubernetes control plane components in your AKS cluster, open the Azure portal in a web browser and complete the following steps:
-
-1. Select the resource group for your AKS cluster, such as *myResourceGroup*. Don't select the resource group that contains your individual AKS cluster resources, such as *MC_myResourceGroup_myAKSCluster_eastus*.
-
-2. On the left-hand side, choose **Diagnostic settings**.
-
-3. Select your AKS cluster, such as *myAKSCluster*, then choose to **Add diagnostic setting**.
- :::image type="content" source="media\view-control-plane-logs\select-add-diagnostic-setting.PNG" alt-text="Screenshot of Azure portal in a browser window showing Diagnostic settings, indicating 'Add diagnostic setting' should be selected":::
-
-4. Enter a name, such as *myAKSClusterLogs*, then select the option to **Send to Log Analytics workspace**.
-
-5. Select an existing workspace or create a new one. If you create a workspace, provide a workspace name, a resource group, and a location.
-
-6. In the list of available logs, select the logs you wish to enable. For this example, enable the *kube-audit* and *kube-audit-admin* logs. Common logs include the *kube-apiserver*, *kube-controller-manager*, and *kube-scheduler*. You can return and change the collected logs once Log Analytics workspaces are enabled.
-
-7. When ready, select **Save** to enable collection of the selected logs.
- :::image type="content" source="media\view-control-plane-logs\settings-selected.PNG" alt-text="Screenshot of Azure portal's 'Add diagnostic setting' screen. A destination of 'Send to Log Analytics workspace' and logs 'kube-audit' and 'kube-audit-admin' are selected":::
-
-## Log categories
-
-In addition to entries written by Kubernetes, your project's audit logs also have entries from AKS.
-
-Audit logs are recorded into three categories: *kube-audit*, *kube-audit-admin*, and *guard*.
--- The *kube-audit* category contains all audit log data for every audit event, including *get*, *list*, *create*, *update*, *delete*, *patch*, and *post*.-- The *kube-audit-admin* category is a subset of the *kube-audit* log category. *kube-audit-admin* reduces the number of logs significantly by excluding the *get* and *list* audit events from the log.-- The *guard* category is managed Azure AD and Azure RBAC audits. For managed Azure AD: token in, user info out. For Azure RBAC: access reviews in and out.-
-## Schedule a test pod on the AKS cluster
-
-To generate some logs, create a new pod in your AKS cluster. The following example YAML manifest can be used to create a basic NGINX instance. Create a file named `nginx.yaml` in an editor of your choice and paste the following content:
-
-```yaml
-apiVersion: v1
-kind: Pod
-metadata:
- name: nginx
-spec:
- nodeSelector:
- "beta.kubernetes.io/os": linux
- containers:
- - name: mypod
- image: mcr.microsoft.com/oss/nginx/nginx:1.15.5-alpine
- resources:
- requests:
- cpu: 100m
- memory: 128Mi
- limits:
- cpu: 250m
- memory: 256Mi
- ports:
- - containerPort: 80
-```
-
-Create the pod with the [kubectl create][kubectl-create] command and specify your YAML file, as shown in the following example:
-
-```
-$ kubectl create -f nginx.yaml
-
-pod/nginx created
-```
-
-## View collected logs
-
-It may take up to 10 minutes for the diagnostics logs to be enabled and appear.
-
-> [!NOTE]
-> If you need all audit log data for compliance or other purposes, collect and store it in inexpensive storage such as blob storage. Use the *kube-audit-admin* log category to collect and save a meaningful set of audit log data for monitoring and alerting purposes.
-
-In the Azure portal, navigate to your AKS cluster, and select **Logs** on the left-hand side. Close the *Example Queries* window if it appears.
-
-On the left-hand side, choose **Logs**. To view the *kube-audit* logs, enter the following query in the text box:
-
-```
-AzureDiagnostics
-| where Category == "kube-audit"
-| project log_s
-```
-
-Many logs are likely returned. To scope down the query to view the logs about the NGINX pod created in the previous step, add an additional *where* statement to search for *nginx* as shown in the following example query:
-
-```
-AzureDiagnostics
-| where Category == "kube-audit"
-| where log_s contains "nginx"
-| project log_s
-```
-
-To view the *kube-audit-admin* logs, enter the following query in the text box:
-
-```
-AzureDiagnostics
-| where Category == "kube-audit-admin"
-| project log_s
-```
-
-In this example, the query shows all create jobs in *kube-audit-admin*. There are likely many results returned, to scope down the query to view the logs about the NGINX pod created in the previous step, add an additional *where* statement to search for *nginx* as shown in the following example query.
-
-```
-AzureDiagnostics
-| where Category == "kube-audit-admin"
-| where log_s contains "nginx"
-| project log_s
-```
--
-For more information on how to query and filter your log data, see [View or analyze data collected with log analytics log search][analyze-log-analytics].
-
-## Log event schema
-
-AKS logs the following events:
-
-* [AzureActivity][log-schema-azureactivity]
-* [AzureDiagnostics][log-schema-azurediagnostics]
-* [AzureMetrics][log-schema-azuremetrics]
-* [ContainerImageInventory][log-schema-containerimageinventory]
-* [ContainerInventory][log-schema-containerinventory]
-* [ContainerLog][log-schema-containerlog]
-* [ContainerNodeInventory][log-schema-containernodeinventory]
-* [ContainerServiceLog][log-schema-containerservicelog]
-* [Heartbeat][log-schema-heartbeat]
-* [InsightsMetrics][log-schema-insightsmetrics]
-* [KubeEvents][log-schema-kubeevents]
-* [KubeHealth][log-schema-kubehealth]
-* [KubeMonAgentEvents][log-schema-kubemonagentevents]
-* [KubeNodeInventory][log-schema-kubenodeinventory]
-* [KubePodInventory][log-schema-kubepodinventory]
-* [KubeServices][log-schema-kubeservices]
-* [Perf][log-schema-perf]
-
-## Log Roles
-
-| Role | Description |
-|--|-|
-| *aksService* | The display name in audit log for the control plane operation (from the hcpService) |
-| *masterclient* | The display name in audit log for MasterClientCertificate, the certificate you get from az aks get-credentials |
-| *nodeclient* | The display name for ClientCertificate, which is used by agent nodes |
-
-## Next steps
-
-In this article, you learned how to enable and review the logs for the Kubernetes control plane components in your AKS cluster. To monitor and troubleshoot further, you can also [view the Kubelet logs][kubelet-logs] and [enable SSH node access][aks-ssh].
-
-<!-- LINKS - external -->
-[kubectl-create]: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#create
-
-<!-- LINKS - internal -->
-[cli-quickstart]: kubernetes-walkthrough.md
-[portal-quickstart]: kubernetes-walkthrough-portal.md
-[log-analytics-overview]: ../azure-monitor/logs/log-query-overview.md
-[analyze-log-analytics]: ../azure-monitor/logs/log-analytics-tutorial.md
-[kubelet-logs]: kubelet-logs.md
-[aks-ssh]: ssh.md
-[az-feature-register]: /cli/azure/feature#az_feature_register
-[az-feature-list]: /cli/azure/feature#az_feature_list
-[az-provider-register]: /cli/azure/provider#az_provider_register
-[log-schema-azureactivity]: /azure/azure-monitor/reference/tables/azureactivity
-[log-schema-azurediagnostics]: /azure/azure-monitor/reference/tables/azurediagnostics
-[log-schema-azuremetrics]: /azure/azure-monitor/reference/tables/azuremetrics
-[log-schema-containerimageinventory]: /azure/azure-monitor/reference/tables/containerimageinventory
-[log-schema-containerinventory]: /azure/azure-monitor/reference/tables/containerinventory
-[log-schema-containerlog]: /azure/azure-monitor/reference/tables/containerlog
-[log-schema-containernodeinventory]: /azure/azure-monitor/reference/tables/containernodeinventory
-[log-schema-containerservicelog]: /azure/azure-monitor/reference/tables/containerservicelog
-[log-schema-heartbeat]: /azure/azure-monitor/reference/tables/heartbeat
-[log-schema-insightsmetrics]: /azure/azure-monitor/reference/tables/insightsmetrics
-[log-schema-kubeevents]: /azure/azure-monitor/reference/tables/kubeevents
-[log-schema-kubehealth]: /azure/azure-monitor/reference/tables/kubehealth
-[log-schema-kubemonagentevents]: /azure/azure-monitor/reference/tables/kubemonagentevents
-[log-schema-kubenodeinventory]: /azure/azure-monitor/reference/tables/kubenodeinventory
-[log-schema-kubepodinventory]: /azure/azure-monitor/reference/tables/kubepodinventory
-[log-schema-kubeservices]: /azure/azure-monitor/reference/tables/kubeservices
-[log-schema-perf]: /azure/azure-monitor/reference/tables/perf
aks View Metrics https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/view-metrics.md
- Title: View cluster metrics for Azure Kubernetes Service (AKS)
-description: View cluster metrics for Azure Kubernetes Service (AKS).
-- Previously updated : 03/30/2021--
-# View cluster metrics for Azure Kubernetes Service (AKS)
-
-AKS provides a set of metrics for the control plane, including the API Server and cluster autoscaler, and cluster nodes. These metrics allow you to monitor the health of your cluster and troubleshoot issues. You can view the metrics for your cluster using the Azure portal.
-
-> [!NOTE]
-> These AKS cluster metrics overlap with a subset of the [metrics provided by Kubernetes][kubernetes-metrics].
-
-## View metrics for your AKS cluster using the Azure portal
-
-To view the metrics for your AKS cluster:
-
-1. Sign in to the [Azure portal][azure-portal] and navigate to your AKS cluster.
-1. On the left side under *Monitoring*, select *Metrics*.
-1. Create a chart for the metrics you want to view. For example, create a chart:
- 1. For *Scope*, choose your cluster.
- 1. For *Metric Namespace*, choose *Container service (managed) standard metrics*.
- 1. For *Metric*, under *Pods* choose *Number of Pods by phase*.
- 1. For *Aggregation* choose *Avg*.
--
-The above example shows the metrics for the average number of pods for the *myAKSCluster*.
-
-## Available metrics
-
-The following cluster metrics are available:
-
-| Name | Group | ID | Description |
-| | | | - |
-| Inflight Requests | API Server (preview) |apiserver_current_inflight_requests | Maximum number of currently active inflight requests on the API Server per request kind. |
-| Cluster Health | Cluster Autoscaler (preview) | cluster_autoscaler_cluster_safe_to_autoscale | Determines whether or not cluster autoscaler will take action on the cluster. |
-| Scale Down Cooldown | Cluster Autoscaler (preview) | cluster_autoscaler_scale_down_in_cooldown | Determines if the scale down is in cooldown - No nodes will be removed during this timeframe. |
-| Unneeded Nodes | Cluster Autoscaler (preview) | cluster_autoscaler_unneeded_nodes_count | Cluster auotscaler marks those nodes as candidates for deletion and are eventually deleted. |
-| Unschedulable Pods | Cluster Autoscaler (preview) | cluster_autoscaler_unschedulable_pods_count | Number of pods that are currently unschedulable in the cluster. |
-| Total number of available cpu cores in a managed cluster | Nodes | kube_node_status_allocatable_cpu_cores | Total number of available CPU cores in a managed cluster. |
-| Total amount of available memory in a managed cluster | Nodes | kube_node_status_allocatable_memory_bytes | Total amount of available memory in a managed cluster. |
-| Statuses for various node conditions | Nodes | kube_node_status_condition | Statuses for various node conditions |
-| CPU Usage Millicores | Nodes (preview) | node_cpu_usage_millicores | Aggregated measurement of CPU utilization in millicores across the cluster. |
-| CPU Usage Percentage | Nodes (preview) | node_cpu_usage_percentage | Aggregated average CPU utilization measured in percentage across the cluster. |
-| Memory RSS Bytes | Nodes (preview) | node_memory_rss_bytes | Container RSS memory used in bytes. |
-| Memory RSS Percentage | Nodes (preview) | node_memory_rss_percentage | Container RSS memory used in percent. |
-| Memory Working Set Bytes | Nodes (preview) | node_memory_working_set_bytes | Container working set memory used in bytes. |
-| Memory Working Set Percentage | Nodes (preview) | node_memory_working_set_percentage | Container working set memory used in percent. |
-| Disk Used Bytes | Nodes (preview) | node_disk_usage_bytes | Disk space used in bytes by device. |
-| Disk Used Percentage | Nodes (preview) | node_disk_usage_percentage | Disk space used in percent by device. |
-| Network In Bytes | Nodes (preview) | node_network_in_bytes | Network received bytes. |
-| Network Out Bytes | Nodes (preview) | node_network_out_bytes | Network transmitted bytes. |
-| Number of pods in Ready state | Pods | kube_pod_status_ready | Number of pods in *Ready* state. |
-| Number of pods by phase | Pods | kube_pod_status_phase | Number of pods by phase. |
-
-> [!IMPORTANT]
-> Metrics in preview can be updated or changed, including their names and descriptions, while in preview.
-
-## Next steps
-
-In addition to the cluster metrics for AKS, you can also use Azure Monitor with your AKS cluster. For more information on using Azure Monitor with AKS, see [Azure Monitor for containers][aks-azure-monitory].
-
-[aks-azure-monitory]: ../azure-monitor/containers/container-insights-overview.md
-[azure-portal]: https://portal.azure.com/
-[kubernetes-metrics]: https://kubernetes.io/docs/concepts/cluster-administration/system-metrics/
api-management Zone Redundancy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/api-management/zone-redundancy.md
In the portal, optionally enable zone redundancy when you add a location to your
* Learn more about [deploying an Azure API Management service instance to multiple Azure regions](api-management-howto-deploy-multi-region.md). * You can also enable zone redundancy using an [Azure Resource Manager template](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.apimanagement/api-management-simple-zones). * Learn more about [Azure services that support availability zones](../availability-zones/az-region.md).
-* Learn more about building for [reliability](/azure/architecture/framework/resiliency/overview) in Azure.
+* Learn more about building for [reliability](/azure/architecture/framework/resiliency/app-design) in Azure.
app-service Configure Authentication Customize Sign In Out https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/configure-authentication-customize-sign-in-out.md
To redirect the user post-sign-in to a custom URL, use the `post_login_redirect_
<a href="/.auth/login/<provider>?post_login_redirect_url=/Home/Index">Log in</a> ```
-## Validate tokens from providers
+## Client-directed sign-in
-In a client-directed sign-in, the application signs in the user to the provider manually and then submits the authentication token to App Service for validation (see [Authentication flow](overview-authentication-authorization.md#authentication-flow)). This validation itself doesn't actually grant you access to the desired app resources, but a successful validation will give you a session token that you can use to access app resources.
+In a client-directed sign-in, the application signs in the user to the identity provider using a provider-specific SDK. The application code then submits the resulting authentication token to App Service for validation (see [Authentication flow](overview-authentication-authorization.md#authentication-flow)) using an HTTP POST request. The [Azure Mobile Apps SDKs](https://github.com/Azure/azure-mobile-apps) use this sign-in flow. This validation itself doesn't actually grant you access to the desired app resources, but a successful validation will give you a session token that you can use to access app resources.
-To validate the provider token, App Service app must first be configured with the desired provider. At runtime, after you retrieve the authentication token from your provider, post the token to `/.auth/login/<provider>` for validation. For example:
+To validate the provider token, the App Service app must first be configured with the desired provider. At runtime, after you retrieve the authentication token from your provider, post the token to `/.auth/login/<provider>` for validation. For example:
``` POST https://<appname>.azurewebsites.net/.auth/login/aad HTTP/1.1
The token format varies slightly according to the provider. See the following ta
| Provider value | Required in request body | Comments | |-|-|-|
-| `aad` | `{"access_token":"<access_token>"}` | |
-| `microsoftaccount` | `{"access_token":"<token>"}` | The `expires_in` property is optional. <br/>When requesting the token from Live services, always request the `wl.basic` scope. |
-| `google` | `{"id_token":"<id_token>"}` | The `authorization_code` property is optional. When specified, it can also optionally be accompanied by the `redirect_uri` property. |
+| `aad` | `{"access_token":"<access_token>"}` | The `id_token`, `refresh_token`, and `expires_in` properties are optional. |
+| `microsoftaccount` | `{"access_token":"<access_token>"}` or `{"authentication_token": "<token>"}`| `authentication_token` is preferred over `access_token`. The `expires_in` property is optional. <br/> When requesting the token from Live services, always request the `wl.basic` scope. |
+| `google` | `{"id_token":"<id_token>"}` | The `authorization_code` property is optional. Providing an `authorization_code` value will add an access token and a refresh token to the token store. When specified, `authorization_code` can also optionally be accompanied by a `redirect_uri` property. |
| `facebook`| `{"access_token":"<user_access_token>"}` | Use a valid [user access token](https://developers.facebook.com/docs/facebook-login/access-tokens) from Facebook. | | `twitter` | `{"access_token":"<access_token>", "access_token_secret":"<acces_token_secret>"}` | | | | | |
app-service Configure Custom Container https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/configure-custom-container.md
Multi-container is currently in preview. The following App Service platform feat
- Managed Identities - CORS - VNET integration is not supported for Docker Compose scenarios
+- Docker Compose on Azure App Service currently has a limit of 4,000 characters.
### Docker Compose options
app-service Overview Authentication Authorization https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/overview-authentication-authorization.md
Title: Authentication and authorization
description: Find out about the built-in authentication and authorization support in Azure App Service and Azure Functions, and how it can help secure your app against unauthorized access. ms.assetid: b7151b57-09e5-4c77-a10c-375a262f17e5 Previously updated : 03/29/2021 Last updated : 07/21/2021
App Service can be used for authentication with or without restricting access to
## How it works
-[Feature architecture on Windows (non-container deployment)](#feature-architecture-on-windows-non-container-deployment))
-
-[Feature architecture on Linux and containers](#feature-architecture-on-linux-and-containers)
+[Feature architecture](#feature-architecture)
[Authentication flow](#authentication-flow)
App Service can be used for authentication with or without restricting access to
[Logging and tracing](#logging-and-tracing)
-#### Feature architecture on Windows (non-container deployment)
+### Feature architecture
-The authentication and authorization module runs in the same sandbox as your application code. When it's enabled, every incoming HTTP request passes through it before being handled by your application code.
+The authentication and authorization middleware component is a feature of the platform that runs on the same VM as your application. When it's enabled, every incoming HTTP request passes through it before being handled by your application.
:::image type="content" source="media/app-service-authentication-overview/architecture.png" alt-text="An architecture diagram showing requests being intercepted by a process in the site sandbox which interacts with identity providers before allowing traffic to the deployed site" lightbox="media/app-service-authentication-overview/architecture.png":::
-This module handles several things for your app:
+The platform middleware handles several things for your app:
-- Authenticates users with the specified provider-- Validates, stores, and refreshes tokens
+- Authenticates users and clients with the specified identity provider(s)
+- Validates, stores, and refreshes OAuth tokens issued by the configured identity provider(s)
- Manages the authenticated session-- Injects identity information into request headers
+- Injects identity information into HTTP request headers
+
+The module runs separately from your application code and can be configured using Azure Resource Manager settings or using [a configuration file](configure-authentication-file-based.md). No SDKs, specific programming languages, or changes to your application code are required.
+
+#### Feature architecture on Windows (non-container deployment)
-The module runs separately from your application code and is configured using app settings. No SDKs, specific languages, or changes to your application code are required.
+The authentication and authorization module runs as a native [IIS module](/iis/get-started/introduction-to-iis/iis-modules-overview) in the same sandbox as your application. When it's enabled, every incoming HTTP request passes through it before being handled by your application.
#### Feature architecture on Linux and containers The authentication and authorization module runs in a separate container, isolated from your application code. Using what's known as the [Ambassador pattern](/azure/architecture/patterns/ambassador), it interacts with the incoming traffic to perform similar functionality as on Windows. Because it does not run in-process, no direct integration with specific language frameworks is possible; however, the relevant information that your app needs is passed through using request headers as explained below.
-#### Authentication flow
+### Authentication flow
The authentication flow is the same for all providers, but differs depending on whether you want to sign in with the provider's SDK:
The table below shows the steps of the authentication flow.
| Step | Without provider SDK | With provider SDK | | - | - | - | | 1. Sign user in | Redirects client to `/.auth/login/<provider>`. | Client code signs user in directly with provider's SDK and receives an authentication token. For information, see the provider's documentation. |
-| 2. Post-authentication | Provider redirects client to `/.auth/login/<provider>/callback`. | Client code [posts token from provider](configure-authentication-customize-sign-in-out.md#validate-tokens-from-providers) to `/.auth/login/<provider>` for validation. |
+| 2. Post-authentication | Provider redirects client to `/.auth/login/<provider>/callback`. | Client code [posts token from provider](configure-authentication-customize-sign-in-out.md#client-directed-sign-in) to `/.auth/login/<provider>` for validation. |
| 3. Establish authenticated session | App Service adds authenticated cookie to response. | App Service returns its own authentication token to client code. | | 4. Serve authenticated content | Client includes authentication cookie in subsequent requests (automatically handled by browser). | Client code presents authentication token in `X-ZUMO-AUTH` header (automatically handled by Mobile Apps client SDKs). |
For client browsers, App Service can automatically direct all unauthenticated us
<a name="authorization"></a>
-#### Authorization behavior
+### Authorization behavior
In the [Azure portal](https://portal.azure.com), you can configure App Service with a number of behaviors when incoming request is not authenticated. The following headings describe the options.
With this option, you don't need to write any authentication code in your app. F
> [!NOTE] > By default, any user in your Azure AD tenant can request a token for your application from Azure AD. You can [configure the application in Azure AD](../active-directory/develop/howto-restrict-your-app-to-a-set-of-users.md) if you want to restrict access to your app to a defined set of users.
-#### Token store
+### Token store
App Service provides a built-in token store, which is a repository of tokens that are associated with the users of your web apps, APIs, or native mobile apps. When you enable authentication with any provider, this token store is immediately available to your app. If your application code needs to access data from these providers on the user's behalf, such as:
The ID tokens, access tokens, and refresh tokens are cached for the authenticate
If you don't need to work with tokens in your app, you can disable the token store in your app's **Authentication / Authorization** page.
-#### Logging and tracing
+### Logging and tracing
If you [enable application logging](troubleshoot-diagnostic-logs.md), you will see authentication and authorization traces directly in your log files. If you see an authentication error that you didn't expect, you can conveniently find all the details by looking in your existing application logs. If you enable [failed request tracing](troubleshoot-diagnostic-logs.md), you can see exactly what role the authentication and authorization module may have played in a failed request. In the trace logs, look for references to a module named `EasyAuthModule_32/64`.
app-service Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/overview.md
description: Learn how Azure App Service helps you develop and host web applicat
ms.assetid: 94af2caf-a2ec-4415-a097-f60694b860b3 Previously updated : 07/06/2020 Last updated : 07/21/2021
If you need to create another web app with an outdated runtime version that is n
### Limitations
+> [!NOTE]
+> Linux and Windows App Service plans can now share resource groups. This limitation has been lifted from the platform and existing resource groups have been updated to support this.
+>
+ - App Service on Linux is not supported on [Shared](https://azure.microsoft.com/pricing/details/app-service/plans/) pricing tier. -- You can't mix Windows and Linux apps in the same App Service plan. -- Historically, you can't mix Windows and Linux apps in the same resource group. However, all resource groups created on or after January 21, 2021 do support this scenario. For resource groups created before January 21, 2021, the ability to add mixed platform deployments will be rolled out across Azure regions (including National cloud regions) soon. - The Azure portal shows only features that currently work for Linux apps. As features are enabled, they're activated on the portal. - When deployed to built-in images, your code and content are allocated a storage volume for web content, backed by Azure Storage. The disk latency of this volume is higher and more variable than the latency of the container filesystem. Apps that require heavy read-only access to content files may benefit from the custom container option, which places files in the container filesystem instead of on the content volume.
application-gateway Application Gateway Key Vault Common Errors https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/application-gateway/application-gateway-key-vault-common-errors.md
On the other hand, if a certificate object is permanently deleted, you will need
**Resolution:** You will encounter this issue upon enabling Key Vault Firewall for restricted access. You can still configure your Application Gateway in a restricted network of Key Vault in the following manner. 1. Under Key Vault's Networking blade 2. Choose Private endpoint and selected networks in "Firewall and Virtual Networks" tab
-3. Then using Virtual Networks, add your Application Gateway's virtual network and Subnet. During the process also configure 'Microsoft.KeyVault' service endpoint by selecting its checkbox.
-4. Finally, select "Yes" to allow Trusted Services to bypass Key Vault's firewall.
+3. Finally, select "Yes" to allow Trusted Services to bypass Key Vault's firewall.
:::image type="content" source="./media/application-gateway-key-vault-common-errors/key-vault-restricted-access.png" alt-text="Key Vault Has Restricted Access."::: </br></br>
application-gateway Certificates For Backend Authentication https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/application-gateway/certificates-for-backend-authentication.md
Previously updated : 06/17/2020 Last updated : 07/30/2021
automanage Automanage Hotpatch https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automanage/automanage-hotpatch.md
Last updated 02/22/2021-+
az provider register --namespace Microsoft.Compute
During the preview, [Automatic VM Guest Patching](../virtual-machines/automatic-vm-guest-patching.md) is enabled automatically for all VMs created with a supported _Windows Server Azure Edition_ image. With automatic VM guest patching enabled: * Patches classified as Critical or Security are automatically downloaded and applied on the VM. * Patches are applied during off-peak hours in the VM's time zone.
-* Patch orchestration is managed by Azure and patches are applied following [availability-first principles](../virtual-machines/automatic-vm-guest-patching.md#availability-first-patching).
+* Patch orchestration is managed by Azure and patches are applied following [availability-first principles](../virtual-machines/automatic-vm-guest-patching.md#availability-first-updates).
* Virtual machine health, as determined through platform health signals, is monitored to detect patching failures. ### How does automatic VM guest patching work?
With Hotpatch enabled on supported _Windows Server Azure Edition_ VMs, most mont
The VM is assessed automatically every few days and multiple times within any 30-day period to determine the applicable patches for that VM. This automatic assessment ensures that any missing patches are discovered at the earliest possible opportunity.
-Patches are installed within 30 days of the monthly patch releases, following [availability-first principles](../virtual-machines/automatic-vm-guest-patching.md#availability-first-patching). Patches are installed only during off-peak hours for the VM, depending on the time zone of the VM. The VM must be running during the off-peak hours for patches to be automatically installed. If a VM is powered off during a periodic assessment, the VM will be assessed and applicable patches will be installed automatically during the next periodic assessment when the VM is powered on. The next periodic assessment usually happens within a few days.
+Patches are installed within 30 days of the monthly patch releases, following [availability-first principles](../virtual-machines/automatic-vm-guest-patching.md#availability-first-updates). Patches are installed only during off-peak hours for the VM, depending on the time zone of the VM. The VM must be running during the off-peak hours for patches to be automatically installed. If a VM is powered off during a periodic assessment, the VM will be assessed and applicable patches will be installed automatically during the next periodic assessment when the VM is powered on. The next periodic assessment usually happens within a few days.
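+
+If you don't want to wait for the next periodic assessment, you can trigger an on-demand assessment yourself. A minimal sketch with the Azure CLI, using placeholder names:
+
+```azurecli
+# Run an on-demand patch assessment against a VM to list applicable updates.
+az vm assess-patches --resource-group <resource-group> --name <vm-name>
+```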
Definition updates and other patches not classified as Critical or Security won't be installed through automatic VM guest patching.
azure-arc Azure Data Studio Dashboards https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/data/azure-data-studio-dashboards.md
Previously updated : 09/22/2020 Last updated : 07/30/2021
[Azure Data Studio](/sql/azure-data-studio/what-is) provides an experience similar to the Azure portal for viewing information about your Azure Arc resources. These views are called **dashboards** and have a layout and options similar to what you could see about a given resource in the Azure portal, but give you the flexibility of seeing that information locally in your environment in cases where you don't have a connection available to Azure. ## Connecting to a data controller
azure-arc Backup Restore Postgresql Hyperscale https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/data/backup-restore-postgresql-hyperscale.md
Previously updated : 06/02/2021 Last updated : 07/30/2021 # Back up and restore Azure Arc-enabled PostgreSQL Hyperscale server groups
+> [!IMPORTANT]
+> Backup and restore of Azure Arc-enabled PostgreSQL Hyperscale server is not supported in the current preview release.
+ [!INCLUDE [azure-arc-common-prerequisites](../../../includes/azure-arc-common-prerequisites.md)] [!INCLUDE [azure-arc-data-preview](../../../includes/azure-arc-data-preview.md)]
azure-arc Change Postgresql Port https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/data/change-postgresql-port.md
Previously updated : 11/02/2020 Last updated : 07/30/2021
# Change the port on which the server group is listening Changing the port is a standard edit operation of the server group. In order to change the port, run the following command:
-```console
- azdata arc postgres server edit -n <server group name> --port <desired port number>
+```azurecli
+ az postgres arc-server edit -n <server group name> --port <desired port number> --k8s-namespace <namespace> --use-k8s
``` For example, let's assume the name of your server group is _postgres01_ and you would like it to listen on port _866_. You would run the following command:
-```console
- azdata arc postgres server edit -n postgres01 --port 866
+```azurecli
+ az postgres arc-server edit -n postgres01 --port 866 --k8s-namespace <namespace> --use-k8s
``` ## Verify that the port was changed To verify that the port was changed, run the following command to show the configuration of your server group:
-```console
-azdata arc postgres server show -n <server group name>
+```azurecli
+az postgres arc-server show -n <server group name> --k8s-namespace <namespace> --use-k8s
``` In the output of that command, look at the port number displayed for the item "port" in the "service" section of the specifications of your server group. Alternatively, you can verify in the item externalEndpoint of the status section of the specifications of your server group that the IP address is followed by the port number you configured. As an illustration, if we continue the example above, you would run the command:
-```console
-azdata arc postgres server show -n postgres01
+```azurecli
+az postgres arc-server show -n postgres01 --k8s-namespace <namespace> --use-k8s
``` and you would see port 866 referred to here:
azure-arc Configure Managed Instance https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/data/configure-managed-instance.md
Previously updated : 07/13/2021 Last updated : 07/30/2021
This article explains how to configure Azure Arc-enabled SQL managed instance. ## Configure resources
az sql mi-arc edit --help
The following example sets the cpu core and memory requests and limits. ```azurecli
-az sql mi-arc edit --cores-limit 4 --cores-request 2 --memory-limit 4Gi --memory-request 2Gi -n <NAME_OF_SQL_MI>
+az sql mi-arc edit --cores-limit 4 --cores-request 2 --memory-limit 4Gi --memory-request 2Gi -n <NAME_OF_SQL_MI> --k8s-namespace <namespace> --use-k8s
``` To view the changes made to the SQL managed instance, you can use the following commands to view the configuration yaml file: ```azurecli
-az sql mi-arc show -n <NAME_OF_SQL_MI>
+az sql mi-arc show -n <NAME_OF_SQL_MI> --k8s-namespace <namespace> --use-k8s
``` ## Configure Server options
To change any of these settings, follow these steps:
traceflag0 = 1204 ```
-1. Copy `mssql-custom.conf` file to `/var/opt/mssql` in the `mssql-miaa` container in the `master-0` pod. Replace `<namespaceName>` with the big data cluster name.
+1. Copy `mssql-custom.conf` file to `/var/opt/mssql` in the `mssql-miaa` container in the `master-0` pod. Replace `<namespaceName>` with the Arc namespace name.
```bash kubectl cp mssql-custom.conf master-0:/var/opt/mssql/mssql-custom.conf -c mssql-server -n <namespaceName> ```
-1. Restart SQL Server instance. Replace `<namespaceName>` with the big data cluster name.
+1. Restart SQL Server instance. Replace `<namespaceName>` with the Arc namespace name.
```bash kubectl exec -it master-0 -c mssql-server -n <namespaceName> -- /bin/bash
To change any of these settings, follow these steps:
**Known limitations** - The steps above require Kubernetes cluster admin permissions-- This is subject to change throughout preview
azure-arc Configure Security Postgres Hyperscale https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/data/configure-security-postgres-hyperscale.md
Previously updated : 06/02/2021 Last updated : 07/30/2021
You can use the standard Postgres way to create users or roles. However, if you
### Change the password of the _postgres_ administrative user Azure Arc-enabled PostgreSQL Hyperscale comes with the standard Postgres administrative user _postgres_ for which you set the password when you create your server group. The general format of the command to change its password is:
-```console
-azdata arc postgres server edit --name <server group name> --admin-password
+```azurecli
+az postgres arc-server edit --name <server group name> --admin-password --k8s-namespace <namespace> --use-k8s
``` Where `--admin-password` is a boolean that relates to the presence of a value in the AZDATA_PASSWORD **session** environment variable.
If the AZDATA_PASSWORD **session** environment variable exists but has not value
1. Delete the AZDATA_PASSWORD **session** environment variable or delete its value 2. Run the command:
- ```console
- azdata arc postgres server edit --name <server group name> --admin-password
+
+ ```azurecli
+ az postgres arc-server edit --name <server group name> --admin-password --k8s-namespace <namespace> --use-k8s
``` For example
- ```console
- azdata arc postgres server edit -n postgres01 --admin-password
+ ```azurecli
+ az postgres arc-server edit -n postgres01 --admin-password --k8s-namespace <namespace> --use-k8s
``` You will be prompted to enter the password and to confirm it: ```console
If the AZDATA_PASSWORD **session** environment variable exists but has not value
#### Change the password of the postgres administrative user using the AZDATA_PASSWORD **session** environment variable: 1. Set the value of the AZDATA_PASSWORD **session** environment variable to what you want to password to be. 2. Run the command:
- ```console
- azdata arc postgres server edit --name <server group name> --admin-password
+ ```azurecli
+ az postgres arc-server edit --name <server group name> --admin-password --k8s-namespace <namespace> --use-k8s
``` For example
- ```console
- azdata arc postgres server edit -n postgres01 --admin-password
+ ```azurecli
+ az postgres arc-server edit -n postgres01 --admin-password --k8s-namespace <namespace> --use-k8s
``` As the password is being updated, the output of the command shows:
azure-arc Configure Server Parameters Postgresql Hyperscale https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/data/configure-server-parameters-postgresql-hyperscale.md
Previously updated : 06/02/2021 Last updated : 07/30/2021
This document describes the steps to set the database engine settings of your Po
The general format of the command to configure the database engine settings is:
-```console
-azdata arc postgres server edit -n <server group name>, [{--engine-settings, -e}] [{--replace-engine-settings, --re}] {'<parameter name>=<parameter value>, ...'}
+```azurecli
+az postgres arc-server edit -n <server group name> [{--engine-settings, -e}] [{--replace-settings, --re}] {'<parameter name>=<parameter value>, ...'} --k8s-namespace <namespace> --use-k8s
``` ## Show current custom values ### With [!INCLUDE [azure-data-cli-azdata](../../../includes/azure-data-cli-azdata.md)] command
-```console
-azdata arc postgres server show -n <server group name>
+```azurecli
+az postgres arc-server show -n <server group name> --k8s-namespace <namespace> --use-k8s
``` For example:
-```console
-azdata arc postgres server show -n postgres01
+```azurecli
+az postgres arc-server show -n postgres01 --k8s-namespace <namespace> --use-k8s
``` This command returns the spec of the server group in which you would see the parameters you set. If there is no engine\settings section, it means that all parameters are running on their default value:
Follow the below steps.
Run:
- ```console
- azdata arc postgres server show -n <server group name>
+ ```azurecli
+ az postgres arc-server show -n <server group name> --k8s-namespace <namespace> --use-k8s
``` For example:
- ```console
- azdata arc postgres server show -n postgres01
+ ```azurecli
+ az postgres arc-server show -n postgres01 --k8s-namespace <namespace> --use-k8s
``` This command returns the spec of the server group in which you would see the parameters you set. If there is no engine\settings section, it means that all parameters are running on their default value:
The below commands set the parameters of the Coordinator node and the Worker nod
### Set a single parameter
-```console
-azdata arc server edit -n <server group name> -e <parameter name>=<parameter value>
+```azurecli
+az postgres arc-server edit -n <server group name> --engine-settings <parameter name>=<parameter value> --k8s-namespace <namespace> --use-k8s
``` For example:
-```console
-azdata arc postgres server edit -n postgres01 -e shared_buffers=8MB
+```azurecli
+az postgres arc-server edit -n postgres01 --engine-settings shared_buffers=8MB --k8s-namespace <namespace> --use-k8s
``` ### Set multiple parameters with a single command
-```console
-azdata arc postgres server edit -n <server group name> -e '<parameter name>=<parameter value>, <parameter name>=<parameter value>,...'
+```azurecli
+az postgres arc-server edit -n <server group name> --engine-settings '<parameter name>=<parameter value>, <parameter name>=<parameter value>, ...' --k8s-namespace <namespace> --use-k8s
``` For example:
-```console
-azdata arc postgres server edit -n postgres01 -e 'shared_buffers=8MB, max_connections=50'
+```azurecli
+az postgres arc-server edit -n postgres01 --engine-settings 'shared_buffers=8MB, max_connections=50' --k8s-namespace <namespace> --use-k8s
``` ### Reset a parameter to its default value
To reset a parameter to its default value, set it without indicating a value.
For example:
-```console
-azdata arc postgres server edit -n postgres01 -e shared_buffers=
+```azurecli
+az postgres arc-server edit -n postgres01 --k8s-namespace <namespace> --use-k8s --engine-settings shared_buffers=
``` ### Reset all parameters to their default values
-```console
-azdata arc postgres server edit -n <server group name> -e '' -re
+```azurecli
+az postgres arc-server edit -n <server group name> --engine-settings '' --re --k8s-namespace <namespace> --use-k8s
``` For example:
-```console
-azdata arc postgres server edit -n postgres01 -e '' -re
+```azurecli
+az postgres arc-server edit -n postgres01 --engine-settings '' --re --k8s-namespace <namespace> --use-k8s
``` ## Special considerations ### Set a parameter whose value contains a comma, space, or special character
-```console
-azdata arc postgres server edit -n <server group name> -e '<parameter name>="<parameter value>"'
+```azurecli
+az postgres arc-server edit -n <server group name> --engine-settings '<parameter name>="<parameter value>"' --k8s-namespace <namespace> --use-k8s
``` For example:
-```console
-azdata arc postgres server edit -n postgres01 -e 'custom_variable_classes = "plpgsql,plperl"'
+```azurecli
+az postgres arc-server edit -n postgres01 --engine-settings 'custom_variable_classes = "plpgsql,plperl"' --k8s-namespace <namespace> --use-k8s
``` ### Pass an environment variable in a parameter value
The environment variable should be wrapped inside "''" so that it doesn't get re
For example:
-```console
-azdata arc postgres server edit -n postgres01 -e 'search_path = "$user"'
+```azurecli
+az postgres arc-server edit -n postgres01 --engine-settings 'search_path = "$user"' --k8s-namespace <namespace> --use-k8s
``` ## Next steps
azure-arc Connect Managed Instance https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/data/connect-managed-instance.md
Previously updated : 07/13/2021 Last updated : 07/30/2021 # Connect to Azure Arc-enabled SQL Managed Instance This article explains how you can connect to your Azure Arc-enabled SQL Managed Instance. ## View Azure Arc-enabled SQL Managed Instances
az network nsg list -g azurearcvm-rg --query "[].{NSGName:name}" -o table
Once you have the name of the NSG, you can add a firewall rule using the following command. The example values here create an NSG rule for port 30913 and allow connection from **any** source IP address. This is not a security best practice! You can lock things down better by specifying a `--source-address-prefixes` value that is specific to your client IP address or an IP address range that covers your team's or organization's IP addresses.
-Replace the value of the `--destination-port-ranges` parameter below with the port number you got from the `azdata sql instance list`F command above.
+Replace the value of the `--destination-port-ranges` parameter below with the port number you got from the `az sql mi-arc list` command above.
```azurecli az network nsg rule create -n db_port --destination-port-ranges 30913 --source-address-prefixes '*' --nsg-name azurearcvmNSG --priority 500 -g azurearcvm-rg --access Allow --description 'Allow port through for db access' --destination-address-prefixes '*' --direction Inbound --protocol Tcp --source-port-ranges '*'
azure-arc Connectivity https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/data/connectivity.md
Previously updated : 07/13/2021 Last updated : 07/30/2021 # Connectivity modes and requirements ## Connectivity modes
Some Azure-attached services are only available when they can be directly reache
||**Indirectly connected**|**Directly connected**|**Never connected**| ||||| |**Description**|Indirectly connected mode offers most of the management services locally in your environment with no direct connection to Azure. A minimal amount of data must be sent to Azure for inventory and billing purposes _only_. It is exported to a file and uploaded to Azure at least once per month. No direct or continuous connection to Azure is required. Some features and services which require a connection to Azure will not be available.|Directly connected mode offers all of the available services when a direct connection can be established with Azure. Connections are always initiated _from_ your environment to Azure and use standard ports and protocols such as HTTPS/443.|No data can be sent to or from Azure in any way.|
-|**Current availability**| Available in preview.|Available in preview.|Not currently supported.|
+|**Current availability**|Available.|Available in preview.|Not currently supported.|
|**Typical use cases**|On-premises data centers that don't allow connectivity in or out of the data region of the data center due to business or regulatory compliance policies or out of concerns of external attacks or data exfiltration. Typical examples: Financial institutions, health care, government. <br/><br/>Edge site locations where the edge site doesn't typically have connectivity to the Internet. Typical examples: oil/gas or military field applications. <br/><br/>Edge site locations that have intermittent connectivity with long periods of outages. Typical examples: stadiums, cruise ships. | Organizations who are using public clouds. Typical examples: Azure, AWS or Google Cloud.<br/><br/>Edge site locations where Internet connectivity is typically present and allowed. Typical examples: retail stores, manufacturing.<br/><br/>Corporate data centers with more permissive policies for connectivity to/from their data region of the datacenter to the Internet. Typical examples: Non-regulated businesses, small/medium sized businesses|Truly "air-gapped" environments where no data under any circumstances can come or go from the data environment. Typical examples: top secret government facilities.| |**How data is sent to Azure**|There are three options for how the billing and inventory data can be sent to Azure:<br><br> 1) Data is exported out of the data region by an automated process that has connectivity to both the secure data region and Azure.<br><br>2) Data is exported out of the data region by an automated process within the data region, automatically copied to a less secure region, and an automated process in the less secure region uploads the data to Azure.<br><br>3) Data is manually exported by a user within the secure region, manually brought out of the secure region, and manually uploaded to Azure. <br><br>The first two options are an automated continuous process that can be scheduled to run frequently so there is minimal delay in the transfer of data to Azure subject only to the available connectivity to Azure.|Data is automatically and continuously sent to Azure.|Data is never sent to Azure.|
Some Azure-attached services are only available when they can be directly reache
|**Self-service provisioning**|Supported<br/>Creation can be done through Azure Data Studio, the appropriate CLI, or Kubernetes native tools (helm, kubectl, oc, etc.), or using Azure Arc-enabled Kubernetes GitOps provisioning.|Supported<br/>In addition to the indirectly connected mode creation options, you can also create through the Azure portal, Azure Resource Manager APIs, the Azure CLI, or ARM templates. **Pending availability of directly connected mode** |**Elastic scalability**|Supported|Supported<br/>**Pending availability of directly connected mode**| |**Billing**|Supported<br/>Billing data is periodically exported out and sent to Azure.|Supported<br/>Billing data is automatically and continuously sent to Azure and reflected in near real time. **Pending availability of directly connected mode**|
-|**Inventory management**|Supported<br/>Inventory data is periodically exported out and sent to Azure.<br/><br/>Use client tools like Azure Data Studio, Azure Data CLI, or `kubectl` to view and manage inventory locally.|Supported<br/>Inventory data is automatically and continuously sent to Azure and reflected in near real time. As such, you can manage inventory directly from the Azure portal. **Pending availability of directly connected mode**|
+|**Inventory management**|Supported<br/>Inventory data is periodically exported out and sent to Azure.<br/><br/>Use client tools like Azure Data Studio, Azure Data CLI, or `kubectl` to view and manage inventory locally.|Supported<br/>Inventory data is automatically and continuously sent to Azure and reflected in near real time. As such, you can manage inventory directly from the Azure portal.|
|**Automatic upgrades and patching**|Supported<br/>The data controller must either have direct access to the Microsoft Container Registry (MCR) or the container images need to be pulled from MCR and pushed to a local, private container registry that the data controller has access to.|Supported<br/>**Pending availability of directly connected mode**|
-|**Automatic backup and restore**|Supported<br/>Automatic local backup and restore.|Supported<br/>In addition to automated local backup and restore, you can _optionally_ send backups to Azure Backup for long-term, off-site retention. **Pending availability of directly connected mode**|
+|**Automatic backup and restore**|Supported<br/>Automatic local backup and restore.|Supported<br/>In addition to automated local backup and restore, you can _optionally_ send backups to Azure Backup for long-term, off-site retention. **Pending availability in directly connected mode**|
|**Monitoring**|Supported<br/>Local monitoring using Grafana and Kibana dashboards.|Supported<br/>In addition to local monitoring dashboards, you can _optionally_ send monitoring data and logs to Azure Monitor for at-scale monitoring of multiple sites in one place. **Pending availability of directly connected mode**|
-|**Authentication**|Use local username/password for data controller and dashboard authentication. Use SQL and Postgres logins or Active Directory for connectivity to database instances. Use K8s authentication providers for authentication to the Kubernetes API.|In addition to or instead of the authentication methods for the indirectly connected mode, you can _optionally_ use Azure Active Directory. **Pending availability of directly connected mode**|
-|**Role-based access control (RBAC)**|Use Kubernetes RBAC on Kubernetes API. Use SQL and Postgres RBAC for database instances.|You can optionally integrate with Azure Active Directory and Azure RBAC. **Pending availability of directly connected mode**|
+|**Authentication**|Use local username/password for data controller and dashboard authentication. Use SQL and Postgres logins or Active Directory (AD is not currently supported; it will be available in preview soon) for connectivity to database instances. Use K8s authentication providers for authentication to the Kubernetes API.|In addition to or instead of the authentication methods for the indirectly connected mode, you can _optionally_ use Azure Active Directory. **Pending availability in directly connected mode**|
+|**Role-based access control (RBAC)**|Use Kubernetes RBAC on Kubernetes API. Use SQL and Postgres RBAC for database instances.|You can use Azure Active Directory and Azure RBAC.|
|**Azure Defender**|Not supported|Planned for future| ## Connectivity requirements **Some functionality requires a connection to Azure.**
-**All communication with Azure is always initiated from your environment.** This is true even for operations, which are initiated by a user in the Azure portal. In that case, there is effectively a task, which is queued up in Azure. An agent in your environment initiates the communication with Azure to see what tasks are in the queue, runs the tasks, and reports back the status/completion/fail to Azure.
+**All communication with Azure is always initiated from your environment.** This is true even for operations that are initiated by a user in the Azure portal. In that case, there is effectively a task that is queued up in Azure. An agent in your environment initiates the communication with Azure to see what tasks are in the queue, runs the tasks, and reports the status/completion/failure back to Azure.
|**Type of Data**|**Direction**|**Required/Optional**|**Additional Costs**|**Mode Required**|**Notes**| |||||||
-|**Container images**|Microsoft Container Registry -> Customer|Required|No|Indirect or direct|Container images are the method for distributing the software. In an environment which can connect to the Microsoft Container Registry (MCR) over the Internet, the container images can be pulled directly from MCR. In the event that the deployment environment doesnΓÇÖt have direct connectivity, you can pull the images from MCR and push them to a private container registry in the deployment environment. At creation time, you can configure the creation process to pull from the private container registry instead of MCR. This also applies to automated updates.|
+|**Container images**|Microsoft Container Registry -> Customer|Required|No|Indirect or direct|Container images are the method for distributing the software. In an environment which can connect to the Microsoft Container Registry (MCR) over the Internet, the container images can be pulled directly from MCR. In the event that the deployment environment doesn't have direct connectivity, you can pull the images from MCR and push them to a private container registry in the deployment environment. At creation time, you can configure the creation process to pull from the private container registry instead of MCR. This will also apply to automated updates.|
|**Resource inventory**|Customer environment -> Azure|Required|No|Indirect or direct|An inventory of data controllers, database instances (PostgreSQL and SQL) is kept in Azure for billing purposes and also for purposes of creating an inventory of all data controllers and database instances in one place which is especially useful if you have more than one environment with Azure Arc data services. As instances are provisioned, deprovisioned, scaled out/in, scaled up/down the inventory is updated in Azure.|
-|**Billing telemetry data**|Customer environment -> Azure|Required|No|Indirect or direct|Utilization of database instances must be sent to Azure for billing purposes. There is no cost for Azure Arc-enabled data services during the preview period.|
+|**Billing telemetry data**|Customer environment -> Azure|Required|No|Indirect or direct|Utilization of database instances must be sent to Azure for billing purposes. |
|**Monitoring data and logs**|Customer environment -> Azure|Optional|Maybe depending on data volume (see [Azure Monitor pricing](https://azure.microsoft.com/en-us/pricing/details/monitor/))|Indirect or direct|You may want to send the locally collected monitoring data and logs to Azure Monitor for aggregating data across multiple environments into one place and also to use Azure Monitor services like alerts, using the data in Azure Machine Learning, etc.|
-|**Azure Role-based Access Control (Azure RBAC)**|Customer environment -> Azure -> Customer Environment|Optional|No|Direct only|If you want to use Azure RBAC, then connectivity must be established with Azure at all times. If you donΓÇÖt want to use Azure RBAC then local Kubernetes RBAC can be used. **Pending availability of directly connected mode**|
-|**Azure Active Directory (AD)**|Customer environment -> Azure -> Customer environment|Optional|Maybe, but you may already be paying for Azure AD|Direct only|If you want to use Azure AD for authentication, then connectivity must be established with Azure at all times. If you donΓÇÖt want to use Azure AD for authentication, you can us Active Directory Federation Services (ADFS) over Active Directory. **Pending availability of directly connected mode**|
-|**Backup and restore**|Customer environment -> Customer environment|Required|No|Direct or indirect|The backup and restore service can be configured to point to local storage classes. |
-|**Azure backup - long term retention**| Customer environment -> Azure | Optional| Yes for Azure storage | Direct only |You may want to send backups that are taken locally to Azure Backup for long-term, off-site retention of backups and bring them back to the local environment for restore. **Pending availability of directly connected mode**|
-|**Azure Defender security services**|Customer environment -> Azure -> Customer environment|Optional|Yes|Direct only|**Pending availability of directly connected mode**|
-|**Provisioning and configuration changes from Azure portal**|Customer environment -> Azure -> Customer environment|Optional|No|Direct only|Provisioning and configuration changes can be done locally using Azure Data Studio or the appropriate CLI. In directly connected mode, you will also be able to provision and make configuration changes from the Azure portal. **Pending availability of directly connected mode**|
+|**Azure Role-based Access Control (Azure RBAC)**|Customer environment -> Azure -> Customer Environment|Optional|No|Direct only|If you want to use Azure RBAC, then connectivity must be established with Azure at all times. If you don't want to use Azure RBAC, then local Kubernetes RBAC can be used.|
+|**Azure Active Directory (AAD) (Future)**|Customer environment -> Azure -> Customer environment|Optional|Maybe, but you may already be paying for Azure AD|Direct only|If you want to use Azure AD for authentication, then connectivity must be established with Azure at all times. If you don't want to use Azure AD for authentication, you can use Active Directory Federation Services (ADFS) over Active Directory. **Pending availability in directly connected mode**|
+|**Backup and restore**|Customer environment -> Customer environment|Required|No|Direct or indirect|The backup and restore service can be configured to point to local storage classes. **Pending availability in directly connected mode**|
+|**Azure backup - long term retention (Future)**| Customer environment -> Azure | Optional| Yes for Azure storage | Direct only |You may want to send backups that are taken locally to Azure Backup for long-term, off-site retention of backups and bring them back to the local environment for restore. **Pending availability in directly connected mode**|
+|**Azure Defender security services (Future)**|Customer environment -> Azure -> Customer environment|Optional|Yes|Direct only|**Pending availability in directly connected mode**|
+|**Provisioning and configuration changes from Azure portal**|Customer environment -> Azure -> Customer environment|Optional|No|Direct only|Provisioning and configuration changes can be done locally using Azure Data Studio or the appropriate CLI. In directly connected mode, you will also be able to provision and make configuration changes from the Azure portal.|
## Details on internet addresses, ports, encryption, and proxy server support
-Currently, in the preview phase, only the indirectly connected mode is supported. In this mode, there are only three connections required to services available on the Internet. These connections include:
+Currently, only the indirectly connected mode is generally available. In this mode, there are only three connections required to services available on the Internet. These connections include:
- [Microsoft Container Registry (MCR)](#microsoft-container-registry-mcr) - [Azure Resource Manager APIs](#azure-resource-manager-apis)
Azure Active Directory
### Azure monitor APIs
-Azure Data Studio, and Azure CLI connect to the Azure Resource Manager APIs to send and retrieve data to and from Azure for some features.
+Azure Data Studio and Azure CLI connect to the Azure Resource Manager APIs to send and retrieve data to and from Azure for some features.
#### Connection source
-A computer running Azure CLI that is uploading monitoring metrics or logs to Azure Monitor.
+A computer running Azure CLI that is uploading monitoring metrics or logs to Azure Monitor.
#### Connection target
Yes
Azure Active Directory > [!NOTE]
-> For now, all browser HTTPS/443 connections to the Grafana and Kibana dashboards to the data controller API are SSL encrypted using self-signed certificates. A feature will be available in the future that will allow you to provide your own certificates for encryption of these SSL connections.
+> For now, all browser HTTPS/443 connections to the data controller for running the command `az arcdata dc export` and Grafana and Kibana dashboards are SSL encrypted using self-signed certificates. A feature will be available in the future that will allow you to provide your own certificates for encryption of these SSL connections.
Connectivity from Azure Data Studio to the Kubernetes API server uses the Kubernetes authentication and encryption that you have established. Each user that is using Azure Data Studio or CLI must have an authenticated connection to the Kubernetes API to perform many of the actions related to Azure Arc-enabled data services.
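As a quick sanity check before running these tools, you can confirm that your kubeconfig points to the intended cluster and that the connection is authenticated; the following commands are the same ones used later in this article:
```console
# Verify the cluster connection and the current kubeconfig context.
kubectl cluster-info
kubectl config current-context
```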
azure-arc Create Custom Configuration Template https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/data/create-custom-configuration-template.md
Previously updated : 07/13/2021 Last updated : 07/30/2021 # Create custom configuration templates This article explains how to create a custom configuration template for Azure Arc-enabled data controller. -
-One of required parameters during deployment of a data controller, whether in direct mode or indirect mode, is the `--profile-name` parameter. Currently, the available list of built-in profiles can be found via running the query:
+One of the required parameters during deployment of a data controller in indirectly connected mode is the `az arcdata dc create --profile-name` parameter. Currently, the available list of built-in profiles can be found by running:
```azurecli
-azdata arc dc config list
+az arcdata dc config list
``` These profiles are template JSON files that have various settings for the Azure Arc-enabled data controller such as Docker registry and repository settings, storage classes for data and logs, storage size for data and logs, security, and service type, and they can be customized to your environment.
+However, in some cases, you may want to customize those configuration templates to meet your requirements and pass the customized configuration template using the `--path` parameter to the `az arcdata dc create` command rather than pass a preconfigured configuration template using the `--profile-name` parameter.
+ ## Create custom.json file
-Run `azdata arc dc config init` to initiate a control.json file with pre-defined settings based on your distribution of Kubernetes cluster.
-For instance, a template control.json file for a Kubernetes cluster based on upstream kubeadm can be created as follows:
+Run `az arcdata dc config init` to initialize a control.json file with pre-defined settings based on your Kubernetes distribution.
+For instance, a template control.json file for a Kubernetes cluster based on the `azure-arc-kubeadm` template in a subdirectory called `custom` in the current working directory can be created as follows:
```azurecli
-azdata arc dc config init --source azure-arc-kubeadm --path custom
+az arcdata dc config init --source azure-arc-kubeadm --path custom
``` The created control.json file can be edited in any editor such as Visual Studio Code to customize the settings appropriate for your environment.
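If you prefer a scripted change over manual editing, the `az arcdata dc config replace` command shown later in this article can update individual values in the custom control.json. The JSON path and value below are illustrative only:
```azurecli
# Hedged example: set the data storage class in the custom profile without opening an editor.
# Replace the value with a storage class that exists in your cluster.
az arcdata dc config replace --path ./custom/control.json --json-values "spec.storage.data.className=managed-premium"
```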
-## Use custom control.json file to deploy Azure Arc-enabled data controller using azdata CLI
+## Use custom control.json file to deploy Azure Arc-enabled data controller using Azure CLI (az)
-Once the template file is updated, the file can be applied during Azure Arc-enabled data controller create as follows:
+Once the template file is created, it can be passed to the Azure Arc-enabled data controller create command as follows:
```azurecli
-azdata arc dc create --path ./custom --namespace arc --name arc --subscription <subscription id> --resource-group <resource group name> --location <location> --connectivity-mode indirect
+az arcdata dc create --path ./custom --name arc --subscription <subscription id> --resource-group <resource group name> --location <location> --connectivity-mode indirect --k8s-namespace <namespace> --use-k8s
#Example:
-#azdata arc dc create --path ./custom --namespace arc --name arc --subscription <subscription ID> --resource-group my-resource-group --location eastus --connectivity-mode indirect
+#az arcdata dc create --path ./custom --name arc --subscription <subscription ID> --resource-group my-resource-group --location eastus --connectivity-mode indirect --k8s-namespace <namespace> --use-k8s
``` ## Use custom control.json file for deploying Azure Arc data controller using Azure portal
azure-arc Create Data Controller Direct Azure Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/data/create-data-controller-direct-azure-portal.md
Previously updated : 07/13/2021 Last updated : 07/30/2021 # Create Azure Arc data controller from Azure portal - Direct connectivity mode This article describes how to deploy the Azure Arc data controller in direct connect mode during the current preview of this feature.
Azure Arc data controller create flow can be launched from the Azure portal in o
- From the search bar in Azure portal, search for "Azure Arc data controllers", and select "+ Create" - From the Overview page of your Azure Arc-enabled Kubernetes cluster,
- - Select "Extensions (preview)" under Settings.
+ - Select "Extensions " under Settings.
- Select "Add" from the Extensions overview page and then select "Azure Arc data controller" - Select Create from the Azure Arc data controller marketplace gallery
azure-arc Create Data Controller Direct Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/data/create-data-controller-direct-cli.md
Previously updated : 07/13/2021 Last updated : 07/30/2021
This article describes how to create the Azure Arc data controller in **direct** connectivity mode using CLI, during the current preview of this feature. ## Complete prerequisites
You can verify if the Arc enabled data services extension is created either fro
#### Azure portal 1. Login to the Azure portal and browse to the resource group where the Kubernetes connected cluster resource is located. 1. Select the Arc enabled kubernetes cluster (Type = "Kubernetes - Azure Arc") where the extension was deployed.
-1. In the navigation on the left side, under **Settings**, select "Extensions (preview)".
+1. In the navigation on the left side, under **Settings**, select "Extensions".
1. You should see the extension that was just created earlier in an "Installed" state. :::image type="content" source="media/deploy-data-controller-direct-mode-prerequisites/dc-extensions-dashboard.png" alt-text="Extensions dashboard":::
azure-arc Create Data Controller Indirect Azure Data Studio https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/data/create-data-controller-indirect-azure-data-studio.md
Previously updated : 07/13/2021 Last updated : 07/30/2021
You can create a data controller using Azure Data Studio through the deployment wizard and notebooks. ## Prerequisites
azure-arc Create Data Controller Indirect Azure Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/data/create-data-controller-indirect-azure-portal.md
Previously updated : 07/13/2021 Last updated : 07/30/2021 # Create Azure Arc data controller from Azure portal - Indirect connectivity mode ## Introduction
Many of the creation experiences for Azure Arc start in the Azure portal even th
When you use the indirect connect mode of Azure Arc-enabled data services, you can use the Azure portal to generate a notebook for you that can then be downloaded and run in Azure Data Studio against your Kubernetes cluster.
+ [!INCLUDE [use-insider-azure-data-studio](includes/use-insider-azure-data-studio.md)]
+ When you use direct connect mode, you can provision the data controller directly from the Azure portal. You can read more about [connectivity modes](connectivity.md). ## Use the Azure portal to create an Azure Arc data controller
azure-arc Create Data Controller Indirect Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/data/create-data-controller-indirect-cli.md
Previously updated : 07/13/2021 Last updated : 07/30/2021 # Create Azure Arc data controller using the CLI ## Prerequisites Review the topic [Create the Azure Arc data controller](create-data-controller.md) for overview information.
-To create the Azure Arc data Controller using the [!INCLUDE [azure-data-cli-azdata](../../../includes/azure-data-cli-azdata.md)] you will need to have the [!INCLUDE [azure-data-cli-azdata](../../../includes/azure-data-cli-azdata.md)] installed.
+### Install tools
- [Install the [!INCLUDE [azure-data-cli-azdata](../../../includes/azure-data-cli-azdata.md)]](install-client-tools.md)
+To create the data controller using the CLI, you will need to install the `arcdata` extension for Azure (az) CLI.
+
+[Install the [!INCLUDE [azure-data-cli-azdata](../../../includes/azure-data-cli-azdata.md)]](install-client-tools.md)
Regardless of which target platform you choose, you will need to set the following environment variables prior to the creation for the data controller administrator user. You can provide these credentials to other people that need to have administrator access to the data controller as needed.
-**AZDATA_USERNAME** - A username of your choice for the data controller administrator user. Example: `arcadmin`
+### Set environment variables
+
+**AZDATA_USERNAME** - A username of your choice for the Kibana/Grafana administrator user. Example: `arcadmin`
-**AZDATA_PASSWORD** - A password of your choice for the data controller administrator user. The password must be at least eight characters long and contain characters from three of the following four sets: uppercase letters, lowercase letters, numbers, and symbols.
+**AZDATA_PASSWORD** - A password of your choice for the Kibana/Grafana administrator user. The password must be at least eight characters long and contain characters from three of the following four sets: uppercase letters, lowercase letters, numbers, and symbols.
-### Linux or macOS
+#### Linux or macOS
```console export AZDATA_USERNAME="<your username of choice>" export AZDATA_PASSWORD="<your password of choice>" ```
-### Windows PowerShell
+#### Windows PowerShell
```console $ENV:AZDATA_USERNAME="<your username of choice>"
You will need to connect and authenticate to a Kubernetes cluster and have an ex
You can check to see that you have a current Kubernetes connection and confirm your current context with the following commands. ```console
-kubectl get namespace
+kubectl cluster-info
kubectl config current-context ``` ## Create the Azure Arc data controller > [!NOTE]
-> You can use a different value for the `--namespace` parameter of the `azdata arc dc create` command in the examples below, but be sure to use that namespace name for the `--namespace parameter` in all other commands below.
+> You can use a different value for the `--k8s-namespace` parameter of the `az arcdata dc create` command in the examples below, but be sure to use that namespace name for the `--k8s-namespace` parameter in all other commands below.
- [Create Azure Arc data controller using the CLI](#create-azure-arc-data-controller-using-the-cli) - [Prerequisites](#prerequisites)
kubectl config current-context
- [Windows PowerShell](#windows-powershell) - [Create the Azure Arc data controller](#create-the-azure-arc-data-controller) - [Create on Azure Kubernetes Service (AKS)](#create-on-azure-kubernetes-service-aks)
- - [Create on AKS engine on Azure Stack Hub](#create-on-aks-engine-on-azure-stack-hub)
- [Create on AKS on Azure Stack HCI](#create-on-aks-on-azure-stack-hci) - [Create on Azure Red Hat OpenShift (ARO)](#create-on-azure-red-hat-openshift-aro) - [Create custom deployment profile](#create-custom-deployment-profile)
By default, the AKS deployment profile uses the `managed-premium` storage class.
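If you are not sure which storage classes exist in your cluster, you can list them first (the same command is referenced later in this article):
```console
# List the storage classes available in the connected AKS cluster.
kubectl get storageclass
```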
If you are going to use `managed-premium` as your storage class, then you can run the following command to create the data controller. Substitute the placeholders in the command with your resource group name, subscription ID, and Azure location.
-```console
-azdata arc dc create --profile-name azure-arc-aks-premium-storage --namespace arc --name arc --subscription <subscription id> --resource-group <resource group name> --location <location> --connectivity-mode indirect
+```azurecli
+az arcdata dc create --profile-name azure-arc-aks-premium-storage --k8s-namespace <namespace> --name arc --azure-subscription <subscription id> --resource-group <resource group name> --location <location> --connectivity-mode indirect --use-k8s
#Example:
-#azdata arc dc create --profile-name azure-arc-aks-premium-storage --namespace arc --name arc --subscription xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx --resource-group my-resource-group --location eastus --connectivity-mode indirect
+#az arcdata dc create --profile-name azure-arc-aks-premium-storage --k8s-namespace arc --name arc --azure-subscription xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx --resource-group my-resource-group --location eastus --connectivity-mode indirect --use-k8s
``` If you are not sure what storage class to use, you should use the `default` storage class which is supported regardless of which VM type you are using. It just won't provide the fastest performance. If you want to use the `default` storage class, then you can run this command:
-```console
-azdata arc dc create --profile-name azure-arc-aks-default-storage --namespace arc --name arc --subscription <subscription id> --resource-group <resource group name> --location <location> --connectivity-mode indirect
+```azurecli
+az arcdata dc create --profile-name azure-arc-aks-default-storage --k8s-namespace <namespace> --use-k8s --name arc --subscription <subscription id> --resource-group <resource group name> --location <location> --connectivity-mode indirect
#Example:
-#azdata arc dc create --profile-name azure-arc-aks-default-storage --namespace arc --name arc --subscription xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx --resource-group my-resource-group --location eastus --connectivity-mode indirect
+#az arcdata dc create --profile-name azure-arc-aks-default-storage --k8s-namespace arc --use-k8s --name arc --subscription xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx --resource-group my-resource-group --location eastus --connectivity-mode indirect
``` Once you have run the command, continue on to [Monitoring the creation status](#monitoring-the-creation-status).
-### Create on AKS engine on Azure Stack Hub
-
-By default, the deployment profile uses the `managed-premium` storage class. The `managed-premium` storage class will only work if you have worker VMs that were deployed using VM images that have premium disks on Azure Stack Hub.
-
-You can run the following command to create the data controller using the managed-premium storage class:
-
-```console
-azdata arc dc create --profile-name azure-arc-aks-premium-storage --namespace arc --name arc --subscription <subscription id> --resource-group <resource group name> --location <location> --connectivity-mode indirect
-
-#Example:
-#azdata arc dc create --profile-name azure-arc-aks-premium-storage --namespace arc --name arc --subscription xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx --resource-group my-resource-group --location eastus --connectivity-mode indirect
-```
+### Create on AKS on Azure Stack HCI
-If you are not sure what storage class to use, you should use the `default` storage class which is supported regardless of which VM type you are using. In Azure Stack Hub, premium disks and standard disks are backed by the same storage infrastructure. Therefore, they are expected to provide the same general performance, but with different IOPS limits.
+#### Configure storage (Azure Stack HCI with AKS-HCI)
-If you want to use the `default` storage class, then you can run this command.
+If you are using Azure Stack HCI with AKS-HCI, do one of the following, depending on your AKS on Azure Stack HCI version:
-```console
-azdata arc dc create --profile-name azure-arc-aks-default-storage --namespace arc --name arc --subscription <subscription id> --resource-group <resource group name> --location <location> --connectivity-mode indirect
+- For version 1.20 and above, create a custom storage class with `fsGroupPolicy:File` (for details, see https://kubernetes-csi.github.io/docs/support-fsgroup.html).
+- For version 1.19, use:
-#Example:
-#azdata arc dc create --profile-name azure-arc-aks-default-storage --namespace arc --name arc --subscription xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx --resource-group my-resource-group --location eastus --connectivity-mode indirect
-```
-
-Once you have run the command, continue on to [Monitoring the creation status](#monitoring-the-creation-status).
+ ```json
+ fsType: ext4
+ ```
-### Create on AKS on Azure Stack HCI
+Use this type to deploy the data controller. See the complete instructions at [Create a custom storage class for an AKS on Azure Stack HCI disk](/azure-stack/aks-hci/container-storage-interface-disks#create-a-custom-storage-class-for-an-aks-on-azure-stack-hci-disk).
By default, the deployment profile uses a storage class named `default` and the service type `LoadBalancer`. You can run the following command to create the data controller using the `default` storage class and service type `LoadBalancer`.
-```console
-azdata arc dc create --profile-name azure-arc-aks-hci --namespace arc --name arc --subscription <subscription id> --resource-group <resource group name> --location <location> --connectivity-mode indirect
+```azurecli
+az arcdata dc create --profile-name azure-arc-aks-hci --k8s-namespace <namespace> --use-k8s --name arc --subscription <subscription id> --resource-group <resource group name> --location <location> --connectivity-mode indirect
#Example:
-#azdata arc dc create --profile-name azure-arc-aks-hci --namespace arc --name arc --subscription xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx --resource-group my-resource-group --location eastus --connectivity-mode indirect
+#az arcdata dc create --profile-name azure-arc-aks-hci --k8s-namespace arc --use-k8s --name arc --subscription xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx --resource-group my-resource-group --location eastus --connectivity-mode indirect
``` Once you have run the command, continue on to [Monitoring the creation status](#monitoring-the-creation-status).
Once you have run the command, continue on to [Monitoring the creation status](#
Use the profile `azure-arc-azure-openshift` for Azure RedHat Open Shift.
-```console
-azdata arc dc config init --source azure-arc-azure-openshift --path ./custom
+```azurecli
+az arcdata dc config init --source azure-arc-azure-openshift --path ./custom
``` #### Create data controller You can run the following command to create the data controller:
-```console
-azdata arc dc create --profile-name azure-arc-azure-openshift --namespace arc --name arc --subscription <subscription id> --resource-group <resource group name> --location <location> --connectivity-mode indirect
+```azurecli
+az arcdata dc create --profile-name azure-arc-azure-openshift --k8s-namespace <namespace> --use-k8s --name arc --subscription <subscription id> --resource-group <resource group name> --location <location> --connectivity-mode indirect
#Example
-#azdata arc dc create --profile-name azure-arc-azure-openshift --namespace arc --name arc --subscription xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx --resource-group my-resource-group --location eastus --connectivity-mode indirect
+#az arcdata dc create --profile-name azure-arc-azure-openshift --k8s-namespace arc --use-k8s --name arc --subscription xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx --resource-group my-resource-group --location eastus --connectivity-mode indirect
``` Once you have run the command, continue on to [Monitoring the creation status](#monitoring-the-creation-status). ### Create on Red Hat OpenShift Container Platform (OCP)
-> [!NOTE]
-> If you are using Red Hat OpenShift Container Platform on Azure, it is recommended to use the latest available version.
- #### Determine storage class You will also need to determine which storage class to use by running the following command.
Create a new custom deployment profile file based on the `azure-arc-openshift` d
Use the profile `azure-arc-openshift` for OpenShift Container Platform.
-```console
-azdata arc dc config init --source azure-arc-openshift --path ./custom
+```azurecli
+az arcdata dc config init --source azure-arc-openshift --path ./custom
``` #### Set storage class Now, set the desired storage class by replacing `<storageclassname>` in the command below with the name of the storage class that you want to use that was determined by running the `kubectl get storageclass` command above.
-```console
-azdata arc dc config replace --path ./custom/control.json --json-values "spec.storage.data.className=<storageclassname>"
-azdata arc dc config replace --path ./custom/control.json --json-values "spec.storage.logs.className=<storageclassname>"
+```azurecli
+az arcdata dc config replace --path ./custom/control.json --json-values "spec.storage.data.className=<storageclassname>"
+az arcdata dc config replace --path ./custom/control.json --json-values "spec.storage.logs.className=<storageclassname>"
#Example:
-#azdata arc dc config replace --path ./custom/control.json --json-values "spec.storage.data.className=mystorageclass"
-#azdata arc dc config replace --path ./custom/control.json --json-values "spec.storage.logs.className=mystorageclass"
+#az arcdata dc config replace --path ./custom/control.json --json-values "spec.storage.data.className=mystorageclass"
+#az arcdata dc config replace --path ./custom/control.json --json-values "spec.storage.logs.className=mystorageclass"
``` #### Set LoadBalancer (optional) By default, the `azure-arc-openshift` deployment profile uses `NodePort` as the service type. If you are using an OpenShift cluster that is integrated with a load balancer, you can change the configuration to use the `LoadBalancer` service type using the following command:
-```console
-azdata arc dc config replace --path ./custom/control.json --json-values "$.spec.services[*].serviceType=LoadBalancer"
+```azurecli
+az arcdata dc config replace --path ./custom/control.json --json-values "$.spec.services[*].serviceType=LoadBalancer"
``` #### Create data controller
Now you are ready to create the data controller using the following command.
> [!NOTE] > The `--path` parameter should point to the _directory_ containing the control.json file not to the control.json file itself.
+> [!NOTE]
+> When deploying to OpenShift Container Platform, you will need to specify the `--infrastructure` parameter value. Options are: `aws`, `azure`, `alibaba`, `gcp`, `onpremises`.
-```console
-azdata arc dc create --path ./custom --namespace arc --name arc --subscription <subscription id> --resource-group <resource group name> --location <location> --connectivity-mode indirect
+```azurecli
+az arcdata dc create --path ./custom --k8s-namespace <namespace> --use-k8s --name arc --subscription <subscription id> --resource-group <resource group name> --location <location> --connectivity-mode indirect --infrastructure <infrastructure>
#Example:
-#azdata arc dc create --path ./custom --namespace arc --name arc --subscription xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx --resource-group my-resource-group --location eastus --connectivity-mode indirect
+#az arcdata dc create --path ./custom --k8s-namespace arc --use-k8s --name arc --subscription xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx --resource-group my-resource-group --location eastus --connectivity-mode indirect --infrastructure onpremises
``` Once you have run the command, continue on to [Monitoring the creation status](#monitoring-the-creation-status). ### Create on open source, upstream Kubernetes (kubeadm)
-By default, the kubeadm deployment profile uses a storage class called `local-storage` and service type `NodePort`. If this is acceptable you can skip the instructions below that set the desired storage class and service type and immediately run the `azdata arc dc create` command below.
+By default, the kubeadm deployment profile uses a storage class called `local-storage` and service type `NodePort`. If this is acceptable you can skip the instructions below that set the desired storage class and service type and immediately run the `az arcdata dc create` command below.
If you want to customize your deployment profile to specify a specific storage class and/or service type, start by creating a new custom deployment profile file based on the kubeadm deployment profile by running the following command. This command will create a directory `custom` in your current working directory and a custom deployment profile file `control.json` in that directory.
-```console
-azdata arc dc config init --source azure-arc-kubeadm --path ./custom
+```azurecli
+az arcdata dc config init --source azure-arc-kubeadm --path ./custom --k8s-namespace <namespace> --use-k8s
``` You can look up the available storage classes by running the following command.
kubectl get storageclass
Now, set the desired storage class by replacing `<storageclassname>` in the command below with the name of the storage class that you want to use that was determined by running the `kubectl get storageclass` command above.
-```console
-azdata arc dc config replace --path ./custom/control.json --json-values "spec.storage.data.className=<storageclassname>"
-azdata arc dc config replace --path ./custom/control.json --json-values "spec.storage.logs.className=<storageclassname>"
+```azurecli
+az arcdata dc config replace --path ./custom/control.json --json-values "spec.storage.data.className=<storageclassname>"
+az arcdata dc config replace --path ./custom/control.json --json-values "spec.storage.logs.className=<storageclassname>"
#Example:
-#azdata arc dc config replace --path ./custom/control.json --json-values "spec.storage.data.className=mystorageclass"
-#azdata arc dc config replace --path ./custom/control.json --json-values "spec.storage.logs.className=mystorageclass"
+#az arcdata dc config replace --path ./custom/control.json --json-values "spec.storage.data.className=mystorageclass"
+#az arcdata dc config replace --path ./custom/control.json --json-values "spec.storage.logs.className=mystorageclass"
``` By default, the kubeadm deployment profile uses `NodePort` as the service type. If you are using a Kubernetes cluster that is integrated with a load balancer, you can change the configuration using the following command.
-```console
-azdata arc dc config replace --path ./custom/control.json --json-values "$.spec.services[*].serviceType=LoadBalancer"
+```azurecli
+az arcdata dc config replace --path ./custom/control.json --json-values "$.spec.services[*].serviceType=LoadBalancer" --k8s-namespace <namespace> --use-k8s
``` Now you are ready to create the data controller using the following command.
-```console
-azdata arc dc create --path ./custom --namespace arc --name arc --subscription <subscription id> --resource-group <resource group name> --location <location> --connectivity-mode indirect
+> [!NOTE]
+> When deploying to OpenShift Container Platform, you will need to specify the `--infrastructure` parameter value. Options are: `aws`, `azure`, `alibaba`, `gcp`, `onpremises`.
+
+```azurecli
+az arcdata dc create --path ./custom --k8s-namespace <namespace> --use-k8s --name arc --subscription <subscription id> --resource-group <resource group name> --location <location> --connectivity-mode indirect --infrastructure <infrastructure>
#Example:
-#azdata arc dc create --path ./custom --namespace arc --name arc --subscription xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx --resource-group my-resource-group --location eastus --connectivity-mode indirect
+#az arcdata dc create --path ./custom --k8s-namespace <namespace> --use-k8s --name arc --subscription xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx --resource-group my-resource-group --location eastus --connectivity-mode indirect --infrastructure onpremises
``` Once you have run the command, continue on to [Monitoring the creation status](#monitoring-the-creation-status).
By default, the EKS storage class is `gp2` and the service type is `LoadBalancer
Run the following command to create the data controller using the provided EKS deployment profile.
-```console
-azdata arc dc create --profile-name azure-arc-eks --namespace arc --name arc --subscription <subscription id> --resource-group <resource group name> --location <location> --connectivity-mode indirect
+```azurecli
+az arcdata dc create --profile-name azure-arc-eks --k8s-namespace <namespace> --use-k8s --name arc --subscription <subscription id> --resource-group <resource group name> --location <location> --connectivity-mode indirect
#Example:
-#azdata arc dc create --profile-name azure-arc-eks --namespace arc --name arc --subscription xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx --resource-group my-resource-group --location eastus --connectivity-mode indirect
+#az arcdata dc create --profile-name azure-arc-eks --k8s-namespace <namespace> --use-k8s --name arc --subscription xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx --resource-group my-resource-group --location eastus --connectivity-mode indirect
``` Once you have run the command, continue on to [Monitoring the creation status](#monitoring-the-creation-status).
By default, the GKE storage class is `standard` and the service type is `LoadBal
Run the following command to create the data controller using the provided GKE deployment profile.
-```console
-azdata arc dc create --profile-name azure-arc-gke --namespace arc --name arc --subscription <subscription id> --resource-group <resource group name> --location <location> --connectivity-mode indirect
+```azurecli
+az arcdata dc create --profile-name azure-arc-gke --k8s-namespace <namespace> --use-k8s --name arc --subscription <subscription id> --resource-group <resource group name> --location <location> --connectivity-mode indirect
#Example:
-#azdata arc dc create --profile-name azure-arc-gke --namespace arc --name arc --subscription xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx --resource-group my-resource-group --location eastus --connectivity-mode indirect
+#az arcdata dc create --profile-name azure-arc-gke --k8s-namespace <namespace> --use-k8s --name arc --subscription xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx --resource-group my-resource-group --location eastus --connectivity-mode indirect
``` Once you have run the command, continue on to [Monitoring the creation status](#monitoring-the-creation-status).
kubectl describe po/<pod name> --namespace arc
## Troubleshooting creation problems
-If you encounter any troubles with creation, see the [troubleshooting guide](troubleshoot-guide.md).
+If you encounter any trouble during creation, see the [troubleshooting guide](troubleshoot-guide.md).
azure-arc Create Data Controller Using Kubernetes Native Tools https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/data/create-data-controller-using-kubernetes-native-tools.md
Previously updated : 07/13/2021 Last updated : 07/30/2021 # Create Azure Arc data controller using Kubernetes tools ## Prerequisites Review the topic [Create the Azure Arc data controller](create-data-controller.md) for overview information.
-To create the Azure Arc data Controller using Kubernetes tools you will need to have the Kubernetes tools installed. The examples in this article will use `kubectl`, but similar approaches could be used with other Kubernetes tools such as the Kubernetes dashboard, `oc`, or `helm` if you are familiar with those tools and Kubernetes yaml/json.
+To create the Azure Arc data controller using Kubernetes tools you will need to have the Kubernetes tools installed. The examples in this article will use `kubectl`, but similar approaches could be used with other Kubernetes tools such as the Kubernetes dashboard, `oc`, or `helm` if you are familiar with those tools and Kubernetes yaml/json.
[Install the kubectl tool](https://kubernetes.io/docs/tasks/tools/install-kubectl/)
If you installed the Azure Arc data controller in the past on the same cluster a
```console # Cleanup azure arc data service artifacts+
+# Note: not all of these objects will exist in your environment depending on which version of the Arc data controller was installed
+
+# Custom resource definitions (CRD)
kubectl delete crd datacontrollers.arcdata.microsoft.com kubectl delete crd postgresqls.arcdata.microsoft.com kubectl delete crd sqlmanagedinstances.sql.arcdata.microsoft.com
kubectl delete crd dags.sql.arcdata.microsoft.com
kubectl delete crd exporttasks.tasks.arcdata.microsoft.com kubectl delete crd monitors.arcdata.microsoft.com
+# Cluster roles and role bindings
+kubectl delete clusterrole arcdataservices-extension
kubectl delete clusterrole arc:cr-arc-metricsdc-reader
-kubectl delete clusterrolebinding arc:crb-arc-metricsdc-reader
+kubectl delete clusterrole arc:cr-arc-dc-watch
+kubectl delete clusterrole cr-arc-webhook-job
+
+# Substitute the name of the namespace the data controller was deployed in into {namespace}. If unsure, get the name of the mutatingwebhookconfiguration using 'kubectl get clusterrolebinding'
+kubectl delete clusterrolebinding {namespace}:crb-arc-metricsdc-reader
+kubectl delete clusterrolebinding {namespace}:crb-arc-dc-watch
+kubectl delete clusterrolebinding crb-arc-webhook-job
+
+# API services
+# Up to May 2021 release
+kubectl delete apiservice v1alpha1.arcdata.microsoft.com
+kubectl delete apiservice v1alpha1.sql.arcdata.microsoft.com
+# June 2021 release
kubectl delete apiservice v1beta1.arcdata.microsoft.com kubectl delete apiservice v1beta1.sql.arcdata.microsoft.com+
+# GA/July 2021 release
+kubectl delete apiservice v1.arcdata.microsoft.com
+kubectl delete apiservice v1.sql.arcdata.microsoft.com
+
+# Substitute the name of the namespace the data controller was deployed in into {namespace}. If unsure, get the name of the mutatingwebhookconfiguration using 'kubectl get mutatingwebhookconfiguration'
+kubectl delete mutatingwebhookconfiguration arcdata.microsoft.com-webhook-{namespace}
+ ``` ## Overview
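# Hedged verification step (not part of the original cleanup list): confirm that
# no Arc data services CRDs remain after running the deletions above.
kubectl get crd | grep arcdata.microsoft.com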
Creating the Azure Arc data controller has the following high level steps:
3. Create the bootstrapper service including the replica set, service account, role, and role binding. 4. Create a secret for the data controller administrator username and password. 5. Create the data controller.
+6. Create the webhook deployment job, cluster role and cluster role binding.
## Create the custom resource definitions
Run a command similar to the following to create a new, dedicated namespace in w
```console kubectl create namespace arc ```
+If you are using OpenShift, you will need to edit the `openshift.io/sa.scc.supplemental-groups` and `openshift.io/sa.scc.uid-range` annotations on the namespace using `kubectl edit namespace <name of namespace>`. Change these existing annotations to match these _specific_ UID and fsGroup IDs/ranges.
+
+```console
+openshift.io/sa.scc.supplemental-groups: 1000700001/10000
+openshift.io/sa.scc.uid-range: 1000700001/10000
+```
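If you prefer a non-interactive command over `kubectl edit`, a hedged alternative is to set both annotations directly; this assumes the namespace is named `arc`:
```console
# Overwrite the OpenShift SCC annotations on the namespace with the specific IDs/ranges above.
kubectl annotate namespace arc --overwrite openshift.io/sa.scc.supplemental-groups='1000700001/10000' openshift.io/sa.scc.uid-range='1000700001/10000'
```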
If other people will be using this namespace that are not cluster administrators, we recommend creating a namespace admin role and granting that role to those users through a role binding. The namespace admin should have full permissions on the namespace. More granular roles and example role bindings can be found on the [Azure Arc GitHub repository](https://github.com/microsoft/azure_arc/tree/main/arc_data_services/deploy/yaml/rbac). ## Create the bootstrapper service
-The bootstrapper service handles incoming requests for creating, editing, and deleting custom resources such as a data controller, SQL managed instance, or PostgreSQL Hyperscale server group.
+The bootstrapper service handles incoming requests for creating, editing, and deleting custom resources such as a data controller, SQL managed instances, or PostgreSQL Hyperscale server groups.
Run the following command to create a bootstrapper service, a service account for the bootstrapper service, and a role and role binding for the bootstrapper service account.
The example below assumes that you created a image pull secret name `arc-private
- name: arc-private-registry #Create this image pull secret if you are using a private container registry containers: - name: bootstrapper
- image: mcr.microsoft.com/arcdata/arc-bootstrapper:latest #Change this registry location if you are using a private container registry.
+ image: mcr.microsoft.com/arcdata/arc-bootstrapper:v1.0.0_2021-07-30 #Change this registry location if you are using a private container registry.
        imagePullPolicy: Always
```
-## Create a secret for the data controller administrator
+## Create a secret for the Kibana/Grafana dashboards
-The data controller administrator username and password is used to authenticate to the data controller API to perform administrative functions. Choose a secure password and share it with only those that need to have cluster administrator privileges.
+The username and password are used to authenticate to the Kibana and Grafana dashboards as an administrator. Choose a secure password and share it only with those who need to have these privileges.
A Kubernetes secret stores these values as base64-encoded strings - one for the username and one for the password.
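For example, one way to produce the base64-encoded values on Linux or macOS is shown below; the username and password here are placeholders, not values required by this article.

```console
# Encode the administrator username and password for use in the Kubernetes secret
echo -n '<username>' | base64
echo -n '<password>' | base64
```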
Edit the following as needed:
**RECOMMENDED TO REVIEW AND POSSIBLY CHANGE DEFAULTS**
- **storage..className**: the storage class to use for the data controller data and log files. If you are unsure of the available storage classes in your Kubernetes cluster, you can run the following command: `kubectl get storageclass`. The default is `default`, which assumes there is a storage class that exists and is named `default`, not that there is a storage class that _is_ the default. Note: There are two className settings to be set to the desired storage class - one for data and one for logs.
-- **serviceType**: Change the service type to `NodePort` if you are not using a LoadBalancer. Note: There are two serviceType settings that need to be changed.
-- On Azure Red Hat OpenShift or Red Hat OpenShift Container Platform, you must apply the security context constraint before you create the data controller. Follow the instructions at [Apply a security context constraint for Azure Arc-enabled data services on OpenShift](how-to-apply-security-context-constraint.md).
+- **serviceType**: Change the service type to `NodePort` if you are not using a LoadBalancer.
- **Security** For Azure Red Hat OpenShift or Red Hat OpenShift Container Platform, replace the `security:` settings with the following values in the data controller yaml file.
```yml
security:
- allowDumps: true
+ allowDumps: false
  allowNodeMetricsCollection: false
  allowPodMetricsCollection: false
- allowRunAsRoot: false
```
**OPTIONAL**
kind: ServiceAccount
metadata:
  name: sa-mssql-controller
-apiVersion: arcdata.microsoft.com/v1beta1
-kind: datacontroller
+apiVersion: arcdata.microsoft.com/v1
+kind: DataController
metadata:
  generation: 1
  name: arc-dc
spec:
  credentials:
    controllerAdmin: controller-login-secret
    dockerRegistry: arc-private-registry #Create a registry secret named 'arc-private-registry' if you are going to pull from a private registry instead of MCR.
- serviceAccount: sa-mssql-controller
+ serviceAccount: sa-arc-controller
  docker:
    imagePullPolicy: Always
- imageTag: latest
+ imageTag: v1.0.0_2021-07-30
    registry: mcr.microsoft.com
    repository: arcdata
  infrastructure: other #Must be a value in the array [alibaba, aws, azure, gcp, onpremises, other]
  security:
- allowDumps: true
- allowNodeMetricsCollection: true
- allowPodMetricsCollection: true
- allowRunAsRoot: false
+ allowDumps: true #Set this to false if deploying on OpenShift
+ allowNodeMetricsCollection: true #Set this to false if deploying on OpenShift
+ allowPodMetricsCollection: true #Set this to false if deploying on OpenShift
  - name: controller
    port: 30080
    serviceType: LoadBalancer # Modify serviceType based on your Kubernetes environment
- - name: serviceProxy
- port: 30777
- serviceType: LoadBalancer # Modify serviceType based on your Kubernetes environment
  settings:
    ElasticSearch:
      vm.max_map_count: "-1"
kubectl describe pod/<pod name> --namespace arc
#kubectl describe pod/control-2g7bl --namespace arc
```
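While the pods are starting, it can also be useful to watch the overall pod status until everything reports ready. This is a general-purpose kubectl command, shown here assuming the `arc` namespace used earlier.

```console
# Watch pod status in the data controller namespace until all pods are ready
kubectl get pods --namespace arc --watch
```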
-Azure Arc extension for Azure Data Studio provides a notebook to walk you through the experience of how to set up Azure Arc-enabled Kubernetes and configure it to monitor a git repository that contains a sample SQL Managed Instance yaml file. When everything is connected, a new SQL Managed Instance will be deployed to your Kubernetes cluster.
+## Create the webhook deployment job, cluster role and cluster role binding
+
+First, create a copy of the [template file](https://raw.githubusercontent.com/microsoft/azure_arc/main/arc_data_services/deploy/yaml/web-hook.yaml) locally on your computer so that you can modify some of the settings.
-See the **Deploy a SQL Managed Instance using Azure Arc-enabled Kubernetes and Flux** notebook in the Azure Arc extension for Azure Data Studio.
+Edit the file and replace `{{namespace}}` in three places with the name of the namespace you created in the previous step. **Save the file.**
+
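As an illustration only, the substitution can also be scripted instead of done by hand. The file name `web-hook.yaml` and the namespace `arc` below are assumptions; use whatever file name and namespace you actually chose.

```console
# Replace the {{namespace}} placeholder in the downloaded template (Linux sed syntax)
sed -i 's/{{namespace}}/arc/g' web-hook.yaml
```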
+Run the following command to create the cluster role and cluster role bindings. **[Requires Kubernetes Cluster Administrator Permissions]**
+
+```console
+kubectl create -n arc -f <path to the edited template file on your computer>
+```
## Troubleshooting creation problems
azure-arc Create Data Controller https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/data/create-data-controller.md
Previously updated : 07/13/2021 Last updated : 07/30/2021
# Create the Azure Arc data controller
## Overview of creating the Azure Arc data controller
Regardless of the option you choose, during the creation process you will need to provide the following information:
- **Data controller username** - Any username for the data controller administrator user.
- **Data controller password** - A password for the data controller administrator user.
- **Name of your Kubernetes namespace** - the name of the Kubernetes namespace that you want to create the data controller in.
-- **Connectivity mode** - Connectivity mode determines the degree of connectivity from your Azure Arc-enabled data services environment to Azure. Preview currently only supports indirectly connected and directly connected modes. For information, see [connectivity mode](./connectivity.md).
+- **Connectivity mode** - Connectivity mode determines the degree of connectivity from your Azure Arc-enabled data services environment to Azure. Indirect connectivity mode is generally available. Direct connectivity mode is in preview. For information, see [connectivity mode](./connectivity.md).
- **Azure subscription ID** - The Azure subscription GUID for where you want the data controller resource in Azure to be created.
- **Azure resource group name** - The name of the resource group where you want the data controller resource in Azure to be created.
- **Azure location** - The Azure location where the data controller resource metadata will be stored in Azure. For a list of available regions, see [Azure global infrastructure / Products by region](https://azure.microsoft.com/global-infrastructure/services/?products=azure-arc). The metadata and billing information about the Azure resources managed by the data controller that you are deploying will be stored only in the location in Azure that you specify as the location parameter. If you are deploying in the directly connected mode, the location parameter for the data controller will be the same as the location of the custom location resource that you target.
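If you need to look up the subscription ID or the available Azure locations from the command line, standard Azure CLI commands such as the following can help (no Arc-specific extension is required for these):

```azurecli
# Show the GUID of the currently selected subscription
az account show --query id --output tsv

# List the names of available Azure locations
az account list-locations --query "[].name" --output table
```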
azure-arc Create Postgresql Hyperscale Server Group Azure Data Studio https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/data/create-postgresql-hyperscale-server-group-azure-data-studio.md
Previously updated : 06/02/2021 Last updated : 07/30/2021
This document walks you through the steps for using Azure Data Studio to provisi
[!INCLUDE [azure-arc-data-preview](../../../includes/azure-arc-data-preview.md)]
-## Connect to the Azure Arc data controller
-
-Before you can create an instance, log in to the Azure Arc data controller if you are not already logged in.
-
-```console
-azdata login
-```
-
-You will then be prompted for the namespace where the data controller is created, the username, and password to log in to the controller.
-
-> If you need to validate the namespace, you can run ```kubectl get pods -A``` to get a list of all the namespaces on the cluster.
-
-```console
-Username: arcadmin
-Password:
-Namespace: arc
-Logged in successfully to `https://10.0.0.4:30080` in namespace `arc`. Setting active context to `arc`
-```
-
## Preliminary and temporary step for OpenShift users only
Implement this step before moving to the next step. To deploy PostgreSQL Hyperscale server group onto Red Hat OpenShift in a project other than the default, you need to execute the following commands against your cluster to update the security constraints. This command grants the necessary privileges to the service accounts that will run your PostgreSQL Hyperscale server group. The security context constraint (SCC) **_arc-data-scc_** is the one you added when you deployed the Azure Arc data controller.
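As a rough sketch of what applying an existing SCC to a service account generally looks like, the command below uses the `arc-data-scc` constraint mentioned above; the service account and project names are placeholders, so follow the exact commands in the article for your deployment.

```console
# Grant the arc-data-scc security context constraint to a service account in a project
oc adm policy add-scc-to-user arc-data-scc -z <service-account-name> -n <project-name>
```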
While indicating 1 worker works, we do not recommend you use it. This deployment
- to set the storage class for the logs, indicate the parameter `--storage-class-logs` or `-scl` followed by the name of the storage class.
- to set the storage class for the backups: in this Preview of the Azure Arc-enabled PostgreSQL Hyperscale there are two ways to set storage classes depending on what types of backup/restore operations you want to do. We are working on simplifying this experience. You will either indicate a storage class or a volume claim mount. A volume claim mount is a pair of an existing persistent volume claim (in the same namespace) and volume type (and optional metadata depending on the volume type) separated by a colon. The persistent volume will be mounted in each pod for the PostgreSQL server group.
  - if you plan to do only full database restores, set the parameter `--storage-class-backups` or `-scb` followed by the name of the storage class.
- - if you plan to do both full database restores and point in time restores, set the parameter `--volume-claim-mounts` or `-vcm` followed by the name of a volume claim and a volume type.
+ - if you plan to do both full database restores and point in time restores, set the parameter `--volume-claim-mounts` followed by the name of a volume claim and a volume type.
## Next steps
azure-arc Create Postgresql Hyperscale Server Group Azure Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/data/create-postgresql-hyperscale-server-group-azure-portal.md
Previously updated : 06/02/2021 Last updated : 07/30/2021
While indicating 1 worker works, we do not recommend you use it. This deployment
- to set the storage class for the logs, indicate the parameter `--storage-class-logs` or `-scl` followed by the name of the storage class.
- to set the storage class for the backups: in this Preview of the Azure Arc-enabled PostgreSQL Hyperscale there are two ways to set storage classes depending on what types of backup/restore operations you want to do. We are working on simplifying this experience. You will either indicate a storage class or a volume claim mount. A volume claim mount is a pair of an existing persistent volume claim (in the same namespace) and volume type (and optional metadata depending on the volume type) separated by a colon. The persistent volume will be mounted in each pod for the PostgreSQL server group.
  - if you plan to do only full database restores, set the parameter `--storage-class-backups` or `-scb` followed by the name of the storage class.
- - if you plan to do both full database restores and point in time restores, set the parameter `--volume-claim-mounts` or `-vcm` followed by the name of a volume claim and a volume type.
+ - if you plan to do both full database restores and point in time restores, set the parameter `--volume-claim-mounts` followed by the name of a volume claim and a volume type.
## Next steps
azure-arc Create Postgresql Hyperscale Server Group https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/data/create-postgresql-hyperscale-server-group.md
Previously updated : 06/02/2021 Last updated : 07/30/2021
There are important topics you may want to read before you proceed with creation:
If you prefer to try out things without provisioning a full environment yourself, get started quickly with [Azure Arc Jumpstart](https://azurearcjumpstart.io/azure_arc_jumpstart/azure_arc_data/) on Azure Kubernetes Service (AKS), AWS Elastic Kubernetes Service (EKS), Google Cloud Kubernetes Engine (GKE) or in an Azure VM.
-## Login to the Azure Arc data controller
-
-Before you can create an instance, you must first login to the Azure Arc data controller. If you are already logged in into the data controller, you can skip this step.
-
-```console
-azdata login
-```
-
-You will then be prompted for the username, password, and the system namespace.
-
-> If you used the script to create the data controller then your namespace should be **arc**
-
-```console
-Namespace: arc
-Username: arcadmin
-Password:
-Logged in successfully to `https://10.0.0.4:30080` in namespace `arc`. Setting active context to `arc`
-```
-
## Preliminary and temporary step for OpenShift users only
Implement this step before moving to the next step. To deploy PostgreSQL Hyperscale server group onto Red Hat OpenShift in a project other than the default, you need to execute the following commands against your cluster to update the security constraints. This command grants the necessary privileges to the service accounts that will run your PostgreSQL Hyperscale server group. The security context constraint (SCC) arc-data-scc is the one you added when you deployed the Azure Arc data controller.
For more details on SCCs in OpenShift, please refer to the [OpenShift documentat
## Create an Azure Arc-enabled PostgreSQL Hyperscale server group
-To create an Azure Arc-enabled PostgreSQL Hyperscale server group on your Arc data controller, you will use the command `azdata arc postgres server create` to which you will pass several parameters.
+To create an Azure Arc-enabled PostgreSQL Hyperscale server group on your Arc data controller, you will use the command `az postgres arc-server create` to which you will pass several parameters.
For details about all the parameters you can set at the creation time, review the output of the command:
-```console
-azdata arc postgres server create --help
+```azurecli
+az postgres arc-server create --help
```
The main parameters you should consider are:
While using -w 1 works, we do not recommend you use it. This deployment will not
- to set the storage class for the logs, indicate the parameter `--storage-class-logs` or `-scl` followed by the name of the storage class.
- to set the storage class for the backups: in this Preview of the Azure Arc-enabled PostgreSQL Hyperscale there are two ways to set storage classes depending on what types of backup/restore operations you want to do. We are working on simplifying this experience. You will either indicate a storage class or a volume claim mount. A volume claim mount is a pair of an existing persistent volume claim (in the same namespace) and volume type (and optional metadata depending on the volume type) separated by a colon. The persistent volume will be mounted in each pod for the PostgreSQL server group.
  - if you plan to do only full database restores, set the parameter `--storage-class-backups` or `-scb` followed by the name of the storage class.
- - if you plan to do both full database restores and point in time restores, set the parameter `--volume-claim-mounts` or `-vcm` followed by the name of a volume claim and a volume type.
+ - if you plan to do both full database restores and point in time restores, set the parameter `--volume-claim-mounts` followed by the name of a volume claim and a volume type.
Note that when you execute the create command, you will be prompted to enter the password of the default `postgres` administrative user. The name of that user cannot be changed in this Preview. You may skip the interactive prompt by setting the `AZDATA_PASSWORD` session environment variable before you run the create command.
### Examples
**To deploy a server group of Postgres version 12 named postgres01 with 2 worker nodes that uses the same storage classes as the data controller, run the following command:**
-```console
-azdata arc postgres server create -n postgres01 --workers 2
+```azurecli
+az postgres arc-server create -n postgres01 --workers 2 --k8s-namespace <namespace> --use-k8s
```
**To deploy a server group of Postgres version 12 named postgres01 with 2 worker nodes that uses the same storage classes as the data controller for data and logs but its specific storage class to do both full restores and point in time restores, use the following steps:**
kubectl create -f e:\CreateBackupPVC.yml -n arc
Next, create the server group:
-```console
-azdata arc postgres server create -n postgres01 --workers 2 -vcm backup-pvc:backup
+```azurecli
+az postgres arc-server create -n postgres01 --workers 2 --volume-claim-mounts backup-pvc:backup --k8s-namespace <namespace> --use-k8s
```
> [!IMPORTANT]
azdata arc postgres server create -n postgres01 --workers 2 -vcm backup-pvc:back
To list the PostgreSQL Hyperscale server groups deployed in your Arc data controller, run the following command:
-```console
-azdata arc postgres server list
+```azurecli
+az postgres arc-server list --k8s-namespace <namespace> --use-k8s
```
postgres01 Ready 2
To view the endpoints for a PostgreSQL server group, run the following command:
-```console
-azdata arc postgres endpoint list -n <server group name>
+```azurecli
+az postgres arc-server endpoint list -n <server group name> --k8s-namespace <namespace> --use-k8s
```
For example:
```console
For example:
You can use the PostgreSQL Instance endpoint to connect to the PostgreSQL Hyperscale server group from your favorite tool: [Azure Data Studio](/sql/azure-data-studio/download-azure-data-studio), [pgcli](https://www.pgcli.com/), psql, pgAdmin, etc. When you do so, you connect to the coordinator node/instance, which takes care of routing the query to the appropriate worker nodes/instances if you have created distributed tables. For more details, read the [concepts of Azure Arc-enabled PostgreSQL Hyperscale](concepts-distributed-postgres-hyperscale.md).
+ [!INCLUDE [use-insider-azure-data-studio](includes/use-insider-azure-data-studio.md)]
+
## Special note about Azure virtual machine deployments
When you are using an Azure virtual machine, the endpoint IP address will not show the _public_ IP address. To locate the public IP address, use the following command:
az network nsg list -g azurearcvm-rg --query "[].{NSGName:name}" -o table
Once you have the name of the NSG, you can add a firewall rule using the following command. The example values here create an NSG rule for port 30655 and allow connections from **any** source IP address. This is not a security best practice! You can lock down things better by specifying a --source-address-prefixes value that is specific to your client IP address or an IP address range that covers your team's or organization's IP addresses.
-Replace the value of the --destination-port-ranges parameter below with the port number you got from the 'azdata arc postgres server list' command above.
+Replace the value of the --destination-port-ranges parameter below with the port number you got from the 'az postgres arc-server list' command above.
```azurecli
az network nsg rule create -n db_port --destination-port-ranges 30655 --source-address-prefixes '*' --nsg-name azurearcvmNSG --priority 500 -g azurearcvm-rg --access Allow --description 'Allow port through for db access' --destination-address-prefixes '*' --direction Inbound --protocol Tcp --source-port-ranges '*'
azure-arc Create Sql Managed Instance Azure Data Studio https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/data/create-sql-managed-instance-azure-data-studio.md
Previously updated : 07/13/2021 Last updated : 07/30/2021
This document walks you through the steps for installing Azure SQL Managed Insta
[!INCLUDE [azure-arc-common-prerequisites](../../../includes/azure-arc-common-prerequisites.md)]
## Create Azure SQL Managed Instance on Azure Arc
This document walks you through the steps for installing Azure SQL Managed Insta
- View all the Azure SQL Managed Instances provisioned, using the following commands:
```azurecli
-azdata arc sql mi list
+az sql mi-arc list --k8s-namespace <namespace> --use-k8s
```
Output should look like this. Copy the ServerEndpoint (including the port number) from here.
azure-arc Create Sql Managed Instance Using Kubernetes Native Tools https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/data/create-sql-managed-instance-using-kubernetes-native-tools.md
Previously updated : 06/02/2021 Last updated : 07/30/2021
# Create Azure SQL managed instance using Kubernetes tools
## Prerequisites
To create a SQL managed instance using Kubernetes tools, you will need to have t
## Overview
-To create a SQL managed instance, you need to create a Kubernetes secret to store your system administrator login and password securely and a SQL managed instance custom resource based on the sqlmanagedinstance custom resource definition.
+To create a SQL managed instance, you need to create a Kubernetes secret to store your system administrator login and password securely and a SQL managed instance custom resource based on the SqlManagedInstance custom resource definition.
## Create a yaml file
metadata:
  name: sql1-login-secret
type: Opaque
-apiVersion: sql.arcdata.microsoft.com/v1alpha1
-kind: sqlmanagedinstance
+apiVersion: sql.arcdata.microsoft.com/v1
+kind: SqlManagedInstance
metadata:
  name: sql1
  annotations:
Requirements for resource limits and requests:
- The cores limit value is **required** for billing purposes.
- The rest of the resource requests and limits are optional.
- The cores limit and request must be a positive integer value, if specified.
-- The minimum of 2 cores is required for the cores request, if specified.
+- A minimum of 1 core is required for the cores request, if specified.
- The memory value format follows the Kubernetes notation.
- A minimum of 2Gi is required for memory request, if specified.
- As a general guideline, you should have 4GB of RAM for each 1 core for production use cases.
azure-arc Create Sql Managed Instance https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/data/create-sql-managed-instance.md
Previously updated : 07/13/2021 Last updated : 07/30/2021
[!INCLUDE [azure-arc-common-prerequisites](../../../includes/azure-arc-common-prerequisites.md)]
## Create an Azure SQL Managed Instance
-To view available create options forSQL Managed Instance, use the following command:
-```console
-azdata arc sql mi create --help
+To view the available options for the create command for SQL Managed Instance, use the following command:
+```azurecli
+az sql mi-arc create --help
```
To create an SQL Managed Instance, use the following command:
-```console
-azdata arc sql mi create -n <instanceName> --storage-class-data <storage class> --storage-class-logs <storage class>
+```azurecli
+az sql mi-arc create -n <instanceName> --k8s-namespace <namespace> --use-k8s
```
Example:
-```console
-azdata arc sql mi create -n sqldemo --storage-class-data managed-premium --storage-class-logs managed-premium
+```azurecli
+az sql mi-arc create -n sqldemo --k8s-namespace my-namespace --use-k8s
```
> [!NOTE]
> Names must be less than 13 characters in length and conform to [DNS naming conventions](https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#dns-label-names)
>
-> When specifying memory allocation and vCore allocation use this formula to ensure your creation is successful - for each 1 vCore you need at least 4GB of RAM of capacity available on the Kubernetes node where the SQL managed instance pod will run.
+> When specifying memory allocation and vCore allocation use this formula to ensure your performance is acceptable: for each 1 vCore you should have at least 4GB of RAM of capacity available on the Kubernetes node where the SQL managed instance pod will run.
>
-> When creating a SQL instance do not use upper case in the name if you are provisioning in Azure
->
-> To list available storage classes in your Kubernetes cluster run `kubectl get storageclass`
--
-> [!NOTE]
-> If you want to automate the creation of SQL instances and avoid the interactive prompt for the admin password, you can set the `AZDATA_USERNAME` and `AZDATA_PASSWORD` environment variables to the desired username and password prior to running the `azdata arc sql mi create` command.
+> If you want to automate the creation of SQL instances and avoid the interactive prompt for the admin password, you can set the `AZDATA_USERNAME` and `AZDATA_PASSWORD` environment variables to the desired username and password prior to running the `az sql mi-arc create` command.
>
> If you created the data controller using AZDATA_USERNAME and AZDATA_PASSWORD in the same terminal session, then the values for AZDATA_USERNAME and AZDATA_PASSWORD will be used to create the SQL managed instance too.
> [!NOTE]
-> Creating Azure SQL Managed Instance will not register the resources in Azure. Steps to register the resource are in the following articles:
-> - [View logs and metrics using Kibana and Grafana](monitor-grafana-kibana.md)
+> If you are using the indirect connectivity mode, creating Azure SQL Managed Instance in Kubernetes will not automatically register the resources in Azure. Steps to register the resource are in the following articles:
> - [Upload billing data to Azure and view it in the Azure portal](view-billing-data-in-azure.md)
azdata arc sql mi create -n sqldemo --storage-class-data managed-premium --stora
To view the instance, use the following command:
-```console
-azdata arc sql mi list
-```
-
-Output should look like this:
-
-```console
-Name Replicas ServerEndpoint State
- - - -
-sqldemo 1/1 10.240.0.4:32023 Ready
+```azurecli
+az sql mi-arc list --k8s-namespace <namespace> --use-k8s
```
-If you are using AKS or `kubeadm` or OpenShift etc., you can copy the external IP and port number from here and connect to it using your favorite tool for connecting to a SQL Sever/Azure SQL instance such as Azure Data Studio or SQL Server Management Studio. However, if you are using the quickstart VM, see the [Connect to Azure Arc-enabled SQL Managed Instance](connect-managed-instance.md) article for special instructions.
+You can copy the external IP and port number from here and connect to it using your favorite tool for connecting to a SQL Server/Azure SQL instance such as Azure Data Studio or SQL Server Management Studio.
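If you prefer a quick command-line connectivity check instead of a graphical tool, a sqlcmd test along the following lines can be used; the endpoint and credentials are placeholders for the values from the list output and the admin login you chose at creation time.

```console
# Test connectivity to the SQL managed instance endpoint
sqlcmd -S <external IP>,<port> -U <admin username> -P '<password>' -Q "SELECT @@VERSION"
```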
## Next steps
- [Connect to Azure Arc-enabled SQL Managed Instance](connect-managed-instance.md)
azure-arc Delete Azure Resources https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/data/delete-azure-resources.md
Previously updated : 06/02/2021 Last updated : 07/30/2021
In some cases, you may need to manually delete Azure Arc-enabled data services r
- [Delete Azure Arc data controller resources using the Azure CLI](#delete-azure-arc-data-controller-resources-using-the-azure-cli)
- [Delete a resource group using the Azure CLI](#delete-a-resource-group-using-the-azure-cli)
## Delete an entire resource group
azure-arc Delete Managed Instance https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/data/delete-managed-instance.md
Previously updated : 07/13/2021 Last updated : 07/30/2021
# Delete Azure Arc-enabled SQL Managed Instance
This article describes how you can delete an Azure Arc-enabled SQL Managed Instance.
## View Existing Azure Arc-enabled SQL Managed Instances
To view SQL Managed Instances, run the following command:
```azurecli
-az sql mi-arc list
+az sql mi-arc list --k8s-namespace <namespace> --use-k8s
``` Output should look something like this:
demo-mi 1/1 10.240.0.4:32023 Ready
To delete a SQL Managed Instance, run the following command:
```azurecli
-az sql mi-arc delete -n <NAME_OF_INSTANCE>
+az sql mi-arc delete -n <NAME_OF_INSTANCE> --k8s-namespace <namespace> --use-k8s
```
Output should look something like this:
-```console
-# az sql mi-arc delete -n demo-mi
+```azurecli
+# az sql mi-arc delete -n demo-mi --k8s-namespace <namespace> --use-k8s
Deleted demo-mi from namespace arc
```
persistentvolumeclaim "logs-demo-mi-0" deleted
> [!NOTE]
-> As indicated, not deleting the PVCs might eventually get your Kubernetes cluster in a situation where it will throw errors. Some of these errors may include being unable to login to your Kubernetes cluster with azdata as the pods may be evicted from it because of this storage issue (normal Kubernetes behavior).
+> As indicated, not deleting the PVCs might eventually get your Kubernetes cluster in a situation where it will throw errors. Some of these errors may include being unable to create, read, update, or delete resources from the Kubernetes API, or being unable to run commands like `az arcdata dc export`, as the controller pods may be evicted from the Kubernetes nodes because of this storage issue (normal Kubernetes behavior).
>
> For example, you may see messages in the logs similar to:
> - Annotations: microsoft.com/ignore-pod-health: true
azure-arc Delete Postgresql Hyperscale Server Group https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/data/delete-postgresql-hyperscale-server-group.md
Previously updated : 09/22/2020 Last updated : 07/30/2021
This document describes the steps to delete a server group from your Azure Arc s
As an example, let's consider we want to delete the _postgres01_ instance from the below setup:
-```console
-azdata arc postgres server list
+```azurecli
+az postgres arc-server list --k8s-namespace <namespace> --use-k8s
Name State Workers
- -
postgres01 Ready 3
```
The general format of the delete command is:
-```console
-azdata arc postgres server delete -n <server group name>
+```azurecli
+az postgres arc-server delete -n <server group name> --k8s-namespace <namespace> --use-k8s
```
When you execute this command, you will be requested to confirm the deletion of the server group. If you are using scripts to automate deletions you will need to use the --force parameter to bypass the confirmation request. For example, you would run a command like:
-```console
-azdata arc postgres server delete -n <server group name> --force
+```azurecli
+az postgres arc-server delete -n <server group name> --force --k8s-namespace <namespace> --use-k8s
```
For more details about the delete command, run:
-```console
-azdata arc postgres server delete --help
+```azurecli
+az postgres arc-server delete --help
```
### Delete the server group used in this example
-```console
-azdata arc postgres server delete -n postgres01
+```azurecli
+az postgres arc-server delete -n postgres01 --k8s-namespace <namespace> --use-k8s
```
## Reclaim the Kubernetes Persistent Volume Claims (PVCs)
persistentvolumeclaim "data-postgres01-0" deleted
>[!NOTE]
-> As indicated, not deleting the PVCs might eventually get your Kubernetes cluster in a situation where it will throw errors. Some of these errors may include being unable to login to your Kubernetes cluster with azdata as the pods may be evicted from it because of this storage issue (normal Kubernetes behavior).
+> As indicated, not deleting the PVCs might eventually get your Kubernetes cluster in a situation where it will throw errors. Some of these errors may include being unable to create, read, update, or delete resources from the Kubernetes API, or being unable to run commands like `az arcdata dc export`, as the controller pods may be evicted from the Kubernetes nodes because of this storage issue (normal Kubernetes behavior).
>
> For example, you may see messages in the logs similar to:
> ```output
azure-arc Get Connection Endpoints And Connection Strings Postgres Hyperscale https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/data/get-connection-endpoints-and-connection-strings-postgres-hyperscale.md
Previously updated : 06/02/2021 Last updated : 07/30/2021
This article explains how you can retrieve the connection endpoints for your ser
## Get connection end points:
-### From CLI with azdata
-#### 1. Connect to your Arc Data Controller:
-- If you already have a session opened on the host of the Arc Data Controller: Run the following command:
-```console
-azdata login
-```
--- If you do not have a session opened on the host of the Arc Data Controller:
-run the following command
-```console
-azdata login --endpoint https://<external IP address of host/data controller>:30080
-```
-
-#### 2. Show the connection endpoints
-Run the following command:
-```console
-azdata arc postgres endpoint list -n <server group name>
+```azurecli
+az postgres arc-server endpoint list -n <server group name> --k8s-namespace <namespace> --use-k8s
```
For example:
-```console
-azdata arc postgres endpoint list -n postgres01
+```azurecli
+az postgres arc-server endpoint list -n postgres01 --k8s-namespace <namespace> --use-k8s
```
It shows the list of endpoints: the PostgreSQL endpoint that you use to connect your application and use the database, and the Kibana and Grafana endpoints for log analytics and monitoring. For example:
postgres=#
> [!NOTE]
>
> - The password of the _postgres_ user indicated in the end point named "_PostgreSQL Instance_" is the password you chose when deploying the server group.
-> - About azdata: the lease associated to your connection lasts about 10 hours. After that you need to reconnect. If your lease has expired, you will get the following error message when you try to execute a command with azdata (other than azdata login):
> _ERROR: (401)_
> _Reason: Unauthorized_
> _HTTP response headers: HTTPHeaderDict({'Date': 'Sun, 06 Sep 2020 16:58:38 GMT', 'Content-Length': '0', 'WWW-Authenticate': '_
NAME STATE READY-PODS EXTERNAL-ENDPOINT AGE
postgres01 Ready 3/3 123.456.789.4:31066 5d20h
```
-
## Form connection strings:
Use the below table of templates of connection strings for your server group. You can then copy/paste and customize them further as needed:
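As an illustration, a psql-style connection string typically takes a form like the one below; the host, port, and password come from the endpoint list output, and the keywords are standard libpq parameters rather than values specific to this article.

```console
# Example psql connection string template (replace the placeholders with your endpoint values)
psql "host=<server group IP address> port=<port> dbname=postgres user=postgres password=<password>"
```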
azure-arc How To Apply Security Context Constraint https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/data/how-to-apply-security-context-constraint.md
- Title: How to apply security context constraint
-description: Apply a security context constraint for Azure Red Hat OpenShift or Red Hat OpenShift Container Platform
------ Previously updated : 07/13/2021---
-# Apply a security context constraint for Azure Arc-enabled data services on OpenShift
-
-This article describes how to apply a security context constraint for Azure Arc-enabled data services.
-
-## Applicability
-
-It applies to deployments on Azure Red Hat OpenShift or Red Hat OpenShift Container platform.
-
-## Apply security context constraint
--
-## Next steps
--- [Create the Azure Arc data controller](create-data-controller.md)-- [Create data controller in Azure Data Studio](create-data-controller-indirect-azure-data-studio.md)-- [Create Azure Arc data controller with CLI](create-data-controller-indirect-cli.md)-
azure-arc Install Arcdata Extension https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/data/install-arcdata-extension.md
Previously updated : 07/13/2021 Last updated : 07/30/2021
# Install `arcdata` Azure CLI extension
> [!IMPORTANT]
-> If you are updating to a new monthly release, please be sure to also update to the latest version of Azure CLI and the Azure CLI extension.
+> If you are updating to a new release, please be sure to also update to the latest version of Azure CLI and the `arcdata` extension.
## Install latest Azure CLI
azure-arc Install Client Tools https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/data/install-client-tools.md
Previously updated : 07/13/2021 Last updated : 07/30/2021
# Install client tools for deploying and managing Azure Arc-enabled data services
> [!IMPORTANT]
-> If you are updating to a new monthly release, please be sure to also update to the latest version of Azure Data Studio, the [!INCLUDE [azure-data-cli-azdata](../../../includes/azure-data-cli-azdata.md)] tool, the Azure CLI and Azure Arc extensions for Azure Data Studio.
+> If you are updating to a new release, please be sure to also update to the latest version of Azure Data Studio, the [!INCLUDE [azure-data-cli-azdata](../../../includes/azure-data-cli-azdata.md)] tool, the Azure CLI and Azure Arc extensions for Azure Data Studio.
+ > [!IMPORTANT]
-> The Arc enabled data services command groups in the Azure Data CLI (azdata) are deprecated and will be removed in the next release. Please move to using the `arcdata` extension for Azure CLI instead.
+> The Arc enabled data services command groups in the Azure Data CLI (azdata) are deprecated and will be removed in the next release. Please move to using the [`arcdata` extension for Azure CLI instead](reference/reference-az-arcdata-dc.md).
This document walks you through the steps for installing the [!INCLUDE [azure-data-cli-azdata](../../../includes/azure-data-cli-azdata.md)], Azure Data Studio, Azure CLI (az), and the Kubernetes CLI tool (kubectl) on your client machine.
## Tools for creating and managing Azure Arc-enabled data services
The following table lists common tools required for creating and managing Azure
| Tool | Required | Description | Installation |
|||||
-| [!INCLUDE [azure-data-cli-azdata](../../../includes/azure-data-cli-azdata.md)] | Yes | Command-line tool for installing and managing a SQL Server Big Data Cluster and Azure Arc-enabled data services. [!INCLUDE [azure-data-cli-azdata](../../../includes/azure-data-cli-azdata.md)] also includes a command line utility to connect to and query Azure SQL and SQL Server instances and Postgres servers using the commands `azdata sql query` (run a single query from the command line), `azdata sql shell` (an interactive shell), `azdata postgres query` and `azdata postgres shell`. | [Install](/sql/azdata/install/deploy-install-azdata?toc=/azure/azure-arc/data/toc.json&bc=/azure/azure-arc/data/breadcrumb/toc.json) |
-| Azure CLI (az)<sup>1</sup> | Yes | Modern command-line interface for managing Azure services. Used with AKS deployments and to upload Azure Arc-enabled data services inventory and billing data to Azure. ([More info](/cli/azure/)). | [Install](/cli/azure/install-azure-cli) |
-| Azure CLI Extension for Arc enabled data services | Yes | Command-line tool for managing Arc enabled data services as an extension to the Azure CLI (az) | [Install](install-arcdata-extension.md). |
+| Azure CLI (az)<sup>1</sup> | Yes | Modern command-line interface for managing Azure services. Used to manage Azure services in general and also specifically Arc-enabled data services using the CLI or in scripts for both indirectly connected mode (available now) and directly connected mode (available soon). ([More info](/cli/azure/)). | [Install](/cli/azure/install-azure-cli) |
+| Azure (az) CLI extension for Azure Arc-enabled data services | Yes | Command-line tool for managing Arc enabled data services as an extension to the Azure CLI (az) | [Install](install-arcdata-extension.md). |
| Azure Data Studio | Yes | Rich experience tool for connecting to and querying a variety of databases including Azure SQL, SQL Server, PostgreSQL, and MySQL. Extensions to Azure Data Studio provide an administration experience for Azure Arc-enabled data services. | [Install](/sql/azure-data-studio/download-azure-data-studio) |
-| [!INCLUDE [azure-data-cli-azdata](../../../includes/azure-data-cli-azdata.md)] extension for Azure Data Studio | Yes | Extension for Azure Data Studio that will install [!INCLUDE [azure-data-cli-azdata](../../../includes/azure-data-cli-azdata.md)] if you don't already have it.| Install from extensions gallery in Azure Data Studio.|
-| Azure Arc extension for Azure Data Studio | Yes | Extension for Azure Data Studio that provides a management experience for Azure Arc-enabled data services. There is a dependency on the [!INCLUDE [azure-data-cli-azdata](../../../includes/azure-data-cli-azdata.md)] extension for Azure Data Studio. | Install from extensions gallery in Azure Data Studio.|
+| Azure Arc extension for Azure Data Studio | Yes | Extension for Azure Data Studio that provides a management experience for Azure Arc-enabled data services.| Install from the extensions gallery in Azure Data Studio.|
| PostgreSQL extension in Azure Data Studio | No | PostgreSQL extension for Azure Data Studio that provides management capabilities for PostgreSQL. | <!--{need link} [Install](../azure-data-studio/data-virtualization-extension.md) --> Install from extensions gallery in Azure Data Studio.|
| Kubernetes CLI (kubectl)<sup>2</sup> | Yes | Command-line tool for managing the Kubernetes cluster ([More info](https://kubernetes.io/docs/tasks/tools/install-kubectl/)). | [Windows](https://kubernetes.io/docs/tasks/tools/install-kubectl/#install-with-powershell-from-psgallery) \| [Linux](https://kubernetes.io/docs/tasks/tools/install-kubectl/#install-using-native-package-management) |
| curl <sup>3</sup> | Required for some sample scripts. | Command-line tool for transferring data with URLs. | [Windows](https://curl.haxx.se/windows/) \| Linux: install curl package |
-| oc | Required for Red Hat OpenShift and Azure Redhat OpenShift deployments. |`oc` is the Open Shift command line interface (CLI). | [Installing the CLI](https://docs.openshift.com/container-platform/4.4/cli_reference/openshift_cli/getting-started-cli.html#installing-the-cli)
+| oc | Required for Red Hat OpenShift and Azure Redhat OpenShift deployments. |`oc` is the Open Shift command line interface (CLI). | [Installing the CLI](https://docs.openshift.com/container-platform/4.6/cli_reference/openshift_cli/getting-started-cli.html#installing-the-cli)
The following table lists common tools required for creating and managing Azure
## Next steps
-[Create the Azure Arc data controller](create-data-controller.md)
+[Create the Azure Arc data controller](create-data-controller.md)
azure-arc List Server Groups Postgres Hyperscale https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/data/list-server-groups-postgres-hyperscale.md
Previously updated : 09/22/2020 Last updated : 07/30/2021
To retrieve this list, use either of the following methods once you are connecte
[!INCLUDE [azure-arc-data-preview](../../../includes/azure-arc-data-preview.md)]
-## From CLI with azdata
+## From CLI with Azure CLI extension (az)
+ The general format of the command is:
-```console
-azdata arc postgres server list
+```azurecli
+az postgres arc-server list --k8s-namespace <namespace> --use-k8s
```
It will return an output like:
postgres01 Ready 2
postgres02 Ready 2
```
For more details about the parameters available for this command, run:
-```console
-azdata arc postgres server list --help
+```azurecli
+az postgres arc-server list --help
```
## From CLI with kubectl
azure-arc Manage Postgresql Hyperscale Server Group With Azure Data Studio https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/data/manage-postgresql-hyperscale-server-group-with-azure-data-studio.md
Previously updated : 09/22/2020 Last updated : 07/30/2021
This article describes how to:
- [Install azdata, Azure Data Studio, and Azure CLI](install-client-tools.md)
- Install in Azure Data Studio the **[!INCLUDE [azure-data-cli-azdata](../../../includes/azure-data-cli-azdata.md)]** and **Azure Arc** and **PostgreSQL** extensions
+
+ [!INCLUDE [use-insider-azure-data-studio](includes/use-insider-azure-data-studio.md)]
+
- Create the [Azure Arc Data Controller](create-data-controller-using-azdata.md)
- Launch Azure Data Studio
Enter the connection information to your Azure Data Controller:
```
- **Username:**
- Name of the user account you use to connect to the Controller. Use the name you typically use when you run `azdata login`. It is not the name of the PostgreSQL user you use to connect to the PostgreSQL database engine typically from psql.
+ Name of the user account you use to connect to the Controller. Use the name you typically use when you run `az login`. It is not the name of the PostgreSQL user you use to connect to the PostgreSQL database engine typically from psql.
- **Password:** The password of the user account you use to connect to the Controller
And select [Add Connection] and fill in the connection details to your PostgreSQ
- **Server name:** enter the name of your PostgreSQL instance. For example: postgres01
- **Authentication type:** Password
- **User name:** for example, you can use the standard/default PostgreSQL admin user name. Note, this field is case-sensitive.
-
+- **Password:** you'll find the password of the PostgreSQL username in the psql connection string in the output of the `az postgres arc-server endpoint list -n postgres01` command
- **Database name:** set the name of the database you want to connect to. You can leave it set to __Default__
- **Server group:** you can leave it set to __Default__
- **Name (optional):** you can leave this blank
- **Advanced:**
  - **Host IP Address:** is the Public IP address of the Kubernetes cluster
- - **Port:** is the port on which your PostgreSQL instance is listening. You can find this port at the end of the psql connection string in the output of the `azdata postgres server endpoint -n postgres01` command. Not port 30080 on which Kubernetes is listening and that you entered when connecting to the Azure Data Controller in Azure Data Studio.
+ - **Port:** is the port on which your PostgreSQL instance is listening. You can find this port at the end of the psql connection string in the output of the `az postgres arc-server endpoint list -n postgres01` command. This is not port 30080, on which Kubernetes is listening and which you entered when connecting to the Azure Data Controller in Azure Data Studio.
- **Other parameters:** They should be self-explanatory; you can keep the default/blank values they appear with.
Select **[OK] and [Connect]** to connect to your server.
azure-arc Managed Instance Features https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/data/managed-instance-features.md
Previously updated : 09/22/2020 Last updated : 07/30/2021
Azure Arc-enabled SQL Managed Instance share a common code base with the latest
## Features of Azure Arc-enabled SQL Managed Instance
Azure Arc-enabled SQL Managed Instance share a common code base with the latest
|Feature|Azure Arc-enabled SQL Managed Instance|
|-|-|
-|Always On failover cluster instance<sup>1</sup>| Not Applicable. Similar capabilities available |
-|Always On availability groups<sup>2</sup>|HA capabilities are planned.|
-|Basic availability groups <sup>2</sup>|HA capabilities are planned.|
-|Minimum replica commit availability group <sup>2</sup>|HA capabilities are planned.|
+|Always On failover cluster instance<sup>1</sup>| Not Applicable. Similar capabilities available.|
+|Always On availability groups<sup>2</sup>|Business critical service tier. In preview.|
+|Basic availability groups <sup>2</sup>|Not Applicable. Similar capabilities available.|
+|Minimum replica commit availability group <sup>2</sup>|Business critical service tier. In preview.|
|Clusterless availability group|Yes| |Backup database | Yes - `COPY_ONLY` See [BACKUP - (Transact-SQL)](/sql/t-sql/statements/backup-transact-sql?view=azuresqldb-mi-current&preserve-view=true)| |Backup compression|Yes|
Azure Arc-enabled SQL Managed Instance share a common code base with the latest
|Database snapshot|Yes| |Fast recovery|Yes| |Hot add memory and CPU|Yes|
-|Log shipping|Yes|
+|Log shipping|Not currently available.|
|Online page and file restore|Yes| |Online indexing|Yes| |Online schema change|Yes| |Resumable online index rebuilds|Yes|
-<sup>1</sup> In the scenario where there is pod failure, a new SQL Managed Instance will start up and re-attach to the persistent volume containing your data. [Learn more about Kubernetes persistent volumes here](https://kubernetes.io/docs/concepts/storage/persistent-volumes).
-
-<sup>2</sup> Future releases will provide AG capabilities.
-
+<sup>1</sup> In the scenario where there is a pod failure, a new SQL Managed Instance will start up and re-attach to the persistent volume containing your data. [Learn more about Kubernetes persistent volumes here](https://kubernetes.io/docs/concepts/storage/persistent-volumes).
### <a name="RDBMSSP"></a> RDBMS Scalability and Performance
Azure Arc-enabled SQL Managed Instance share a common code base with the latest
### Tools
-Azure Arc-enabled SQL Managed Instance support various data tools that can help you manage your data.
+Azure Arc-enabled SQL Managed Instance supports various data tools that can help you manage your data.
| **Tool** | Azure Arc-enabled SQL Managed Instance|
| --- | --- |
| Azure portal <sup>1</sup> | No |
-| Azure CLI | No |
+| Azure CLI | Yes |
| [Azure Data Studio](/sql/azure-data-studio/what-is) | Yes |
-| Azure PowerShell | Yes |
+| Azure PowerShell | No |
| [BACPAC file (export)](/sql/relational-databases/data-tier-applications/export-a-data-tier-application) | Yes | | [BACPAC file (import)](/sql/relational-databases/data-tier-applications/import-a-bacpac-file-to-create-a-new-user-database) | Yes | | [SQL Server Data Tools (SSDT)](/sql/ssdt/download-sql-server-data-tools-ssdt) | Yes |
Azure Arc-enabled SQL Managed Instance support various data tools that can help
| [SQL Server PowerShell](/sql/relational-databases/scripting/sql-server-powershell) | Yes |
| [SQL Server Profiler](/sql/tools/sql-server-profiler/sql-server-profiler) | Yes |
-<sup>1</sup> The Azure portal is only used to view Azure Arc-enabled SQL Managed Instances in read-only mode during preview.
+<sup>1</sup> The Azure portal can be used to create, view, and delete Azure Arc-enabled SQL Managed Instances. Updates cannot be done through the Azure portal currently.
+ [!INCLUDE [use-insider-azure-data-studio](includes/use-insider-azure-data-studio.md)]
### <a name="Unsupported"></a> Unsupported Features & Services
azure-arc Managed Instance High Availability https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/data/managed-instance-high-availability.md
description: Learn how to deploy Azure Arc-enabled SQL Managed Instance with hig
Previously updated : 07/13/2021 Last updated : 07/30/2021
Capabilities that availability groups enable:
To deploy a managed instance with availability groups, run the following command.
```azurecli
-az sql mi-arc create -n <name of instance> --replicas 3
+az sql mi-arc create -n <name of instance> --replicas 3 --k8s-namespace <namespace> --use-k8s
```
### Check status
Once the instance has been deployed, run the following commands to check the status of your instance:
```azurecli
-az sql mi-arc list
-az sql mi-arc show -n <name of instance>
+az sql mi-arc list --k8s-namespace <namespace> --use-k8s
+az sql mi-arc show -n <name of instance> --k8s-namespace <namespace> --use-k8s
```
Example output:
```output
-user@pc:/# az sql mi-arc list
+user@pc:/# az sql mi-arc list --k8s-namespace <namespace> --use-k8s
ExternalEndpoint Name Replicas State - - 20.131.31.58,1433 sql2 3/3 Ready
-user@pc:/# az sql mi-arc show -n sql2
+user@pc:/# az sql mi-arc show -n sql2 --k8s-namespace <namespace> --use-k8s
{ ... "status": {
azure-arc Managed Instance Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/data/managed-instance-overview.md
Previously updated : 03/02/2021 Last updated : 07/30/2021
Azure Arc-enabled SQL Managed Instance is an Azure SQL data service that can be created on the infrastructure of your choice.
## Description
azure-arc Migrate Postgresql Data Into Postgresql Hyperscale Server Group https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/data/migrate-postgresql-data-into-postgresql-hyperscale-server-group.md
Previously updated : 06/02/2021 Last updated : 07/30/2021
To do this backup/restore operation, you can use any tool that is capable of doi
- `psql` - ...
+ [!INCLUDE [use-insider-azure-data-studio](includes/use-insider-azure-data-studio.md)]
+
## Example
Let's illustrate those steps using the `pgAdmin` tool. Consider the following setup:
The backup completes successfully:
> [!NOTE]
> To register a Postgres instance in the `pgAdmin` tool, you need to use the public IP of your instance in your Kubernetes cluster and set the port and security context appropriately. You will find these details on the `psql` endpoint line after running the following command:
-```console
-azdata arc postgres endpoint list -n postgres01
+```azurecli
+az postgres arc-server endpoint list -n postgres01 --k8s-namespace <namespace> --use-k8s
```
That returns an output like:
```console
Within your Arc setup you can use `psql` to connect to your Postgres instance, s
1. List the end points to help form your `psql` connection string:
- ```console
- azdata arc postgres endpoint list -n postgres01
+ ```azurecli
+ az postgres arc-server endpoint list -n postgres01 --k8s-namespace <namespace> --use-k8s
[ { "Description": "PostgreSQL Instance",
azure-arc Migrate To Managed Instance https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/data/migrate-to-managed-instance.md
Previously updated : 09/22/2020 Last updated : 07/30/2021
This scenario walks you through the steps for migrating a database from a SQL Server instance to Azure SQL managed instance in Azure Arc via two different backup and restore methods.
## Use Azure blob storage
This method uses Azure Blob Storage as a temporary storage location that you can
### Prerequisites
- [Install Azure Data Studio](install-client-tools.md)
+
+ [!INCLUDE [use-insider-azure-data-studio](includes/use-insider-azure-data-studio.md)]
- [Install Azure Storage Explorer](https://azure.microsoft.com/features/storage-explorer/)
- Azure subscription
+
+
### Step 1: Provision Azure blob storage
1. Follow the steps described in [Create an Azure Blob Storage account](../../storage/common/storage-account-create.md?tabs=azure-portal)
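If you prefer to provision the storage account from the command line rather than the portal, a minimal sketch looks like the following; the account name, resource group, location, and SKU shown are placeholders and assumptions, not requirements of this article.

```azurecli
# Create a general-purpose storage account to hold the temporary backup files
az storage account create --name <storage account name> --resource-group <resource group> --location <location> --sku Standard_LRS
```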
azure-arc Monitor Grafana Kibana https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/data/monitor-grafana-kibana.md
Previously updated : 12/08/2020 Last updated : 07/30/2021
Kibana and Grafana web dashboards are provided to bring insight and clarity to the Kubernetes namespaces being used by Azure Arc-enabled data services.
## Monitor Azure SQL managed instances on Azure Arc
To access the logs and monitoring dashboards for Arc enabled SQL Managed Instance, run the following `azdata` CLI command
-```bash
-
-azdata arc sql endpoint list -n <name of SQL instance>
+```azurecli
+az sql mi-arc endpoint list -n <name of SQL instance>
```
The relevant Grafana dashboards are:
The relevant Grafana dashboards are:
> When prompted to enter a username and password, enter the username and password that you provided at the time that you created the Azure Arc data controller.
> [!NOTE]
-> You will be prompted with a certificate warning because the certificates used in preview are self-signed certificates.
+> You will be prompted with a certificate warning because the certificates are self-signed certificates.
## Monitor Azure Database for PostgreSQL Hyperscale on Azure Arc
To access the logs and monitoring dashboards for PostgreSQL Hyperscale, run the
```bash
-azdata arc postgres endpoint list -n <name of postgreSQL instance>
+az postgres arc-server endpoint list -n <name of postgreSQL instance> --k8s-namespace <namespace> --use-k8s
```
azure-arc Monitoring Log Analytics Azure Portal Managed Instance https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/data/monitoring-log-analytics-azure-portal-managed-instance.md
Previously updated : 09/22/2020 Last updated : 07/30/2021
This article lists additional experiences you can have with Azure Arc-enabled data services.
## Experiences
azure-arc Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/data/overview.md
Previously updated : 07/13/2021 Last updated : 07/30/2021 # Customer intent: As a data professional, I want to understand why my solutions would benefit from running with Azure Arc-enabled data services so that I can leverage the capability of the feature.
-# What are Azure Arc-enabled data services (preview)?
+# What are Azure Arc-enabled data services?
Azure Arc makes it possible to run Azure data services on-premises, at the edge, and in public clouds using Kubernetes and the infrastructure of your choice.
-Currently, the following Azure Arc-enabled data services are available in preview:
+Currently, the following Azure Arc-enabled data services are available:
- SQL Managed Instance-- PostgreSQL Hyperscale
+- PostgreSQL Hyperscale (preview)
## Always current
Azure Arc also provides other cloud benefits such as fast deployment and automat
Using familiar tools such as the Azure portal, Azure Data Studio, and the Azure CLI (`az`) with the `arcdata` extension, you can now gain a unified view of all your data assets deployed with Azure Arc. You are able to not only view and manage a variety of relational databases across your environment and Azure, but also get logs and telemetry from Kubernetes APIs to analyze the underlying infrastructure capacity and health. Besides having localized log analytics and performance monitoring, you can now leverage Azure Monitor for comprehensive operational insights across your entire estate. + ## Disconnected scenario support Many of the services such as self-service provisioning, automated backups/restore, and monitoring can run locally in your infrastructure with or without a direct connection to Azure. Connecting directly to Azure opens up additional options for integration with other Azure services such as Azure Monitor and the ability to use the Azure portal and Azure Resource Manager APIs from anywhere in the world to manage your Azure Arc-enabled data services.
azure-arc Plan Azure Arc Data Services https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/data/plan-azure-arc-data-services.md
Previously updated : 07/13/2021 Last updated : 07/30/2021 # Plan to deploy Azure Arc-enabled data services This article describes how to plan to deploy Azure Arc-enabled data services. First, deployment of Azure Arc data services involves proper understanding of the database workloads and the business requirements for those workloads. For example, consider things like availability, business continuity, and capacity requirements for memory, CPU, and storage for those workloads. Second, the infrastructure to support those database workloads needs to be prepared based on the business requirements.
Verify that you have:
``` - an Azure subscription to which resources such as Azure Arc data controller, Azure Arc-enabled SQL managed instance or Azure Arc-enabled PostgreSQL Hyperscale server will be projected and billed to. -
-> [!NOTE]
-> Billing applies after general availability and when not using for dev edition.
- Once the infrastructure is prepared, deploy Azure Arc-enabled data services in the following way: 1. Create an Azure Arc-enabled data controller on one of the validated distributions of a Kubernetes cluster
-1. Create an Azure Arc-enabled SQL managed instance or an Azure Arc-enabled PostgreSQL Hyperscale server group.
+1. Create an Azure Arc-enabled SQL managed instance and/or an Azure Arc-enabled PostgreSQL Hyperscale server group.
## Overview: Create the Azure Arc-enabled data controller
Currently, the validated list of Kubernetes services and distributions includes:
> [!IMPORTANT] > * The minimum supported version of Kubernetes is v1.19. See [Known issues](./release-notes.md#known-issues) for additional information. > * The minimum supported version of OCP is 4.7.
-> * If you are using Azure Kubernetes Service, your cluster's worker node VM size should be at least **Standard_D8s_v3** and use **premium disks.** The cluster should not span multiple availability zones. See [Known issues](./release-notes.md#known-issues) for additional information.
--
-> [!NOTE]
-> If you are using Red Hat OpenShift Container Platform on Azure, it is recommended to use the latest available version.
+> * If you are using Azure Kubernetes Service, your cluster's worker node VM size should be at least **Standard_D8s_v3** and use **premium disks.**
+> * The cluster should not span multiple availability zones.
+> * See [Known issues](./release-notes.md#known-issues) for additional information.
Regardless of the option you choose, during the creation process you will need to provide the following information: - **Data controller name** - descriptive name for your data controller - e.g. "production-dc", "seattle-dc". The name must meet [Kubernetes naming standards](https://kubernetes.io/docs/concepts/overview/working-with-objects/names/).-- **Data controller username** - username for the data controller administrator user.-- **Data controller password** - password for the data controller administrator user.
+- **username** - username for the Kibana/Grafana administrator user.
+- **password** - password for the Kibana/Grafana administrator user.
- **Name of your Kubernetes namespace** - the name of the Kubernetes namespace that you want to create the data controller in.-- **Connectivity mode** - Connectivity mode determines the degree of connectivity from your Azure Arc-enabled data services environment to Azure. Preview currently only supports indirectly connected and directly connected modes. For information, see [connectivity mode](./connectivity.md).-- **Azure subscription ID** - The Azure subscription GUID for where you want the data controller resource in Azure to be created.-- **Azure resource group name** - The name of the resource group where you want the data controller resource in Azure to be created.
+- **Connectivity mode** - Connectivity mode determines the degree of connectivity from your Azure Arc-enabled data services environment to Azure. Indirectly connected mode is generally available. Directly connected mode is in preview. The choice of connectivity mode determines the options for deployment methods. For information, see [connectivity mode](./connectivity.md).
+- **Azure subscription ID** - The Azure subscription GUID for where you want the data controller resource in Azure to be created. All Azure Arc-enabled SQL Managed Instances and PostgreSQL Hyperscale server groups will also be created in this subscription and billed to that subscription.
+- **Azure resource group name** - The name of the resource group where you want the data controller resource in Azure to be created. All Azure Arc-enabled SQL Managed Instances and PostgreSQL Hyperscale server groups will also be created in this resource group.
- **Azure location** - The Azure location where the data controller resource metadata will be stored in Azure. For a list of available regions, see [Azure global infrastructure / Products by region](https://azure.microsoft.com/global-infrastructure/services/?products=azure-arc). The metadata and billing information about the Azure resources managed by the data controller that you are deploying will be stored only in the location in Azure that you specify as the location parameter. If you are deploying in the directly connected mode, the location parameter for the data controller will be the same as the location of the custom location resource that you target. - **Service Principal information** - as described in the [Upload prerequisites](upload-metrics-and-logs-to-azure-monitor.md) article, you will need the Service Principal information during Azure Arc data controller create when deploying in *direct* connectivity mode. For *indirect* connectivity mode, the Service Principal is still needed to export and upload manually but after the Azure Arc data controller is created. - **Infrastructure** - For billing purposes, it is required to indicate the infrastructure on which you are running Arc enabled data services. The options are `alibaba`, `aws`, `azure`, `gcp`, `onpremises`, or `other`.
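As a hedged illustration only, the information listed above maps onto the parameters of an `az arcdata dc create` command roughly as follows; every value here is a placeholder:
```azurecli
az arcdata dc create --name <data-controller-name> \
  --k8s-namespace <namespace> \
  --connectivity-mode indirect \
  --subscription <subscription-id> \
  --resource-group <resource-group> \
  --location <azure-region> \
  --infrastructure onpremises
```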
As described in the [connectivity modes](./connectivity.md), Azure Arc data cont
First, the Kubernetes cluster where the Arc enabled data services will be deployed needs to be an [Azure Arc-enabled Kubernetes cluster](../kubernetes/overview.md). Onboarding the Kubernetes cluster to Azure Arc provides Azure connectivity that is leveraged for capabilities such as automatic upload of usage information, logs, metrics etc. Connecting your Kubernetes cluster to Azure also allows you to deploy and manage Azure Arc data services to your cluster directly from the Azure portal. Connecting your Kubernetes cluster to Azure involves the following steps:-- Install the required az extensions - [Connect your cluster to Azure](../kubernetes/quickstart-connect-cluster.md)
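A minimal sketch of those onboarding steps, assuming the `connectedk8s` and `arcdata` CLI extensions and placeholder names:
```azurecli
az extension add --name connectedk8s
az extension add --name arcdata
az connectedk8s connect --name <cluster-name> --resource-group <resource-group>
```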
-Second, after the Kubernetes cluster is onboarded to Azure Arc, deploying Azure Arc data services on an Azure Arc-enabled Kubernetes cluster involves the following:
+Second, after the Kubernetes cluster is onboarded to Azure Arc, deploying Azure Arc-enabled data services on an Azure Arc-enabled Kubernetes cluster involves the following:
- Create the Arc data services extension, learn more about [cluster extensions](../kubernetes/conceptual-extensions.md) - Create a custom location, learn more about [custom locations](../kubernetes/conceptual-custom-locations.md) - Create the Azure Arc data controller
-After the Azure Arc data controller is installed, data services such as Azure Arc-enabled SQL managed instance or Azure Arc-enabled PostgreSQL Hyperscale Server can be created.
+All three of these steps can be completed together by using the Azure Arc data controller creation wizard in the Azure portal.
+
+After the Azure Arc data controller is installed, data services such as Azure Arc-enabled SQL Managed Instance or Azure Arc-enabled PostgreSQL Hyperscale Server Group can be created.
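For example, once the data controller is up, hedged sketches of the create commands (names and namespace are placeholders) could look like:
```azurecli
az sql mi-arc create --name <sqlmi-name> --k8s-namespace <namespace> --use-k8s
az postgres arc-server create --name <postgres-name> --k8s-namespace <namespace> --use-k8s
```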
## Next steps
There are multiple options for creating the Azure Arc data controller:
- [Create a data controller in indirect connected mode with Azure Data Studio](create-data-controller-indirect-azure-data-studio.md) - [Create a data controller in indirect connected mode from the Azure portal via a Jupyter notebook in Azure Data Studio](create-data-controller-indirect-azure-portal.md) - [Create a data controller in indirect connected mode with Kubernetes tools such as kubectl or oc](create-data-controller-using-kubernetes-native-tools.md)-- [Create a data controller with Azure Arc Jumpstart for an accelerated experience of a test deployment](https://azurearcjumpstart.io/azure_arc_jumpstart/azure_arc_data/)+
azure-arc Point In Time Restore https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/data/point-in-time-restore.md
Previously updated : 07/13/2021 Last updated : 07/30/2021 # Perform a Point in Time Restore Azure Arc-enabled SQL Managed Instance comes built in with many PaaS-like capabilities. One such capability is the ability to restore a database to a point in time, within the pre-configured retention settings. This article describes how to do a point-in-time restore of a database in Azure Arc-enabled SQL managed instance.
azure-arc Postgresql Hyperscale Server Group Placement On Kubernetes Cluster Nodes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/data/postgresql-hyperscale-server-group-placement-on-kubernetes-cluster-nodes.md
Previously updated : 06/02/2021 Last updated : 07/30/2021
It means that, at this point, each PostgreSQL instance constituting the Azure Ar
Now, let's scale out to add a third worker node to the server group and observe what happens. It will create a fourth PostgreSQL instance that will be hosted in a fourth pod. To scale out, run the command:
-```console
-azdata arc postgres server edit --name postgres01 --workers 3
+```azurecli
+az postgres arc-server edit --name postgres01 --workers 3 --k8s-namespace <namespace> --use-k8s
``` That produces the following output:
postgres01 is Ready
List the server groups deployed in the Azure Arc Data Controller and verify that the server group now runs with three workers. Run the command:
-```console
-azdata arc postgres server list
+```azurecli
+az postgres arc-server list --k8s-namespace <namespace> --use-k8s
``` And observe that it did scale out from two workers to three workers:
Note that, in this example, we are focusing only on the namespace of the Arc Dat
The fifth physical node is not hosting any workload yet. As we scale out the Azure Arc-enabled PostgreSQL Hyperscale, Kubernetes will optimize the placement of the new PostgreSQL pod and should not collocate it on physical nodes that are already hosting more workloads. Run the following command to scale the Azure Arc-enabled PostgreSQL Hyperscale from 3 to 4 workers. At the end of the operation, the server group will be constituted and distributed across five PostgreSQL instances, one coordinator and four workers.
-```console
-azdata arc postgres server edit --name postgres01 --workers 4
+```azurecli
+az postgres arc-server edit --name postgres01 --workers 4 --k8s-namespace <namespace> --use-k8s
``` That produces the following output:
postgres01 is Ready
List the server groups deployed in the Data Controller and verify the server group now runs with four workers:
-```console
-azdata arc postgres server list
+```azurecli
+az postgres arc-server list --k8s-namespace <namespace> --use-k8s
``` And observe that it did scale out from three to four workers.
azure-arc Privacy Data Collection And Reporting https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/data/privacy-data-collection-and-reporting.md
Previously updated : 04/27/2021 Last updated : 07/30/2021
This article describes the data that Azure Arc-enabled data services transmits to Microsoft. ## Related products
Azure Arc-enabled data services may use some or all of the following products:
- SQL MI – Azure Arc - PostgreSQL Hyperscale – Azure Arc - Azure Data Studio+
+ [!INCLUDE [use-insider-azure-data-studio](includes/use-insider-azure-data-studio.md)]
+ - Azure CLI (az) - Azure Data CLI (`azdata`)
In support situations, you may be asked to provide database instance logs, Kuber
|Crash dumps – customer data | Maximum 30-day retention of crash dumps – may contain access control data <br/><br/> Statistics objects, data values within rows, query texts could be in customer crash dumps | |Crash dumps – personal data | Machine, logins/user names, emails, location information, customer identification – require user consent to be included |
-### Customer experience improvement program (CEIP) (Telemetry)
-
-Telemetry is used to track product usage metrics and environment information.
-See [SQL Server privacy supplement](/sql/sql-server/sql-server-privacy/).
- ## Next steps [Upload usage data to Azure Monitor](upload-usage-data.md)
azure-arc Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/data/reference/overview.md
+
+ Title: az arcdata reference overview
+
+description: Reference article for arcdata commands.
+++ Last updated : 07/30/2021+++++
+# Overview: az arcdata reference
+
+## az arcdata
+### Commands
+| Command | Description|
+| | |
+[az arcdata dc](reference-az-arcdata-dc.md) | Create, delete, and manage data controllers.
+[az arcdata resource-kind](reference-az-arcdata-resource-kind.md) | Resource-kind commands to define and template custom resources on your cluster.
++
+## az sql mi-arc
+### Commands
+| Command | Description|
+| | |
+[az sql mi-arc](reference-az-sql-mi-arc.md) | Manage Azure Arc-enabled SQL managed instances.
++
+## az postgres arc-server
+### Commands
+| Command | Description|
+| | |
+[az postgres arc-server](reference-az-postgres-arc-server.md) | Manage Azure Arc enabled PostgreSQL Hyperscale server groups.
azure-arc Reference Az Arcdata Dc Config https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/data/reference/reference-az-arcdata-dc-config.md
+
+ Title: az arcdata dc config reference
+
+description: Reference article for az arcdata dc config commands.
+++ Last updated : 07/30/2021+++++
+# az arcdata dc config
+## Commands
+| Command | Description|
+| | |
+[az arcdata dc config init](#az-arcdata-dc-config-init) | Initialize a data controller configuration profile that can be used with `az arcdata dc create`.
+[az arcdata dc config list](#az-arcdata-dc-config-list) | List available configuration profile choices.
+[az arcdata dc config add](#az-arcdata-dc-config-add) | Add a value for a json path in a config file.
+[az arcdata dc config remove](#az-arcdata-dc-config-remove) | Remove a value for a json path in a config file.
+[az arcdata dc config replace](#az-arcdata-dc-config-replace) | Replace a value for a json path in a config file.
+[az arcdata dc config patch](#az-arcdata-dc-config-patch) | Patch a config file based on a json patch file.
+## az arcdata dc config init
+Initialize a data controller configuration profile that can be used with `az arcdata dc create`. The specific source of the configuration profile can be specified in the arguments.
+```bash
+az arcdata dc config init [--path -p]
+ [--source -s]
+
+[--force -f]
+```
+### Examples
+Guided data controller config init experience - you will receive prompts for needed values.
+```bash
+az arcdata dc config init
+```
+Data controller config init with arguments. This creates a configuration profile from the azure-arc-kubeadm source in ./custom.
+```bash
+az arcdata dc config init --source azure-arc-kubeadm --path custom
+```
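+The generated profile directory can then be edited and passed to `az arcdata dc create` with `--path`. A sketch only, with placeholder values:
+```bash
+az arcdata dc create --path ./custom --name <dc-name> --k8s-namespace <namespace> --connectivity-mode indirect --resource-group <resource-group> --location <location>
+```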
+### Optional Parameters
+#### `--path -p`
+File path of where you would like the config profile placed, defaults to <cwd>/custom.
+#### `--source -s`
+Config profile source: ['azure-arc-gke', 'azure-arc-eks', 'azure-arc-kubeadm', 'azure-arc-aks-default-storage', 'azure-arc-azure-openshift', 'azure-arc-ake', 'azure-arc-openshift', 'azure-arc-aks-dev-test', 'azure-arc-aks-hci', 'azure-arc-kubeadm-dev-test', 'azure-arc-aks-premium-storage']
+#### `--force -f`
+Force overwrite of the target file.
+### Global Arguments
+#### `--debug`
+Increase logging verbosity to show all debug logs.
+#### `--help -h`
+Show this help message and exit.
+#### `--output -o`
+Output format. Allowed values: json, jsonc, none, table, tsv, yaml, yamlc. Default: json.
+#### `--query -q`
+JMESPath query string. See [http://jmespath.org](http://jmespath.org) for more information and examples.
+#### `--subscription`
+Name or ID of subscription. You can configure the default subscription using `az account set -s NAME_OR_ID`.
+#### `--verbose`
+Increase logging verbosity. Use --debug for full debug logs.
+## az arcdata dc config list
+List available configuration profile choices for use in `az arcdata dc config init`.
+```bash
+az arcdata dc config list [--config-profile -c]
+
+```
+### Examples
+Shows all available configuration profile names.
+```bash
+az arcdata dc config list
+```
+Shows json of a specific configuration profile.
+```bash
+az arcdata dc config list --config-profile aks-dev-test
+```
+### Optional Parameters
+#### `--config-profile -c`
+Default config profile: ['azure-arc-gke', 'azure-arc-eks', 'azure-arc-kubeadm', 'azure-arc-aks-default-storage', 'azure-arc-azure-openshift', 'azure-arc-ake', 'azure-arc-openshift', 'azure-arc-aks-dev-test', 'azure-arc-aks-hci', 'azure-arc-kubeadm-dev-test', 'azure-arc-aks-premium-storage']
+### Global Arguments
+#### `--debug`
+Increase logging verbosity to show all debug logs.
+#### `--help -h`
+Show this help message and exit.
+#### `--output -o`
+Output format. Allowed values: json, jsonc, none, table, tsv, yaml, yamlc. Default: json.
+#### `--query -q`
+JMESPath query string. See [http://jmespath.org](http://jmespath.org) for more information and examples.
+#### `--subscription`
+Name or ID of subscription. You can configure the default subscription using `az account set -s NAME_OR_ID`.
+#### `--verbose`
+Increase logging verbosity. Use --debug for full debug logs.
+## az arcdata dc config add
+Add the value at the json path in the config file. All examples below are given in Bash. If using another command line, you may need to escape quotations appropriately. Alternatively, you may use the patch file functionality.
+```bash
+az arcdata dc config add --path -p
+ --json-values -j
+```
+### Examples
+Add data controller storage.
+```bash
+az arcdata dc config add --path custom/control.json --json-values "spec.storage={"accessMode":"ReadWriteOnce","className":"managed-premium","size":"10Gi"}"
+```
+### Required Parameters
+#### `--path -p`
+Data controller config file path of the config you would like to set, i.e. custom/control.json
+#### `--json-values -j`
+A key value pair list of json paths to values: key1.subkey1=value1,key2.subkey2=value2. You may provide inline json values such as: key='{"kind":"cluster","name":"test-cluster"}' or provide a file path, such as key=./values.json. The add command does NOT support conditionals. If the inline value you are providing is a key value pair itself with "=" and "," escape those characters. For example, key1="key2\=val2\,key3\=val3". See http://jsonpatch.com/ for examples of how your path should look. If you would like to access an array, you must do so by indicating the index, such as key.0=value
+### Global Arguments
+#### `--debug`
+Increase logging verbosity to show all debug logs.
+#### `--help -h`
+Show this help message and exit.
+#### `--output -o`
+Output format. Allowed values: json, jsonc, none, table, tsv, yaml, yamlc. Default: json.
+#### `--query -q`
+JMESPath query string. See [http://jmespath.org](http://jmespath.org) for more information and examples.
+#### `--subscription`
+Name or ID of subscription. You can configure the default subscription using `az account set -s NAME_OR_ID`.
+#### `--verbose`
+Increase logging verbosity. Use --debug for full debug logs.
+## az arcdata dc config remove
+Remove the value at the json path in the config file. All examples below are given in Bash. If using another command line, you may need to escape quotations appropriately. Alternatively, you may use the patch file functionality.
+```bash
+az arcdata dc config remove --path -p
+ --json-path -j
+```
+### Examples
+Ex 1 - Remove data controller storage.
+```bash
+az arcdata dc config remove --path custom/control.json --json-path ".spec.storage"
+```
+### Required Parameters
+#### `--path -p`
+Data controller config file path of the config you would like to set, i.e. custom/control.json
+#### `--json-path -j`
+A list of json paths based on the jsonpatch library that indicates which values you would like removed, such as: key1.subkey1,key2.subkey2. The remove command does NOT support conditionals. See http://jsonpatch.com/ for examples of how your path should look. If you would like to access an array, you must do so by indicating the index, such as key.0=value
+### Global Arguments
+#### `--debug`
+Increase logging verbosity to show all debug logs.
+#### `--help -h`
+Show this help message and exit.
+#### `--output -o`
+Output format. Allowed values: json, jsonc, none, table, tsv, yaml, yamlc. Default: json.
+#### `--query -q`
+JMESPath query string. See [http://jmespath.org](http://jmespath.org) for more information and examples.
+#### `--subscription`
+Name or ID of subscription. You can configure the default subscription using `az account set -s NAME_OR_ID`.
+#### `--verbose`
+Increase logging verbosity. Use --debug for full debug logs.
+## az arcdata dc config replace
+Replace the value at the json path in the config file. All examples below are given in Bash. If using another command line, you may need to escape quotations appropriately. Alternatively, you may use the patch file functionality.
+```bash
+az arcdata dc config replace --path -p
+ --json-values -j
+```
+### Examples
+Ex 1 - Replace the port of a single endpoint (Data Controller Endpoint).
+```bash
+az arcdata dc config replace --path custom/control.json --json-values "$.spec.endpoints[?(@.name=="Controller")].port=30080"
+```
+Ex 2 - Replace data controller storage.
+```bash
+az arcdata dc config replace --path custom/control.json --json-values "spec.storage={"accessMode":"ReadWriteOnce","className":"managed-premium","size":"10Gi"}"
+```
+### Required Parameters
+#### `--path -p`
+Data controller config file path of the config you would like to set, i.e. custom/control.json
+#### `--json-values -j`
+A key value pair list of json paths to values: key1.subkey1=value1,key2.subkey2=value2. You may provide inline json values such as: key='{"kind":"cluster","name":"test-cluster"}' or provide a file path, such as key=./values.json. The replace command supports conditionals through the jsonpath library. To use this, start your path with a $. This will allow you to do a conditional such as -j $.key1.key2[?(@.key3=="someValue")].key4=value. If the inline value you are providing is a key value pair itself with "=" and "," escape those characters. For example, key1="key2\=val2\,key3\=val3". You may see examples below. For additional help, see: https://jsonpath.com/
+### Global Arguments
+#### `--debug`
+Increase logging verbosity to show all debug logs.
+#### `--help -h`
+Show this help message and exit.
+#### `--output -o`
+Output format. Allowed values: json, jsonc, none, table, tsv, yaml, yamlc. Default: json.
+#### `--query -q`
+JMESPath query string. See [http://jmespath.org](http://jmespath.org) for more information and examples.
+#### `--subscription`
+Name or ID of subscription. You can configure the default subscription using `az account set -s NAME_OR_ID`.
+#### `--verbose`
+Increase logging verbosity. Use --debug for full debug logs.
+## az arcdata dc config patch
+Patch the config file according to the given patch file. Consult http://jsonpatch.com/ for a better understanding of how the paths should be composed. The replace operation can use conditionals in its path due to the jsonpath library https://jsonpath.com/. All patch json files must start with a key of "patch" that has an array of patches with their corresponding op (add, replace, remove), path, and value. The "remove" op does not require a value, just a path. See the examples below.
+```bash
+az arcdata dc config patch --path
+ --patch-file -p
+```
+### Examples
+Ex 1 - Replace the port of a single endpoint (Data Controller Endpoint) with patch file.
+```bash
+az arcdata dc config patch --path custom/control.json --patch ./patch.json
+
+ Patch File Example (patch.json):
+ {"patch":[{"op":"replace","path":"$.spec.endpoints[?(@.name=="Controller")].port","value":30080}]}
+```
+Ex 2 - Replace data controller storage with patch file.
+```bash
+az arcdata dc config patch --path custom/control.json --patch ./patch.json
+
+ Patch File Example (patch.json):
+ {"patch":[{"op":"replace","path":".spec.storage","value":{"accessMode":"ReadWriteMany","className":"managed-premium","size":"10Gi"}}]}
+```
+### Required Parameters
+#### `--path`
+Data controller config file path of the config you would like to set, i.e. custom/control.json
+#### `--patch-file -p`
+Path to a patch json file that is based on the jsonpatch library: http://jsonpatch.com/. You must start your patch json file with a key called "patch", whose value is an array of patch operations you intend to make. For the path of a patch operation, you may use dot notation, such as key1.key2 for most operations. If you would like to do a replace operation, and you are replacing a value in an array that requires a conditional, please use the jsonpath notation by beginning your path with a $. This will allow you to do a conditional such as $.key1.key2[?(@.key3=="someValue")].key4. See the examples below. For additional help with conditionals, see: https://jsonpath.com/.
+### Global Arguments
+#### `--debug`
+Increase logging verbosity to show all debug logs.
+#### `--help -h`
+Show this help message and exit.
+#### `--output -o`
+Output format. Allowed values: json, jsonc, none, table, tsv, yaml, yamlc. Default: json.
+#### `--query -q`
+JMESPath query string. See [http://jmespath.org](http://jmespath.org) for more information and examples.
+#### `--subscription`
+Name or ID of subscription. You can configure the default subscription using `az account set -s NAME_OR_ID`.
+#### `--verbose`
+Increase logging verbosity. Use --debug for full debug logs.
azure-arc Reference Az Arcdata Dc Debug https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/data/reference/reference-az-arcdata-dc-debug.md
+
+ Title: az arcdata dc debug reference
+
+description: Reference article for az arcdata dc debug commands.
+++ Last updated : 07/30/2021+++++
+# az arcdata dc debug
+## Commands
+| Command | Description|
+| | |
+[az arcdata dc debug copy-logs](#az-arcdata-dc-debug-copy-logs) | Copy logs.
+[az arcdata dc debug dump](#az-arcdata-dc-debug-dump) | Trigger memory dump.
+## az arcdata dc debug copy-logs
+Copy the debug logs from the data controller - Kubernetes configuration is required on your system.
+```bash
+az arcdata dc debug copy-logs --k8s-namespace -k
+ [--container -c]
+
+[--target-folder -d]
+
+[--pod]
+
+[--resource-kind]
+
+[--resource-name]
+
+[--timeout -t]
+
+[--skip-compress]
+
+[--exclude-dumps]
+
+[--exclude-system-logs ]
+
+[--use-k8s]
+```
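+### Examples
+A sketch of collecting logs into a local folder while skipping memory dumps; the namespace and folder are placeholders.
+```bash
+az arcdata dc debug copy-logs --k8s-namespace <namespace> --target-folder ./logs --exclude-dumps --use-k8s
+```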
+### Required Parameters
+#### `--k8s-namespace -k`
+Kubernetes namespace of the data controller.
+### Optional Parameters
+#### `--container -c`
+Copy the logs for the containers with similar name, Optional, by default copies logs for all containers. Cannot be specified multiple times. If specified multiple times, last one will be used
+#### `--target-folder -d`
+Target folder path to copy logs to. Optional, by default creates the result in the local folder. Cannot be specified multiple times. If specified multiple times, last one will be used
+#### `--pod`
+Copy the logs for the pods with similar name. Optional, by default copies logs for all pods. Cannot be specified multiple times. If specified multiple times, last one will be used
+#### `--resource-kind`
+Copy the logs for the resource of a particular kind. Cannot be specified multiple times. If specified multiple times, last one will be used. If specified, --resource-name should also be specified to identify the resource.
+#### `--resource-name`
+Copy the logs for the resource of the specified name. Cannot be specified multiple times. If specified multiple times, last one will be used. If specified, --resource-kind should also be specified to identify the resource.
+#### `--timeout -t`
+The number of seconds to wait for the command to complete. The default value is 0, which is unlimited.
+#### `--skip-compress`
+Whether or not to skip compressing the result folder. The default value is False which compresses the result folder.
+#### `--exclude-dumps`
+Whether or not to exclude dumps from result folder. The default value is False which includes dumps.
+#### `--exclude-system-logs`
+Whether or not to exclude system logs from collection. The default value is False which includes system logs.
+#### `--use-k8s`
+Use local Kubernetes APIs to perform this action.
+### Global Arguments
+#### `--debug`
+Increase logging verbosity to show all debug logs.
+#### `--help -h`
+Show this help message and exit.
+#### `--output -o`
+Output format. Allowed values: json, jsonc, none, table, tsv, yaml, yamlc. Default: json.
+#### `--query -q`
+JMESPath query string. See [http://jmespath.org](http://jmespath.org) for more information and examples.
+#### `--subscription`
+Name or ID of subscription. You can configure the default subscription using `az account set -s NAME_OR_ID`.
+#### `--verbose`
+Increase logging verbosity. Use --debug for full debug logs.
+## az arcdata dc debug dump
+Trigger memory dump and copy it out from container - Kubernetes configuration is required on your system.
+```bash
+az arcdata dc debug dump --k8s-namespace -k
+ [--container -c]
+
+[--target-folder -d]
+
+[--use-k8s]
+```
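+### Examples
+A sketch of triggering a memory dump of the `controller` container into the default output folder; the namespace is a placeholder.
+```bash
+az arcdata dc debug dump --k8s-namespace <namespace> --container controller --target-folder ./output/dump --use-k8s
+```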
+### Required Parameters
+#### `--k8s-namespace -k`
+Kubernetes namespace of the data controller.
+### Optional Parameters
+#### `--container -c`
+The target container to be triggered for dumping the running processes.
+`controller`
+#### `--target-folder -d`
+Target folder to copy the dump out.
+`./output/dump`
+#### `--use-k8s`
+Use local Kubernetes APIs to perform this action.
+### Global Arguments
+#### `--debug`
+Increase logging verbosity to show all debug logs.
+#### `--help -h`
+Show this help message and exit.
+#### `--output -o`
+Output format. Allowed values: json, jsonc, none, table, tsv, yaml, yamlc. Default: json.
+#### `--query -q`
+JMESPath query string. See [http://jmespath.org](http://jmespath.org) for more information and examples.
+#### `--subscription`
+Name or ID of subscription. You can configure the default subscription using `az account set -s NAME_OR_ID`.
+#### `--verbose`
+Increase logging verbosity. Use --debug for full debug logs.
azure-arc Reference Az Arcdata Dc Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/data/reference/reference-az-arcdata-dc-endpoint.md
+
+ Title: az arcdata dc endpoint reference
+
+description: Reference article for az arcdata dc endpoint commands.
+++ Last updated : 07/30/2021+++++
+# az arcdata dc endpoint
+## Commands
+| Command | Description|
+| | |
+[az arcdata dc endpoint list](#az-arcdata-dc-endpoint-list) | List the data controller endpoint.
+## az arcdata dc endpoint list
+List the data controller endpoint.
+```bash
+az arcdata dc endpoint list --k8s-namespace -k
+ [--endpoint-name -e]
+
+[--use-k8s]
+```
+### Examples
+Lists all available data controller endpoints.
+```bash
+az arcdata dc endpoint list --k8s-namespace namespace
+```
+### Required Parameters
+#### `--k8s-namespace -k`
+The Kubernetes namespace in which the data controller exists.
+### Optional Parameters
+#### `--endpoint-name -e`
+Arc data controller endpoint name.
+#### `--use-k8s`
+Use local Kubernetes APIs to perform this action.
+### Global Arguments
+#### `--debug`
+Increase logging verbosity to show all debug logs.
+#### `--help -h`
+Show this help message and exit.
+#### `--output -o`
+Output format. Allowed values: json, jsonc, none, table, tsv, yaml, yamlc. Default: json.
+#### `--query -q`
+JMESPath query string. See [http://jmespath.org](http://jmespath.org) for more information and examples.
+#### `--subscription`
+Name or ID of subscription. You can configure the default subscription using `az account set -s NAME_OR_ID`.
+#### `--verbose`
+Increase logging verbosity. Use --debug for full debug logs.
azure-arc Reference Az Arcdata Dc Status https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/data/reference/reference-az-arcdata-dc-status.md
+
+ Title: az arcdata dc status reference
+
+description: Reference article for az arcdata dc status commands.
+++ Last updated : 07/30/2021+++++
+# az arcdata dc status
+## Commands
+| Command | Description|
+| | |
+[az arcdata dc status show](#az-arcdata-dc-status-show) | Show the status of the data controller.
+## az arcdata dc status show
+Show the status of the data controller.
+```bash
+az arcdata dc status show [--k8s-namespace -k]
+ [--use-k8s]
+```
+### Examples
+Show the status of the data controller in a particular Kubernetes namespace.
+```bash
+az arcdata dc status show --k8s-namespace <ns>
+```
+### Optional Parameters
+#### `--k8s-namespace -k`
+The Kubernetes namespace in which the data controller exists.
+#### `--use-k8s`
+Use local Kubernetes APIs to perform this action.
+### Global Arguments
+#### `--debug`
+Increase logging verbosity to show all debug logs.
+#### `--help -h`
+Show this help message and exit.
+#### `--output -o`
+Output format. Allowed values: json, jsonc, none, table, tsv, yaml, yamlc. Default: json.
+#### `--query -q`
+JMESPath query string. See [http://jmespath.org](http://jmespath.org) for more information and examples.
+#### `--subscription`
+Name or ID of subscription. You can configure the default subscription using `az account set -s NAME_OR_ID`.
+#### `--verbose`
+Increase logging verbosity. Use --debug for full debug logs.
azure-arc Reference Az Arcdata Dc https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/data/reference/reference-az-arcdata-dc.md
+
+ Title: az arcdata dc reference
+
+description: Reference article for az arcdata dc commands.
+++ Last updated : 07/30/2021+++++
+# az arcdata dc
+## Commands
+| Command | Description|
+| | |
+[az arcdata dc create](#az-arcdata-dc-create) | Create data controller.
+[az arcdata dc delete](#az-arcdata-dc-delete) | Delete data controller.
+[az arcdata dc endpoint](reference-az-arcdata-dc-endpoint.md) | Endpoint commands.
+[az arcdata dc status](reference-az-arcdata-dc-status.md) | Status commands.
+[az arcdata dc config](reference-az-arcdata-dc-config.md) | Configuration commands.
+[az arcdata dc debug](reference-az-arcdata-dc-debug.md) | Debug data controller.
+[az arcdata dc export](#az-arcdata-dc-export) | Export metrics, logs or usage.
+[az arcdata dc upload](#az-arcdata-dc-upload) | Upload exported data file.
+## az arcdata dc create
+Create data controller - kube config is required on your system along with the following environment variables ['AZDATA_USERNAME', 'AZDATA_PASSWORD'].
+```bash
+az arcdata dc create --k8s-namespace -k
+ --name -n
+
+--connectivity-mode
+
+--resource-group -g
+
+--location -l
+
+[--profile-name]
+
+[--path -p]
+
+[--storage-class]
+
+[--infrastructure]
+
+[--labels]
+
+[--annotations]
+
+[--service-annotations]
+
+[--service-labels]
+
+[--storage-labels]
+
+[--storage-annotations]
+
+[--use-k8s]
+```
+### Examples
+Data controller deployment.
+```bash
+az arcdata dc create --name name --k8s-namespace namespace --connectivity-mode indirect --resource-group group --location location --subscription subscription
+```
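+Because the command reads the administrator credentials from the environment variables noted above, a Bash invocation typically exports them first. A sketch with placeholder values:
+```bash
+export AZDATA_USERNAME='<admin-username>'
+export AZDATA_PASSWORD='<admin-password>'
+az arcdata dc create --name <dc-name> --k8s-namespace <namespace> --connectivity-mode indirect --resource-group <resource-group> --location <location>
+```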
+### Required Parameters
+#### `--k8s-namespace -k`
+The Kubernetes namespace to deploy the data controller into. If it exists already it will be used. If it does not exist, an attempt will be made to create it first.
+#### `--name -n`
+The name for the data controller.
+#### `--connectivity-mode`
+The connectivity to Azure - indirect or direct - which the data controller should operate in.
+#### `--resource-group -g`
+The Azure resource group in which the data controller resource should be added.
+#### `--location -l`
+The Azure location in which the data controller metadata will be stored (e.g. eastus).
+### Optional Parameters
+#### `--profile-name`
+The name of an existing configuration profile. Run `az arcdata dc config list` to see available options. One of the following: ['azure-arc-gke', 'azure-arc-eks', 'azure-arc-kubeadm', 'azure-arc-aks-default-storage', 'azure-arc-azure-openshift', 'azure-arc-ake', 'azure-arc-openshift', 'azure-arc-aks-hci', 'azure-arc-aks-premium-storage'].
+#### `--path -p`
+The path to a directory containing a custom configuration profile to use. Run `az arcdata dc config init` to create a custom configuration profile.
+#### `--storage-class`
+The storage class to be used for all data and logs persistent volumes for all data controller pods that require them.
+#### `--infrastructure`
+The infrastructure on which the data controller will be running. Allowed values: ['aws', 'gcp', 'azure', 'alibaba', 'onpremises', 'other', 'auto']
+#### `--labels`
+Comma-separated list of labels to apply to all data controller resources.
+#### `--annotations`
+Comma-separated list of annotations to apply to all data controller resources.
+#### `--service-annotations`
+Comma-separated list of annotations to apply to all external data controller services.
+#### `--service-labels`
+Comma-separated list of labels to apply to all external data controller services.
+#### `--storage-labels`
+Comma-separated list of labels to apply to all PVCs created by the data controller.
+#### `--storage-annotations`
+Comma-separated list of annotations to apply to all PVCs created by the data controller.
+#### `--use-k8s`
+Create data controller using local Kubernetes APIs.
+### Global Arguments
+#### `--debug`
+Increase logging verbosity to show all debug logs.
+#### `--help -h`
+Show this help message and exit.
+#### `--output -o`
+Output format. Allowed values: json, jsonc, none, table, tsv, yaml, yamlc. Default: json.
+#### `--query -q`
+JMESPath query string. See [http://jmespath.org](http://jmespath.org) for more information and examples.
+#### `--subscription`
+Name or ID of subscription. You can configure the default subscription using `az account set -s NAME_OR_ID`.
+#### `--verbose`
+Increase logging verbosity. Use --debug for full debug logs.
+## az arcdata dc delete
+Delete data controller - kube config is required on your system.
+```bash
+az arcdata dc delete --name -n
+ --k8s-namespace -k
+
+[--force -f]
+
+[--yes -y]
+```
+### Examples
+Data controller deletion.
+```bash
+az arcdata dc delete --name name --k8s-namespace namespace
+```
+### Required Parameters
+#### `--name -n`
+Data controller name.
+#### `--k8s-namespace -k`
+The Kubernetes namespace in which the data controller exists.
+### Optional Parameters
+#### `--force -f`
+Force delete data controller and all of its data services.
+#### `--yes -y`
+Delete data controller without confirmation prompt.
+### Global Arguments
+#### `--debug`
+Increase logging verbosity to show all debug logs.
+#### `--help -h`
+Show this help message and exit.
+#### `--output -o`
+Output format. Allowed values: json, jsonc, none, table, tsv, yaml, yamlc. Default: json.
+#### `--query -q`
+JMESPath query string. See [http://jmespath.org](http://jmespath.org) for more information and examples.
+#### `--subscription`
+Name or ID of subscription. You can configure the default subscription using `az account set -s NAME_OR_ID`.
+#### `--verbose`
+Increase logging verbosity. Use --debug for full debug logs.
+## az arcdata dc export
+Export metrics, logs or usage to a file.
+```bash
+az arcdata dc export --type -t
+ --path -p
+
+--k8s-namespace -k
+
+[--force -f]
+
+[--use-k8s]
+```
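+### Examples
+A sketch of exporting usage data to a local file for later upload; the namespace and path are placeholders.
+```bash
+az arcdata dc export --type usage --path ./usage.json --k8s-namespace <namespace> --use-k8s
+```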
+### Required Parameters
+#### `--type -t`
+The type of data to be exported. Options: logs, metrics, and usage.
+#### `--path -p`
+The full or relative path including the file name of the file to be exported.
+#### `--k8s-namespace -k`
+The Kubernetes namespace in which the data controller exists.
+### Optional Parameters
+#### `--force -f`
+Force create output file. Overwrites any existing file at the same path.
+#### `--use-k8s`
+Use local Kubernetes APIs to perform this action.
+### Global Arguments
+#### `--debug`
+Increase logging verbosity to show all debug logs.
+#### `--help -h`
+Show this help message and exit.
+#### `--output -o`
+Output format. Allowed values: json, jsonc, none, table, tsv, yaml, yamlc. Default: json.
+#### `--query -q`
+JMESPath query string. See [http://jmespath.org](http://jmespath.org) for more information and examples.
+#### `--subscription`
+Name or ID of subscription. You can configure the default subscription using `az account set -s NAME_OR_ID`.
+#### `--verbose`
+Increase logging verbosity. Use --debug for full debug logs.
+## az arcdata dc upload
+Upload data file exported from a data controller to Azure.
+```bash
+az arcdata dc upload --path -p
+
+```
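+### Examples
+A sketch of uploading a previously exported file; the path is a placeholder.
+```bash
+az arcdata dc upload --path ./usage.json
+```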
+### Required Parameters
+#### `--path -p`
+The full or relative path including the file name of the file to be uploaded.
+### Global Arguments
+#### `--debug`
+Increase logging verbosity to show all debug logs.
+#### `--help -h`
+Show this help message and exit.
+#### `--output -o`
+Output format. Allowed values: json, jsonc, none, table, tsv, yaml, yamlc. Default: json.
+#### `--query -q`
+JMESPath query string. See [http://jmespath.org](http://jmespath.org) for more information and examples.
+#### `--subscription`
+Name or ID of subscription. You can configure the default subscription using `az account set -s NAME_OR_ID`.
+#### `--verbose`
+Increase logging verbosity. Use --debug for full debug logs.
azure-arc Reference Az Arcdata Resource Kind https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/data/reference/reference-az-arcdata-resource-kind.md
+
+ Title: az arcdata resource-kind reference
+
+description: Reference article for az arcdata resource-kind commands.
+++ Last updated : 07/30/2021+++++
+# az arcdata resource-kind
+## Commands
+| Command | Description|
+| | |
+[az arcdata resource-kind list](#az-arcdata-resource-kind-list) | List the available custom resource kinds for Arc that can be defined and created.
+[az arcdata resource-kind get](#az-arcdata-resource-kind-get) | Get the Arc resource-kind's template file.
+## az arcdata resource-kind list
+List the available custom resource kinds for Arc that can be defined and created. After listing, you can proceed to getting the template file needed to define or create that custom resource.
+```bash
+az arcdata resource-kind list
+```
+### Examples
+Example command for listing the available custom resource kinds for Arc.
+```bash
+az arcdata resource-kind list
+```
+### Global Arguments
+#### `--debug`
+Increase logging verbosity to show all debug logs.
+#### `--help -h`
+Show this help message and exit.
+#### `--output -o`
+Output format. Allowed values: json, jsonc, none, table, tsv, yaml, yamlc. Default: json.
+#### `--query -q`
+JMESPath query string. See [http://jmespath.org](http://jmespath.org) for more information and examples.
+#### `--subscription`
+Name or ID of subscription. You can configure the default subscription using `az account set -s NAME_OR_ID`.
+#### `--verbose`
+Increase logging verbosity. Use --debug for full debug logs.
+## az arcdata resource-kind get
+Get the Arc resource-kind's template file.
+```bash
+az arcdata resource-kind get --kind -k
+ [--dest -d]
+```
+### Examples
+Example command for getting an Arc resource-kind's CRD template file.
+```bash
+az arcdata resource-kind get --kind SqlManagedInstance
+```
+### Required Parameters
+#### `--kind -k`
+The kind of arc resource you want the template file for.
+### Optional Parameters
+#### `--dest -d`
+The directory where you'd like to place the template files.
+`template`
+### Global Arguments
+#### `--debug`
+Increase logging verbosity to show all debug logs.
+#### `--help -h`
+Show this help message and exit.
+#### `--output -o`
+Output format. Allowed values: json, jsonc, none, table, tsv, yaml, yamlc. Default: json.
+#### `--query -q`
+JMESPath query string. See [http://jmespath.org](http://jmespath.org) for more information and examples.
+#### `--subscription`
+Name or ID of subscription. You can configure the default subscription using `az account set -s NAME_OR_ID`.
+#### `--verbose`
+Increase logging verbosity. Use --debug for full debug logs.
azure-arc Reference Az Arcdata https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/data/reference/reference-az-arcdata.md
+
+ Title: az arcdata reference
+
+description: Reference article for az arcdata commands.
+++ Last updated : 07/30/2021+++++
+# az arcdata
+## Commands
+| Command | Description|
+| | |
+[az arcdata dc](reference-az-arcdata-dc.md) | Create, delete, and manage data controllers.
+[az arcdata resource-kind](reference-az-arcdata-resource-kind.md) | Resource-kind commands to define and template custom resources on your cluster.
azure-arc Reference Az Postgres Arc Server Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/data/reference/reference-az-postgres-arc-server-endpoint.md
+
+ Title: az postgres arc-server endpoint reference
+
+description: Reference article for az postgres arc-server endpoint commands.
+++ Last updated : 07/30/2021+++++
+# az postgres arc-server endpoint
+## Commands
+| Command | Description|
+| | |
+[az postgres arc-server endpoint list](#az-postgres-arc-server-endpoint-list) | List Azure Arc enabled PostgreSQL Hyperscale server group endpoints.
+## az postgres arc-server endpoint list
+List Azure Arc enabled PostgreSQL Hyperscale server group endpoints.
+```bash
+az postgres arc-server endpoint list [--name -n]
+ [--k8s-namespace -k]
+
+[--use-k8s]
+```
+### Examples
+List Azure Arc enabled PostgreSQL Hyperscale server group endpoints.
+```bash
+az postgres arc-server endpoint list --name postgres01 --k8s-namespace namespace --use-k8s
+```
+### Optional Parameters
+#### `--name -n`
+Name of the Azure Arc enabled PostgreSQL Hyperscale server group.
+#### `--k8s-namespace -k`
+The Kubernetes namespace where the Azure Arc enabled PostgreSQL Hyperscale server group is deployed. If no namespace is specified, then the namespace defined in the kubeconfig will be used.
+#### `--use-k8s`
+Use local Kubernetes APIs to perform this action.
+### Global Arguments
+#### `--debug`
+Increase logging verbosity to show all debug logs.
+#### `--help -h`
+Show this help message and exit.
+#### `--output -o`
+Output format. Allowed values: json, jsonc, none, table, tsv, yaml, yamlc. Default: json.
+#### `--query -q`
+JMESPath query string. See [http://jmespath.org](http://jmespath.org) for more information and examples.
+#### `--subscription`
+Name or ID of subscription. You can configure the default subscription using `az account set -s NAME_OR_ID`.
+#### `--verbose`
+Increase logging verbosity. Use --debug for full debug logs.
azure-arc Reference Az Postgres Arc Server https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/data/reference/reference-az-postgres-arc-server.md
+
+ Title: az postgres arc-server reference
+
+description: Reference article for az postgres arc-server commands.
+++ Last updated : 07/30/2021+++++
+# az postgres arc-server
+## Commands
+| Command | Description|
+| | |
+[az postgres arc-server create](#az-postgres-arc-server-create) | Create an Azure Arc enabled PostgreSQL Hyperscale server group.
+[az postgres arc-server edit](#az-postgres-arc-server-edit) | Edit the configuration of an Azure Arc enabled PostgreSQL Hyperscale server group.
+[az postgres arc-server delete](#az-postgres-arc-server-delete) | Delete an Azure Arc enabled PostgreSQL Hyperscale server group.
+[az postgres arc-server show](#az-postgres-arc-server-show) | Show the details of an Azure Arc enabled PostgreSQL Hyperscale server group.
+[az postgres arc-server list](#az-postgres-arc-server-list) | List Azure Arc enabled PostgreSQL Hyperscale server groups.
+[az postgres arc-server endpoint](reference-az-postgres-arc-server-endpoint.md) | Manage Azure Arc enabled PostgreSQL Hyperscale server group endpoints.
+## az postgres arc-server create
+Create an Azure Arc enabled PostgreSQL Hyperscale server group. To set the password of the server group, set the environment variable AZDATA_PASSWORD.
+```bash
+az postgres arc-server create --name -n
+ [--path]
+
+[--k8s-namespace -k]
+
+[--cores-limit]
+
+[--cores-request]
+
+[--memory-limit]
+
+[--memory-request]
+
+[--storage-class-data]
+
+[--storage-class-logs]
+
+[--storage-class-backups]
+
+[--volume-claim-mounts]
+
+[--extensions]
+
+[--volume-size-data]
+
+[--volume-size-logs]
+
+[--volume-size-backups]
+
+[--workers -w]
+
+[--engine-version]
+
+[--no-external-endpoint]
+
+[--port]
+
+[--no-wait]
+
+[--engine-settings]
+
+[--coordinator-settings]
+
+[--worker-settings]
+
+[--use-k8s]
+```
+### Examples
+Create an Azure Arc enabled PostgreSQL Hyperscale server group.
+```bash
+az postgres arc-server create -n pg1 --k8s-namespace namespace --use-k8s
+```
+Create an Azure Arc enabled PostgreSQL Hyperscale server group with engine settings. Both of the following examples are valid.
+```bash
+az postgres arc-server create -n pg1 --engine-settings "key1=val1" --k8s-namespace namespace
+az postgres arc-server create -n pg1 --engine-settings "key2=val2" --k8s-namespace namespace --use-k8s
+```
+Create a PostgreSQL server group with volume claim mounts.
+```bash
+az postgres arc-server create -n pg1 --volume-claim-mounts backup-pvc:backup
+```
+Create a PostgreSQL server group with specific memory-limit for different node roles.
+```bash
+az postgres arc-server create -n pg1 --memory-limit "coordinator=2Gi,w=1Gi" --workers 1 --k8s-namespace namespace --use-k8s
+```
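+To supply the server group password, set the AZDATA_PASSWORD environment variable before running create, as noted above. A sketch with placeholder values:
+```bash
+export AZDATA_PASSWORD='<postgres-admin-password>'
+az postgres arc-server create -n pg1 --workers 2 --k8s-namespace <namespace> --use-k8s
+```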
+### Required Parameters
+#### `--name -n`
+Name of the Azure Arc enabled PostgreSQL Hyperscale server group.
+### Optional Parameters
+#### `--path`
+The path to the source json file for the Azure Arc enabled PostgreSQL Hyperscale server group. This is optional.
+#### `--k8s-namespace -k`
+The Kubernetes namespace where the Azure Arc enabled PostgreSQL Hyperscale server group is deployed. If no namespace is specified, then the namespace defined in the kubeconfig will be used.
+#### `--cores-limit`
+The maximum number of CPU cores for Azure Arc enabled PostgreSQL Hyperscale server group that can be used per node. Fractional cores are supported. Optionally a comma-separated list of roles with values can be specified in format <role>=<value>. Valid roles are: "coordinator" or "c", "worker" or "w". If no roles are specified, settings will apply to all nodes of the PostgreSQL Hyperscale server group.
+#### `--cores-request`
+The minimum number of CPU cores that must be available per node to schedule the service. Fractional cores are supported. Optionally a comma-separated list of roles with values can be specified in format <role>=<value>. Valid roles are: "coordinator" or "c", "worker" or "w". If no roles are specified, settings will apply to all nodes of the PostgreSQL Hyperscale server group.
+#### `--memory-limit`
+The memory limit of the Azure Arc enabled PostgreSQL Hyperscale server group as a number followed by Ki (kilobytes), Mi (megabytes), or Gi (gigabytes). Optionally a comma-separated list of roles with values can be specified in format <role>=<value>. Valid roles are: "coordinator" or "c", "worker" or "w". If no roles are specified, settings will apply to all nodes of the PostgreSQL Hyperscale server group.
+#### `--memory-request`
+The memory request of the Azure Arc enabled PostgreSQL Hyperscale server group as a number followed by Ki (kilobytes), Mi (megabytes), or Gi (gigabytes). Optionally a comma-separated list of roles with values can be specified in format <role>=<value>. Valid roles are: "coordinator" or "c", "worker" or "w". If no roles are specified, settings will apply to all nodes of the PostgreSQL Hyperscale server group.
+#### `--storage-class-data`
+The storage class to be used for data persistent volumes.
+#### `--storage-class-logs`
+The storage class to be used for logs persistent volumes.
+#### `--storage-class-backups`
+The storage class to be used for backup persistent volumes.
+#### `--volume-claim-mounts`
+A comma-separated list of volume claim mounts. A volume claim mount is a pair of an existing persistent volume claim (in the same namespace) and volume type (and optional metadata depending on the volume type) separated by colon. The persistent volume will be mounted in each pod for the PostgreSQL server group. The mount path may depend on the volume type.
+#### `--extensions`
+A comma-separated list of the Postgres extensions that should be loaded on startup. Please refer to the postgres documentation for supported values.
+#### `--volume-size-data`
+The size of the storage volume to be used for data as a positive number followed by Ki (kilobytes), Mi (megabytes), or Gi (gigabytes).
+#### `--volume-size-logs`
+The size of the storage volume to be used for logs as a positive number followed by Ki (kilobytes), Mi (megabytes), or Gi (gigabytes).
+#### `--volume-size-backups`
+The size of the storage volume to be used for backups as a positive number followed by Ki (kilobytes), Mi (megabytes), or Gi (gigabytes).
+#### `--workers -w`
+The number of worker nodes to provision in a server group. In Preview, reducing the number of worker nodes is not supported. Refer to documentation for additional details.
+#### `--engine-version`
+Must be 11 or 12. The default value is 12.
+`12`
+#### `--no-external-endpoint`
+If specified, no external service will be created. Otherwise, an external service will be created using the same service type as the data controller.
+#### `--port`
+Optional.
+#### `--no-wait`
+If given, the command will not wait for the instance to be in a ready state before returning.
+#### `--engine-settings`
+A comma separated list of Postgres engine settings in the format 'key1=val1, key2=val2'.
+#### `--coordinator-settings`
+A comma separated list of Postgres engine settings in the format 'key1=val1, key2=val2' to be applied to 'coordinator' node role. When node role specific settings are specified, default settings will be ignored and overridden with the settings provided here.
+#### `--worker-settings`
+A comma separated list of Postgres engine settings in the format 'key1=val1, key2=val2' to be applied to 'worker' node role. When node role specific settings are specified, default settings will be ignored and overridden with the settings provided here.
+#### `--use-k8s`
+Use local Kubernetes APIs to perform this action.
+### Global Arguments
+#### `--debug`
+Increase logging verbosity to show all debug logs.
+#### `--help -h`
+Show this help message and exit.
+#### `--output -o`
+Output format. Allowed values: json, jsonc, none, table, tsv, yaml, yamlc. Default: json.
+#### `--query -q`
+JMESPath query string. See [http://jmespath.org](http://jmespath.org) for more information and examples.
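+For example, a JMESPath query can trim the output to specific fields. The query below is a sketch that assumes each item in the returned list has a `name` property:
+```bash
+# List only the server group names, rendered as a table
+az postgres arc-server list --k8s-namespace namespace --use-k8s --query "[].name" --output table
+```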
+#### `--subscription`
+Name or ID of subscription. You can configure the default subscription using `az account set -s NAME_OR_ID`.
+#### `--verbose`
+Increase logging verbosity. Use --debug for full debug logs.
+## az postgres arc-server edit
+Edit the configuration of an Azure Arc enabled PostgreSQL Hyperscale server group.
+```bash
+az postgres arc-server edit --name -n
+                            [--k8s-namespace -k]
+                            [--path]
+                            [--workers -w]
+                            [--cores-limit]
+                            [--cores-request]
+                            [--memory-limit]
+                            [--memory-request]
+                            [--extensions]
+                            [--port]
+                            [--no-wait]
+                            [--engine-settings]
+                            [--replace-settings]
+                            [--coordinator-settings]
+                            [--worker-settings]
+                            [--admin-password]
+                            [--use-k8s]
+```
+### Examples
+Edit the configuration of an Azure Arc enabled PostgreSQL Hyperscale server group.
+```bash
+az postgres arc-server edit --path ./spec.json -n pg1 --k8s-namespace namespace --use-k8s
+```
+Edit an Azure Arc enabled PostgreSQL Hyperscale server group with engine settings for the coordinator node.
+```bash
+az postgres arc-server edit -n pg1 --coordinator-settings "key2=val2" --k8s-namespace namespace
+```
+Edits an Azure Arc enabled PostgreSQL Hyperscale server group and replaces existing engine settings with new setting key1=val1.
+```bash
+az postgres arc-server edit -n pg1 --engine-settings "key1=val1" --replace-settings --k8s-namespace namespace
+```
+### Required Parameters
+#### `--name -n`
+Name of the Azure Arc enabled PostgreSQL Hyperscale server group that is being edited. The name under which your instance is deployed cannot be changed.
+### Optional Parameters
+#### `--k8s-namespace -k`
+The Kubernetes namespace where the Azure Arc enabled PostgreSQL Hyperscale server group is deployed. If no namespace is specified, then the namespace defined in the kubeconfig will be used.
+#### `--path`
+The path to the source json file for the Azure Arc enabled PostgreSQL Hyperscale server group. This is optional.
+#### `--workers -w`
+The number of worker nodes to provision in a server group. In Preview, reducing the number of worker nodes is not supported. Refer to documentation for additional details.
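+For example, the following hedged sketch scales a server group out to four worker nodes (the target count is illustrative):
+```bash
+# Scale out to 4 workers; scaling in is not supported during preview
+az postgres arc-server edit -n pg1 --workers 4 --k8s-namespace namespace --use-k8s
+```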
+#### `--cores-limit`
+The maximum number of CPU cores for the Azure Arc enabled PostgreSQL Hyperscale server group that can be used per node; fractional cores are supported. To remove the cores_limit, specify its value as an empty string. Optionally, a comma-separated list of roles with values can be specified in the format `<role>=<value>`. Valid roles are: "coordinator" or "c", "worker" or "w". If no roles are specified, settings will apply to all nodes of the PostgreSQL Hyperscale server group.
+#### `--cores-request`
+The minimum number of CPU cores that must be available per node to schedule the service; fractional cores are supported. To remove the cores_request, specify its value as an empty string. Optionally, a comma-separated list of roles with values can be specified in the format `<role>=<value>`. Valid roles are: "coordinator" or "c", "worker" or "w". If no roles are specified, settings will apply to all nodes of the PostgreSQL Hyperscale server group.
+#### `--memory-limit`
+The memory limit for the Azure Arc enabled PostgreSQL Hyperscale server group as a number followed by Ki (kilobytes), Mi (megabytes), or Gi (gigabytes). To remove the memory_limit, specify its value as an empty string. Optionally, a comma-separated list of roles with values can be specified in the format `<role>=<value>`. Valid roles are: "coordinator" or "c", "worker" or "w". If no roles are specified, settings will apply to all nodes of the PostgreSQL Hyperscale server group.
+#### `--memory-request`
+The memory request for the Azure Arc enabled PostgreSQL Hyperscale server group as a number followed by Ki (kilobytes), Mi (megabytes), or Gi (gigabytes). To remove the memory_request, specify its value as an empty string. Optionally, a comma-separated list of roles with values can be specified in the format `<role>=<value>`. Valid roles are: "coordinator" or "c", "worker" or "w". If no roles are specified, settings will apply to all nodes of the PostgreSQL Hyperscale server group.
+#### `--extensions`
+A comma-separated list of the Postgres extensions that should be loaded on startup. Refer to the PostgreSQL documentation for supported values.
+#### `--port`
+Optional.
+#### `--no-wait`
+If given, the command will not wait for the instance to be in a ready state before returning.
+#### `--engine-settings`
+A comma-separated list of Postgres engine settings in the format 'key1=val1, key2=val2'. The provided settings will be merged with the existing settings. To remove a setting, provide an empty value like 'removedKey='. If you change an engine setting that requires a restart, the service will be restarted to apply the settings immediately.
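+For example, a previously set engine setting can be removed by passing an empty value; the setting name below is hypothetical:
+```bash
+# Remove the hypothetical engine setting 'removedKey' by giving it an empty value
+az postgres arc-server edit -n pg1 --engine-settings "removedKey=" --k8s-namespace namespace --use-k8s
+```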
+#### `--replace-settings`
+When specified with --engine-settings, all existing custom engine settings will be replaced with the new set of settings and values.
+#### `--coordinator-settings`
+A comma-separated list of Postgres engine settings in the format 'key1=val1, key2=val2' to be applied to the 'coordinator' node role. When node role-specific settings are specified, default settings will be ignored and overridden with the settings provided here.
+#### `--worker-settings`
+A comma-separated list of Postgres engine settings in the format 'key1=val1, key2=val2' to be applied to the 'worker' node role. When node role-specific settings are specified, default settings will be ignored and overridden with the settings provided here.
+#### `--admin-password`
+If given, the Azure Arc enabled PostgreSQL Hyperscale server group's admin password will be set to the value of the `AZDATA_PASSWORD` environment variable if present, and a prompted value otherwise.
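+For example, the admin password can be supplied without an interactive prompt by exporting `AZDATA_PASSWORD` first (a sketch; replace the placeholder value):
+```bash
+# The --admin-password flag picks up the value of AZDATA_PASSWORD when it is set
+export AZDATA_PASSWORD='<new-password>'
+az postgres arc-server edit -n pg1 --admin-password --k8s-namespace namespace --use-k8s
+```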
+#### `--use-k8s`
+Use local Kubernetes APIs to perform this action.
+### Global Arguments
+#### `--debug`
+Increase logging verbosity to show all debug logs.
+#### `--help -h`
+Show this help message and exit.
+#### `--output -o`
+Output format. Allowed values: json, jsonc, none, table, tsv, yaml, yamlc. Default: json.
+#### `--query -q`
+JMESPath query string. See [http://jmespath.org](http://jmespath.org) for more information and examples.
+#### `--subscription`
+Name or ID of subscription. You can configure the default subscription using `az account set -s NAME_OR_ID`.
+#### `--verbose`
+Increase logging verbosity. Use --debug for full debug logs.
+## az postgres arc-server delete
+Delete an Azure Arc enabled PostgreSQL Hyperscale server group.
+```bash
+az postgres arc-server delete --name -n
+                              [--k8s-namespace -k]
+                              [--force -f]
+                              [--use-k8s]
+```
+### Examples
+Delete an Azure Arc enabled PostgreSQL Hyperscale server group.
+```bash
+az postgres arc-server delete -n pg1 --k8s-namespace namespace --use-k8s
+```
+### Required Parameters
+#### `--name -n`
+Name of the Azure Arc enabled PostgreSQL Hyperscale server group.
+### Optional Parameters
+#### `--k8s-namespace -k`
+The Kubernetes namespace where the Azure Arc enabled PostgreSQL Hyperscale server group is deployed. If no namespace is specified, then the namespace defined in the kubeconfig will be used.
+#### `--force -f`
+Force delete the Azure Arc enabled PostgreSQL Hyperscale server group without confirmation.
+#### `--use-k8s`
+Use local Kubernetes APIs to perform this action.
+### Global Arguments
+#### `--debug`
+Increase logging verbosity to show all debug logs.
+#### `--help -h`
+Show this help message and exit.
+#### `--output -o`
+Output format. Allowed values: json, jsonc, none, table, tsv, yaml, yamlc. Default: json.
+#### `--query -q`
+JMESPath query string. See [http://jmespath.org](http://jmespath.org) for more information and examples.
+#### `--subscription`
+Name or ID of subscription. You can configure the default subscription using `az account set -s NAME_OR_ID`.
+#### `--verbose`
+Increase logging verbosity. Use --debug for full debug logs.
+## az postgres arc-server show
+Show the details of an Azure Arc enabled PostgreSQL Hyperscale server group.
+```bash
+az postgres arc-server show --name -n
+                            [--k8s-namespace -k]
+                            [--path]
+                            [--use-k8s]
+```
+### Examples
+Show the details of an Azure Arc enabled PostgreSQL Hyperscale server group.
+```bash
+az postgres arc-server show -n pg1 --k8s-namespace namespace --use-k8s
+```
+### Required Parameters
+#### `--name -n`
+Name of the Azure Arc enabled PostgreSQL Hyperscale server group.
+### Optional Parameters
+#### `--k8s-namespace -k`
+The Kubernetes namespace where the Azure Arc enabled PostgreSQL Hyperscale server group is deployed. If no namespace is specified, then the namespace defined in the kubeconfig will be used.
+#### `--path`
+A path where the full specification for the Azure Arc enabled PostgreSQL Hyperscale server group should be written. If omitted, the specification will be written to standard output.
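+For example, the specification can be written to a local path instead of standard output (the path below is illustrative):
+```bash
+# Write the full specification for pg1 to ./pg1-spec instead of stdout
+az postgres arc-server show -n pg1 --path ./pg1-spec --k8s-namespace namespace --use-k8s
+```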
+#### `--use-k8s`
+Use local Kubernetes APIs to perform this action.
+### Global Arguments
+#### `--debug`
+Increase logging verbosity to show all debug logs.
+#### `--help -h`
+Show this help message and exit.
+#### `--output -o`
+Output format. Allowed values: json, jsonc, none, table, tsv, yaml, yamlc. Default: json.
+#### `--query -q`
+JMESPath query string. See [http://jmespath.org](http://jmespath.org) for more information and examples.
+#### `--subscription`
+Name or ID of subscription. You can configure the default subscription using `az account set -s NAME_OR_ID`.
+#### `--verbose`
+Increase logging verbosity. Use --debug for full debug logs.
+## az postgres arc-server list
+List Azure Arc enabled PostgreSQL Hyperscale server groups.
+```bash
+az postgres arc-server list [--k8s-namespace -k]
+ [--use-k8s]
+```
+### Examples
+List Azure Arc enabled PostgreSQL Hyperscale server groups.
+```bash
+az postgres arc-server list --k8s-namespace namespace --use-k8s
+```
+### Optional Parameters
+#### `--k8s-namespace -k`
+The Kubernetes namespace where the Azure Arc enabled PostgreSQL Hyperscale server groups are deployed. If no namespace is specified, then the namespace defined in the kubeconfig will be used.
+#### `--use-k8s`
+Use local Kubernetes APIs to perform this action.
+### Global Arguments
+#### `--debug`
+Increase logging verbosity to show all debug logs.
+#### `--help -h`
+Show this help message and exit.
+#### `--output -o`
+Output format. Allowed values: json, jsonc, none, table, tsv, yaml, yamlc. Default: json.
+#### `--query -q`
+JMESPath query string. See [http://jmespath.org](http://jmespath.org) for more information and examples.
+#### `--subscription`
+Name or ID of subscription. You can configure the default subscription using `az account set -s NAME_OR_ID`.
+#### `--verbose`
+Increase logging verbosity. Use --debug for full debug logs.
azure-arc Reference Az Sql Mi Arc Config https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/data/reference/reference-az-sql-mi-arc-config.md
+
+ Title: az sql mi-arc config reference
+
+description: Reference article for az sql mi-arc config commands.
+++ Last updated : 07/30/2021+++++
+# az sql mi-arc config
+## Commands
+| Command | Description|
+| | |
+[az sql mi-arc config init](#az-sql-mi-arc-config-init) | Initialize the CRD and specification files for a SQL managed instance.
+[az sql mi-arc config add](#az-sql-mi-arc-config-add) | Add a value for a json path in a config file.
+[az sql mi-arc config remove](#az-sql-mi-arc-config-remove) | Remove a value for a json path in a config file.
+[az sql mi-arc config replace](#az-sql-mi-arc-config-replace) | Replace a value for a json path in a config file.
+[az sql mi-arc config patch](#az-sql-mi-arc-config-patch) | Patch a config file based on a json patch file.
+## az sql mi-arc config init
+Initialize the CRD and specification files for a SQL managed instance.
+```bash
+az sql mi-arc config init --path -p
+
+```
+### Examples
+Initialize the CRD and specification files for a SQL managed instance.
+```bash
+az sql mi-arc config init --path ./template
+```
+### Required Parameters
+#### `--path -p`
+A path where the CRD and specification for the SQL managed instance should be written.
+### Global Arguments
+#### `--debug`
+Increase logging verbosity to show all debug logs.
+#### `--help -h`
+Show this help message and exit.
+#### `--output -o`
+Output format. Allowed values: json, jsonc, none, table, tsv, yaml, yamlc. Default: json.
+#### `--query -q`
+JMESPath query string. See [http://jmespath.org](http://jmespath.org) for more information and examples.
+#### `--subscription`
+Name or ID of subscription. You can configure the default subscription using `az account set -s NAME_OR_ID`.
+#### `--verbose`
+Increase logging verbosity. Use --debug for full debug logs.
+## az sql mi-arc config add
+Add the value at the json path in the config file. All examples below are given in Bash. If using another command line, you may need to escape quotations appropriately. Alternatively, you may use the patch file functionality.
+```bash
+az sql mi-arc config add --path -p
+ --json-values -j
+```
+### Examples
+Ex 1 - Add storage.
+```bash
+az sql mi-arc config add --path custom/spec.json --json-values 'spec.storage={"accessMode":"ReadWriteOnce","className":"managed-premium","size":"10Gi"}'
+```
+### Required Parameters
+#### `--path -p`
+Path to the custom resource specification, i.e. custom/spec.json
+#### `--json-values -j`
+A key value pair list of json paths to values: key1.subkey1=value1,key2.subkey2=value2. You may provide inline json values such as: key='{"kind":"cluster","name":"test-cluster"}' or provide a file path, such as key=./values.json. The add command does NOT support conditionals. If the inline value you are providing is a key value pair itself with "=" and "," please escape those characters. For example, key1="key2\=val2\,key3\=val3". Please see http://jsonpatch.com/ for examples of how your path should look. If you would like to access an array, you must do so by indicating the index, such as key.0=value
+### Global Arguments
+#### `--debug`
+Increase logging verbosity to show all debug logs.
+#### `--help -h`
+Show this help message and exit.
+#### `--output -o`
+Output format. Allowed values: json, jsonc, none, table, tsv, yaml, yamlc. Default: json.
+#### `--query -q`
+JMESPath query string. See [http://jmespath.org](http://jmespath.org) for more information and examples.
+#### `--subscription`
+Name or ID of subscription. You can configure the default subscription using `az account set -s NAME_OR_ID`.
+#### `--verbose`
+Increase logging verbosity. Use --debug for full debug logs.
+## az sql mi-arc config remove
+Remove the value at the json path in the config file. All examples below are given in Bash. If using another command line, you may need to escape quotations appropriately. Alternatively, you may use the patch file functionality.
+```bash
+az sql mi-arc config remove --path -p
+ --json-path -j
+```
+### Examples
+Ex 1 - Remove storage.
+```bash
+az sql mi-arc config remove --path custom/spec.json --json-path ".spec.storage"
+```
+### Required Parameters
+#### `--path -p`
+Path to the custom resource specification, i.e. custom/spec.json
+#### `--json-path -j`
+A list of json paths based on the jsonpatch library that indicates which values you would like removed, such as: key1.subkey1,key2.subkey2. The remove command does NOT support conditionals. Please see http://jsonpatch.com/ for examples of how your path should look. If you would like to access an array, you must do so by indicating the index, such as key.0=value
+### Global Arguments
+#### `--debug`
+Increase logging verbosity to show all debug logs.
+#### `--help -h`
+Show this help message and exit.
+#### `--output -o`
+Output format. Allowed values: json, jsonc, none, table, tsv, yaml, yamlc. Default: json.
+#### `--query -q`
+JMESPath query string. See [http://jmespath.org](http://jmespath.org) for more information and examples.
+#### `--subscription`
+Name or ID of subscription. You can configure the default subscription using `az account set -s NAME_OR_ID`.
+#### `--verbose`
+Increase logging verbosity. Use --debug for full debug logs.
+## az sql mi-arc config replace
+Replace the value at the json path in the config file. All examples below are given in Bash. If using another command line, you may need to escape quotations appropriately. Alternatively, you may use the patch file functionality.
+```bash
+az sql mi-arc config replace --path -p
+ --json-values -j
+```
+### Examples
+Ex 1 - Replace the port of a single endpoint.
+```bash
+az sql mi-arc config replace --path custom/spec.json --json-values '$.spec.endpoints[?(@.name=="Controller")].port=30080'
+```
+Ex 2 - Replace storage.
+```bash
+az sql mi-arc config replace --path custom/spec.json --json-values 'spec.storage={"accessMode":"ReadWriteOnce","className":"managed-premium","size":"10Gi"}'
+```
+### Required Parameters
+#### `--path -p`
+Path to the custom resource specification, i.e. custom/spec.json
+#### `--json-values -j`
+A key value pair list of json paths to values: key1.subkey1=value1,key2.subkey2=value2. You may provide inline json values such as: key='{"kind":"cluster","name":"test-cluster"}' or provide a file path, such as key=./values.json. The replace command supports conditionals through the jsonpath library. To use this, start your path with a $. This will allow you to do a conditional such as -j $.key1.key2[?(@.key3=="someValue")].key4=value. If the inline value you are providing is a key value pair itself with "=" and "," please escape those characters. For example, key1="key2\=val2\,key3\=val3". See the examples above. For additional help, see: https://jsonpath.com/
+### Global Arguments
+#### `--debug`
+Increase logging verbosity to show all debug logs.
+#### `--help -h`
+Show this help message and exit.
+#### `--output -o`
+Output format. Allowed values: json, jsonc, none, table, tsv, yaml, yamlc. Default: json.
+#### `--query -q`
+JMESPath query string. See [http://jmespath.org](http://jmespath.org) for more information and examples.
+#### `--subscription`
+Name or ID of subscription. You can configure the default subscription using `az account set -s NAME_OR_ID`.
+#### `--verbose`
+Increase logging verbosity. Use --debug for full debug logs.
+## az sql mi-arc config patch
+Patch the config file according to the given patch file. Consult http://jsonpatch.com/ for a better understanding of how the paths should be composed. The replace operation can use conditionals in its path due to the jsonpath library https://jsonpath.com/. All patch json files must start with a key of "patch" that has an array of patches with their corresponding op (add, replace, remove), path, and value. The "remove" op does not require a value, just a path. See the examples below.
+```bash
+az sql mi-arc config patch --path -p
+ --patch-file
+```
+### Examples
+Ex 1 - Replace the port of a single endpoint with patch file.
+```bash
+az sql mi-arc config patch --path custom/spec.json --patch-file ./patch.json
+
+  Patch File Example (patch.json):
+  {"patch":[{"op":"replace","path":"$.spec.endpoints[?(@.name=='Controller')].port","value":30080}]}
+```
+Ex 2 - Replace storage with patch file.
+```bash
+az sql mi-arc config patch --path custom/spec.json --patch-file ./patch.json
+
+ Patch File Example (patch.json):
+ {"patch":[{"op":"replace","path":".spec.storage","value":{"accessMode":"ReadWriteMany","className":"managed-premium","size":"10Gi"}}]}
+```
+### Required Parameters
+#### `--path -p`
+Path to the custom resource specification, i.e. custom/spec.json
+#### `--patch-file`
+Path to a patch json file that is based off the jsonpatch library: http://jsonpatch.com/. You must start your patch json file with a key called "patch", whose value is an array of patch operations you intend to make. For the path of a patch operation, you may use dot notation, such as key1.key2 for most operations. If you would like to do a replace operation, and you are replacing a value in an array that requires a conditional, please use the jsonpath notation by beginning your path with a $. This will allow you to do a conditional such as $.key1.key2[?(@.key3=="someValue")].key4. See the examples above. For additional help with conditionals, see: https://jsonpath.com/.
+### Global Arguments
+#### `--debug`
+Increase logging verbosity to show all debug logs.
+#### `--help -h`
+Show this help message and exit.
+#### `--output -o`
+Output format. Allowed values: json, jsonc, none, table, tsv, yaml, yamlc. Default: json.
+#### `--query -q`
+JMESPath query string. See [http://jmespath.org](http://jmespath.org) for more information and examples.
+#### `--subscription`
+Name or ID of subscription. You can configure the default subscription using `az account set -s NAME_OR_ID`.
+#### `--verbose`
+Increase logging verbosity. Use --debug for full debug logs.
azure-arc Reference Az Sql Mi Arc Dag https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/data/reference/reference-az-sql-mi-arc-dag.md
+
+ Title: az sql mi-arc dag reference
+
+description: Reference article for az sql mi-arc dag commands.
+++ Last updated : 07/30/2021+++++
+# az sql mi-arc dag
+## Commands
+| Command | Description|
+| | |
+[az sql mi-arc dag create](#az-sql-mi-arc-dag-create) | Create a distributed availability group custom resource.
+[az sql mi-arc dag delete](#az-sql-mi-arc-dag-delete) | Delete a distributed availability group custom resource on a SQL managed instance.
+[az sql mi-arc dag show](#az-sql-mi-arc-dag-show) | Show a distributed availability group custom resource.
+## az sql mi-arc dag create
+Create a distributed availability group custom resource to create a distributed availability group.
+```bash
+az sql mi-arc dag create --name -n
+                         --dag-name -d
+                         --local-instance-name -l
+                         --local-primary -p
+                         --remote-instance-name -r
+                         --remote-mirroring-url -u
+                         --remote-mirroring-cert-file -f
+                         [--k8s-namespace -k]
+                         [--path]
+                         [--use-k8s]
+```
+### Examples
+Ex 1 - Create a distributed availability group custom resource dagCr1 to create distributed availability group dagName1 between local sqlmi instance sqlmi1 and remote sqlmi instance sqlmi2. It requires remote sqlmi primary mirror remotePrimary:5022 and remote sqlmi mirror endpoint certificate file ./sqlmi2.cer.
+```bash
+az sql mi-arc dag create --name=dagCr1 --dag-name=dagName1 --local-instance-name=sqlmi1 --local-primary=true --remote-instance-name=sqlmi2 --remote-mirroring-url=remotePrimary:5022 --remote-mirroring-cert-file="./sqlmi2.cer"
+```
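+The certificate file passed to `--remote-mirroring-cert-file` must exist locally before the command above is run. As a sketch, it can be retrieved from the remote instance with `az sql mi-arc get-mirroring-cert` (documented later in this article):
+```bash
+# Export the mirroring endpoint certificate of the remote instance sqlmi2 to ./sqlmi2.cer
+az sql mi-arc get-mirroring-cert -n sqlmi2 --cert-file ./sqlmi2.cer
+```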
+### Required Parameters
+#### `--name -n`
+The name of the distributed availability group resource.
+#### `--dag-name -d`
+The name of the distributed availability group for this SQL managed instance. Both local and remote have to use the same name.
+#### `--local-instance-name -l`
+The name of the local SQL managed instance.
+#### `--local-primary -p`
+True indicates that the local SQL managed instance is the geo-primary. False indicates that the local SQL managed instance is the geo-secondary.
+#### `--remote-instance-name -r`
+The name of the remote SQL managed instance or remote SQL availability group.
+#### `--remote-mirroring-url -u`
+The mirroring endpoint URL of the remote SQL managed instance or remote SQL availability group.
+#### `--remote-mirroring-cert-file -f`
+The filename of the mirroring endpoint public certificate for the remote SQL managed instance or remote SQL availability group. Only PEM format is supported.
+### Optional Parameters
+#### `--k8s-namespace -k`
+Namespace where the SQL managed instance exists. If no namespace is specified, then the namespace defined in the kubeconfig will be used.
+#### `--path`
+Path to the custom resource specification, i.e. custom/spec.json
+#### `--use-k8s`
+Use local Kubernetes APIs to perform this action.
+### Global Arguments
+#### `--debug`
+Increase logging verbosity to show all debug logs.
+#### `--help -h`
+Show this help message and exit.
+#### `--output -o`
+Output format. Allowed values: json, jsonc, none, table, tsv, yaml, yamlc. Default: json.
+#### `--query -q`
+JMESPath query string. See [http://jmespath.org](http://jmespath.org) for more information and examples.
+#### `--subscription`
+Name or ID of subscription. You can configure the default subscription using `az account set -s NAME_OR_ID`.
+#### `--verbose`
+Increase logging verbosity. Use --debug for full debug logs.
+## az sql mi-arc dag delete
+Delete a distributed availability group custom resource on a SQL managed instance to delete a distributed availability group. It requires a custom resource name.
+```bash
+az sql mi-arc dag delete --name
+                         [--k8s-namespace -k]
+                         [--use-k8s]
+```
+### Examples
+Ex 1 - delete distributed availability group resources named dagCr1.
+```bash
+az sql mi-arc dag delete --name=dagCr1
+```
+### Required Parameters
+#### `--name`
+The name of the distributed availability group resource.
+### Optional Parameters
+#### `--k8s-namespace -k`
+Namespace where the SQL managed instance exists. If no namespace is specified, then the namespace defined in the kubeconfig will be used.
+#### `--use-k8s`
+Use local Kubernetes APIs to perform this action.
+### Global Arguments
+#### `--debug`
+Increase logging verbosity to show all debug logs.
+#### `--help -h`
+Show this help message and exit.
+#### `--output -o`
+Output format. Allowed values: json, jsonc, none, table, tsv, yaml, yamlc. Default: json.
+#### `--query -q`
+JMESPath query string. See [http://jmespath.org](http://jmespath.org) for more information and examples.
+#### `--subscription`
+Name or ID of subscription. You can configure the default subscription using `az account set -s NAME_OR_ID`.
+#### `--verbose`
+Increase logging verbosity. Use --debug for full debug logs.
+## az sql mi-arc dag show
+Show a distributed availability group custom resource. It requires a custom resource name.
+```bash
+az sql mi-arc dag show --name
+                       [--k8s-namespace -k]
+                       [--use-k8s]
+```
+### Examples
+Ex 1 - show distributed availability group resources named dagCr1.
+```bash
+az sql mi-arc dag show --name=dagCr1
+```
+### Required Parameters
+#### `--name`
+The name of the distributed availability group resource.
+### Optional Parameters
+#### `--k8s-namespace -k`
+Namespace where the SQL managed instance exists. If no namespace is specified, then the namespace defined in the kubeconfig will be used.
+#### `--use-k8s`
+Use local Kubernetes APIs to perform this action.
+### Global Arguments
+#### `--debug`
+Increase logging verbosity to show all debug logs.
+#### `--help -h`
+Show this help message and exit.
+#### `--output -o`
+Output format. Allowed values: json, jsonc, none, table, tsv, yaml, yamlc. Default: json.
+#### `--query -q`
+JMESPath query string. See [http://jmespath.org](http://jmespath.org) for more information and examples.
+#### `--subscription`
+Name or ID of subscription. You can configure the default subscription using `az account set -s NAME_OR_ID`.
+#### `--verbose`
+Increase logging verbosity. Use --debug for full debug logs.
azure-arc Reference Az Sql Mi Arc Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/data/reference/reference-az-sql-mi-arc-endpoint.md
+
+ Title: az sql mi-arc endpoint reference
+
+description: Reference article for az sql mi-arc endpoint commands.
+++ Last updated : 07/30/2021+++++
+# az sql mi-arc endpoint
+## Commands
+| Command | Description|
+| | |
+[az sql mi-arc endpoint list](#az-sql-mi-arc-endpoint-list) | List the SQL endpoints.
+## az sql mi-arc endpoint list
+List the SQL endpoints.
+```bash
+az sql mi-arc endpoint list [--name -n]
+                            [--k8s-namespace -k]
+                            [--use-k8s]
+```
+### Examples
+List the endpoints for a SQL managed instance.
+```bash
+az sql mi-arc endpoint list -n sqlmi1
+```
+### Optional Parameters
+#### `--name -n`
+The name of the SQL instance to be shown. If omitted, all endpoints for all instances will be shown.
+#### `--k8s-namespace -k`
+Namespace where the SQL managed instances exist. If no namespace is specified, then the namespace defined in the kubeconfig will be used.
+#### `--use-k8s`
+Use local Kubernetes APIs to perform this action.
+### Global Arguments
+#### `--debug`
+Increase logging verbosity to show all debug logs.
+#### `--help -h`
+Show this help message and exit.
+#### `--output -o`
+Output format. Allowed values: json, jsonc, none, table, tsv, yaml, yamlc. Default: json.
+#### `--query -q`
+JMESPath query string. See [http://jmespath.org](http://jmespath.org) for more information and examples.
+#### `--subscription`
+Name or ID of subscription. You can configure the default subscription using `az account set -s NAME_OR_ID`.
+#### `--verbose`
+Increase logging verbosity. Use --debug for full debug logs.
azure-arc Reference Az Sql Mi Arc https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/data/reference/reference-az-sql-mi-arc.md
+
+ Title: az sql mi-arc reference
+
+description: Reference article for az sql mi-arc commands.
+++ Last updated : 07/30/2021+++++
+# az sql mi-arc
+## Commands
+| Command | Description|
+| | |
+[az sql mi-arc endpoint](reference-az-sql-mi-arc-endpoint.md) | View and manage SQL endpoints.
+[az sql mi-arc create](#az-sql-mi-arc-create) | Create a SQL managed instance.
+[az sql mi-arc edit](#az-sql-mi-arc-edit) | Edit the configuration of a SQL managed instance.
+[az sql mi-arc delete](#az-sql-mi-arc-delete) | Delete a SQL managed instance.
+[az sql mi-arc show](#az-sql-mi-arc-show) | Show the details of a SQL managed instance.
+[az sql mi-arc get-mirroring-cert](#az-sql-mi-arc-get-mirroring-cert) | Retrieve the certificate of the availability group mirroring endpoint from a SQL managed instance and store it in a file.
+[az sql mi-arc list](#az-sql-mi-arc-list) | List SQL managed instances.
+[az sql mi-arc config](reference-az-sql-mi-arc-config.md) | Configuration commands.
+[az sql mi-arc dag](reference-az-sql-mi-arc-dag.md) | Create or Delete a Distributed Availability Group.
+## az sql mi-arc create
+Create a SQL managed instance. To set the password of the SQL managed instance, set the `AZDATA_PASSWORD` environment variable.
+```bash
+az sql mi-arc create --name -n
+                     --k8s-namespace -k
+                     [--path]
+                     [--replicas]
+                     [--cores-limit -c]
+                     [--cores-request]
+                     [--memory-limit -m]
+                     [--memory-request]
+                     [--storage-class-data -d]
+                     [--storage-class-logs -g]
+                     [--storage-class-datalogs]
+                     [--storage-class-backups]
+                     [--volume-size-data]
+                     [--volume-size-logs]
+                     [--volume-size-datalogs]
+                     [--volume-size-backups]
+                     [--no-wait]
+                     [--no-external-endpoint]
+                     [--cert-public-key-file]
+                     [--cert-private-key-file]
+                     [--service-cert-secret]
+                     [--admin-login-secret]
+                     [--license-type -l]
+                     [--tier -t]
+                     [--dev]
+                     [--labels]
+                     [--annotations]
+                     [--service-labels]
+                     [--service-annotations]
+                     [--storage-labels]
+                     [--storage-annotations]
+                     [--use-k8s]
+                     [--collation]
+                     [--language]
+                     [--agent-enabled]
+                     [--trace-flags]
+```
+### Examples
+Create a SQL managed instance.
+```bash
+az sql mi-arc create -n sqlmi1 --k8s-namespace namespace
+```
+Create a SQL managed instance with 3 replicas in HA scenario.
+```bash
+az sql mi-arc create -n sqlmi2 --replicas 3 --k8s-namespace namespace
+```
+### Required Parameters
+#### `--name -n`
+The name of the SQL managed instance.
+#### `--k8s-namespace -k`
+Namespace where the SQL managed instance is to be deployed. If no namespace is specified, then the namespace defined in the kubeconfig will be used.
+### Optional Parameters
+#### `--path`
+The path to the azext_arcdata file for the SQL managed instance json file.
+#### `--replicas`
+This option specifies the number of SQL Managed Instance replicas that will be deployed in your Kubernetes cluster for high availability purposes. Allowed values are '3' or '1', with a default of '1'.
+#### `--cores-limit -c`
+The cores limit of the managed instance as an integer.
+#### `--cores-request`
+The request for cores of the managed instance as an integer.
+#### `--memory-limit -m`
+The limit of the capacity of the managed instance as an integer number followed by Gi (gigabytes). Example: 4Gi
+#### `--memory-request`
+The request for the capacity of the managed instance as an integer number followed by Gi (gigabytes). Example: 4Gi
+#### `--storage-class-data -d`
+The storage class to be used for data files (.mdf, .ndf). If no value is specified, then no storage class will be specified, which will result in Kubernetes using the default storage class.
+#### `--storage-class-logs -g`
+The storage class to be used for logs (/var/log). If no value is specified, then no storage class will be specified, which will result in Kubernetes using the default storage class.
+#### `--storage-class-datalogs`
+The storage class to be used for database logs (.ldf). If no value is specified, then no storage class will be specified, which will result in Kubernetes using the default storage class.
+#### `--storage-class-backups`
+The storage class to be used for backups (/var/opt/mssql/backups). If no value is specified, then no storage class will be specified, which will result in Kubernetes using the default storage class.
+#### `--volume-size-data`
+The size of the storage volume to be used for data as a positive number followed by Ki (kilobytes), Mi (megabytes), or Gi (gigabytes).
+#### `--volume-size-logs`
+The size of the storage volume to be used for logs as a positive number followed by Ki (kilobytes), Mi (megabytes), or Gi (gigabytes).
+#### `--volume-size-datalogs`
+The size of the storage volume to be used for data logs as a positive number followed by Ki (kilobytes), Mi (megabytes), or Gi (gigabytes).
+#### `--volume-size-backups`
+The size of the storage volume to be used for backups as a positive number followed by Ki (kilobytes), Mi (megabytes), or Gi (gigabytes).
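+For example, explicit volume sizes can be requested at create time (the sizes below are illustrative):
+```bash
+# Request a 10Gi data volume and a 5Gi log volume
+az sql mi-arc create -n sqlmi1 --k8s-namespace namespace --volume-size-data 10Gi --volume-size-logs 5Gi --use-k8s
+```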
+#### `--no-wait`
+If given, the command will not wait for the instance to be in a ready state before returning.
+#### `--no-external-endpoint`
+If specified, no external service will be created. Otherwise, an external service will be created using the same service type as the data controller.
+#### `--cert-public-key-file`
+Path to the file containing a PEM formatted certificate public key to be used for SQL Server.
+#### `--cert-private-key-file`
+Path to the file containing a PEM formatted certificate private key to be used for SQL Server.
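+For example, an existing certificate can be supplied from local PEM files at create time. This is a sketch; the file paths are hypothetical, and the public key and private key files are assumed to be provided together:
+```bash
+# Use an existing PEM certificate and private key for the SQL endpoint (hypothetical paths)
+az sql mi-arc create -n sqlmi1 --k8s-namespace namespace --cert-public-key-file ./sql-cert.pem --cert-private-key-file ./sql-cert-key.pem --use-k8s
+```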
+#### `--service-cert-secret`
+Name of the Kubernetes secret to generate that hosts or will host SQL service certificate.
+#### `--admin-login-secret`
+Name of the Kubernetes secret to generate that hosts or will host user admin login account credential.
+#### `--license-type -l`
+The license type to apply for this managed instance. Allowed values are: BasePrice, LicenseIncluded. Default is LicenseIncluded. The license type cannot be changed.
+#### `--tier -t`
+The pricing tier for the instance. Allowed values: BusinessCritical (bc for short) or GeneralPurpose (gp for short). Default is GeneralPurpose. The price tier cannot be changed.
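+For example, the tier and license type can be set at create time using the allowed values above (the short form `bc` is shown for illustration):
+```bash
+# Create a Business Critical instance with the BasePrice license type
+az sql mi-arc create -n sqlmi1 --k8s-namespace namespace --tier bc --license-type BasePrice --use-k8s
+```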
+#### `--dev`
+If this is specified, then it is considered a dev instance and will not be billed for.
+#### `--labels`
+Comma-separated list of labels of the SQL managed instance.
+#### `--annotations`
+Comma-separated list of annotations of the SQL managed instance.
+#### `--service-labels`
+Comma-separated list of labels to apply to all external services.
+#### `--service-annotations`
+Comma-separated list of annotations to apply to all external services.
+#### `--storage-labels`
+Comma-separated list of labels to apply to all PVCs.
+#### `--storage-annotations`
+Comma-separated list of annotations to apply to all PVCs.
+#### `--use-k8s`
+Create SQL managed instance using local Kubernetes APIs.
+#### `--collation`
+The SQL Server collation for the instance.
+#### `--language`
+Sets the SQL Server locale to any supported language identifier (LCID) for the instance.
+#### `--agent-enabled`
+Enable SQL Server agent for the instance. Default is disabled. Allowed values are 'true' or 'false'.
+#### `--trace-flags`
+Comma-separated list of trace flags. No flags by default.
+### Global Arguments
+#### `--debug`
+Increase logging verbosity to show all debug logs.
+#### `--help -h`
+Show this help message and exit.
+#### `--output -o`
+Output format. Allowed values: json, jsonc, none, table, tsv, yaml, yamlc. Default: json.
+#### `--query -q`
+JMESPath query string. See [http://jmespath.org](http://jmespath.org) for more information and examples.
+#### `--subscription`
+Name or ID of subscription. You can configure the default subscription using `az account set -s NAME_OR_ID`.
+#### `--verbose`
+Increase logging verbosity. Use --debug for full debug logs.
+## az sql mi-arc edit
+Edit the configuration of a SQL managed instance.
+```bash
+az sql mi-arc edit --name -n
+                   [--k8s-namespace -k]
+                   [--path]
+                   [--cores-limit -c]
+                   [--cores-request]
+                   [--memory-limit -m]
+                   [--memory-request]
+                   [--no-wait]
+                   [--dev]
+                   [--labels]
+                   [--annotations]
+                   [--service-labels]
+                   [--service-annotations]
+                   [--agent-enabled]
+                   [--trace-flags]
+                   [--use-k8s]
+```
+### Examples
+Edit the configuration of a SQL managed instance.
+```bash
+az sql mi-arc edit --path ./spec.json -n sqlmi1
+```
+### Required Parameters
+#### `--name -n`
+The name of the SQL managed instance that is being edited. The name under which your instance is deployed cannot be changed.
+### Optional Parameters
+#### `--k8s-namespace -k`
+Namespace where the SQL managed instance exists. If no namespace is specified, then the namespace defined in the kubeconfig will be used.
+#### `--path`
+The path to the azext_arcdata file for the SQL managed instance json file.
+#### `--cores-limit -c`
+The cores limit of the managed instance as an integer.
+#### `--cores-request`
+The request for cores of the managed instance as an integer.
+#### `--memory-limit -m`
+The limit of the capacity of the managed instance as an integer number followed by Gi (gigabytes). Example: 4Gi
+#### `--memory-request`
+The request for the capacity of the managed instance as an integer number followed by Gi (gigabytes). Example: 4Gi
+#### `--no-wait`
+If given, the command will not wait for the instance to be in a ready state before returning.
+#### `--dev`
+If this is specified, then it is considered a dev instance and will not be billed for.
+#### `--labels`
+Comma-separated list of labels of the SQL managed instance.
+#### `--annotations`
+Comma-separated list of annotations of the SQL managed instance.
+#### `--service-labels`
+Comma-separated list of labels to apply to all external services.
+#### `--service-annotations`
+Comma-separated list of annotations to apply to all external services.
+#### `--agent-enabled`
+Enable SQL Server agent for the instance. Default is disabled.
+#### `--trace-flags`
+Comma-separated list of trace flags. No flags by default.
+#### `--use-k8s`
+Use local Kubernetes APIs to perform this action.
+### Global Arguments
+#### `--debug`
+Increase logging verbosity to show all debug logs.
+#### `--help -h`
+Show this help message and exit.
+#### `--output -o`
+Output format. Allowed values: json, jsonc, none, table, tsv, yaml, yamlc. Default: json.
+#### `--query -q`
+JMESPath query string. See [http://jmespath.org](http://jmespath.org) for more information and examples.
+#### `--subscription`
+Name or ID of subscription. You can configure the default subscription using `az account set -s NAME_OR_ID`.
+#### `--verbose`
+Increase logging verbosity. Use --debug for full debug logs.
+## az sql mi-arc delete
+Delete a SQL managed instance.
+```bash
+az sql mi-arc delete --name -n
+                     [--k8s-namespace -k]
+                     [--use-k8s]
+```
+### Examples
+Delete a SQL managed instance.
+```bash
+az sql mi-arc delete -n sqlmi1
+```
+### Required Parameters
+#### `--name -n`
+The name of the SQL managed instance to be deleted.
+### Optional Parameters
+#### `--k8s-namespace -k`
+Namespace where the SQL managed instance exists. If no namespace is specified, then the namespace defined in the kubeconfig will be used.
+#### `--use-k8s`
+Use local Kubernetes APIs to perform this action.
+### Global Arguments
+#### `--debug`
+Increase logging verbosity to show all debug logs.
+#### `--help -h`
+Show this help message and exit.
+#### `--output -o`
+Output format. Allowed values: json, jsonc, none, table, tsv, yaml, yamlc. Default: json.
+#### `--query -q`
+JMESPath query string. See [http://jmespath.org](http://jmespath.org) for more information and examples.
+#### `--subscription`
+Name or ID of subscription. You can configure the default subscription using `az account set -s NAME_OR_ID`.
+#### `--verbose`
+Increase logging verbosity. Use --debug for full debug logs.
+## az sql mi-arc show
+Show the details of a SQL managed instance.
+```bash
+az sql mi-arc show --name -n
+                   [--path -p]
+                   [--k8s-namespace -k]
+                   [--use-k8s]
+```
+### Examples
+Show the details of a SQL managed instance.
+```bash
+az sql mi-arc show -n sqlmi1
+```
+### Required Parameters
+#### `--name -n`
+The name of the SQL managed instance to be shown.
+### Optional Parameters
+#### `--path -p`
+A path where the full specification for the SQL managed instance should be written. If omitted, the specification will be written to standard output.
+#### `--k8s-namespace -k`
+Namespace where the SQL managed instance exists. If no namespace is specified, then the namespace defined in the kubeconfig will be used.
+#### `--use-k8s`
+Use local Kubernetes APIs to perform this action.
+### Global Arguments
+#### `--debug`
+Increase logging verbosity to show all debug logs.
+#### `--help -h`
+Show this help message and exit.
+#### `--output -o`
+Output format. Allowed values: json, jsonc, none, table, tsv, yaml, yamlc. Default: json.
+#### `--query -q`
+JMESPath query string. See [http://jmespath.org](http://jmespath.org) for more information and examples.
+#### `--subscription`
+Name or ID of subscription. You can configure the default subscription using `az account set -s NAME_OR_ID`.
+#### `--verbose`
+Increase logging verbosity. Use --debug for full debug logs.
+## az sql mi-arc get-mirroring-cert
+Retrieve the certificate of the availability group mirroring endpoint from a SQL managed instance and store it in a file.
+```bash
+az sql mi-arc get-mirroring-cert --name -n
+                                 --cert-file
+                                 [--k8s-namespace -k]
+                                 [--use-k8s]
+```
+### Examples
+Retrieve the certificate of the availability group mirroring endpoint from sqlmi1 and store it in the file fileName1.
+```bash
+az sql mi-arc get-mirroring-cert -n sqlmi1 --cert-file fileName1
+```
+### Required Parameters
+#### `--name -n`
+The name of the SQL managed instance.
+#### `--cert-file`
+The local filename to store the retrieved certificate in PEM format.
+### Optional Parameters
+#### `--k8s-namespace -k`
+Namespace where the SQL managed instance exists. If no namespace is specified, then the namespace defined in the kubeconfig will be used.
+#### `--use-k8s`
+Use local Kubernetes APIs to perform this action.
+### Global Arguments
+#### `--debug`
+Increase logging verbosity to show all debug logs.
+#### `--help -h`
+Show this help message and exit.
+#### `--output -o`
+Output format. Allowed values: json, jsonc, none, table, tsv, yaml, yamlc. Default: json.
+#### `--query -q`
+JMESPath query string. See [http://jmespath.org](http://jmespath.org) for more information and examples.
+#### `--subscription`
+Name or ID of subscription. You can configure the default subscription using `az account set -s NAME_OR_ID`.
+#### `--verbose`
+Increase logging verbosity. Use --debug for full debug logs.
+## az sql mi-arc list
+List SQL managed instances.
+```bash
+az sql mi-arc list [--k8s-namespace -k]
+ [--use-k8s]
+```
+### Examples
+List SQL managed instances.
+```bash
+az sql mi-arc list
+```
+### Optional Parameters
+#### `--k8s-namespace -k`
+Namespace where the SQL managed instances exist. If no namespace is specified, then the namespace defined in the kubeconfig will be used.
+#### `--use-k8s`
+Use local Kubernetes APIs to perform this action.
+### Global Arguments
+#### `--debug`
+Increase logging verbosity to show all debug logs.
+#### `--help -h`
+Show this help message and exit.
+#### `--output -o`
+Output format. Allowed values: json, jsonc, none, table, tsv, yaml, yamlc. Default: json.
+#### `--query -q`
+JMESPath query string. See [http://jmespath.org](http://jmespath.org) for more information and examples.
+#### `--subscription`
+Name or ID of subscription. You can configure the default subscription using `az account set -s NAME_OR_ID`.
+#### `--verbose`
+Increase logging verbosity. Use --debug for full debug logs.
azure-arc Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/data/release-notes.md
Previously updated : 07/13/2021 Last updated : 07/30/2021 # Customer intent: As a data professional, I want to understand why my solutions would benefit from running with Azure Arc-enabled data services so that I can leverage the capability of the feature.
-# Release notes - Azure Arc-enabled data services (Preview)
+# Release notes - Azure Arc-enabled data services
This article highlights capabilities, features, and enhancements recently released or improved for Azure Arc-enabled data services.
+## July 2021
+
+This release is published July 30, 2021.
+
+The current release announces general availability for the following:
+- Azure Arc-enabled SQL Managed Instance general purpose service tier.
+
+ > [!NOTE]
+ > The services above are generally available in indirectly connected mode.
+ >
+ > These services are also available in directly connected mode, for preview.
+ >
+ > Azure SQL Managed Instance business critical service tier continues to be available in preview.
+ >
+ > Azure Arc-enabled PostgreSQL Hyperscale continues to be available in preview.
+
+### Breaking changes
+
+#### Data controller
+
+- `az arcdata dc create` parameter named `--azure-subscription` has been changed to use the standard `--subscription` parameter.
+- Deployment on AKS HCI requires a special storage class configuration. See details under [Configure storage (Azure Stack HCI with AKS-HCI)](create-data-controller-indirect-cli.md#configure-storage-azure-stack-hci-with-aks-hci).
+- There is a new requirement to allow non-SSL connections when exporting data. Set an environment variable to suppress the interactive prompt.
+
+### What's new
+
+#### Data controller
+
+- Directly connected mode is in preview.
+
+- Directly connected mode (preview) is only available in the following Azure regions for this release:
+ - Central US
+ - East US
+ - East US 2
+ - West US 2
+ - UK South
+ - West Europe
+ - North Europe
+ - Australia East
+ - Southeast Asia
+ - Korea Central
+ - France Central
+
+- Currently, additional basic authentication users can be added to Grafana using the Grafana administrative experience. Customizing Grafana by modifying the Grafana .ini files is not supported.
+
+- Currently, modifying the configuration of ElasticSearch and Kibana is not supported beyond what is available through the Kibana administrative experience. Only basic authentication with a single user is supported.
+
+- Custom metrics in Azure portal is in preview.
+
+- Exporting usage/billing information, metrics, and logs using the command `az arcdata dc export` requires bypassing SSL verification for now. You will be prompted to bypass SSL verification or you can set the `AZDATA_VERIFY_SSL=no` environment variable to avoid prompting. There is no way to configure an SSL certificate for the data controller export API currently.
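+  For example, the prompt can be avoided by setting the variable in the shell beforehand (a minimal sketch; the remaining `az arcdata dc export` arguments are unchanged):
+  ```bash
+  # Set once per shell session; subsequent 'az arcdata dc export' runs will skip the SSL verification prompt
+  export AZDATA_VERIFY_SSL=no
+  ```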
+
+#### Azure Arc-enabled SQL Managed Instance
+
+- Automated backup and point-in-time restore is in preview.
+
+- Supports point-in-time restore from an existing database in an Azure Arc-enabled SQL managed instance to a new database within the same instance.
+- If the current datetime is given as the point-in-time in UTC format, it resolves to the latest valid restore time and restores the given database up to the last valid transaction.
+- A database can be restored to any point-in-time where the transactions took place.
+- To set a specific recovery point objective for an Azure Arc-enabled SQL Managed Instance, edit the SQL managed instance CRD to set the `recoveryPointObjectiveInSeconds` property. Supported values are from 300 to 600.
+- To disable the automated backups, edit the SQL instance CRD and set the `recoveryPointObjectiveInSeconds` property to 0.
+
+### Known issues
+
+#### Platform
+
+- You can create a data controller, SQL managed instance, or PostgreSQL Hyperscale server group on a directly connected mode cluster with the Azure portal. Directly connected mode deployment is not supported with other Azure Arc-enabled data services tools. Specifically, you can't deploy a data controller in directly connected mode with any of the following tools during this release.
+ - Azure Data Studio
+
+ [!INCLUDE [use-insider-azure-data-studio](includes/use-insider-azure-data-studio.md)]
+
+ - Kubernetes native tools (`kubectl`)
+ - The `arcdata` extension for the Azure CLI (`az`)
+
+ [Create Azure Arc data controller in Direct connectivity mode from Azure portal](create-data-controller-direct-azure-portal.md) explains how to create the data controller in the portal.
+
+- You can still use `kubectl` to create resources directly on a Kubernetes cluster; however, they will not be reflected in the Azure portal if you are using directly connected mode.
+
+- In directly connected mode, upload of usage, metrics, and logs using `az arcdata dc upload` is blocked by design. Usage is automatically uploaded. Upload for a data controller created in indirectly connected mode should continue to work.
+- Automatic upload of usage data in direct connectivity mode will not succeed if using a proxy via `--proxy-cert <path-to-cert-file>`.
+- Azure Arc-enabled SQL Managed Instance and Azure Arc-enabled PostgreSQL Hyperscale are not GB18030 certified.
+- Currently, only one Azure Arc data controller per Kubernetes cluster is supported.
+
+#### Data controller
+
+- When the Azure Arc data controller is deleted from the Azure portal, validation is done to block the delete if there are any Azure Arc-enabled SQL managed instances deployed on this Arc data controller. Currently, this validation is applied only when the delete is performed from the Overview page of the Azure Arc data controller.
+
+#### Azure Arc-enabled PostgreSQL Hyperscale
+
+- Backup and restore operations no longer work in the July 30 release. This is a temporary limitation. Use the June 2021 release for now if you need to back up or restore. This will be fixed in a future release.
+
+- It is not possible to enable and configure the `pg_cron` extension at the same time. You need to use two commands for this. One command to enable it and one command to configure it. For example:
+
+ 1. Enable the extension:
+
+ ```console
+ azdata postgres arc-server edit -n myservergroup --extensions pg_cron
+ ```
+
+ 1. Restart the server group.
+
+ 1. Configure the extension:
+
+ ```console
+ azdata postgres arc-server edit -n myservergroup --engine-settings cron.database_name='postgres'
+ ```
+
+ If you execute the second command before the restart has completed it will fail. If that is the case, simply wait for a few more moments and execute the second command again.
+
+- Passing an invalid value to the `--extensions` parameter when editing the configuration of a server group to enable additional extensions incorrectly resets the list of enabled extensions to what it was at the creation time of the server group, and prevents the user from creating additional extensions. The only workaround available when that happens is to delete the server group and redeploy it.
+
+- Point-in-time restore is not supported for now on NFS storage.
+
+#### Azure Arc-enabled SQL Managed Instance
+
+##### Can't see resources in portal
+
+- Portal does not show Azure Arc-enabled SQL Managed Instance resources created in the June release. Delete the SQL Managed Instance resources from the resource group list view. You may need to delete the custom location resource first.
+
+##### Point-in-time restore (PITR) supportability and limitations
+
+- Doesn't support restore from one Azure Arc-enabled SQL managed instance to another Azure Arc-enabled SQL managed instance. The database can only be restored to the same Arc-enabled SQL Managed Instance where the backups were created.
+- Renaming a database is currently not supported for point-in-time restore purposes.
+- Currently there is no CLI command or API to provide the allowed time window information for point-in-time restore. You can provide a time within a reasonable window since the time the database was created, and if the timestamp is valid the restore will work. If the timestamp is not valid, the allowed time window will be provided via an error message.
+- No support for restoring a TDE-enabled database.
+- A deleted database cannot be restored currently.
+
+##### Automated backups
+
+- Renaming a database will stop the automated backups for that database.
+- No retention is enforced. All backups are preserved as long as there is available space.
+- User databases with the SIMPLE recovery model are not backed up.
+- The system database `model` is not backed up in order to prevent interference with the creation/deletion of databases. The database gets locked when administrative operations are performed.
+- Currently only the `master` and `msdb` system databases are backed up. Only full backups are performed every 12 hours.
+- Only `ONLINE` user databases are backed up.
+
+##### Other limitations
+- Transaction replication is currently not supported.
+- Log shipping is currently blocked
+ ## June 2021 This preview release is published July 13, 2021.
To update your scripts for managed instance, replace `azdata arc sql mi...` with
For Azure Arc-enabled PostgreSQL Hyperscale, replace `azdata arc sql postgres...` with `az postgres arc-server...`.
-In addition to the parameters that have historically existed on the azdata commands, the same commands in the `arcdata` Azure CLI extension have some new parameters such as `--namespace` and `--use-k8s` are now required. The `--use-k8s` parameter will be used to differentiate when the command should be sent to the Kubernetes API or to the ARM API. For now all Azure CLI commands for Arc enabled data services target only the Kubernetes API.
+In addition to the parameters that have historically existed on the `azdata` commands, the same commands in the `arcdata` Azure CLI extension have some new parameters; for example, `--k8s-namespace` and `--use-k8s` are now required. The `--use-k8s` parameter is used to differentiate when the command should be sent to the Kubernetes API or to the ARM API. For now, all Azure CLI commands for Arc-enabled data services target only the Kubernetes API.
Some of the short forms of the parameter names (e.g. `--core-limit` as `-cl`) have either been removed or changed. Use the new parameter short names or the long name.
The `azdata arc dc export` command is no longer functional. Use `az arcdata dc e
#### Required property: `infrastructure`
-The `infrastructure` property is a new required property when deploying a data controller. Adjust your yaml files, azdata/az scripts, and ARM templates to account for specifying this property value. Allowed values are `alibaba`, `aws`, `azure`, `gcp`, `onpremises`, `other`.
+The `infrastructure` property is a new required property when deploying a data controller. Adjust your yaml files, azdata/az scripts, and ARM templates to account for specifying this property value. Allowed values are `alibaba`, `aws`, `azure`, `gpc`, `onpremises`, `other`.
#### Kibana login
-The OpenDistro security pack has been removed. Log in to Kibana is now done through a generic browser username/password prompt. More information will be provided later how to configure additional authentication/authorization options.
+The OpenDistro security pack has been removed. Logging in to Kibana is now done through a generic browser username/password prompt. More information about how to configure additional authentication/authorization options will be provided later.
#### CRD version bump to `v1beta1`
-All CRDs have had the version bumped from `v1alpha1` to `v1beta1` for this release. Be sure to delete all CRDs as part of the uninstall process if you have deployed a version of Arc enabled data services prior to the June 2021 release. The new CRDs deployed with the June 2021 release will have v1beta1 as the version.
+All CRDs have had the version bumped from `v1alpha1` to `v1beta1` for this release. Be sure to delete all CRDs as part of the uninstall process if you have deployed a version of Arc-enabled data services prior to the June 2021 release. The new CRDs deployed with the June 2021 release will have v1beta1 as the version.
#### Azure Arc-enabled SQL Managed Instance
This release introduces `az` CLI extensions for Azure Arc-enabled data services.
- Support for using NFS-based storage classes.
- Diagnostics and solutions have been added to the Azure portal for Arc SQL Managed Instance.
-### Known issues
-
-#### Platform
--- You can create a data controller, SQL managed instance, or PostgreSQL Hyperscale server group on a connected cluster with the Azure portal. Deployment is not supported with other Azure Arc-enabled data services tools. Specifically, you can't deploy a data controller in direct connect mode with any of the following tools during this release.
- - Azure Data Studio
- - Azure Data CLI (`azdata`)
- - Kubernetes native tools (`kubectl`)
- - The `arcdata` extension for the Azure CLI (`az`)
-
- [Create Azure Arc data controller in Direct connectivity mode from Azure portal](create-data-controller-direct-azure-portal.md) explains how to create the data controller in the portal.
-- You can still use `kubectl` to create resources directly on a Kubernetes cluster, however they will not be reflected in the Azure portal if you are using direct connected mode.--- In direct connected mode, upload of usage, metrics, and logs using `az arcdata dc upload` is currently blocked. Usage is automatically uploaded. Upload for data controller created in indirect connected mode should continue to work.-- Automatic upload of usage data in direct connectivity mode will not succeed if using proxy via `--proxy-cert <path-to-cert-file>`.-- Azure Arc-enabled SQL Managed instance and Azure Arc-enabled PostgreSQL Hyperscale are not GB18030 certified.-- Currently, only one Azure Arc data controller per Kubernetes cluster is supported.-
-#### Data controller
-
-Deleting the data controller does not in all cases delete the monitor custom resource. You can delete it manually by running the command `kubectl delete monitor monitoringstack -n <namespace>`.
-
-#### Azure Arc-enabled PostgreSQL Hyperscale
--- It is not possible to enable and configure the `pg_cron` extension at the same time. You need to use two commands for this. One command to enable it and one command to configure it. For example:-
- 1. Enable the extension:
-
- ```console
- azdata arc postgres server edit -n myservergroup --extensions pg_cron
- ```
-
- 1. Restart the server group.
-
- 1. Configure the extension:
-
- ```console
- azdata arc postgres server edit -n myservergroup --engine-settings cron.database_name='postgres'
- ```
-
- If you execute the second command before the restart has completed it will fail. If that is the case, simply wait for a few more moments and execute the second command again.
--- Passing an invalid value to the `--extensions` parameter when editing the configuration of a server group to enable additional extensions incorrectly resets the list of enabled extensions to what it was at the create time of the server group and prevents user from creating additional extensions. The only workaround available when that happens is to delete the server group and redeploy it.--- Point in time restore is not supported for now on NFS storage.-
-#### Azure Arc-enabled SQL Managed Instance
-
-Some limitations for the automated backup service. Refer to the Point-In-Time restore article to learn more.
- ## May 2021 This preview release is published on June 2, 2021.
This release introduces direct connectivity mode. Direct connectivity mode enabl
You can specify direct connectivity when you create the data controller. The following example creates a data controller with `az arcdata dc create` named `arc` using direct connectivity mode (`connectivity-mode direct`). Before you run the example, replace `<subscription id>` with your subscription ID. ```azurecli
-az arcdata dc create --profile-name azure-arc-aks-hci --namespace arc --name arc --subscription <subscription id> --resource-group my-resource-group --location eastus --connectivity-mode direct
+az arcdata dc create --profile-name azure-arc-aks-hci --k8s-namespace <namespace> --use-k8s --name arc --subscription <subscription id> --resource-group my-resource-group --location eastus --connectivity-mode direct
``` ## October 2020
azure-arc Restore Adventureworks Sample Db https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/data/restore-adventureworks-sample-db.md
Previously updated : 09/22/2020 Last updated : 07/30/2021
This document describes a simple process to get the AdventureWorks sample database restored into your SQL Managed Instance - Azure Arc. ## Download the AdventureWorks backup file
azure-arc Retrieve The Username Password For Data Controller https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/data/retrieve-the-username-password-for-data-controller.md
Previously updated : 07/13/2021 Last updated : 07/30/2021
If you are the Kubernetes administrator for the cluster. As such you have the pr
> [!NOTE] > If you used a different name for the namespace where the data controller was created, be sure to change the `-n arc` parameter in the commands below to use the name of the namespace in which you created the data controller. ## Linux
azure-arc Scale Out In Postgresql Hyperscale Server Group https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/data/scale-out-in-postgresql-hyperscale-server-group.md
Previously updated : 06/02/2021 Last updated : 07/30/2021
The scenario uses a sample of publicly available GitHub data, available from the
##### List the connection information Connect to your Azure Arc-enabled PostgreSQL Hyperscale server group by first getting the connection information: The general format of this command is
-```console
-azdata arc postgres endpoint list -n <server name>
+```azurecli
+az postgres arc-server endpoint list -n <server name> --k8s-namespace <namespace> --use-k8s
``` For example:
-```console
-azdata arc postgres endpoint list -n postgres01
+```azurecli
+az postgres arc-server endpoint list -n postgres01 --k8s-namespace <namespace> --use-k8s
``` Example output:
Make a note of the query execution time.
## Scale out The general format of the scale-out command is:
-```console
-azdata arc postgres server edit -n <server group name> -w <target number of worker nodes>
+```azurecli
+az postgres arc-server edit -n <server group name> -w <target number of worker nodes> --k8s-namespace <namespace> --use-k8s
``` In this example, we increase the number of worker nodes from 2 to 4, by running the following command:
-```console
-azdata arc postgres server edit -n postgres01 -w 4
+```azurecli
+az postgres arc-server edit -n postgres01 -w 4 --k8s-namespace <namespace> --use-k8s
``` After you add nodes, you'll see a Pending state for the server group. For example:
-```console
-azdata arc postgres server list
+```azurecli
+az postgres arc-server list --k8s-namespace <namespace> --use-k8s
``` ```console
Once the nodes are available, the Hyperscale Shard Rebalancer runs automatically
### Verify the new shape of the server group (optional) Use either of the methods below to verify that the server group is now using the additional worker nodes you added.
-#### With azdata:
+#### With Azure CLI (az):
+ Run the command:
-```console
-azdata arc postgres server list
+
+```azurecli
+az postgres arc-server list --k8s-namespace <namespace> --use-k8s
``` It returns the list of server groups created in your namespace and indicates their number of worker nodes. For example:
Note the execution time.
To scale in (reduce the number of worker nodes in your server group), you use the same command as for scaling out, but you indicate a smaller number of worker nodes. The worker nodes that are removed are the latest ones added to the server group. When you run this command, the system moves the data out of the nodes that are removed and redistributes (rebalances) it automatically to the remaining nodes. The general format of the scale-in command is:
-```console
-azdata arc postgres server edit -n <server group name> -w <target number of worker nodes>
+```azurecli
+az postgres arc-server edit -n <server group name> -w <target number of worker nodes> --k8s-namespace <namespace> --use-k8s
```
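For instance, to scale the example server group back from 4 to 2 worker nodes, the command would look something like this:

```azurecli
az postgres arc-server edit -n postgres01 -w 2 --k8s-namespace <namespace> --use-k8s
```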
azure-arc Scale Up Down Postgresql Hyperscale Server Group Using Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/data/scale-up-down-postgresql-hyperscale-server-group-using-cli.md
Title: Scale up and down an Azure Database for PostgreSQL Hyperscale server group using CLI (azdata or kubectl)
-description: Scale up and down an Azure Database for PostgreSQL Hyperscale server group using CLI (azdata or kubectl)
+ Title: Scale up and down an Azure Database for PostgreSQL Hyperscale server group using CLI (az or kubectl)
+description: Scale up and down an Azure Database for PostgreSQL Hyperscale server group using CLI (az or kubectl)
Previously updated : 06/02/2021 Last updated : 07/30/2021
-# Scale up and down an Azure Database for PostgreSQL Hyperscale server group using CLI (azdata or kubectl)
--
+# Scale up and down an Azure Database for PostgreSQL Hyperscale server group using CLI (az or kubectl)
There are times when you may need to change the characteristics or the definition of a server group. For example:
Scaling up or down the vCore or memory settings of your server group means you h
To show the current definition of your server group and see the current vCore and memory settings, run either of the following commands:
-### CLI with azdata
+### CLI with Azure CLI (az)
-```console
-azdata arc postgres server show -n <server group name>
+```azurecli
+az postgres arc-server show -n <server group name> --k8s-namespace <namespace> --use-k8s
``` ### CLI with kubectl
How do you indicate which role the setting applies to?
**The general syntax is:**
-```console
-azdata arc postgres server edit -n <servergroup name> --memory-limit/memory-request/cores-request/cores-limit <coordinator=val1,worker=val2>
+```azurecli
+az postgres arc-server edit -n <servergroup name> --memory-limit/memory-request/cores-request/cores-limit <coordinator=val1,worker=val2> --k8s-namespace <namespace> --use-k8s
``` The value you indicate for the memory setting is a number followed by a size unit. For example, to indicate 1 GB, you would specify 1024Mi or 1Gi. To indicate a number of cores, you just pass a number without a unit.
-### Examples using the azdata CLI
----
+### Examples using the Azure CLI
**Configure the coordinator role to not exceed 2 cores and the worker role to not exceed 4 cores:**
-```console
- azdata arc postgres server edit -n postgres01 --cores-request coordinator=1, --cores-limit coordinator=2
- azdata arc postgres server edit -n postgres01 --cores-request worker=1, --cores-limit worker=4
+
+```azurecli
+ az postgres arc-server edit -n postgres01 --cores-request coordinator=1, --cores-limit coordinator=2 --k8s-namespace <namespace> --use-k8s
+ az postgres arc-server edit -n postgres01 --cores-request worker=1, --cores-limit worker=4 --k8s-namespace <namespace> --use-k8s
``` or
-```console
-azdata arc postgres server edit -n postgres01 --cores-request coordinator=1,worker=1 --cores-limit coordinator=4,worker=4
+```azurecli
+az postgres arc-server edit -n postgres01 --cores-request coordinator=1,worker=1 --cores-limit coordinator=4,worker=4 --k8s-namespace <namespace> --use-k8s
``` > [!NOTE]
-> For details about those parameters, run `azdata arc postgres server edit --help`.
+> For details about those parameters, run `az postgres arc-server edit --help`.
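The same role-scoped syntax applies to the memory parameters. The following is only a sketch with illustrative values:

```azurecli
az postgres arc-server edit -n postgres01 --memory-request coordinator=2Gi,worker=2Gi --memory-limit coordinator=4Gi,worker=8Gi --k8s-namespace <namespace> --use-k8s
```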
### Example using Kubernetes native tools like `kubectl`
If you are not familiar with the `vi` editor, see a description of the commands
## Reset to default values To reset core/memory limits/requests parameters to their default values, edit them and pass an empty string instead of an actual value. For example, if you want to reset the core limit parameter, run the following commands:
-```console
-azdata arc postgres server edit -n postgres01 --cores-request coordinator='',worker=''
-azdata arc postgres server edit -n postgres01 --cores-limit coordinator='',worker=''
+```azurecli
+az postgres arc-server edit -n postgres01 --cores-request coordinator='',worker='' --k8s-namespace <namespace> --use-k8s
+az postgres arc-server edit -n postgres01 --cores-limit coordinator='',worker='' --k8s-namespace <namespace> --use-k8s
``` or
-```console
-azdata arc postgres server edit -n postgres01 --cores-request coordinator='',worker='' --cores-limit coordinator='',worker=''
+```azurecli
+az postgres arc-server edit -n postgres01 --cores-request coordinator='',worker='' --cores-limit coordinator='',worker='' --k8s-namespace <namespace> --use-k8s
``` ## Next steps
azure-arc Service Tiers https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/data/service-tiers.md
+
+ Title: Azure Arc-enabled SQL Managed Instance service tiers
+description: Explains the service tiers available for Azure Arc-enabled SQL Managed Instance deployments.
++++++ Last updated : 07/30/2021+++
+# Azure Arc-enabled SQL Managed Instance service tiers
+
+As part of the family of Azure SQL products, Azure Arc-enabled SQL Managed Instance is available in two [vCore](../../azure-sql/database/service-tiers-vcore.md) service tiers.
+
+- **General purpose** is a budget-friendly tier designed for most workloads with common performance and availability features.
+- **Business critical** is designed for performance-sensitive workloads that require higher availability features.
+
+At this time, the business critical service tier is in [public preview](https://azure.microsoft.com/support/legal/preview-supplemental-terms/). The general purpose service tier is generally available.
+
+In Azure, storage and compute are provided by Microsoft with guaranteed service level agreements (SLAs) for performance, throughput, and availability across each of the service tiers. With Azure Arc-enabled data services, customers provide the storage and compute, so there are no guaranteed SLAs. However, customers get the flexibility to bring their own performant hardware, irrespective of the service tier.
+
+## Service tier comparison
+
+Following is a description of the various capabilities available from Azure Arc-enabled data services across the two service tiers:
++
+Area | Business critical (preview)* | General purpose
+---|---|---
+Feature set | Same as SQL Server Enterprise Edition | Same as SQL Server Standard Edition
+CPU limit/instance | Unlimited | 24 cores
+Memory limit/instance | Unlimited | 128 GB
+High availability | Availability group | Single instance w/ Kubernetes redeploy + shared storage.
+Read scale out | Availability group | None
+AHB exchange rates for IP component of price | 1:1 Enterprise Edition <br> 4:1 Standard Edition | 1:4 Enterprise Edition​ <br> 1:1 Standard Edition
+Dev/Test pricing | No cost | No cost
+
+\* Currently the business critical service tier is in preview and does not incur any charges for use during this preview. Some of the features may change as we get closer to general availability.
+
+## How to choose between the service tiers
+
+Since customers bring their own hardware with performance and availability requirements based on their business needs, the primary differentiators between the service tiers are what is provided at the software level.
+
+### Choose general purpose if
+
+- CPU/memory requirements meet or are within the limits of the general purpose service tier
+- The high availability options provided by Kubernetes, such as pod redeployments, are sufficient for the workload
+- Application does not need read scale out
+- The application does not require any of the features found in the business critical service tier (same as SQL Server Enterprise Edition)
+
+### Choose business critical if
+
+- CPU/memory requirements exceed the limits of the general purpose service tier
+- Application requires a higher level of high availability than what Kubernetes offers, such as built-in availability groups to handle application failovers
+- Application can take advantage of read scale out to offload read workloads to the secondary replicas
+- Application requires features found only in the business critical service tier (same as SQL Server Enterprise Edition)
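After you've chosen a tier, you typically specify it when you create the instance. The following is only a sketch: the `--tier` parameter name and its accepted values are assumptions, so check `az sql mi-arc create --help` in your version of the `arcdata` extension.

```azurecli
# Sketch only: --tier and the value BusinessCritical are assumptions; verify with --help.
az sql mi-arc create -n sqlinstance1 --tier BusinessCritical --k8s-namespace <namespace> --use-k8s
```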
azure-arc Show Configuration Postgresql Hyperscale Server Group https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/data/show-configuration-postgresql-hyperscale-server-group.md
Previously updated : 06/02/2021 Last updated : 07/30/2021
Use either of the following commands.
Each of them runs on 3 nodes/pods: 1 coordinator and 2 workers. -- **With azdata:**
+- **With Azure CLI (az):**
Run the following command. The output shows similar information to what kubectl shows:
- ```console
- azdata arc postgres server list
+ ```azurecli
+ az postgres arc-server list --k8s-namespace <namespace> --use-k8s
`output Name State Workers
Let's call out some specific points of interest in the description of the `serve
> State: Ready > ```
-**With azdata:**
+**With Azure CLI (az):**
The general format of the command is:
-```console
-azdata arc postgres server show -n <server group name>
+```azurecli
+az postgres arc-server show -n <server group name> --k8s-namespace <namespace> --use-k8s
``` For example:
-```console
-azdata arc postgres server show -n postgres02
+```azurecli
+az postgres arc-server show -n postgres02 --k8s-namespace <namespace> --use-k8s
``` Returns the below output in a format and content very similar to the one returned by kubectl.
azure-arc Sizing Guidance https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/data/sizing-guidance.md
Previously updated : 09/22/2020 Last updated : 07/30/2021 # Sizing Guidance ## Overview of sizing guidance
When planning for the deployment of Azure Arc data services you should plan for
The number of cores must be an integer value greater than or equal to one.
-When using azdata for deployment the memory values should be specified in a power of two number - i.e. using the suffixes: Ki, Mi, or Gi.
+When using the Azure CLI (az) for deployment, the memory values should be specified as a power-of-two number, that is, using the suffixes Ki, Mi, or Gi.
Limit values must always be greater than the request value, if specified.
See the [storage-configuration](storage-configuration.md) article for details on
The data controller is a collection of pods that are deployed to your Kubernetes cluster to provide an API, the controller service, the bootstrapper, and the monitoring databases and dashboards. This table describes the default values for memory and CPU requests and limits.
-|Pod name|CPU Request|Memory Request|CPU Limit|Memory Limit|Notes|
+|Pod name|CPU request|Memory request|CPU limit|Memory limit|Notes|
|---|---|---|---|---|---|
|**bootstrapper**|100m|100Mi|200m|200Mi||
|**control**|400m|2Gi|1800m|2Gi||
|**controldb**|200m|3Gi|800m|6Gi||
-|**controlwd**|10m|100Mi|100m|200Mi||
|**logsdb**|200m|1600Mi|2|1600Mi||
|**logsui**|100m|500Mi|2|2Gi||
|**metricsdb**|200m|800Mi|400m|2Gi||
|**metricsdc**|100m|200Mi|200m|300Mi|Metricsdc is a daemonset which is created on each of the Kubernetes nodes in your cluster. The numbers in the table here are _per node_. If you set allowNodeMetricsCollection = false in your deployment profile file before creating the data controller, the metricsdc daemonset will not be created.|
|**metricsui**|20m|200Mi|500m|200Mi||
-|**mgmtproxy**|200m|250Mi|500m|500Mi||
You can override the default settings for the controldb and control pods in your deployment profile file or datacontroller YAML file. Example:
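A rough sketch of one way to apply such an override with `az arcdata dc config replace` follows; the JSON path used here is an assumption about the control.json layout, so adjust it to match your deployment profile.

```azurecli
# The JSON path below is an assumption; verify it against your control.json before applying.
az arcdata dc config replace --path custom/control.json --json-values 'spec.resources.controldb.limits.memory=8Gi'
```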
See the [storage-configuration](storage-configuration.md) article for details on
## SQL managed instance sizing details
-Each SQL managed instance must have the following minimum resource requests:
-- Memory: 2Gi-- Cores: 1
+Each SQL managed instance must have the following minimum resource requests and limits:
+
+|Service tier|General purpose|Business critical (preview)|
+||||
+|CPU request|Minimum: 1; Maximum: 24; Default: 2|Minimum: 1; Maximum: unlimited; Default: 4|
+|CPU limit|Minimum: 1; Maximum: 24; Default: 2|Minimum: 1; Maximum: unlimited; Default: 4|
+|Memory request|Minimum: 2Gi; Maximum: 128Gi; Default: 4Gi|Minimum: 2Gi; Maximum: unlimited; Default: 4Gi|
+|Memory limit|Minimum: 2Gi; Maximum: 128Gi; Default: 4Gi|Minimum: 2Gi; Maximum: unlimited; Default: 4Gi|
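As a sketch of how these values are typically requested at creation time, the parameter names below are assumed from the table above; confirm them with `az sql mi-arc create --help`.

```azurecli
# Parameter names are assumptions based on the table above; verify with az sql mi-arc create --help.
az sql mi-arc create -n sqlinstance1 --cores-request 2 --cores-limit 4 --memory-request 4Gi --memory-limit 8Gi --k8s-namespace <namespace> --use-k8s
```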
Each SQL managed instance pod that is created has three containers:
See the [storage-configuration](storage-configuration.md) article for details on
Keep in mind that a given database instance size request for cores or RAM cannot exceed the available capacity of the Kubernetes nodes in the cluster. For example, if the largest Kubernetes node you have in your Kubernetes cluster is 256 GB of RAM and 24 cores, you will not be able to create a database instance with a request of 512 GB of RAM and 48 cores.
-It is a good idea to maintain at least 25% of available capacity across the Kubernetes nodes to allow Kubernetes to efficiently schedule pods to be created and to allow for elastic scaling and longer term growth on demand.
+It is a good idea to maintain at least 25% of available capacity across the Kubernetes nodes to allow Kubernetes to efficiently schedule pods, and to allow for elastic scaling, rolling upgrades of the Kubernetes nodes, and longer-term growth on demand.
In your sizing calculations, don't forget to add in the resource requirements of the Kubernetes system pods and any other workloads which may be sharing capacity with Azure Arc-enabled data services on the same Kubernetes cluster.
azure-arc Storage Configuration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/data/storage-configuration.md
Previously updated : 07/13/2021 Last updated : 07/30/2021 # Storage Configuration ## Kubernetes storage concepts
-Kubernetes provides an infrastructure abstraction layer over the underlying virtualization tech stack (optional) and hardware. The way that Kubernetes abstracts away storage is through **[Storage Classes](https://kubernetes.io/docs/concepts/storage/storage-classes/)**. At the time of provisioning a pod, a storage class can be specified to be used for each volume. At the time the pod is provisioned, the storage class **[provisioner](https://kubernetes.io/docs/concepts/storage/dynamic-provisioning/)** is called to provision the storage and then a **[persistent volume](https://kubernetes.io/docs/concepts/storage/persistent-volumes/)** is created on that provisioned storage and then the pod is mounted to the persistent volume by a **[persistent volume claim](https://kubernetes.io/docs/concepts/storage/persistent-volumes/#persistentvolumeclaims)**.
+Kubernetes provides an infrastructure abstraction layer over the underlying virtualization tech stack (optional) and hardware. The way that Kubernetes abstracts away storage is through **[Storage Classes](https://kubernetes.io/docs/concepts/storage/storage-classes/)**. When you provision a pod, you can specify a storage class for each volume. At the time the pod is provisioned, the storage class **[provisioner](https://kubernetes.io/docs/concepts/storage/dynamic-provisioning/)** is called to provision the storage, and then a **[persistent volume](https://kubernetes.io/docs/concepts/storage/persistent-volumes/)** is created on that provisioned storage and then the pod is mounted to the persistent volume by a **[persistent volume claim](https://kubernetes.io/docs/concepts/storage/persistent-volumes/#persistentvolumeclaims)**.
Kubernetes provides a way for storage infrastructure providers to plug in drivers (also called "Addons") that extend Kubernetes. Storage addons must comply with the **[Container Storage Interface standard](https://kubernetes.io/blog/2019/01/15/container-storage-interface-ga/)**. There are dozens of addons that can be found in this non-definitive **[list of CSI drivers](https://kubernetes-csi.github.io/docs/drivers.html)**. Which CSI driver you use will depend on factors such as whether you are running in a cloud-hosted, managed Kubernetes service or which OEM provider you are using for your hardware.
There are generally two types of storage:
- **Local storage** - storage provisioned on local hard drives on a given node. This kind of storage can be ideal in terms of performance, but requires specifically designing for data redundancy by replicating the data across multiple nodes. - **Remote, shared storage** - storage provisioned on some remote storage device - for example, a SAN, NAS, or cloud storage service like EBS or Azure Files. This kind of storage generally provides for data redundancy automatically, but is not as fast as local storage can be.
-> [!NOTE]
-> For now, if you are using NFS, you need to set allowRunAsRoot to true in your deployment profile file before deploying the Azure Arc data controller.
+## NFS based storage classes
+
+Depending on the configuration of your NFS server and storage class provisioner, you may need to set the `supplementalGroups` in the pod configurations for database instances, and you may need to change the NFS server configuration to use the group IDs passed in by the client (as opposed to looking group IDs up on the server using the passed-in user ID). Consult your NFS administrator to determine if this is the case.
+
+The `supplementalGroups` property takes an array of values. It can be set as part of the Azure Arc data controller deployment, and it will be used by any database instances configured by the Azure Arc data controller.
+
+To set this property, run the following command:
+
+```azurecli
+az arcdata dc config add --path custom/control.json --json-values 'spec.security.supplementalGroups="1234556"'
+```
### Data controller storage configuration
Some services in Azure Arc for data services depend upon being configured to use
|**Controller SQL instance**|`<namespace>/logs-controldb`, `<namespace>/data-controldb`| |**Controller API service**|`<namespace>/data-controller`|
-At the time the data controller is provisioned, the storage class to be used for each of these persistent volumes is specified by either passing the --storage-class | -sc parameter to the `az arcdata dc create` command or by setting the storage classes in the control.json deployment template file that is used.
+At the time the data controller is provisioned, the storage class to be used for each of these persistent volumes is specified either by passing the `--storage-class` | `-sc` parameter to the `az arcdata dc create` command or by setting the storage classes in the control.json deployment template file that is used. If you are using the Azure portal to create the data controller in the directly connected mode, the deployment template that you choose will either have the storage class predefined in the template, or, if you select a template that does not have a predefined storage class, you will be prompted for one. If you use a custom deployment template, you can specify the storage class.
The deployment templates that are provided out of the box have a default storage class specified that is appropriate for the target environment, but it can be overridden during deployment. See the detailed steps to [alter the deployment profile](create-data-controller.md) to change the storage class configuration for the data controller pods at deployment time.
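For example, a minimal sketch of overriding the storage class at data controller creation time follows; the profile name and the `managed-premium` storage class are placeholders for whatever applies in your environment.

```azurecli
# The profile name and managed-premium storage class are placeholders; substitute values valid for your cluster.
az arcdata dc create --profile-name azure-arc-aks-premium-storage --k8s-namespace arc --use-k8s --name arc --subscription <subscription id> --resource-group my-resource-group --location eastus --connectivity-mode indirect --storage-class managed-premium
```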
Important factors to consider when choosing a storage class for the data control
- Changing the storage class post deployment is difficult, not documented, and not supported. Be sure to choose the storage class correctly at deployment time. > [!NOTE]
-> If no storage class is specified the default storage class will be used. There can be only one default storage class per Kubernetes cluster. You can [change the default storage class](https://kubernetes.io/docs/tasks/administer-cluster/change-default-storage-class/).
+> If no storage class is specified, the default storage class will be used. There can be only one default storage class per Kubernetes cluster. You can [change the default storage class](https://kubernetes.io/docs/tasks/administer-cluster/change-default-storage-class/).
### Database instance storage configuration Each database instance has data, logs, and backup persistent volumes. The storage classes for these persistent volumes can be specified at deployment time. If no storage class is specified the default storage class will be used.
-When creating an instance using either `az sql mi-arc create` or `azdata arc postgres server create`, there are two parameters that can be used to set the storage classes:
-
-> [!NOTE]
-> Some of these parameters are in development and will become available on `az sql mi-arc create` and `azdata arc postgres server create` in the upcoming releases.
+When creating an instance using either `az sql mi-arc create` or `az postgres arc-server create`, there are four parameters that can be used to set the storage classes:
|Parameter name, short name|Used for| |||
-|`--storage-class-data`, `-scd`|Used to specify the storage class for all data files including transaction log files|
-|`--storage-class-logs`, `-scl`|Used to specify the storage class for all log files|
-|`--storage-class-data-logs`, `-scdl`|Used to specify the storage class for the database transaction log files. **Note: Not available yet.**|
-|`--storage-class-backups`, `-scb`|Used to specify the storage class for all backup files. **Note: Not available yet.**|
+|`--storage-class-data`, `-d`|Used to specify the storage class for all data files including transaction log files|
+|`--storage-class-logs`, `-g`|Used to specify the storage class for all log files|
+|`--storage-class-data-logs`|Used to specify the storage class for the database transaction log files.|
+|`--storage-class-backups`|Used to specify the storage class for all backup files.|
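As an illustration, here's a sketch of passing these parameters when creating a SQL managed instance; the storage class names are placeholders.

```azurecli
# Storage class names are placeholders; substitute classes that exist in your cluster.
az sql mi-arc create -n sql1 --storage-class-data managed-premium --storage-class-logs managed-premium --storage-class-backups managed-premium --k8s-namespace <namespace> --use-k8s
```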
The table below lists the paths inside the Azure SQL Managed Instance container that is mapped to the persistent volume for data and logs: |Parameter name, short name|Path inside mssql-miaa container|Description| ||||
-|`--storage-class-data`, `-scd`|/var/opt|Contains directories for the mssql installation and other system processes. The mssql directory contains default data (including transaction logs), error log & backup directories|
-|`--storage-class-logs`, `-scl`|/var/log|Contains directories that store console output (stderr, stdout), other logging information of processes inside the container|
+|`--storage-class-data`, `-d`|/var/opt|Contains directories for the mssql installation and other system processes. The mssql directory contains default data (including transaction logs), error log & backup directories|
+|`--storage-class-logs`, `-g`|/var/log|Contains directories that store console output (stderr, stdout), other logging information of processes inside the container|
The table below lists the paths inside the PostgreSQL instance container that is mapped to the persistent volume for data and logs: |Parameter name, short name|Path inside postgres container|Description| ||||
-|`--storage-class-data`, `-scd`|/var/opt/postgresql|Contains data and log directories for the postgres installation|
-|`--storage-class-logs`, `-scl`|/var/log|Contains directories that store console output (stderr, stdout), other logging information of processes inside the container|
+|`--storage-class-data`, `-d`|/var/opt/postgresql|Contains data and log directories for the postgres installation|
+|`--storage-class-logs`, `-g`|/var/log|Contains directories that store console output (stderr, stdout), other logging information of processes inside the container|
Each database instance will have a separate persistent volume for data files, logs, and backups. This means that there will be separation of the I/O for each of these types of files subject to how the volume provisioner will provision storage. Each database instance has its own persistent volume claims and persistent volumes.
If there are multiple databases on a given database instance, all of the databas
Important factors to consider when choosing a storage class for the database instance pods: -- Database instances can be deployed in either a single pod pattern or a multiple pod pattern. An example of a single pod pattern is a developer instance of Azure SQL managed instance or a general purpose pricing tier Azure SQL managed instance. An example of a multiple pod pattern is a highly available business critical pricing tier Azure SQL managed instance. (Note: pricing tiers are in development and not available to customers yet.) Database instances deployed with the single pod pattern **must** use a remote, shared storage class in order to ensure data durability and so that if a pod or node dies that when the pod is brought back up it can connect again to the persistent volume. In contrast, a highly available Azure SQL managed instance uses Always On Availability Groups to replicate the data from one instance to another either synchronously or asynchronously. Especially in the case where the data is replicated synchronously, there is always multiple copies of the data - typically three(3) copies. Because of this, it is possible to use local storage or remote, shared storage classes for data and log files. If utilizing local storage, the data is still preserved even in the case of a failed pod, node, or storage hardware. Given this flexibility, you might choose to use local storage for better performance.-- Database performance is largely a function of the I/O throughput of a given storage device. If your database is heavy reads or heavy writes, then you should choose a storage class with hardware designed for that type of workload. For example, if your database is mostly used for writes, you might choose local storage with RAID 0. If your database is mostly used for reads of a small amount of "hot data", but there is a large overall storage volume of cold data, then you might choose a SAN device capable of tiered storage. Choosing the right storage class is not that much different than choosing the type of storage you would use for any database.
+- Database instances can be deployed in either a single pod pattern or a multiple pod pattern. An example of a single pod pattern is a general purpose pricing tier Azure SQL managed instance. An example of a multiple pod pattern is a highly available business critical pricing tier Azure SQL managed instance. Database instances deployed with the single pod pattern **must** use a remote, shared storage class in order to ensure data durability, so that if a pod or node dies, the pod can connect to the persistent volume again when it is brought back up. In contrast, a highly available Azure SQL managed instance uses Always On Availability Groups to replicate the data from one instance to another either synchronously or asynchronously. Especially in the case where the data is replicated synchronously, there are always multiple copies of the data - typically three copies. Because of this, it is possible to use local storage or remote, shared storage classes for data and log files. If utilizing local storage, the data is still preserved even in the case of a failed pod, node, or storage hardware, because there are multiple copies of the data. Given this flexibility, you might choose to use local storage for better performance.
+- Database performance is largely a function of the I/O throughput of a given storage device. If your database is heavy on reads or heavy on writes, then you should choose a storage class with hardware designed for that type of workload. For example, if your database is mostly used for writes, you might choose local storage with RAID 0. If your database is mostly used for reads of a small amount of "hot data", but there is a large overall storage volume of cold data, then you might choose a SAN device capable of tiered storage. Choosing the right storage class is not any different than choosing the type of storage you would use for any database.
- If you are using a local storage volume provisioner, ensure that the local volumes that are provisioned for data, logs, and backups are each landing on different underlying storage devices to avoid contention on disk I/O. The OS should also be on a volume that is mounted to a separate disk or disks. This is essentially the same guidance as would be followed for a database instance on physical hardware. - Because all databases on a given instance share a persistent volume claim and persistent volume, be sure not to colocate busy databases on the same database instance. If possible, separate busy databases onto their own database instances to avoid I/O contention. Further, use node label targeting to land database instances onto separate nodes so as to distribute overall I/O traffic across multiple nodes. If you are using virtualization, be sure to consider distributing I/O traffic not just at the node level but also the combined I/O activity of all the node VMs on a given physical host. ## Estimating storage requirements
-Every pod that contains stateful data uses two persistent volumes in this release - one persistent volume for data and another persistent volume for logs. The table below lists the number of persistent volumes required for a single Data Controller, Azure SQL Managed instance, Azure Database for PostgreSQL instance and Azure PostgreSQL HyperScale instance:
+Every pod that contains stateful data uses at least two persistent volumes - one persistent volume for data and another persistent volume for logs. The table below lists the number of persistent volumes required for a single Data Controller, Azure SQL Managed instance, Azure Database for PostgreSQL instance and Azure PostgreSQL HyperScale instance:
|Resource Type|Number of stateful pods|Required number of persistent volumes| ||||
This calculation can be used to plan the storage for your Kubernetes cluster bas
### On-premises and edge sites
-Microsoft and its OEM, OS, and Kubernetes partners are working on a certification program for Azure Arc data services. This program will provide customers comparable test results from a certification testing toolkit. The tests will evaluate feature compatibility, stress testing results, and performance and scalability. Each of these test results will indicate the OS used, Kubernetes distribution used, HW used, the CSI add-on used, and the storage classes used. This will help customers choose the best storage class, OS, Kubernetes distribution, and HW for their requirements. More information on this program and initial test results will be provided closer to the General Availability of Azure Arc data services.
+Microsoft and its OEM, OS, and Kubernetes partners have a validation program for Azure Arc data services. This program will provide customers comparable test results from a certification testing toolkit. The tests will evaluate feature compatibility, stress testing results, and performance and scalability. Each of these test results will indicate the OS used, Kubernetes distribution used, HW used, the CSI add-on used, and the storage classes used. This will help customers choose the best storage class, OS, Kubernetes distribution, and hardware for their requirements. More information on this program and test results can be found [here](validation-program.md).
#### Public cloud, managed Kubernetes services
azure-arc Troubleshoot Guide https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/data/troubleshoot-guide.md
Previously updated : 09/22/2020 Last updated : 07/30/2021
This article identifies troubleshooting resources for Azure Arc-enabled data services. ## Resources by type
azure-arc Troubleshoot Postgresql Hyperscale Server Group https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/data/troubleshoot-postgresql-hyperscale-server-group.md
Previously updated : 07/13/2021 Last updated : 07/30/2021 # Troubleshooting PostgreSQL Hyperscale server groups This article describes some techniques you may use to troubleshoot your server group. In addition to this article you may want to read how to use [Kibana](monitor-grafana-kibana.md) to search the logs or use [Grafana](monitor-grafana-kibana.md) to visualize metrics about your server group.
-## Getting more details about the execution of an azdata command
-You may add the parameter **--debug** to any azdata command you execute. Doing so will display to your console additional information about the execution of that command. You should find it useful to get details to help you understand the behavior of that command.
+## Getting more details about the execution of a CLI command
+You may add the parameter **--debug** to any CLI command you execute. Doing so displays additional information about the execution of that command in your console. You may find it useful for understanding the behavior of the command.
For example you could run
-```console
-azdata arc postgres server create -n postgres01 -w 2 --debug
+```azurecli
+az postgres arc-server create -n postgres01 -w 2 --debug --k8s-namespace <namespace> --use-k8s
``` or
-```console
-azdata arc postgres server edit -n postgres01 --extension SomeExtensionName --debug
+```azurecli
+az postgres arc-server edit -n postgres01 --extension SomeExtensionName --debug --k8s-namespace <namespace> --use-k8s
```
+In addition, you may use the parameter --help on any CLI command to display help and the list of parameters for a specific command. For example:
-```console
-azdata arc postgres server create --help
+In addition, you may use the parameter --help on any CLI command to display some help, list of parameters for a specific command. For example:
+```azurecli
+az postgres arc-server create --help
```
Read the article about [getting logs for Azure Arc-enabled data services](troubl
## Interactive troubleshooting with Jupyter notebooks in Azure Data Studio+ Notebooks can document procedures by including markdown content to describe what to do/how to do it. It can also provide executable code to automate a procedure. This pattern is useful for everything from standard operating procedures to troubleshooting guides. For example, let's troubleshoot a PostgreSQL Hyperscale server group that might have some problems using Azure Data Studio. [!INCLUDE [azure-arc-data-preview](../../../includes/azure-arc-data-preview.md)] + ### Install tools Install Azure Data Studio, `kubectl`, and Azure (`az`) CLI with the `arcdata` extension on the client machine you are using to run the notebook in Azure Data Studio. To do this, please follow the instructions at [Install client tools](install-client-tools.md)
Install Azure Data Studio, `kubectl`, and Azure (`az`) CLI with the `arcdata` ex
Make sure that these tools can be invoked from anywhere on this client machine. For example, on a Windows client machine, update the PATH system environment variable and add the folder in which you installed kubectl.
-### Sign in with [!INCLUDE [azure-data-cli-azdata](../../../includes/azure-data-cli-azdata.md)]
-
-Sign in your Arc Data Controller from this client machine and before you launch Azure Data Studio. To do this, run a command like:
-
-```console
-azdata login --endpoint https://<IP address>:<port>
-```
-
-Replace `<IP address>` with the IP address of your Kubernetes cluster, and `<port>` the port on which Kubernetes is listening. You will be prompted for user name and password. To see more details, run:_
-
-```console
-azdata login --help
-```
- ### Log into your Kubernetes cluster with kubectl To do this, you may want to use the example commands provided in [this](https://blog.christianposta.com/kubernetes/logging-into-a-kubernetes-cluster-with-kubectl/) blog post.
azure-arc Troubleshooting Get Logs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/data/troubleshooting-get-logs.md
Previously updated : 07/13/2021 Last updated : 07/30/2021 # Get logs to troubleshoot Azure Arc-enabled data services ## Prerequisites
You can get service logs across all pods or specific pods for troubleshooting pu
Run the following command to dump the logs:
- ```console
- az arcdata dc debug copy-logs --namespace <namespace name> --exclude-dumps --skip-compress
+ ```azurecli
+ az arcdata dc debug copy-logs --exclude-dumps --skip-compress
``` For example:
- ```console
- #az arcdata dc debug copy-logs --namespace arc --exclude-dumps --skip-compress
+ ```azurecli
+ #az arcdata dc debug copy-logs --exclude-dumps --skip-compress
``` The data controller creates the log files in the current working directory in a subdirectory called `logs`.
The `az arcdata dc debug copy-logs` command provides the following options to ma
With these parameters, you can replace the `<parameters>` in the following example: ```azurecli
-az arcdata dc debug copy-logs --target-folder <desired folder> --exclude-dumps --skip-compress -resource-kind <custom resource definition name> --resource-name <resource name> --namespace <namespace name>
+az arcdata dc debug copy-logs --target-folder <desired folder> --exclude-dumps --skip-compress --resource-kind <custom resource definition name> --resource-name <resource name>
``` For example: ```console
-#az arcdata dc debug copy-logs --target-folder C:\temp\logs --exclude-dumps --skip-compress --resource-kind postgresql-12 --resource-name pg1 --namespace arc
+#az arcdata dc debug copy-logs --target-folder C:\temp\logs --exclude-dumps --skip-compress --resource-kind postgresql-12 --resource-name pg1
``` The following folder hierarchy is an example. It's organized by pod name, then container, and then by directory hierarchy within the container.
The following folder hierarchy is an example. It's organized by pod name, then c
└───openvpn ```
-## Next steps
-
-[az `arcdata` dc debug copy-logs](/sql/azdata/reference/reference-azdata-arc-dc-debug#azdata-arc-dc-debug-copy-logs?toc=/azure/azure-arc/data/toc.json&bc=/azure/azure-arc/data/breadcrumb/toc.json)
azure-arc Uninstall Azure Arc Data Controller https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/data/uninstall-azure-arc-data-controller.md
Previously updated : 07/13/2021 Last updated : 07/30/2021
Before you proceed, ensure all the data services that have been created on the da
Run the following command to check if there are any SQL managed instances created: ```azurecli
-az sql mi-arc list
+az sql mi-arc list --k8s-namespace <namespace> --use-k8s
``` For each SQL managed instance from the list above, run the delete command as follows: ```azurecli
-az sql mi-arc delete -n <name>
-# for example: az sql mi-arc delete -n sqlinstance1
+az sql mi-arc delete -n <name> --k8s-namespace <namespace> --use-k8s
+# for example: az sql mi-arc delete -n sqlinstance1 --k8s-namespace <namespace> --use-k8s
``` Similarly, to check for PostgreSQL Hyperscale instances, run:
-```
-azdata login
-azdata arc postgres server list
+```azurecli
+az postgres arc-server list --k8s-namespace <namespace> --use-k8s
``` And, for each PostgreSQL Hyperscale instance, run the delete command as follows:
-```
-azdata arc postgres server delete -n <name>
-# for example: azdata arc postgres server delete -n pg1
+
+```azurecli
+az postgres arc-server delete -n <name> --k8s-namespace <namespace> --use-k8s
+# for example: az postgres arc-server delete -n pg1 --k8s-namespace <namespace> --use-k8s
``` ## Delete controller
azure-arc Update Service Principal Credentials https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/data/update-service-principal-credentials.md
Previously updated : 12/09/2020 Last updated : 07/30/2021
When the service principal credentials change, you need to update the secrets in
For example, if you deployed the data controller using a specific set of values for service principal tenant ID, client ID, and client secret, and then changed one or more of these values, you need to update the secrets in the data controller. Following are the instructions to update the tenant ID, client ID, or client secret. ## Background
azure-arc Upload Logs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/data/upload-logs.md
Previously updated : 07/13/2021 Last updated : 07/30/2021 zone_pivot_groups: client-operating-system-macos-and-linux-windows-powershell
zone_pivot_groups: client-operating-system-macos-and-linux-windows-powershell
Periodically, you can export logs and then upload them to Azure. Exporting and uploading logs also creates and updates the data controller, SQL managed instance, and PostgreSQL Hyperscale server group resources in Azure.
-> [!NOTE]
-> During the preview period, there is no cost for using Azure Arc-enabled data services.
-- ## Before you begin Before you can upload logs, you need to:
With the environment variables set, you can upload logs to the log workspace.
1. Log in to the Azure Arc data controller with the Azure (`az`) CLI with the `arcdata` extension.
- ```console
+ ```azurecli
az arcdata login ```
With the environment variables set, you can upload logs to the log workspace.
1. Export all logs to the specified file:
- ```console
+> [!NOTE]
+> Exporting usage/billing information, metrics, and logs using the command `az arcdata dc export` requires bypassing SSL verification for now. You will be prompted to bypass SSL verification or you can set the `AZDATA_VERIFY_SSL=no` environment variable to avoid prompting. There is no way to configure an SSL certificate for the data controller export API currently.
+
+ ```azurecli
az arcdata dc export --type logs --path logs.json ``` 2. Upload logs to an Azure monitor log analytics workspace:
- ```console
+ ```azurecli
az arcdata dc upload --path logs.json ```
If you want to upload metrics and logs on a scheduled basis, you can create a sc
In your favorite text/code editor, add the following script to the file and save as a script executable file such as .sh (Linux/Mac) or .cmd, .bat, .ps1. ```azurecli
-az arcdata dc export --type metrics --path metrics.json --force
-az arcdata dc upload --path metrics.json
+az arcdata dc export --type logs --path logs.json --force
+az arcdata dc upload --path logs.json
``` Make the script file executable
azure-arc Upload Metrics And Logs To Azure Monitor https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/data/upload-metrics-and-logs-to-azure-monitor.md
Title: Upload usage data, metrics, and logs to Azure Monitor
-description: Upload resource inventory, usage data, metrics, and logs to Azure Monitor
+ Title: Upload usage data, metrics, and logs to Azure
+description: Upload resource inventory, usage data, metrics, and logs to Azure
Previously updated : 07/13/2021 Last updated : 07/30/2021 zone_pivot_groups: client-operating-system-macos-and-linux-windows-powershell
-# Upload usage data, metrics, and logs to Azure Monitor
+# Upload usage data, metrics, and logs to Azure
Periodically, you can export usage information for billing purposes, monitoring metrics, and logs, and then upload them to Azure. The export and upload of any of these three types of data will also create and update the data controller, SQL managed instance, and PostgreSQL Hyperscale server group resources in Azure.
-> [!NOTE]
-> During the preview period, there is no cost for using Azure Arc-enabled data services.
-- Before you can upload usage data, metrics, or logs you need to: * Install tools
Example output:
With the service principal assigned to the appropriate role, you can proceed to upload metrics, or user data.
-## Upload logs, metrics, or user data
+## Upload logs, metrics, or usage data
-The specific steps for uploading logs, metrics, or user data vary depending about the type of information you are uploading.
+The specific steps for uploading logs, metrics, or usage data vary depending on the type of information you are uploading.
[Upload logs to Azure Monitor](upload-logs.md) [Upload metrics to Azure Monitor](upload-metrics.md)
-[Upload usage data to Azure Monitor](upload-usage-data.md)
+[Upload usage data to Azure](upload-usage-data.md)
-## General guidance on exporting and uploading usage, metrics
+## General guidance on exporting and uploading usage data and metrics
Create, read, update, and delete (CRUD) operations on Azure Arc-enabled data services are logged for billing and monitoring purposes. There are background services that monitor for these CRUD operations and calculate the consumption appropriately. The actual calculation of usage or consumption happens on a scheduled basis and is done in the background.
-During preview, this process happens nightly. The general guidance is to upload the usage only once per day. When usage information is exported and uploaded multiple times within the same 24 hour period, only the resource inventory is updated in Azure portal but not the resource usage.
+Upload the usage data only once per day. When usage information is exported and uploaded multiple times within the same 24-hour period, only the resource inventory is updated in the Azure portal, not the resource usage.
For uploading metrics, Azure Monitor only accepts the last 30 minutes of data ([Learn more](../../azure-monitor/essentials/metrics-store-custom-rest-api.md#troubleshooting)). The guidance for uploading metrics is to upload them immediately after creating the export file so you can view the entire data set in the Azure portal. For instance, suppose you exported the metrics at 2:00 PM and ran the upload command at 2:50 PM. Because Azure Monitor only accepts data for the last 30 minutes, you may not see any data in the portal.
azure-arc Upload Metrics https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/data/upload-metrics.md
Previously updated : 07/13/2021 Last updated : 07/30/2021 zone_pivot_groups: client-operating-system-macos-and-linux-windows-powershell
zone_pivot_groups: client-operating-system-macos-and-linux-windows-powershell
Periodically, you can export monitoring metrics and then upload them to Azure. The export and upload of data also creates and updates the data controller, SQL managed instance, and PostgreSQL Hyperscale server group resources in Azure.
-> [!NOTE]
-> During the preview period, there is no cost for using Azure Arc-enabled data services.
- ## Prerequisites
To upload metrics for your Azure Arc-enabled SQL managed instances and Azure Arc
1. Export all metrics to the specified file:
- ```console
+> [!NOTE]
+> Exporting usage/billing information, metrics, and logs using the command `az arcdata dc export` requires bypassing SSL verification for now. You will be prompted to bypass SSL verification or you can set the `AZDATA_VERIFY_SSL=no` environment variable to avoid prompting. There is no way to configure an SSL certificate for the data controller export API currently.
+
+ ```azurecli
az arcdata dc export --type metrics --path metrics.json ``` 2. Upload metrics to Azure monitor:
- ```console
+ ```azurecli
az arcdata dc upload --path metrics.json ```
You could also use a job scheduler like cron or Windows Task Scheduler or an orc
Create, read, update, and delete (CRUD) operations on Azure Arc-enabled data services are logged for billing and monitoring purposes. There are background services that monitor for these CRUD operations and calculate the consumption appropriately. The actual calculation of usage or consumption happens on a scheduled basis and is done in the background.
-During preview, this process happens nightly. The general guidance is to upload the usage only once per day. When usage information is exported and uploaded multiple times within the same 24 hour period, only the resource inventory is updated in Azure portal but not the resource usage.
+Upload the usage data only once per day. When usage information is exported and uploaded multiple times within the same 24-hour period, only the resource inventory is updated in the Azure portal, not the resource usage.
For uploading metrics, Azure Monitor only accepts the last 30 minutes of data ([Learn more](../../azure-monitor/essentials/metrics-store-custom-rest-api.md#troubleshooting)). The guidance for uploading metrics is to upload them immediately after creating the export file so you can view the entire data set in the Azure portal. For instance, suppose you exported the metrics at 2:00 PM and ran the upload command at 2:50 PM. Because Azure Monitor only accepts data for the last 30 minutes, you may not see any data in the portal.
azure-arc Upload Usage Data https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/data/upload-usage-data.md
Title: Upload usage data to Azure Monitor
-description: Upload usage Azure Arc-enabled data services data to Azure Monitor
+ Title: Upload usage data to Azure
+description: Upload usage Azure Arc-enabled data services data to Azure
Previously updated : 07/13/2021 Last updated : 07/30/2021 zone_pivot_groups: client-operating-system-macos-and-linux-windows-powershell
-# Upload usage data to Azure Monitor
+# Upload usage data to Azure
Periodically, you can export usage information. The export and upload of this information creates and updates the data controller, SQL managed instance, and PostgreSQL Hyperscale server group resources in Azure. > [!NOTE] > During the preview period, there is no cost for using Azure Arc-enabled data services. > [!NOTE]
Usage information such as inventory and resource usage can be uploaded to Azure
1. Export the usage data using the `az arcdata dc export` command, as follows:
- ```console
- az arcdata dc export --type usage --path usage.json
+> [!NOTE]
+> Exporting usage/billing information, metrics, and logs using the command `az arcdata dc export` requires bypassing SSL verification for now. You will be prompted to bypass SSL verification, or you can set the `AZDATA_VERIFY_SSL=no` environment variable to avoid the prompt. Currently, there is no way to configure an SSL certificate for the data controller export API.
+
+ ```azurecli
+ az arcdata dc export --type usage --path usage.json --k8s-namespace <namespace> --use-k8s
   ```

   This command creates a `usage.json` file with all the Azure Arc-enabled data resources, such as SQL managed instances and PostgreSQL Hyperscale instances, that are created on the data controller.
-2. Upload the usage data using ```azdata upload``` command
+2. Upload the usage data using the `upload` command.
- ```console
+ ```azurecli
   az arcdata dc upload --path usage.json
   ```
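If the upload runs unattended (for example, from a scheduled job), you typically sign in to Azure before calling the upload command. A minimal sketch using a service principal; the identifiers are placeholders and your environment may use a different sign-in method:

```azurecli
az login --service-principal --username <client-id> --password <client-secret> --tenant <tenant-id>
az arcdata dc upload --path usage.json
```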
If you want to upload metrics and logs on a scheduled basis, you can create a sc
In your favorite text/code editor, add the following script to the file and save it as an executable script file, such as `.sh` (Linux/Mac) or `.cmd`, `.bat`, or `.ps1` (Windows).

```azurecli
-az arcdata dc export --type metrics --path metrics.json --force
-az arcdata dc upload --path metrics.json
+az arcdata dc export --type usage --path usage.json --force --k8s-namespace <namespace> --use-k8s
+az arcdata dc upload --path usage.json
```

Make the script file executable:

```console
chmod +x myuploadscript.sh
```
-Run the script every 20 minutes:
+Run the script every day for usage:
```console
watch -n 1200 ./myuploadscript.sh
```
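If you prefer a job scheduler over `watch`, a hypothetical crontab entry that runs the upload script once a day at 2:00 AM could look like this (both paths are placeholders):

```console
0 2 * * * /path/to/myuploadscript.sh >> /path/to/upload.log 2>&1
```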
azure-arc Using Extensions In Postgresql Hyperscale Server Group https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/data/using-extensions-in-postgresql-hyperscale-server-group.md
Previously updated : 09/22/2020 Last updated : 07/30/2021
For details about that are `shared_preload_libraries`, read the PostgreSQL docum
- This step isn't required for extensions that don't need to be preloaded by shared_preload_libraries. For these extensions, you may skip to the next paragraph, [Create extensions](#create-extensions).

### Add an extension at the creation time of a server group
-```console
-azdata arc postgres server create -n <name of your postgresql server group> --extensions <extension names>
+```azurecli
+az postgres arc-server create -n <name of your postgresql server group> --extensions <extension names>
``` ### Add an extension to an instance that already exists
-```console
-azdata arc postgres server edit -n <name of your postgresql server group> --extensions <extension names>
+```azurecli
+az postgres arc-server edit -n <name of your postgresql server group> --extensions <extension names>
```
azdata arc postgres server edit -n <name of your postgresql server group> --exte
## Show the list of extensions added to shared_preload_libraries

Run either of the following commands.
-### With an azdata CLI command
-```console
-azdata arc postgres server show -n <server group name>
+### With CLI command
+```azurecli
+az postgres arc-server show -n <server group name>
```

Scroll in the output and notice the engine\extensions sections in the specifications of your server group. For example:

```console
SELECT name, address FROM coffee_shops ORDER BY geom <-> ST_SetSRID(ST_MakePoint
Now, let's enable `pg_cron` on our PostgreSQL server group by adding it to the shared_preload_libraries:
-```console
-azdata postgres server update -n pg2 -ns arc --extensions pg_cron
+```azurecli
+az postgres arc-server update -n pg2 -ns arc --extensions pg_cron
```

Your server group will restart to complete the installation of the extensions. It may take 2 to 3 minutes.
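Once the server group has finished restarting, you can connect to it and create the extension. A minimal sketch using `psql`; the endpoint, port, and credentials are placeholders, and this assumes pg_cron was successfully added to shared_preload_libraries:

```console
psql -h <server-group-endpoint> -p <port> -U postgres -d postgres -c "CREATE EXTENSION pg_cron;"
```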
azure-arc Validation Program https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/data/validation-program.md
+
+ Title: "Azure Arc-enabled data services validation"
+++ Last updated : 07/30/2021+++
+description: "Describes validation program for Kubernetes distributions for Azure Arc-enabled data services."
+keywords: "Kubernetes, Arc, Azure, K8s, validation, data services, SQL Managed Instance"
++
+# Azure Arc-enabled data services Kubernetes validation
+
+The Azure Arc-enabled data services team has worked with industry partners to validate specific distributions and solutions to host Azure Arc-enabled data services. This validation extends the [Azure Arc-enabled Kubernetes validation](../kubernetes/validation-program.md) for data services. This article identifies partner solutions, versions, Kubernetes versions, SQL Server versions, and PostgreSQL Hyperscale versions that have been verified to support the data services.
+
+To see how all Azure Arc-enabled components are validated, see [Validation program overview](../validation-program/overview.md).
+
+> [!NOTE]
+> At the current time, Azure Arc-enabled SQL Managed Instance is generally available in select regions.
+>
+> Azure Arc-enabled PostgreSQL Hyperscale is available for preview in select regions.
+
+## Partners
+
+### Dell
+
+|Solution and version | Kubernetes version | Azure Arc-enabled data services version | SQL Engine version | PostgreSQL Hyperscale version
+|--|--|--|--|--|
+| Dell EMC PowerFlex |1.19.7|v1.0.0_2021-07-30|SQL Server 2019 (15.0.4123) | |
+| PowerFlex version 3.6 |1.19.7|v1.0.0_2021-07-30|SQL Server 2019 (15.0.4123) | |
+| PowerFlex CSI version 1.4 |1.19.7|v1.0.0_2021-07-30|SQL Server 2019 (15.0.4123) | |
+| PowerStore X|1.20.6|v1.0.0_2021-07-30|SQL Server 2019 (15.0.4123) |postgres 12.3 (Ubuntu 12.3-1) |
+| PowerStore T|1.20.6|v1.0.0_2021-07-30|SQL Server 2019 (15.0.4123) |postgres 12.3 (Ubuntu 12.3-1)|
+
+### Nutanix
+
+|Solution and version | Kubernetes version | Azure Arc-enabled data services version | SQL Server version | PostgreSQL Hyperscale version
+|--|--|--|--|--|
+| Karbon 2.2<br/>AOS: 5.19.1.5<br/>AHV:20201105.1021<br/>PC: Version pc.2021.3.02<br/> | 1.19.8-0 | v1.0.0_2021-07-30 | SQL Server 2019 (15.0.4123)|postgres 12.3 (Ubuntu 12.3-1)|
+
+### PureStorage
+
+|Solution and version | Kubernetes version | Azure Arc-enabled data services version | SQL Server version | PostgreSQL Hyperscale version
+|--|--|--|--|--|
+| Portworx Enterprise 2.7 | 1.20.7 | v1.0.0_2021-07-30 | SQL Server 2019 (15.0.4138)||
+
+### Red Hat
+
+|Solution and version | Kubernetes version | Azure Arc-enabled data services version | SQL Server version | PostgreSQL Hyperscale version
+|--|--|--|--|--|
+| OpenShift 4.7.13 | 1.20.0 | v1.0.0_2021-07-30 | SQL Server 2019 (15.0.4138)|postgres 12.3 (Ubuntu 12.3-1)|
+
+### VMware
+
+|Solution and version | Kubernetes version | Azure Arc-enabled data services version | SQL Server version | PostgreSQL Hyperscale version
+|--|--|--|--|--|
+| TKGm v1.3.1 | 1.20.5 | v1.0.0_2021-07-30 | SQL Server 2019 (15.0.4123)|postgres 12.3 (Ubuntu 12.3-1)|
+
+## Data services validation process
+
+The Sonobuoy Arc-enabled data services plug-in automates the provisioning and testing of Azure Arc enabled data services on a Kubernetes cluster.
+
+### Prerequisites
+
+Install tools:
+
+- [Azure Data CLI (`azdata`)](/sql/azdata/install/deploy-install-azdata)
+- [kubectl](https://kubernetes.io/docs/home/)
+- [Azure Data Studio - Insider build](https://github.com/microsoft/azuredatastudio)
+
+Create a Kubernetes config file that is configured to access the target Kubernetes cluster, and set it as the current context. How this file is generated and copied to your computer differs from platform to platform. See [Kubernetes.io](https://kubernetes.io/docs/home/).
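For example, once the kubeconfig file is on your machine, you can confirm the available contexts and switch to the target cluster with `kubectl`; the context name below is a placeholder:

```console
kubectl config get-contexts
kubectl config use-context <target-cluster-context>
kubectl cluster-info
```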
+
+### Process
+
+The conformance tests run as part of the Azure Arc-enabled data services validation. A prerequisite to running these tests is to pass the Azure Arc-enabled Kubernetes tests for the Kubernetes distribution in use.
+
+These tests verify that the product is compliant with the requirements of running and operating data services. This helps assess whether the product is enterprise-ready for deployments.
+
+The tests for data services cover the following in indirectly connected mode:
+
+1. Deploy data controller in indirect mode
+2. Deploy [Azure Arc-enabled SQL Managed Instance](create-sql-managed-instance.md)
+3. Deploy [Azure Arc-enabled PostgreSQL Hyperscale](create-postgresql-hyperscale-server-group.md)
+4. Scale out [PostgreSQL Hyperscale](scale-out-in-postgresql-hyperscale-server-group.md)
+
+More tests will be added in future releases of Azure Arc-enabled data services.
+
+## Additional information
+
+- [Validation program overview](../validation-program/overview.md)
+- [Azure Arc-enabled Kubernetes validation](../kubernetes/validation-program.md)
+- [Azure Arc validation program - GitHub project](https://github.com/Azure/azure-arc-validation/)
+
+## Next steps
+
+[Create a data controller](create-data-controller.md)
azure-arc View Billing Data In Azure https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/data/view-billing-data-in-azure.md
Previously updated : 07/13/2021 Last updated : 07/30/2021
> [!IMPORTANT] > There is no cost to use Azure Arc-enabled data services during the preview period. Although the billing system works end to end the billing meter is set to $0. If you follow this scenario, you will see entries in your billing for a service currently named **hybrid data services** and for resources of a type called **Microsoft.AzureArcData/`<resource type>`**. You will be able to see a record for each data service - Azure Arc that you create, but each record will be billed for $0. ## Connectivity Modes - Implications for billing data In the future, there will be two modes in which you can run your Azure Arc-enabled data -- **Indirectly connected** - There is no direct connection to Azure. Data is sent to Azure only through an export/upload process. All Azure Arc data services deployments work in this mode today in preview.
+- **Indirectly connected** - There is no direct connection to Azure. Data is sent to Azure only through an export/upload process.
- **Directly connected** - In this mode there will be a dependency on the Azure Arc-enabled Kubernetes service to provide a direct connection between Azure and the Kubernetes cluster on which the Azure Arc-enabled data services are running. This will enable more capabilities and will also enable you to use the Azure portal and the Azure CLI to manage your Azure Arc-enabled data services just like you manage your data services in Azure PaaS. This connectivity mode is not yet available in preview, but will be coming soon. You can read more about the difference between the [connectivity modes](./connectivity.md).
To upload billing data to Azure, the following should happen first:
Run the following command to export the billing data:

```azurecli
-az arcdata dc export -t usage -p usage.json
+az arcdata dc export -t usage -p usage.json --k8s-namespace <namespace> --use-k8s
```

For now, the file is not encrypted so that you can see the contents. Feel free to open it in a text editor and see what the contents look like.
azure-arc Troubleshooting https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/kubernetes/troubleshooting.md
Error: list: failed to list: secrets is forbidden: User "myuser" cannot list res
The user connecting the cluster to Azure Arc should have the `cluster-admin` role assigned to them on the cluster. +
+### Unable to connect OpenShift cluster to Azure Arc
+
+If `az connectedk8s connect` is timing out and failing when connecting an OpenShift cluster to Azure Arc, check the following:
+
+1. The OpenShift cluster needs to meet the version prerequisites: 4.5.41+ or 4.6.35+ or 4.7.18+.
+
+1. Before running `az connectedk8s connect`, the following command needs to be run on the cluster:
+
+ ```console
+ oc adm policy add-scc-to-user privileged system:serviceaccount:azure-arc:azure-arc-kube-aad-proxy-sa
+ ```
+ ### Installation timeouts Connecting a Kubernetes cluster to Azure Arc enabled Kubernetes requires installation of Azure Arc agents on the cluster. If the cluster is running over a slow internet connection, the container image pull for agents may take longer than the Azure CLI timeouts.
azure-arc Validation Program https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/kubernetes/validation-program.md
Title: "Azure Arc enabled Kubernetes Validation Program"
+ Title: "Azure Arc enabled Kubernetes validation"
Last updated 03/03/2021
description: "Describes Arc validation program for Kubernetes distributions"
keywords: "Kubernetes, Arc, Azure, K8s, validation"
-# Azure Arc validation program
+# Azure Arc-enabled Kubernetes validation
Azure Arc enabled Kubernetes works with any Cloud Native Computing Foundation (CNCF) certified Kubernetes clusters. The Azure Arc team has also worked with key industry Kubernetes offering providers to validate Azure Arc enabled Kubernetes with their Kubernetes distributions. Future major and minor versions of Kubernetes distributions released by these providers will be validated for compatibility with Azure Arc enabled Kubernetes.
The following providers and their corresponding Kubernetes distributions have su
| Provider name | Distribution name | Version | | | -- | - |
-| RedHat | [OpenShift Container Platform](https://www.openshift.com/products/container-platform) | [4.5](https://docs.openshift.com/container-platform/4.5/release_notes/ocp-4-5-release-notes.html), [4.6](https://docs.openshift.com/container-platform/4.6/release_notes/ocp-4-6-release-notes.html), [4.7](https://docs.openshift.com/container-platform/4.7/release_notes/ocp-4-7-release-notes.html) |
+| RedHat | [OpenShift Container Platform](https://www.openshift.com/products/container-platform) | [4.5.41+](https://docs.openshift.com/container-platform/4.5/release_notes/ocp-4-5-release-notes.html), [4.6.35+](https://docs.openshift.com/container-platform/4.6/release_notes/ocp-4-6-release-notes.html), [4.7.18+](https://docs.openshift.com/container-platform/4.7/release_notes/ocp-4-7-release-notes.html) |
| VMware | [Tanzu Kubernetes Grid](https://tanzu.vmware.com/kubernetes-grid) | Kubernetes version: v1.17.5 |
| Canonical | [Charmed Kubernetes](https://ubuntu.com/kubernetes) | [1.19](https://ubuntu.com/kubernetes/docs/1.19/components) |
| SUSE Rancher | [Rancher Kubernetes Engine](https://rancher.com/products/rke/) | RKE CLI version: [v1.2.4](https://github.com/rancher/rke/releases/tag/v1.2.4); Kubernetes versions: [1.19.6](https://github.com/kubernetes/kubernetes/releases/tag/v1.19.6), [1.18.14](https://github.com/kubernetes/kubernetes/releases/tag/v1.18.14), [1.17.16](https://github.com/kubernetes/kubernetes/releases/tag/v1.17.16) |
azure-arc Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/validation-program/overview.md
+
+ Title: Azure Arc-enabled services validation overview
+description: Explains the Azure Arc validation process for Arc-enabled Kubernetes, data services, and cluster extensions.
Last updated : 07/30/2021+++
+# Overview of Azure Arc-enabled service validation
+
+Microsoft recommends running Azure Arc-enabled services on validated platforms. This article points you to content that explains how the various Azure Arc-enabled components are validated.
+
+Currently, validated solutions are available from partners for Kubernetes and data services.
+
+## Kubernetes
+
+Azure Arc-enabled Kubernetes works with any Cloud Native Computing Foundation (CNCF) certified Kubernetes clusters. The Azure Arc team has worked with key industry Kubernetes offering providers to validate Azure Arc-enabled Kubernetes with their [Kubernetes distributions](../kubernetes/validation-program.md). Future major and minor versions of Kubernetes distributions released by these providers will be validated for compatibility with Azure Arc enabled Kubernetes.
+
+## Data services
+
+We have also partnered with original equipment manufacturers (OEMs) and storage providers to validate [Azure Arc-enabled data services](../dat) solutions.
+
+## Validation process
+
+The Azure Arc validation process is available on GitHub. To learn more about how to validate your offering with Azure Arc, including the test harness and strategy, see the [Azure Arc validation process](https://github.com/Azure/azure-arc-validation/) on GitHub.
+
+## Next steps
+
+* [Validated Kubernetes distributions](../kubernetes/validation-program.md?toc=/azure/azure-arc/toc.json&bc=/azure/azure-arc/breadcrumb/toc.json)
+
+* [Validated Kubernetes distributions for data services](../dat?toc=/azure/azure-arc/toc.json&bc=/azure/azure-arc/breadcrumb/toc.json)
azure-cache-for-redis Cache How To Premium Vnet https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-cache-for-redis/cache-how-to-premium-vnet.md
Last updated 02/08/2021
> Azure Cache for Redis supports both classic deployment model and Azure Resource Manager virtual networks. >
+> [!IMPORTANT]
+> Azure Cache for Redis now supports Azure Private Link, which simplifies the network architecture and secures the connection between endpoints in Azure. You can connect to an Azure Cache instance from your virtual network via a private endpoint, which is assigned a private IP address in a subnet within the virtual network. Azure Private Link is offered on all our tiers, includes Azure Policy support, and simplifies NSG rule management. To learn more, see [Private Link Documentation](cache-private-link.md). To migrate your VNet injected caches to Private Link, see [here](cache-vnet-migration.md).
+>
+ ## Set up virtual network support Virtual network support is configured on the **New Azure Cache for Redis** pane during cache creation.
azure-cache-for-redis Cache Vnet Migration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-cache-for-redis/cache-vnet-migration.md
+
+ Title: Migrate from VNet injection caches to Private Link caches
+description: Learn how to migrate your Azure Cache for Redis Virtual Network (VNet) caches to Private Link caches.
+++++ Last updated : 07/19/2021++
+# Migrate from VNet injection caches to Private Link caches
+This article describes a number of approaches to migrate Azure Cache for Redis Virtual Network (VNet) injected cache instances to Azure Cache for Redis Private Link cache instances.
+
+[Azure Private Link](../private-link/private-link-overview.md) simplifies the network architecture and secures the connection between endpoints in Azure. You can connect to an Azure Cache instance from your virtual network via a private endpoint, which is assigned a private IP address in a subnet within the virtual network. Advantages of using Azure Private Link for Azure Cache for Redis include:
+
+* **Tier flexibility** - Azure Private Link is supported on all our tiers: Basic, Standard, Premium, Enterprise, and Enterprise Flash. In contrast, Virtual Network injection is offered only on our Premium tier.
+
+* **Azure Policy Support** - Ensure all caches in your organization are created with Private Link and audit your organization's existing caches to verify they all utilize Private Link.
+
+* **Simplified Network Security Group (NSG) Rule Management** - NSG rules don't need to be configured to allow the client's network traffic to reach the Azure Cache for Redis instance.
+
+## Migration options
+
+You can switch from VNet injection to Private Link in a few different ways. Depending on where your cache is and how your application interacts with it, one method will be more useful than the others. Some of the frequently used migration strategies are detailed below.
+
+### If you're using any combination of geo-replication, clustering, or ARM VNet:
+
+ | Option | Advantages | Disadvantages |
+ | | - | - |
+ | Dual-write data to two caches | No data loss or downtime. Uninterrupted operations of the existing cache. Easier testing of the new cache. | Needs two caches for an extended period of time. |
+ | Create a new cache | Simplest to implement. | Need to repopulate data to the new cache, which might not work with many applications. |
+ | Export and import data via RDB file | Data migration is required. | Some data could be lost if it's written to the existing cache after the RDB file is generated. |
+ | Migrate data programmatically | Full control over how data are moved. | Requires custom code. |
+
+### Write to two Redis caches simultaneously during migration period
+
+Rather than moving data directly between caches, you may use your application to write data to both an existing cache and a new one you're setting up. The application will still read data from the existing cache initially. When the new cache has the necessary data, you switch the application to that cache and retire the old one. Let's say, for example, you use Redis as a session store and the application sessions are valid for seven days. After writing to the two caches for a week, you'll be certain the new cache contains all non-expired session information. You can safely rely on it from that point onward without concern over data loss.
+
+General steps to implement this option are:
+
+1. Create a new [Azure Cache for Redis instance with private endpoints](cache-private-link.md) that is the same size as (or bigger than) the existing cache.
+
+2. Modify application code to write to both the new and the original instances.
+
+3. Continue reading data from the original instance until the new instance is sufficiently populated with data.
+
+4. Update the application code to read from and write to the new instance only.
+
+5. Delete the original instance.
+
+### Create a new Azure Cache for Redis
+
+This approach technically isn't a migration. If losing data isn't a concern, the easiest way to move to Azure Cache for Redis is to create a cache instance and connect your application to it. For example, if you use Redis as a look-aside cache of database records, you can easily rebuild the cache from scratch.
+
+General steps to implement this option are:
+
+1. Create a new [Azure Cache for Redis instance with private endpoints](cache-private-link.md). A CLI sketch for this step is shown after these steps.
+
+2. Update your application to use the new instance.
+
+3. Delete the old Redis instance.
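As a rough illustration of step 1, the following Azure CLI sketch creates a Premium cache and attaches a private endpoint to it. All names and network values are placeholders, and the exact parameters can vary by environment; see the linked Private Link article for the authoritative steps.

```azurecli
# Create a Premium-tier cache (all values are placeholders)
az redis create --name <cache-name> --resource-group <rg> --location <location> --sku Premium --vm-size P1

# Create a private endpoint for the cache in an existing virtual network and subnet (bash syntax)
az network private-endpoint create \
  --name <private-endpoint-name> \
  --resource-group <rg> \
  --vnet-name <vnet-name> \
  --subnet <subnet-name> \
  --private-connection-resource-id $(az redis show --name <cache-name> --resource-group <rg> --query id -o tsv) \
  --group-ids redisCache \
  --connection-name <connection-name>
```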
+
+### Export data to an RDB file and import it into Azure Cache for Redis (Premium tier only)
+
+Open-source Redis defines a standard mechanism for taking a snapshot of a cache's in-memory dataset and saving it to a file. This file, called RDB, can be read by another Redis cache. [Azure Cache for Redis premium tier](cache-overview.md#service-tiers) supports importing data into a cache instance via RDB files. You can use an RDB file to transfer data from an existing cache to Azure Cache for Redis.
+
+> [!IMPORTANT]
+> The RDB file format can change between Redis versions and may not maintain backward compatibility. The Redis version of the cache you're exporting from should be equal to or less than the version provided by Azure Cache for Redis.
+>
+
+General steps to implement this option are:
+
+1. Create a new [Azure Cache for Redis instance with private endpoints](cache-private-link.md) in the premium tier that is the same size as (or bigger than) the existing cache.
+
+2. Save a snapshot of the existing Redis cache. You can [configure Redis to save snapshots](https://redis.io/topics/persistence) periodically, or run the process manually using the [SAVE](https://redis.io/commands/save) or [BGSAVE](https://redis.io/commands/bgsave) commands. The RDB file is named "dump.rdb" by default and will be located at the path specified in the *redis.conf* configuration file.
+
+ > [!NOTE]
+ > If you're migrating data within Azure Cache for Redis, see [these instructions on how to export an RDB file](cache-how-to-import-export-data.md) or use the [PowerShell Export cmdlet](/powershell/module/azurerm.rediscache/export-azurermrediscache) instead.
+ >
+
+3. Copy the RDB file to an Azure storage account in the region where your new cache is located. You can use AzCopy for this task, as shown in the sketch after these steps.
+
+4. Import the RDB file into the new cache using these [import instructions](cache-how-to-import-export-data.md) or the [PowerShell Import cmdlet](/powershell/module/azurerm.rediscache/import-azurermrediscache).
+
+5. Update your application to use the new cache instance.
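For the copy in step 3, a minimal AzCopy v10 sketch could look like the following; the storage account, container, and SAS token are placeholders:

```console
azcopy copy "./dump.rdb" "https://<storage-account>.blob.core.windows.net/<container>/dump.rdb?<sas-token>"
```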
+
+### Migrate programmatically
+
+Create a custom migration process by programmatically reading data from an existing cache and writing it into Azure Cache for Redis. This [open-source tool](https://github.com/deepakverma/redis-copy) can be used to copy data from one Azure Cache for Redis instance to another. This tool is useful for moving data between cache instances in different Azure Cache regions. A [compiled version](https://github.com/deepakverma/redis-copy/releases/download/alpha/Release.zip) is available as well. You may also find the source code to be a useful guide for writing your own migration tool.
+
+> [!NOTE]
+> This tool isn't officially supported by Microsoft.
+>
+
+General steps to implement this option are:
+
+1. Create a VM in the region where the existing cache is located. If your dataset is large, choose a relatively powerful VM to reduce copying time.
+
+2. Create a new [Azure Cache for Redis instance with private endpoints](cache-private-link.md)
+
+3. Flush data from the new cache to ensure that it's empty. This step is required because the copy tool itself doesn't overwrite any existing key in the target cache. (A redis-cli sketch for this step follows the list.)
+
+ > [!IMPORTANT]
+ > Make sure to NOT flush from the source cache.
+ >
+
+4. Use an application such as the open-source tool above to automate the copying of data from the source cache to the target. Remember that the copy process could take a while to complete depending on the size of your dataset.
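If you want to perform the flush in step 3 from the command line, one option is `redis-cli`. This sketch assumes redis-cli 6.0 or later (for TLS support); the cache name and access key are placeholders:

```console
redis-cli -h <new-cache-name>.redis.cache.windows.net -p 6380 -a <access-key> --tls FLUSHALL
```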
++
+## Next steps
+* Learn more about [network isolation options](cache-network-isolation.md).
+* Learn how to [configure private endpoints for all Azure Cache for Redis tiers](cache-private-link.md).
azure-functions Functions Bindings Storage Blob https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/functions-bindings-storage-blob.md
Working with the trigger and bindings requires that you reference the appropriat
#### Storage extension 5.x and higher
-A new version of the Storage bindings extension is available as a [preview NuGet package](https://www.nuget.org/packages/Microsoft.Azure.WebJobs.Extensions.Storage/5.0.0-beta.3). This preview introduces the ability to [connect using an identity instead of a secret](./functions-reference.md#configure-an-identity-based-connection). For .NET applications, it also changes the types that you can bind to, replacing the types from `WindowsAzure.Storage` and `Microsoft.Azure.Storage` with newer types from [Azure.Storage.Blobs](/dotnet/api/azure.storage.blobs).
+A new version of the Storage bindings extension is available as a [preview NuGet package](https://www.nuget.org/packages/Microsoft.Azure.WebJobs.Extensions.Storage/5.0.0-beta.3). This preview introduces the ability to [connect using an identity instead of a secret](./functions-reference.md#configure-an-identity-based-connection). For .NET applications, it also changes the types that you can bind to, replacing the types from `WindowsAzure.Storage` and `Microsoft.Azure.Storage` with newer types from [Azure.Storage.Blobs](/dotnet/api/azure.storage.blobs). Learn more about how these new types are different and how to migrate to them from the [Azure.Storage.Blobs Migration Guide](https://github.com/Azure/azure-sdk-for-net/blob/main/sdk/storage/Azure.Storage.Blobs/AzureStorageNetMigrationV12.md).
> [!NOTE] > The preview package is not included in an extension bundle and must be installed manually. For .NET apps, add a reference to the package. For all other app types, see [Update your extensions].
azure-functions Deploy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/start-stop-vms/deploy.md
To simplify management and removal, we recommend you deploy Start/Stop VMs v2 (p
:::image type="content" source="media/deploy/deployment-results-resource-list.png" alt-text="Start/Stop VMs template deployment resource list":::
+> [!NOTE]
+> The naming format for the function app and storage account has changed. To guarantee global uniqueness, a random and unique string is now appended to the names of these resources.
+ ## Enable multiple subscriptions After the Start/Stop deployment completes, perform the following steps to enable Start/Stop VMs v2 (preview) to take action across multiple subscriptions.
To learn more about how Azure Monitor metric alerts work and how to configure th
## Next steps
-To learn how to monitor status of your Azure VMs managed by the Start/Stop VMs v2 (preview) feature and perform other management tasks, see the [Manage Start/Stop VMs](manage.md) article.
+To learn how to monitor status of your Azure VMs managed by the Start/Stop VMs v2 (preview) feature and perform other management tasks, see the [Manage Start/Stop VMs](manage.md) article.
azure-monitor Tutorial Users https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/app/tutorial-users.md
Title: Understand your customers in Azure Application Insights | Microsoft Docs
-description: Tutorial on using Azure Application Insights to understand how customers are using your application.
+ Title: Understand your customers in Application Insights | Microsoft Docs
+description: Tutorial on using Application Insights to understand how customers are using your application.
Previously updated : 09/20/2017 Last updated : 07/30/2021 # Use Azure Application Insights to understand how customers are using your application
-Azure Application Insights collects usage information to help you understand how your users interact with your application. This tutorial walks you through the different resources that are available to analyze this information. You will learn how to:
+ Application Insights collects usage information to help you understand how your users interact with your application. This tutorial walks you through the different resources that are available to analyze this information. You'll learn how to:
> [!div class="checklist"] > * Analyze details about users accessing your application
Log in to the Azure portal at [https://portal.azure.com](https://portal.azure.co
## Get information about your users The **Users** panel allows you to understand important details about your users in a variety of ways. You can use this panel to understand such information as where your users are connecting from, details of their client, and what areas of your application they're accessing.
-1. Select **Application Insights** and then select your subscription.
-2. Select **Users** in the menu.
-3. The default view shows the number of unique users that have connected to your application over the past 24 hours. You can change the time window and set various other criteria to filter this information.
+1. In your Application Insights resource under *Usage*, select **Users** in the menu.
+2. The default view shows the number of unique users that have connected to your application over the past 24 hours. You can change the time window and set various other criteria to filter this information.
- ![Query Builder](media/tutorial-users/QueryBuilder.png)
-
-6. Click the **During** dropdown and change the time window to 7 days. This increases the data included in the different charts in the panel.
-
- ![Change Time Range](media/tutorial-users/TimeRange.png)
+3. Click the **During** dropdown and change the time window to 7 days. This increases the data included in the different charts in the panel.
4. Click the **Split by** dropdown to add a breakdown by a user property to the graph. Select **Country or region**. The graph includes the same data but allows you to view a breakdown of the number of users for each country/region.
- ![Country or Region graph](media/tutorial-users/CountryorRegion.png)
+ :::image type="content" source="./media/tutorial-users/user-1.png" alt-text="Screenshot of the User tab's query builder." lightbox="./media/tutorial-users/user-1.png":::
5. Position the cursor over different bars in the chart and note that the count for each country/region reflects only the time window represented by that bar.
-6. Have a look at the **Insights** column at the right that perform analysis on your user data. This provides information such as the number of unique sessions over the time period and records with common properties that make up significant of the user data
+6. Select **View More Insights** for more information.
- ![Insights column](media/tutorial-users/insights.png)
+ :::image type="content" source="./media/tutorial-users/user-2.png" alt-text="Screenshot of the User tab of view more insights." lightbox="./media/tutorial-users/user-2.png":::
## Analyze user sessions The **Sessions** panel is similar to the **Users** panel. Where **Users** helps you understand details about the users accessing your application, **Sessions** helps you understand how those users used your application.
-1. Select **Sessions** in the menu.
+1. Under *Usage*, select **Sessions**.
2. Have a look at the graph and note that you have the same options to filter and break down the data as in the **Users** panel.
- ![Sessions Query Builder](media/tutorial-users/SessionsBuilder.png)
-
-3. The **Sample of these sessions** pane on the right lists sessions that include a large number of events. These are interesting sessions to analyze.
-
- ![Sample of these sessions](media/tutorial-users/SessionsSample.png)
+ :::image type="content" source="./media/tutorial-users/sessions.png" alt-text="Screenshot of the Sessions tab with a bar chart displayed." lightbox="./media/tutorial-users/sessions.png":::
-4. Click on one of the sessions to view its **Session Timeline**, which shows every action in the sessions. This can help you identify information such as the sessions with a large number of exceptions.
+4. To view the session timeline, select **View More Insights**, then under active sessions, select **View session timeline** on one of the timelines. The session timeline shows every action in the sessions. This can help you identify information such as sessions with a large number of exceptions.
- ![Sessions Timeline](media/tutorial-users/SessionsTimeline.png)
+ :::image type="content" source="./media/tutorial-users/timeline.png" alt-text="Screenshot of the Sessions tab with a timeline selected." lightbox="./media/tutorial-users/timeline.png":::
## Group together similar users A **Cohort** is a set of users grouped on similar characteristics. You can use cohorts to filter data in other panels allowing you to analyze particular groups of users. For example, you might want to analyze only users who completed a purchase.
-1. Select **Cohorts** in the menu.
-2. Click **New** to create a new cohort.
-3. Select the **Who used** dropdown and select an action. Only users who performed this action within the time window of the report will be included.
+1. Select **Create a Cohort** at the top of one of the usage tabs (Users, Sessions, Events, and so on).
- ![Cohort who performed specified actions](media/tutorial-users/CohortsDropdown.png)
+1. Select a template from the gallery.
-4. Select **Users** in the menu.
-5. In the **Show** dropdown, select the cohort you just created. The data for the graph is limited to those users.
+ :::image type="content" source="./media/tutorial-users/cohort.png" alt-text="Screenshot of the template gallery for cohorts." lightbox="./media/tutorial-users/cohort.png":::
+1. Edit your cohort, then select **Save**.
+1. To see your cohort, select it from the **Show** dropdown menu.
- ![Cohort in users tool](media/tutorial-users/UsersCohort.png)
+ :::image type="content" source="./media/tutorial-users/cohort-2.png" alt-text="Screenshot of the Show dropdown, showing a cohort." lightbox="./media/tutorial-users/cohort-2.png":::
## Compare desired activity to reality While the previous panels are focused on what users of your application did, **Funnels** focus on what you want users to do. A funnel represents a set of steps in your application and the percentage of users who move between steps. For example, you could create a funnel that measures the percentage of users who connect to your application who search product. You can then see the percentage of users who add that product to a shopping cart, and then the percentage of those who complete a purchase.
-1. Select **Funnels** in the menu and then click **New**.
+1. Select **Funnels** in the menu and then select **Edit**.
- ![Screenshot showing how to create a new funnel.](media/tutorial-users/funnelsnew.png)
-
-2. Type in a **Funnel Name**.
3. Create a funnel with at least two steps by selecting an action for each step. The list of actions is built from usage data collected by Application Insights.
- ![Screenshot showing how to create steps in a new funnel.](media/tutorial-users/funnelsedit.png)
+ :::image type="content" source="./media/tutorial-users/funnel.png" alt-text="Screenshot of the Funnel tab and selecting steps on the edit tab." lightbox="./media/tutorial-users/funnel.png":::
-4. Click **Save** to save the funnel and then view its results. The window to the right of the funnel shows the most common events before the first activity and after the last activity to help you understand user tendencies around the particular sequence.
+4. Select the **View** tab to see the results. The window to the right shows the most common events before the first activity and after the last activity to help you understand user tendencies around the particular sequence.
- ![Screenshot showing the event results of a newly created funnel.](media/tutorial-users/funnelsright.png)
+ :::image type="content" source="./media/tutorial-users/funnel-2.png" alt-text="Screenshot of the funnel tab on view." lightbox="./media/tutorial-users/funnel-2.png":::
+4. To save the funnel, select **Save**.
## Learn which customers return+ **Retention** helps you understand which users are coming back to your application.
-1. Select **Retention** in the menu.
+1. Select **Retention** in the menu, then select the *Retention Analysis Workbook*.
2. By default, the analyzed information includes users who performed any action and then returned to perform any action. You can change this filter to include, for example, only those users who returned after completing a purchase.
- ![Screenshot showing how to set a retention filter.](media/tutorial-users/retentionquery.png)
+ :::image type="content" source="./media/tutorial-users/retention.png" alt-text="Screenshot showing a graph for users that match the criteria set for a retention filter." lightbox="./media/tutorial-users/retention.png":::
-3. The returning users that match the criteria are shown in graphical and table form for different time durations. The typical pattern is for a gradual drop in returning users over time. A sudden drop from one time period to the next might raise a concern.
+3. The returning users that match the criteria are shown in graphical and table form for different time durations. The typical pattern is for a gradual drop in returning users over time. A sudden drop from one time period to the next might raise a concern.
- ![Screenshot showing a graph for users that match the criteria set for a retention filter.](media/tutorial-users/retentiongraph.png)
+ :::image type="content" source="./media/tutorial-users/retention-2.png" alt-text="Screenshot of the retention workbook, showing user return after # of weeks chart." lightbox="./media/tutorial-users/retention-2.png":::
## Analyze user navigation A **User flow** visualizes how users navigate between the pages and features of your application. This helps you answer questions such as where users typically move from a particular page, how they typically exit your application, and if there are any actions that are regularly repeated. 1. Select **User flows** in the menu.
-2. Click **New** to create a new user flow and then click **Edit** to edit its details.
+2. Click **New** to create a new user flow and then select **Edit** to edit its details.
3. Increase the **Time Range** to 7 days and then select an initial event. The flow will track user sessions that start with that event.
- ![Screenshot showing how to create a new user flow.](media/tutorial-users/flowsedit.png)
-
+ :::image type="content" source="./media/tutorial-users/flowsedit.png" alt-text="Screenshot showing how to create a new user flow." lightbox="./media/tutorial-users/flowsedit.png":::
+
4. The user flow is displayed, and you can see the different user paths and their session counts. Blue lines indicate an action that the user performed after the current action. A red line indicates the end of the user session.
- ![Screenshot showing the display of user paths and session counts for a user flow.](media/tutorial-users/flows.png)
+ :::image type="content" source="./media/tutorial-users/flows.png" alt-text="Screenshot showing the display of user paths and session counts for a user flow." lightbox="./media/tutorial-users/flows.png":::
-5. To remove an event from the flow, click the **x** in the corner of the action and then click **Create Graph**. The graph is redrawn with any instances of that event removed. Click **Edit** to see that the event is now added to **Excluded events**.
+5. To remove an event from the flow, select the **x** in the corner of the action and then select **Create Graph**. The graph is redrawn with any instances of that event removed. Select **Edit** to see that the event is now added to **Excluded events**.
- ![Screenshot showing the list of excluded events for a user flow.](media/tutorial-users/flowsexclude.png)
+ :::image type="content" source="./media/tutorial-users/flowsexclude.png" alt-text="Screenshot showing the list of excluded events for a user flow." lightbox="./media/tutorial-users/flowsexclude.png":::
## Consolidate usage data **Workbooks** combine data visualizations, Analytics queries, and text into interactive documents. You can use workbooks to group together common usage information, consolidate information from a particular incident, or report back to your team on your application's usage. 1. Select **Workbooks** in the menu.
-2. Click **New** to create a new workbook.
-3. A query is already provided that includes all usage data in the last day displayed as a bar chart. You can use this query, manually edit it, or click **Sample queries** to select from other useful queries.
-
- ![Screenshot showing a list of sample queries that you can use.](media/tutorial-users/samplequeries.png)
+2. Select **New** to create a new workbook.
+3. A query is already provided that includes all usage data in the last day displayed as a bar chart. You can use this query, manually edit it, or select **Samples** to select from other useful queries.
-4. Click **Done editing**.
-5. Click **Edit** in the top pane to edit the text at the top of the workbook. This is formatted using markdown.
+ :::image type="content" source="./media/tutorial-users/sample-queries.png" alt-text="Screenshot showing the sample button and list of sample queries that you can use." lightbox="./media/tutorial-users/sample-queries.png":::
- ![Screenshot showing how to edit the text at the top of the workbook.](media/tutorial-users/markdown.png)
+4. Select **Done editing**.
+5. Select **Edit** in the top pane to edit the text at the top of the workbook. This is formatted using markdown.
-6. Click **Add users** to add a graph with user information. Edit the details of the graph if you want and then click **Done editing** to save it.
+6. Select **Add users** to add a graph with user information. Edit the details of the graph if you want and then select **Done editing** to save it.
+To learn more about workbooks, visit [the workbooks overview](../visualize/workbooks-overview.md).
## Next steps Now that you've learned how to analyze your users, advance to the next tutorial to learn how to create custom dashboards that combine this information with other useful data about your application.
azure-monitor Usage Cohorts https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/app/usage-cohorts.md
Title: Azure Application Insights usage cohorts | Microsoft Docs
+ Title: Application Insights usage cohorts | Microsoft Docs
description: Analyze different sets or users, sessions, events, or operations that have something in common -- Previously updated : 04/10/2018++ Last updated : 07/30/2021 - # Application Insights cohorts
-A cohort is a set of users, sessions, events, or operations that have something in common. In Azure Application Insights, cohorts are defined by an analytics query. In cases where you have to analyze a specific set of users or events repeatedly, cohorts can give you more flexibility to express exactly the set youΓÇÖre interested in.
-
-![Cohorts pane](./media/usage-cohorts/001.png)
+A cohort is a set of users, sessions, events, or operations that have something in common. In Application Insights, cohorts are defined by an analytics query. In cases where you have to analyze a specific set of users or events repeatedly, cohorts can give you more flexibility to express exactly the set you're interested in.
## Cohorts versus basic filters
You might define a cohort of users who have all tried a new feature in your app.
Your team defines an engaged user as anyone who uses your app five or more times in a given month. In this section, you define a cohort of these engaged users.
-1. Open the Cohorts tool.
+1. Select **Create a Cohort**.
2. Select the **Template Gallery** tab. You see a collection of templates for various cohorts.
Your team defines an engaged user as anyone who uses your app five or more times
4. Change **UsedAtLeastCustom** to **5+ days**, and leave **Period** on the default of 28 days.
- ![Engaged users](./media/usage-cohorts/003.png)
-
+
Now this cohort represents all user IDs sent with any custom event or page view on 5 separate days in the past 28. 5. Select **Save**.
Your team defines an engaged user as anyone who uses your app five or more times
Open the Users tool. In the **Show** drop-down box, choose the cohort you created under **Users who belong to**.
-Now the Users tool is filtered to this cohort of users:
-![Users pane filtered to a particular cohort](./media/usage-cohorts/004.png)
A few important things to notice:
These filters support more sophisticated questions that are impossible to expres
You can also make cohorts of events. In this section, you define a cohort of the events and page views. Then you see how to use them from the other tools. This cohort might define a set of events that your team considers _active usage_ or a set related to a certain new feature.
-1. Open the Cohorts tool.
+1. Select **Create a Cohort**.
2. Select the **Template Gallery** tab. You'll see a collection of templates for various cohorts. 3. Select **Events Picker**.
- ![Screenshot of events picker](./media/usage-cohorts/006.png)
- 4. In the **Activities** drop-down box, select the events you want to be in the cohort. 5. Save the cohort and give it a name.
You can also make cohorts of events. In this section, you define a cohort of the
The previous two cohorts were defined by using drop-down boxes. But you can also define cohorts by using analytics queries for total flexibility. To see how, create a cohort of users from the United Kingdom.
-![Animated image walking through use of Cohorts tool](./media/usage-cohorts/cohorts0001.gif)
1. Open the Cohorts tool, select the **Template Gallery** tab, and select **Blank Users cohort**.
- ![Blank users cohort](./media/usage-cohorts/001.png)
+ :::image type="content" source="./media/usage-cohorts/cohort.png" alt-text="Screenshot of the template gallery for cohorts." lightbox="./media/usage-cohorts/cohort.png":::
There are three sections: * A Markdown text section, where you describe the cohort in more detail for others on your team.
The previous two cohorts were defined by using drop-down boxes. But you can also
In the query section, you [write an analytics query](/azure/kusto/query). The query selects a certain set of rows that describe the cohort you want to define. The Cohorts tool then implicitly adds a "| summarize by user_Id" clause to the query. This data is previewed below the query in a table, so you can make sure your query is returning results. > [!NOTE]
- > If you donΓÇÖt see the query, try resizing the section to make it taller and reveal the query. The animated .gif at the beginning of this section illustrates the resizing behavior.
+ > If you don't see the query, try resizing the section to make it taller and reveal the query.
2. Copy and paste the following text into the query editor:
Cohorts and filters are different. Suppose you have a cohort of users from the U
* [Analytics query language](../logs/log-analytics-tutorial.md?toc=%2fazure%2fazure-monitor%2ftoc.json) * [Users, sessions, events](usage-segmentation.md) * [User flows](usage-flows.md)
-* [Usage overview](usage-overview.md)
+* [Usage overview](usage-overview.md)
azure-monitor Usage Flows https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/app/usage-flows.md
Title: Azure Application Insights User Flows analyzes navigation flows
+ Title: Application Insights User Flows analyzes navigation flows
description: Analyze how users navigate between the pages and features of your web app. -- Previously updated : 01/24/2018++ Last updated : 07/30/2021 - # Analyze user navigation patterns with User Flows in Application Insights
azure-monitor Usage Funnels https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/app/usage-funnels.md
Title: Azure Application Insights Funnels
+ Title: Application Insights Funnels
description: Learn how you can use Funnels to discover how customers are interacting with your application. -- Previously updated : 07/17/2017++ Last updated : 07/30/2021 - # Discover how customers are using your application with Application Insights Funnels
-Understanding the customer experience is of the utmost importance to your business. If your application involves multiple stages, you need to know if most customers are progressing through the entire process, or if they are ending the process at some point. The progression through a series of steps in a web application is known as a *funnel*. You can use Azure Application Insights Funnels to gain insights into your users, and monitor step-by-step conversion rates.
+Understanding the customer experience is of the utmost importance to your business. If your application involves multiple stages, you need to know if most customers are progressing through the entire process, or if they're ending the process at some point. The progression through a series of steps in a web application is known as a *funnel*. You can use Application Insights Funnels to gain insights into your users, and monitor step-by-step conversion rates.
## Create your funnel
-Before you create your funnel, decide on the question you want to answer. For example, you might want to know how many users are viewing the home page, viewing a customer profile, and creating a ticket. In this example, the owners of the Fabrikam Fiber company want to know the percentage of customers who successfully create a customer ticket.
+Before you create your funnel, decide on the question you want to answer. For example, you might want to know how many users are viewing the home page, viewing a customer profile, and creating a ticket.
-Here are the steps they take to create their funnel.
+To create a funnel:
-1. In the Application Insights Funnels tool, select **New**.
-1. From the **Time Range** drop-down menu, select **Last 90 days**. Select either **My funnels** or **Shared funnels**.
-1. From the **Step 1** drop-down list, select **Index**.
-1. From the **Step 2** list, select **Customer**.
-1. From the **Step 3** list, select **Create**.
-1. Add a name to the funnel, and select **Save**.
+1. In the **Funnels** tab, select **Edit**.
+1. Choose your *Top step*.
-The following screenshot shows an example of the kind of data the Funnels tool generates. The Fabrikam owners can see that during the last 90 days, 54.3 percent of their customers who visited the home page created a customer ticket. They can also see that 2,700 of their customers came to the index from the home page. This might indicate a refresh issue.
+ :::image type="content" source="./media/usage-funnels/funnel.png" alt-text="Screenshot of the Funnel tab and selecting steps on the edit tab." lightbox="./media/usage-funnels/funnel.png":::
+1. To apply filters to the step, select **Add filters**, which will appear after you choose an item for the top step.
+1. Then choose your *Second step*, and so on.
+1. Select the **View** tab to see your funnel results.
-![Screenshot of Funnels tool with data](media/usage-funnels/funnel1.png)
+ :::image type="content" source="./media/usage-funnels/funnel-2.png" alt-text="Screenshot of the funnel tab on view tab showing results from the top and second step." lightbox="./media/usage-funnels/funnel-2.png":::
+
+1. To save your funnel to view at another time, select **Save** at the top. You can use **Open** to open your saved funnels.
### Funnels features
-The preceding screenshot includes five highlighted areas. These are features of Funnels. The following list explains more about each corresponding area in the screenshot:
-1. If your app is sampled, you will see a sampling banner. Selecting the banner opens a context pane, explaining how to turn sampling off.
-2. You can export your funnel to [Power BI](./export-power-bi.md).
-3. Select a step to see more details on the right.
-4. The historical conversion graph shows the conversion rates over the last 90 days.
-5. Understand your users better by accessing the users tool. You can use filters in each step.
+
+- If your app is sampled, you'll see a sampling banner. Selecting the banner opens a context pane, explaining how to turn sampling off.
+- Select a step to see more details on the right.
+- The historical conversion graph shows the conversion rates over the last 90 days.
+- Understand your users better by accessing the users tool. You can use filters in each step.
## Next steps * [Usage overview](usage-overview.md)
azure-monitor Usage Impact https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/app/usage-impact.md
Title: Azure Application Insights Usage Impact | Microsoft docs
+ Title: Application Insights Usage Impact - Azure Monitor
description: Analyze how different properties potentially impact conversion rates for parts of your apps. -- Previously updated : 01/08/2019++ Last updated : 07/30/2021 - # Impact analysis with Application Insights Impact analyzes how load times and other properties influence conversion rates for various parts of your app. To put it more precisely, it discovers how **any dimension** of a **page view**, **custom event**, or **request** affects the usage of a different **page view** or **custom event**.
-![Impact tool](./media/usage-impact/0001-impact.png)
- ## Still not sure what Impact does? One way to think of Impact is as the ultimate tool for settling arguments with someone on your team about how slowness in some aspect of your site is affecting whether users stick around. While users may tolerate a certain amount of slowness, Impact gives you insight into how best to balance optimization and performance to maximize user conversion. But analyzing performance is just a subset of Impact's capabilities. Since Impact supports custom events and dimensions, answering questions like how does user browser choice correlate with different rates of conversion are just a few clicks away.
-![Screenshot conversion by browsers](./media/usage-impact/0004-browsers.png)
- > [!NOTE]
-> Your Application Insights resource must contain page views or custom events to use the Impact tool. [Learn how to set up your app to collect page views automatically with the Application Insights JavaScript SDK](./javascript.md). Also keep in mind that since you are analyzing correlation, sample size matters.
->
->
+> Your Application Insights resource must contain page views or custom events to use the Impact analysis workbook. [Learn how to set up your app to collect page views automatically with the Application Insights JavaScript SDK](./javascript.md). Also keep in mind that since you are analyzing correlation, sample size matters.
+
+## Impact Analytics Workbook
+
+To use the Impact Analytics Workbook, in your Application Insights resource, navigate to **Usage** > **Impact** and select **Impact Analysis Workbook**. Or, on the **Workbooks** tab, select **Public Templates**, and then select **User Impact Analysis** under *Usage*.
+
-## Is page load time impacting how many people convert on my page?
-To begin answering questions with the Impact tool, choose an initial page view, custom event, or request.
+### Using the workbook
-![Screenshot that shows where to choose an initial page view, custom event, or request.](./media/usage-impact/0002-dropdown.png)
-1. Select a page view from the **For the page view** dropdown.
+1. Select an event from the **Selected event** dropdown.
+2. Select a metric in the **analyze how its** dropdown.
+3. Select an event in the **impacting event** dropdown.
+4. If you'd like to add a filter, do so in **Add selected event filters** and/or **Add impacting event filters**.
++
+## Is page load time impacting how many people convert on my page?
+
+To begin answering questions with the Impact workbook, choose an initial page view, custom event, or request.
+
+1. Select an event from the **Selected event** dropdown.
2. Leave the **analyze how its** dropdown on the default selection of **Duration**. (In this context, **Duration** is an alias for **Page Load Time**.)
-3. For the **impacts the usage of** dropdown, select a custom event. This event should correspond to a UI element on the page view you selected in step 1.
+3. For the **impacting event** dropdown, select a custom event. This event should correspond to a UI element on the page view you selected in step 1.
-![Screenshot of results](./media/usage-impact/0003-results.png)
-In this instance as **Product Page** load time increases the conversion rate to **Purchase Product clicked** goes down. Based on the distribution above, an optimal page load duration of 3.5 seconds could be targeted to achieve a potential 55% conversion rate. Further performance improvements to reduce load time below 3.5 seconds do not currently correlate with additional conversion benefits.
## What if I'm tracking page views or load times in custom ways?
use filters on the primary and secondary events to get more specific.
## Do users from different countries or regions convert at different rates?
-1. Select a page view from the **For the page view** dropdown.
+1. Select an event from the **Selected Event** dropdown.
2. Choose "Country or region" in the **analyze how its** dropdown.
-3. For the **impacts the usage of** dropdown, select a custom event that corresponds to a UI element on the page view you chose in step 1.
-
-In this case, the results no longer fit into a continuous x-axis model as they did in the first example. Instead, a visualization similar to a segmented funnel is presented. Sort by **Usage** to view the variation of conversion to your custom event based on country/region.
+3. For the **impacting event** dropdown, select a custom event that corresponds to a UI element on the page view you chose in step 1.
-## How does the Impact tool calculate these conversion rates?
+## How does the Impact analysis workbook calculate these conversion rates?
-Under the hood, the Impact tool relies on the [Pearson correlation coefficient](https://en.wikipedia.org/wiki/Pearson_correlation_coefficient). Results are computed between -1 and 1 with -1 representing a negative linear correlation and 1 representing a positive linear correlation.
+Under the hood, the Impact analysis workbook relies on the [Pearson correlation coefficient](https://en.wikipedia.org/wiki/Pearson_correlation_coefficient). Results are computed between -1 and 1 with -1 representing a negative linear correlation and 1 representing a positive linear correlation.
The basic breakdown of how Impact Analysis works is as follows:
-Let _A_ = the main page view/custom event/request you select in the first dropdown. (**For the page view**).
+Let _A_ = the main page view/custom event/request you select in the first dropdown. (**Selected event**).
Let _B_ = the secondary page view/custom event you select (**impacts the usage of**).
How Impact is ultimately calculated varies based on whether we are analyzing by
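If you want to experiment with the same statistic outside the workbook, the following is a hedged Log Analytics sketch that computes a Pearson coefficient between hourly page load duration and hourly conversions. The page name *Product Page* and the custom event *Purchase Product clicked* are assumptions; replace them with your own _A_ and _B_.

```kusto
// Hypothetical sketch: correlate hourly average page load duration (A) with
// hourly conversion counts (B) using the built-in Pearson function.
let loadTimes = pageViews
    | where timestamp > ago(7d) and name == "Product Page"               // assumed A
    | summarize avgDuration = avg(duration) by timestamp = bin(timestamp, 1h);
let conversions = customEvents
    | where timestamp > ago(7d) and name == "Purchase Product clicked"   // assumed B
    | summarize conversionCount = count() by timestamp = bin(timestamp, 1h);
loadTimes
| join kind=inner conversions on timestamp
| summarize durations = make_list(avgDuration), counts = make_list(todouble(conversionCount))
| project correlation = series_pearson_correlation(durations, counts)
```

A result near -1 would suggest that longer load times line up with fewer conversions, which is the kind of relationship the workbook surfaces.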
## Next steps
+- To learn more about workbooks, visit [the workbooks overview](../visualize/workbooks-overview.md).
- To enable usage experiences, start sending [custom events](./api-custom-events-metrics.md#trackevent) or [page views](./api-custom-events-metrics.md#page-views). - If you already send custom events or page views, explore the Usage tools to learn how users use your service. - [Funnels](usage-funnels.md)
azure-monitor Usage Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/app/usage-overview.md
Title: Usage analysis with Azure Application Insights | Microsoft docs
+ Title: Usage analysis with Application Insights | Azure Monitor
description: Understand your users and what they do with your app. Previously updated : 03/25/2019 Last updated : 07/30/2021 # Usage analysis with Application Insights
-Which features of your web or mobile app are most popular? Do your users achieve their goals with your app? Do they drop out at particular points, and do they return later? [Azure Application Insights](./app-insights-overview.md) helps you gain powerful insights into how people use your app. Every time you update your app, you can assess how well it works for users. With this knowledge, you can make data driven decisions about your next development cycles.
+Which features of your web or mobile app are most popular? Do your users achieve their goals with your app? Do they drop out at particular points, and do they return later? [Application Insights](./app-insights-overview.md) helps you gain powerful insights into how people use your app. Every time you update your app, you can assess how well it works for users. With this knowledge, you can make data driven decisions about your next development cycles.
-> [!VIDEO https://www.microsoft.com/en-us/videoplayer/embed/RE4Cijb]
## Send telemetry from your app
The best experience is obtained by installing Application Insights both in your
3. **Mobile app code:** Use the App Center SDK to collect events from your app, then send copies of these events to Application Insights for analysis by [following this guide](../app/mobile-center-quickstart.md).
-4. **Get telemetry:** Run your project in debug mode for a few minutes, and then look for results in the Overview blade in Application Insights.
+4. **Get telemetry:** Run your project in debug mode for a few minutes, and then look for results in the Overview pane in Application Insights.
Publish your app to monitor your app's performance and find out what your users are doing with your app.
Find out when people use your app, what pages they're most interested in, where
The Users and Sessions reports filter your data by pages or custom events, and segment them by properties such as location, environment, and page. You can also add your own filters.
-![Screen capture shows the Users Overview page for a fictitious company.](./media/usage-overview/users.png)
Insights on the right point out interesting patterns in the set of data.
Retention helps you understand how often your users return to use their app, bas
- Form hypotheses based on real user data - Determine whether retention is a problem in your product
-![Screen capture shows the Retention Overview page which displays information about how often users return to use their app.](./media/usage-overview/retention.png)
The retention controls on top allow you to define specific events and time range to calculate retention. The graph in the middle gives a visual representation of the overall retention percentage by the time range specified. The graph on the bottom represents individual retention in a given time period. This level of detail allows you to understand what your users are doing and what might affect returning users on a more detailed granularity.
-[More about the Retention tool](usage-retention.md)
+[More about the Retention workbook](usage-retention.md)
## Custom business events
You can also use the [Click Analytics Auto-collection Plugin](javascript-click-a
Although in some cases, page views can represent useful events, it isn't true in general. A user can open a product page without buying the product.
-With specific business events, you can chart your users' progress through your site. You can find out their preferences for different options, and where they drop out or have difficulties. With this knowledge, you can make informed decisions about the priorities in your development backlog.
+With specific business events, you can chart your users' progress through your site. Find out their preferences for different options, and where they drop out or have difficulties. With this knowledge, you can make informed decisions about the priorities in your development backlog.
Events can be logged from the client side of the app:

```JavaScript
appInsights.trackEvent("ExpandDetailTab", {DetailTab: tabName});
```
Or from the server side:
tc.TrackEvent("CompletedPurchase"); ```
-You can attach property values to these events, so that you can filter or split the events when you inspect them in the portal. In addition, a standard set of properties is attached to each event, such as anonymous user ID, which allows you to trace the sequence of activities of an individual user.
+You can attach property values to these events, so that you can filter or split the events when you inspect them in the portal. A standard set of properties is also attached to each event, such as anonymous user ID, which allows you to trace the sequence of activities of an individual user.
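For example, a property attached at tracking time can later be used to split the event in a log query. This is a minimal sketch that reuses the `ExpandDetailTab` event and `DetailTab` property shown above; the seven-day window is an arbitrary choice.

```kusto
// Count distinct anonymous users for each value of the DetailTab property
// attached to the ExpandDetailTab custom event.
customEvents
| where timestamp > ago(7d) and name == "ExpandDetailTab"
| extend DetailTab = tostring(customDimensions.DetailTab)
| summarize Users = dcount(user_Id) by DetailTab
| order by Users desc
```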
Learn more about [custom events](./api-custom-events-metrics.md#trackevent) and [properties](./api-custom-events-metrics.md#properties). ### Slice and dice events In the Users, Sessions, and Events tools, you can slice and dice custom events by user, event name, and properties.
-![Screen capture shows the Users Overview page for a fictitious company.](./media/usage-overview/users.png)
+ ## Design the telemetry with the app
-When you are designing each feature of your app, consider how you are going to measure its success with your users. Decide what business events you need to record, and code the tracking calls for those events into your app from the start.
+When you're designing each feature of your app, consider how you're going to measure its success with your users. Decide what business events you need to record, and code the tracking calls for those events into your app from the start.
## A | B Testing If you don't know which variant of a feature will be more successful, release both of them, making each accessible to different users. Measure the success of each, and then move to a unified version.
azure-monitor Usage Retention https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/app/usage-retention.md
Title: Analyze web app user retention with Azure Application Insights
+ Title: Analyze web app user retention with Application Insights
description: How many users return to your app? -- Previously updated : 05/03/2017++ Last updated : 07/30/2021 - # User retention analysis for web applications with Application Insights
-The retention feature in [Azure Application Insights](./app-insights-overview.md) helps you analyze how many users return to your app, and how often they perform particular tasks or achieve goals. For example, if you run a game site, you could compare the numbers of users who return to the site after losing a game with the number who return after winning. This knowledge can help you improve both your user experience and your business strategy.
+The retention feature in [Application Insights](./app-insights-overview.md) helps you analyze how many users return to your app, and how often they perform particular tasks or achieve goals. For example, if you run a game site, you could compare the numbers of users who return to the site after losing a game with the number who return after winning. This knowledge can help you improve both your user experience and your business strategy.
## Get started If you don't yet see data in the retention tool in the Application Insights portal, [learn how to get started with the usage tools](usage-overview.md).
-## The Retention tool
+## The Retention workbook
-![Retention tool](./media/usage-retention/retention.png)
+To use the Retention Workbook, in your Application Insights resource, navigate to **Usage** > **Retention** and select **Retention Analysis Workbook**. Or, on the **Workbooks** tab, select **Public Templates**, and then select **User Retention Analysis** under *Usage*.
-1. The toolbar allows users to create new retention reports, open existing retention reports, save current retention report or save as, revert changes made to saved reports, refresh data on the report, share report via email or direct link, and access the documentation page.
-2. By default, retention shows all users who did anything then came back and did anything else over a period. You can select different combination of events to narrow the focus on specific user activities.
-3. Add one or more filters on properties. For example, you can focus on users in a particular country or region. Click **Update** after setting the filters.
-4. The overall retention chart shows a summary of user retention across the selected time period.
-5. The grid shows the number of users retained according to the query builder in #2. Each row represents a cohort of users who performed any event in the time period shown. Each cell in the row shows how many of that cohort returned at least once in a later period. Some users may return in more than one period.
-6. The insights cards show top five initiating events, and top five returned events to give users a better understanding of their retention report.
-![Retention mouse hover](./media/usage-retention/hover.png)
-Users can hover over cells on the retention tool to access the analytics button and tool tips explaining what the cell means. The Analytics button takes users to the Analytics tool with a pre-populated query to generate users from the cell.
+
+### Using the workbook
++
+- By default, retention shows all users who did anything and then came back and did anything else over a period. You can select different combinations of events to narrow the focus on specific user activities.
+- Add one or more filters on properties by selecting **Add Filters**. For example, you can focus on users in a particular country or region.
+- The overall retention chart shows a summary of user retention across the selected time period.
+- The grid shows the number of users retained. Each row represents a cohort of users who performed any event in the time period shown. Each cell in the row shows how many of that cohort returned at least once in a later period. Some users may return in more than one period.
+- The insights cards show the top five initiating events and the top five returned events to give users a better understanding of their retention report.
+
+ :::image type="content" source="./media/usage-retention/retention-2.png" alt-text="Screenshot of the retention workbook, showing user return after # of weeks chart." lightbox="./media/usage-retention/retention-2.png":::
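If you want to reproduce a single cell of that grid yourself, the following is a hedged sketch: it takes the cohort of users seen in the previous week and counts how many of them returned this week. The use of `customEvents` and the one-week periods are assumptions; adjust both to match the events and time range you select in the workbook.

```kusto
// One retention cell: users active last week (the cohort) who came back this week.
let cohort = customEvents
    | where timestamp between (ago(14d) .. ago(7d))
    | distinct user_Id;
let thisWeek = customEvents
    | where timestamp > ago(7d)
    | distinct user_Id;
print CohortSize = toscalar(cohort | count),
      Returned   = toscalar(cohort | join kind=inner thisWeek on user_Id | count)
```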
## Use business events to track retention
Or in ASP.NET server code:
## Next steps+ - To enable usage experiences, start sending [custom events](./api-custom-events-metrics.md#trackevent) or [page views](./api-custom-events-metrics.md#page-views). - If you already send custom events or page views, explore the Usage tools to learn how users use your service. - [Users, Sessions, Events](usage-segmentation.md)
azure-monitor Usage Segmentation https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/app/usage-segmentation.md
Title: User, session, and event analysis in Azure Application Insights
+ Title: User, session, and event analysis in Application Insights
description: Demographic analysis of users of your web app. -- Previously updated : 01/24/2018++ Last updated : 07/30/2021 - # Users, sessions, and events analysis in Application Insights
-Find out when people use your web app, what pages they're most interested in, where your users are located, and what browsers and operating systems they use. Analyze business and usage telemetry by using [Azure Application Insights](./app-insights-overview.md).
+Find out when people use your web app, what pages they're most interested in, where your users are located, and what browsers and operating systems they use. Analyze business and usage telemetry by using [Application Insights](./app-insights-overview.md).
-![Screenshot of Application Insights Users](./media/usage-segmentation/0001-users.png)
## Get started
Three of the usage blades use the same tool to slice and dice telemetry from you
* **Users tool**: How many people used your app and its features. Users are counted by using anonymous IDs stored in browser cookies. A single person using different browsers or machines will be counted as more than one user. * **Sessions tool**: How many sessions of user activity have included certain pages and features of your app. A session is counted after half an hour of user inactivity, or after 24 hours of continuous use.
-* **Events tool**: How often certain pages and features of your app are used. A page view is counted when a browser loads a page from your app, provided you have [instrumented it](./javascript.md).
+* **Events tool**: How often certain pages and features of your app are used. A page view is counted when a browser loads a page from your app, provided you've [instrumented it](./javascript.md).
- A custom event represents one occurrence of something happening in your app, often a user interaction like a button click or the completion of some task. You insert code in your app to [generate custom events](./api-custom-events-metrics.md#trackevent).
+ A custom event represents one occurrence of something happening in your app, often a user interaction like a button selection or the completion of some task. You insert code in your app to [generate custom events](./api-custom-events-metrics.md#trackevent).
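Because these counts come from the same telemetry, you can approximate them yourself in Log Analytics. This is a minimal sketch under the assumption that page views are being collected; the seven-day window is arbitrary.

```kusto
// Rough equivalents of the Users, Sessions, and Events counts for page views.
pageViews
| where timestamp > ago(7d)
| summarize Users = dcount(user_Id), Sessions = dcount(session_Id), PageViews = count()
```

As the article notes, a person who uses different browsers or machines carries different anonymous IDs and is counted as more than one user.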
## Querying for certain users Explore different groups of users by adjusting the query options at the top of the Users tool:
-* Show: Choose a cohort of users to analyze.
-* Who used: Choose custom events and page views.
-* During: Choose a time range.
-* By: Choose how to bucket the data, either by a period of time or by another property such as browser or city.
-* Split By: Choose a property by which to split or segment the data. 
-* Add Filters: Limit the query to certain users, sessions, or events based on their properties, such as browser or city. 
+- During: Choose a time range.
+- Show: Choose a cohort of users to analyze.
+- Who used: Choose which custom events, requests, and page views to include.
+- Events: Choose multiple events, requests, and page views; the results show users who did at least one of the selected items, not necessarily all of them.
+- By value x-axis: Choose how to bucket the data, either by time range or by another property such as browser or city.
+- Split By: Choose a property by which to split or segment the data.
+- Add Filters: Limit the query to certain users, sessions, or events based on their properties, such as browser or city.
-## Saving and sharing reports 
-You can save Users reports, either private just to you in the My Reports section, or shared with everyone else with access to this Application Insights resource in the Shared Reports section.
-
-To share a link to a Users, Sessions, or Events report; click **Share** in the toolbar, then copy the link.
-
-To share a copy of the data in a Users, Sessions, or Events report; click **Share** in the toolbar, then click the **Word icon** to create a Word document with the data. Or, click the **Word icon** above the main chart.
- ## Meet your users
-The **Meet your users** section shows information about five sample users matched by the current query. Considering and exploring the behaviors of individuals, in addition to aggregates, can provide insights about how people actually use your app.
+The **Meet your users** section shows information about five sample users matched by the current query. Exploring the behaviors of individuals, in addition to the aggregates, can provide insights about how people actually use your app.
## Next steps
azure-monitor Usage Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/app/usage-troubleshoot.md
Title: Troubleshoot user analytics tools - Azure Application Insights
+ Title: Troubleshoot user analytics tools - Application Insights
description: Troubleshooting guide - analyzing site and app usage with Application Insights. -- Previously updated : 07/11/2018++ Last updated : 07/30/2021 - # Troubleshoot user behavior analytics tools in Application Insights
azure-monitor Container Insights Agent Config https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/containers/container-insights-agent-config.md
The output will show similar to the following with the annotation schema-version
- With monitoring enabled to collect health and resource utilization of your AKS or hybrid cluster and workloads running on them, learn [how to use](container-insights-analyze.md) Container insights. -- View [log query examples](container-insights-log-search.md#search-logs-to-analyze-data) to see pre-defined queries and examples to evaluate or customize for alerting, visualizing, or analyzing your clusters.
+- View [log query examples](container-insights-log-query.md) to see pre-defined queries and examples to evaluate or customize for alerting, visualizing, or analyzing your clusters.
azure-monitor Container Insights Analyze https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/containers/container-insights-analyze.md
You can [split](../essentials/metrics-charts.md#apply-splitting) a metric to vie
When you switch to the **Nodes**, **Controllers**, and **Containers** tabs, a property pane automatically displays on the right side of the page. It shows the properties of the item selected, which includes the labels you defined to organize Kubernetes objects. When a Linux node is selected, the **Local Disk Capacity** section also shows the available disk space and the percentage used for each disk presented to the node. Select the **>>** link in the pane to view or hide the pane.
-As you expand the objects in the hierarchy, the properties pane updates based on the object selected. From the pane, you also can view Kubernetes container logs (stdout/stderror), events, and pod metrics by selecting the **View live data (preview)** link at the top of the pane. For more information about the configuration required to grant and control access to view this data, see [Setup the Live Data (preview)](container-insights-livedata-setup.md). While you review cluster resources, you can see this data from the container in real-time. For more information about this feature, see [How to view Kubernetes logs, events, and pod metrics in real time](container-insights-livedata-overview.md). To view Kubernetes log data stored in your workspace based on pre-defined log searches, select **View container logs** from the **View in analytics** drop-down list. For additional information about this topic, see [Search logs to analyze data](container-insights-log-search.md#search-logs-to-analyze-data).
+As you expand the objects in the hierarchy, the properties pane updates based on the object selected. From the pane, you also can view Kubernetes container logs (stdout/stderror), events, and pod metrics by selecting the **View live data (preview)** link at the top of the pane. For more information about the configuration required to grant and control access to view this data, see [Setup the Live Data (preview)](container-insights-livedata-setup.md). While you review cluster resources, you can see this data from the container in real-time. For more information about this feature, see [How to view Kubernetes logs, events, and pod metrics in real time](container-insights-livedata-overview.md). To view Kubernetes log data stored in your workspace based on pre-defined log searches, select **View container logs** from the **View in analytics** drop-down list. For additional information about this topic, see [How to query logs from Container insights](container-insights-log-query.md).
Use the **+ Add Filter** option at the top of the page to filter the results for the view by **Service**, **Node**, **Namespace**, or **Node Pool**. After you select the filter scope, select one of the values shown in the **Select value(s)** field. After the filter is configured, it's applied globally while viewing any perspective of the AKS cluster. The formula only supports the equal sign. You can add additional filters on top of the first one to further narrow your results. For example, if you specify a filter by **Node**, you can only select **Service** or **Namespace** for the second filter.
Azure Network Policy Manager includes informative Prometheus metrics that allow
## Workbooks
-Workbooks combine text, log queries, metrics, and parameters into rich interactive reports that allow you to analyze cluster performance. See [Workbooks in Container insights](../insights/container-insights-reports.md) for a description of the workbooks available for Container insights.
+Workbooks combine text, log queries, metrics, and parameters into rich interactive reports that allow you to analyze cluster performance. See [Workbooks in Container insights](container-insights-reports.md) for a description of the workbooks available for Container insights.
## Next steps - Review [Create performance alerts with Container insights](./container-insights-log-alerts.md) to learn how to create alerts for high CPU and memory utilization to support your DevOps or operational processes and procedures. -- View [log query examples](container-insights-log-search.md#search-logs-to-analyze-data) to see predefined queries and examples to evaluate or customize to alert, visualize, or analyze your clusters.
+- View [log query examples](container-insights-log-query.md) to see predefined queries and examples to evaluate or customize to alert, visualize, or analyze your clusters.
- View [monitor cluster health](./container-insights-overview.md) to learn about viewing the health status your Kubernetes cluster.
azure-monitor Container Insights Cost https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/containers/container-insights-cost.md
The following is a summary of what types of data are collected from a Kubernetes
- Active scraping of Prometheus metrics -- [Diagnostic log collection](../../aks/view-control-plane-logs.md) of Kubernetes master node logs in your AKS cluster to analyze log data generated by master components such as the *kube-apiserver* and *kube-controller-manager*.
+- [Diagnostic log collection](../../aks/monitor-aks.md#configure-monitoring) of Kubernetes master node logs in your AKS cluster to analyze log data generated by master components such as the *kube-apiserver* and *kube-controller-manager*.
## What is collected from Kubernetes clusters
If you enabled monitoring of an AKS cluster configured as follows,
- Five Kubernetes services (includes kube-system pods, services, and namespace) - Collection frequency = 60 secs (default)
-You can see the tables and volume of data generated per hour in the assigned Log Analytics workspace. For more information about each of these tables, see [Container records](container-insights-log-search.md#container-records).
+You can see the tables and volume of data generated per hour in the assigned Log Analytics workspace. For more information about each of these tables, see [Azure Monitor Logs tables](../../aks/monitor-aks-reference.md#azure-monitor-logs-tables).
|Table | Size estimate (MB/hour) | |||
This workbook helps you to visualize the source of your data without having to b
- Billable data ingested by Container logs(application logs) - Billable container logs data ingested per by Kubernetes namespace - Billable container logs data ingested segregated by Cluster name-- Billable container log data ingested by logsource entry
+- Billable container log data ingested by log source entry
- Billable diagnostic data ingested by diagnostic master node logs [![Data usage workbook](media/container-insights-cost/data-usage-workbook.png)](media/container-insights-cost/data-usage-workbook.png#lightbox)
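If you prefer a query over the workbook, the following is a hedged sketch of how you might check billable volume per table yourself in the cluster's Log Analytics workspace; the one-day window is an arbitrary choice.

```kusto
// Billable data volume (GB) ingested per table over the last day.
Usage
| where TimeGenerated > ago(1d)
| where IsBillable == true
| summarize IngestedGB = sum(Quantity) / 1024.0 by DataType
| order by IngestedGB desc
```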
azure-monitor Container Insights Gpu Monitoring https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/containers/container-insights-gpu-monitoring.md
Container insights automatically starts monitoring GPU usage on nodes, and GPU r
## GPU performance charts
-Container insights includes pre-configured charts for the metrics listed earlier in the table as a GPU workbook for every cluster. See [Workbooks in Container insights](../insights/container-insights-reports.md) for a description of the workbooks available for Container insights.
+Container insights includes pre-configured charts for the metrics listed earlier in the table as a GPU workbook for every cluster. See [Workbooks in Container insights](container-insights-reports.md) for a description of the workbooks available for Container insights.
## Next steps
azure-monitor Container Insights Livedata Deployments https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/containers/container-insights-livedata-deployments.md
You can also filter by namespace or cluster level events. To learn more about th
- To continue learning how to use Azure Monitor and monitor other aspects of your AKS cluster, see [View Azure Kubernetes Service health](container-insights-analyze.md). -- View [log query examples](container-insights-log-search.md#search-logs-to-analyze-data) to see predefined queries and examples to create alerts, visualizations, or perform further analysis of your clusters.
+- View [log query examples](container-insights-log-query.md) to see predefined queries and examples to create alerts, visualizations, or perform further analysis of your clusters.
azure-monitor Container Insights Livedata Metrics https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/containers/container-insights-livedata-metrics.md
This performance chart maps to an equivalent of invoking `kubectl get pods --al`
## Next steps
-View [log query examples](container-insights-log-search.md#search-logs-to-analyze-data) to see predefined queries and examples to create alerts, visualizations, or perform further analysis of your clusters.
+View [log query examples](container-insights-log-query.md) to see predefined queries and examples to create alerts, visualizations, or perform further analysis of your clusters.
azure-monitor Container Insights Livedata Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/containers/container-insights-livedata-overview.md
You can view real-time log data as they are generated by the container engine fr
4. Select an object from the performance grid, and on the properties pane found on the right side, select **View live data** option. If the AKS cluster is configured with single sign-on using Azure AD, you are prompted to authenticate on first use during that browser session. Select your account and complete authentication with Azure. >[!NOTE]
- >When viewing the data from your Log Analytics workspace by selecting the **View in analytics** option from the properties pane, the log search results will potentially show **Nodes**, **Daemon Sets**, **Replica Sets**, **Jobs**, **Cron Jobs**, **Pods**, and **Containers** which may no longer exist. Attempting to search logs for a container which isn't available in `kubectl` will also fail here. Review the [View in analytics](container-insights-log-search.md#search-logs-to-analyze-data) feature to learn more about viewing historical logs, events and metrics.
+ >When viewing the data from your Log Analytics workspace by selecting the **View in analytics** option from the properties pane, the log search results will potentially show **Nodes**, **Daemon Sets**, **Replica Sets**, **Jobs**, **Cron Jobs**, **Pods**, and **Containers** which may no longer exist. Attempting to search logs for a container which isn't available in `kubectl` will also fail here. Review the [How to query logs from Container insights](container-insights-log-query.md) feature to learn more about viewing historical logs, events and metrics.
After successfully authenticating, the Live Data console pane will appear below the performance data grid where you can view log data in a continuous stream. If the fetch status indicator shows a green check mark, which is on the far right of the pane, it means data can be retrieved and it begins streaming to your console.
You can view real-time event data as they are generated by the container engine
4. Select an object from the performance grid, and on the properties pane found on the right side, select **View live data** option. If the AKS cluster is configured with single sign-on using Azure AD, you are prompted to authenticate on first use during that browser session. Select your account and complete authentication with Azure. >[!NOTE]
- >When viewing the data from your Log Analytics workspace by selecting the **View in analytics** option from the properties pane, the log search results will potentially show **Nodes**, **Daemon Sets**, **Replica Sets**, **Jobs**, **Cron Jobs**, **Pods**, and **Containers** which may no longer exist. Attempting to search logs for a container which isn't available in `kubectl` will also fail here. Review the [View in analytics](container-insights-log-search.md#search-logs-to-analyze-data) feature to learn more about viewing historical logs, events and metrics.
+ >When viewing the data from your Log Analytics workspace by selecting the **View in analytics** option from the properties pane, the log search results will potentially show **Nodes**, **Daemon Sets**, **Replica Sets**, **Jobs**, **Cron Jobs**, **Pods**, and **Containers** which may no longer exist. Attempting to search logs for a container which isn't available in `kubectl` will also fail here. Review the [How to query logs from Container insights](container-insights-log-query.md) feature to learn more about viewing historical logs, events and metrics.
After successfully authenticating, the Live Data console pane will appear below the performance data grid. If the fetch status indicator shows a green check mark, which is on the far right of the pane, it means data can be retrieved and it begins streaming to your console.
You can view real-time metric data as they are generated by the container engine
4. Select a **Pod** object from the performance grid, and on the properties pane found on the right side, select **View live data** option. If the AKS cluster is configured with single sign-on using Azure AD, you are prompted to authenticate on first use during that browser session. Select your account and complete authentication with Azure. >[!NOTE]
- >When viewing the data from your Log Analytics workspace by selecting the **View in analytics** option from the properties pane, the log search results will potentially show **Nodes**, **Daemon Sets**, **Replica Sets**, **Jobs**, **Cron Jobs**, **Pods**, and **Containers** which may no longer exist. Attempting to search logs for a container which isn't available in `kubectl` will also fail here. Review the [View in analytics](container-insights-log-search.md#search-logs-to-analyze-data) feature to learn more about viewing historical logs, events and metrics.
+ >When viewing the data from your Log Analytics workspace by selecting the **View in analytics** option from the properties pane, the log search results will potentially show **Nodes**, **Daemon Sets**, **Replica Sets**, **Jobs**, **Cron Jobs**, **Pods**, and **Containers** which may no longer exist. Attempting to search logs for a container which isn't available in `kubectl` will also fail here. Review [How to query logs from Container insights](container-insights-log-query.md) to learn more about viewing historical logs, events and metrics.
After successfully authenticating, the Live Data console pane will appear below the performance data grid. Metric data is retrieved and begins streaming to your console for presentation in the two charts. The pane title shows the name of the pod the container is grouped with.
To suspend autoscroll and control the behavior of the pane, allowing you to manu
- To continue learning how to use Azure Monitor and monitor other aspects of your AKS cluster, see [View Azure Kubernetes Service health](container-insights-analyze.md). -- View [log query examples](container-insights-log-search.md#search-logs-to-analyze-data) to see predefined queries and examples to create alerts, visualizations, or perform further analysis of your clusters.
+- View [How to query logs from Container insights](container-insights-log-query.md) to see predefined queries and examples to create alerts, visualizations, or perform further analysis of your clusters.
azure-monitor Container Insights Log Alerts https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/containers/container-insights-log-alerts.md
Title: Log alerts from Container insights | Microsoft Docs description: This article describes how to create custom log alerts for memory and CPU utilization from Container insights. Previously updated : 01/05/2021 Last updated : 07/29/2021
Container insights monitors the performance of container workloads that are depl
To alert for high CPU or memory utilization, or low free disk space on cluster nodes, use the queries that are provided to create a metric alert or a metric measurement alert. While metric alerts have lower latency than log alerts, log alerts provide advanced querying and greater sophistication. Log alert queries compare a datetime to the present by using the *now* operator and going back one hour. (Container insights stores all dates in Coordinated Universal Time (UTC) format.)
+> [!IMPORTANT]
+> Most alert rules have a cost that's dependent on the type of rule, how many dimensions it includes, and how frequently it's run. Refer to **Alert rules** in [Azure Monitor pricing](https://azure.microsoft.com/pricing/details/monitor/) before you create any alert rules.
+ If you're not familiar with Azure Monitor alerts, see [Overview of alerts in Microsoft Azure](../alerts/alerts-overview.md) before you start. To learn more about alerts that use log queries, see [Log alerts in Azure Monitor](../alerts/alerts-unified-log.md). For more about metric alerts, see [Metric alerts in Azure Monitor](../alerts/alerts-metric-overview.md).
-## Resource utilization log search queries
+## Log query measurements
+Log query alerts can perform two different measurements of the result of a log query, each of which supports distinct scenarios for monitoring virtual machines.
+
+[Metric measurement](../alerts/alerts-unified-log.md#calculation-of-measure-based-on-a-numeric-column-such-as-cpu-counter-value) creates a separate alert for each record in the query results that has a numeric value that exceeds a threshold defined in the alert rule. These alerts are ideal for numeric data such as CPU utilization.
+
+[Number of results](../alerts/alerts-unified-log.md#count-of-the-results-table-rows) creates a single alert when a query returns at least a specified number of records. These alerts are ideal for non-numeric data, or for analyzing performance trends across multiple computers. You may also choose this strategy if you want to minimize your number of alerts or possibly create an alert only when multiple components have the same error condition.
+
+> [!NOTE]
+> Resource-centric log alert rules, currently in public preview, will simplify log query alerts and replace the functionality currently provided by metric measurement queries. You can use the AKS cluster as a target for the rule, which will better identify it as the affected resource. When resource-centric log query alerts become generally available, the guidance in this scenario will be updated.
-The queries in this section support each alerting scenario. They're used in step 7 of the [create alert](#create-an-alert-rule) section of this article.
+## Create a log query alert rule
+[Comparison of log query alert measures](../vm/monitor-virtual-machine-alerts.md#comparison-of-log-query-alert-measures) provides a complete walkthrough of log query alert rules for each type of measurement, including a comparison of the log queries supporting each. You can use these same processes to create alert rules for AKS clusters using queries similar to the ones in this article.
-The following query calculates average CPU utilization as an average of member nodes' CPU utilization every minute.
+## Resource utilization
+
+**Average CPU utilization as an average of member nodes' CPU utilization every minute (metric measurement)**
```kusto let endDateTime = now();
KubeNodeInventory
| summarize AggregatedValue = avg(UsagePercent) by bin(TimeGenerated, trendBinSize), ClusterName ```
-The following query calculates average memory utilization as an average of member nodes' memory utilization every minute.
+**Average memory utilization as an average of member nodes' memory utilization every minute (metric measurement)**
```kusto let endDateTime = now();
KubeNodeInventory
| project ClusterName, Computer, TimeGenerated, UsagePercent = UsageValue * 100.0 / LimitValue | summarize AggregatedValue = avg(UsagePercent) by bin(TimeGenerated, trendBinSize), ClusterName ```++ >[!IMPORTANT] >The following queries use the placeholder values \<your-cluster-name> and \<your-controller-name> to represent your cluster and controller. Replace them with values specific to your environment when you set up alerts.
-The following query calculates the average CPU utilization of all containers in a controller as an average of CPU utilization of every container instance in a controller every minute. The measurement is a percentage of the limit set up for a container.
+**Average CPU utilization of all containers in a controller as an average of CPU utilization of every container instance in a controller every minute (metric measurement)**
```kusto let endDateTime = now();
KubePodInventory
| summarize AggregatedValue = avg(UsagePercent) by bin(TimeGenerated, trendBinSize) , ContainerName ```
-The following query calculates the average memory utilization of all containers in a controller as an average of memory utilization of every container instance in a controller every minute. The measurement is a percentage of the limit set up for a container.
+**Average memory utilization of all containers in a controller as an average of memory utilization of every container instance in a controller every minute (metric measurement)**
```kusto let endDateTime = now();
KubePodInventory
| summarize AggregatedValue = avg(UsagePercent) by bin(TimeGenerated, trendBinSize) , ContainerName ```
-The following query returns all nodes and counts that have a status of *Ready* and *NotReady*.
+## Resource availability
+
+**Nodes and counts that have a status of Ready and NotReady (metric measurement)**
```kusto let endDateTime = now();
InsightsMetrics
| where AggregatedValue >= 90 ```
-## Create an alert rule
-This section walks through the creation of a metric measurement alert rule using performance data from Container insights. You can use this basic process with a variety of log queries to alert on different performance counters. Use one of the log search queries provided earlier to start with. To create using an ARM template, see [Samples of Log alert creation using Azure Resource Template](../alerts/alerts-log-create-templates.md).
->[!NOTE]
->The following procedure to create an alert rule for container resource utilization requires you to switch to a new log alerts API as described in [Switch API preference for log alerts](../alerts/alerts-log-api-switch.md).
->
-
-1. Sign in to the [Azure portal](https://portal.azure.com).
-2. In the Azure portal, search for and select **Log Analytics workspaces**.
-3. In your list of Log Analytics workspaces, select the workspace supporting Container insights.
-4. In the pane on the left side, select **Logs** to open the Azure Monitor logs page. You use this page to write and execute Azure log queries.
-5. On the **Logs** page, paste one of the [queries](#resource-utilization-log-search-queries) provided earlier into the **Search query** field and then select **Run** to validate the results. If you do not perform this step, the **+New alert** option is not available to select.
-6. Select **+New alert** to create a log alert.
-7. In the **Condition** section, select the **Whenever the Custom log search is \<logic undefined>** pre-defined custom log condition. The **custom log search** signal type is automatically selected because we're creating an alert rule directly from the Azure Monitor logs page.
-8. Paste one of the [queries](#resource-utilization-log-search-queries) provided earlier into the **Search query** field.
-9. Configure the alert as follows:
-
- 1. From the **Based on** drop-down list, select **Metric measurement**. A metric measurement creates an alert for each object in the query that has a value above our specified threshold.
- 1. For **Condition**, select **Greater than**, and enter **75** as an initial baseline **Threshold** for the CPU and memory utilization alerts. For the low disk space alert, enter **90**. Or enter a different value that meets your criteria.
- 1. In the **Trigger Alert Based On** section, select **Consecutive breaches**. From the drop-down list, select **Greater than**, and enter **2**.
- 1. To configure an alert for container CPU or memory utilization, under **Aggregate on**, select **ContainerName**. To configure for cluster node low disk alert, select **ClusterId**.
- 1. In the **Evaluated based on** section, set the **Period** value to **60 minutes**. The rule will run every 5 minutes and return records that were created within the last hour from the current time. Setting the time period to a wide window accounts for potential data latency. It also ensures that the query returns data to avoid a false negative in which the alert never fires.
-
-10. Select **Done** to complete the alert rule.
-11. Enter a name in the **Alert rule name** field. Specify a **Description** that provides details about the alert. And select an appropriate severity level from the options provided.
-12. To immediately activate the alert rule, accept the default value for **Enable rule upon creation**.
-13. Select an existing **Action Group** or create a new group. This step ensures that the same actions are taken every time that an alert is triggered. Configure based on how your IT or DevOps operations team manages incidents.
-14. Select **Create alert rule** to complete the alert rule. It starts running immediately.
+**Individual container restarts (number of results)**<br>
+Alerts when the individual system container restart count exceeds a threshold for the last 10 minutes.
+
+
+```kusto
+let _threshold = 10m;
+let _alertThreshold = 2;
+let Timenow = (now() - _threshold);
+let starttime = ago(5m);
+KubePodInventory
+| where TimeGenerated >= starttime
+| where Namespace in ('default', 'kube-system') // the namespace filter goes here
+| where ContainerRestartCount > _alertThreshold
+| extend Tags = todynamic(ContainerLastStatus)
+| extend startedAt = todatetime(Tags.startedAt)
+| where startedAt >= Timenow
+| summarize arg_max(TimeGenerated, *) by Name
+```
## Next steps -- View [log query examples](container-insights-log-search.md#search-logs-to-analyze-data) to see pre-defined queries and examples to evaluate or customize for alerting, visualizing, or analyzing your clusters.
+- View [log query examples](container-insights-log-query.md) to see pre-defined queries and examples to evaluate or customize for alerting, visualizing, or analyzing your clusters.
- To learn more about Azure Monitor and how to monitor other aspects of your Kubernetes cluster, see [View Kubernetes cluster performance](container-insights-analyze.md) and [View Kubernetes cluster health](./container-insights-overview.md).
azure-monitor Container Insights Log Query https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/containers/container-insights-log-query.md
+
+ Title: How to query logs from Container insights
+description: Container insights collects metrics and log data and this article describes the records and includes sample queries.
+ Last updated : 07/19/2021+++
+# How to query logs from Container insights
+
+Container insights collects performance metrics, inventory data, and health state information from container hosts and containers. The data is collected every three minutes and forwarded to the Log Analytics workspace in Azure Monitor where it's available for [log queries](../logs/log-query-overview.md) using [Log Analytics](../logs/log-analytics-overview.md) in Azure Monitor. You can apply this data to scenarios that include migration planning, capacity analysis, discovery, and on-demand performance troubleshooting. Azure Monitor Logs can help you look for trends, diagnose bottlenecks, forecast, or correlate data that can help you determine whether the current cluster configuration is performing optimally.
+
+See [Using queries in Azure Monitor Log Analytics](../logs/queries.md) for information on using these queries and [Log Analytics tutorial](../logs/log-analytics-tutorial.md) for a complete tutorial on using Log Analytics to run queries and work with their results.
+
+## Open Log Analytics
+There are multiple options for starting Log Analytics, each starting with a different [scope](../logs/scope.md). For access to all data in the workspace, select **Logs** from the **Monitor** menu. To limit the data to a single Kubernetes cluster, select **Logs** from that cluster's menu.
++
+## Existing log queries
+You don't necessarily need to understand how to write a log query to use Log Analytics. There are multiple prebuilt queries that you can select and either run without modification or use as a start to a custom query. Click **Queries** at the top of the Log Analytics screen and view queries with a **Resource type** of **Kubernetes Services**.
++
+## Container tables
+See [Azure Monitor table reference](/azure/azure-monitor/reference/tables/tables-resourcetype#kubernetes-services) for a list of tables and their detailed descriptions used by Container insights. All of these tables are available for log queries.
++
+## Example log queries
+It's often useful to build queries that start with an example or two and then modify them to fit your requirements. To help build more advanced queries, you can experiment with the following sample queries:
+
+### List all of a container's lifecycle information
+
+```kusto
+ContainerInventory
+| project Computer, Name, Image, ImageTag, ContainerState, CreatedTime, StartedTime, FinishedTime
+| render table
+```
+
+### Kubernetes events
+
+``` kusto
+KubeEvents_CL
+| where not(isempty(Namespace_s))
+| sort by TimeGenerated desc
+| render table
+```
+### Image inventory
+
+``` kusto
+ContainerImageInventory
+| summarize AggregatedValue = count() by Image, ImageTag, Running
+```
+
+### Container CPU
+
+``` kusto
+Perf
+| where ObjectName == "K8SContainer" and CounterName == "cpuUsageNanoCores"
+| summarize AvgCPUUsageNanoCores = avg(CounterValue) by bin(TimeGenerated, 30m), InstanceName
+```
+
+### Container memory
+
+```kusto
+Perf
+| where ObjectName == "K8SContainer" and CounterName == "memoryRssBytes"
+| summarize AvgUsedRssMemoryBytes = avg(CounterValue) by bin(TimeGenerated, 30m), InstanceName
+```
+
+### Requests Per Minute with Custom Metrics
+
+```kusto
+InsightsMetrics
+| where Name == "requests_count"
+| summarize Val=any(Val) by TimeGenerated=bin(TimeGenerated, 1m)
+| sort by TimeGenerated asc
+| project RequestsPerMinute = Val - prev(Val), TimeGenerated
+| render barchart
+```
+### Pods by name and namespace