Updates from: 06/29/2022 01:09:07
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-b2c Configure Authentication In Azure Static App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/configure-authentication-in-azure-static-app.md
+
+ Title: Configure authentication in an Azure Static Web App by using Azure Active Directory B2C
+description: This article discusses how to use Azure Active Directory B2C to sign in and sign up users in an Azure Static Web App.
+ Last updated: 06/28/2022
+# Configure authentication in an Azure Static Web App by using Azure AD B2C
+
+This article explains how to add Azure Active Directory B2C (Azure AD B2C) authentication functionality to an Azure Static Web App. For more information, check out the [Custom authentication in Azure Static Web Apps](../static-web-apps/authentication-custom.md) article.
+
+## Overview
+
+OpenID Connect (OIDC) is an authentication protocol that's built on OAuth 2.0. Use OIDC to securely sign users in to an Azure Static Web App. The sign-in flow involves the following steps:
+
+1. Users go to the Azure Static Web App and select **Sign-in**.
+1. The Azure Static Web App initiates an authentication request and redirects users to Azure AD B2C.
+1. Users [sign up or sign in](add-sign-up-and-sign-in-policy.md) and [reset the password](add-password-reset-policy.md). Alternatively, they can sign in with a [social account](add-identity-provider.md).
+1. After users sign in successfully, Azure AD B2C returns an ID token to the Azure Static Web App.
+1. The Azure Static Web App validates the ID token, reads the claims, and returns a secure page to users.
+
+When the ID token expires or the app session is invalidated, the Azure Static Web App initiates a new authentication request and redirects users to Azure AD B2C. If the Azure AD B2C [SSO session](session-behavior.md) is active, Azure AD B2C issues an access token without prompting users to sign in again. If the Azure AD B2C session expires or becomes invalid, users are prompted to sign in again.
+
+## Prerequisites
+
+- If you haven't created an app yet, follow the guidance on how to create an [Azure Static Web App](../static-web-apps/overview.md).
+- Familiarize yourself with the Azure Static Web App [staticwebapp.config.json](../static-web-apps/configuration.md) configuration file.
+- Familiarize yourself with the Azure Static Web App [App Settings](../static-web-apps/application-settings.md).
+
+## Step 1: Configure your user flow
++
+## Step 2: Register a web application
+
+To enable your application to sign in with Azure AD B2C, register your app in the Azure AD B2C directory. The app that you register establishes a trust relationship between the app and Azure AD B2C.
+
+During app registration, you specify a *redirect URI*. The redirect URI is the endpoint to which users are redirected by Azure AD B2C after they authenticate with Azure AD B2C. The app registration process generates an *application ID*, also known as the *client ID*, that uniquely identifies your app. After your app is registered, Azure AD B2C uses both the application ID and the redirect URI to create authentication requests. You also create a *client secret*, which your app uses to securely acquire the tokens.
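+
+To make the flow concrete, the following minimal Python sketch builds the shape of the OIDC authorization request that Azure AD B2C receives. The tenant, policy, client ID, and redirect URI values are hypothetical placeholders; in this article, the hosting platform builds this request for you.
+
+```python
+# Hypothetical placeholder values; your tenant, policy, client ID, and
+# redirect URI come from your own registration.
+from urllib.parse import urlencode
+
+tenant = "contoso"
+policy = "B2C_1_signupsignin1"
+params = {
+    "client_id": "00000000-0000-0000-0000-000000000000",  # Application (client) ID
+    "response_type": "code",
+    "response_mode": "query",
+    "redirect_uri": "https://witty-island-11111111.azurestaticapps.net/.auth/login/aadb2c/callback",
+    "scope": "openid offline_access",
+}
+
+# Azure AD B2C authorization endpoints are policy-specific.
+authorize_url = (
+    f"https://{tenant}.b2clogin.com/{tenant}.onmicrosoft.com/{policy}"
+    f"/oauth2/v2.0/authorize?{urlencode(params)}"
+)
+print(authorize_url)
+```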
+
+### Step 2.1: Register the app
+
+To register your application, follow these steps:
+
+1. Sign in to the [Azure portal](https://portal.azure.com).
+1. Make sure you're using the directory that contains your Azure AD B2C tenant. Select the **Directories + subscriptions** icon in the portal toolbar.
+1. On the **Portal settings | Directories + subscriptions** page, find your Azure AD B2C directory in the **Directory name** list, and then select **Switch**.
+1. In the Azure portal, search for and select **Azure AD B2C**.
+1. Select **App registrations**, and then select **New registration**.
+1. Under **Name**, enter a name for the application (for example, *My Azure Static web app*).
+1. Under **Supported account types**, select **Accounts in any identity provider or organizational directory (for authenticating users with user flows)**.
+1. Under **Redirect URI**, select **Web** and then, in the URL box, enter `https://<YOUR_SITE>/.auth/login/aadb2c/callback`. Replace `<YOUR_SITE>` with your Azure Static Web App name. For example: `https://witty-island-11111111.azurestaticapps.net/.auth/login/aadb2c/callback`. If you configured a [custom domain for your Azure Static Web App](../static-web-apps/custom-domain.md), use the custom domain in the redirect URI. For example, `https://www.example.com/.auth/login/aadb2c/callback`.
+1. Under **Permissions**, select the **Grant admin consent to openid and offline access permissions** checkbox.
+1. Select **Register**.
+1. Select **Overview**.
+1. Record the **Application (client) ID** for later use, when you configure the web application.
+
+ ![Screenshot of the web app Overview page for recording your web application I D.](./media/configure-authentication-in-azure-static-app/get-azure-ad-b2c-app-id.png)
+
+### Step 2.2: Create a client secret
+
+1. On the **Azure AD B2C - App registrations** page, select the application you created (for example, *My Azure Static web app*).
+1. In the left menu, under **Manage**, select **Certificates & secrets**.
+1. Select **New client secret**.
+1. Enter a description for the client secret in the **Description** box. For example, *clientsecret1*.
+1. Under **Expires**, select a duration for which the secret is valid, and then select **Add**.
+1. Record the secret's **Value**. You use this value as the application secret in your application's code. The secret value is never displayed again after you leave this page.
+
+## Step 3: Configure the Azure Static Web App
+
+Once the application is registered with Azure AD B2C, store the app's client ID and client secret in the Azure Static Web App's [application settings](../static-web-apps/application-settings.md). You can configure application settings via the Azure portal or with the Azure CLI. For more information, check out the [Configure application settings for Azure Static Web Apps](../static-web-apps/application-settings.md#configure-application-settings) article.
+
+Add the following keys to the app settings:
+
+| Setting Name | Value |
+| | |
+| `AADB2C_PROVIDER_CLIENT_ID` | The Web App (client) ID from [step 2.1](#step-21-register-the-app). |
+| `AADB2C_PROVIDER_CLIENT_SECRET` | The Web App (client) secret from [step 2.2](#step-22-create-a-client-secret). |
+
+> [!IMPORTANT]
+> Application secrets are sensitive security credentials. Don't share the secret with anyone, distribute it within a client application, or check it into source control.
+
+### 3.1 Add an OpenID Connect identity provider
+
+Once you've added the app ID and secret, use the following steps to add Azure AD B2C as an OpenID Connect identity provider:
+
+1. Add an `auth` section to the [configuration file](../static-web-apps/configuration.md) with a configuration block for the OIDC providers and your provider definition.
+
+ ```json
+ {
+ "auth": {
+ "identityProviders": {
+ "customOpenIdConnectProviders": {
+ "aadb2c": {
+ "registration": {
+ "clientIdSettingName": "AADB2C_PROVIDER_CLIENT_ID",
+ "clientCredential": {
+ "clientSecretSettingName": "AADB2C_PROVIDER_CLIENT_SECRET"
+ },
+ "openIdConnectConfiguration": {
+ "wellKnownOpenIdConfiguration": "https://<TENANT_NAME>.b2clogin.com/<TENANT_NAME>.onmicrosoft.com/<POLICY_NAME>/v2.0/.well-known/openid-configuration"
+ }
+ },
+ "login": {
+ "nameClaimType": "http://schemas.xmlsoap.org/ws/2005/05/identity/claims/name",
+ "scopes": [],
+ "loginParameterNames": []
+ }
+ }
+ }
+ }
+ }
+ }
+ ```
+
+1. Replace `<TENANT_NAME>` with the first part of your Azure AD B2C [tenant name](tenant-management.md#get-your-tenant-name), so that the URL reads, for example, `https://contoso.b2clogin.com/contoso.onmicrosoft.com`.
+1. Replace `<POLICY_NAME>` with the user flow or custom policy you created in [step 1](#step-1-configure-your-user-flow). You can verify the resulting metadata URL with the sketch below.
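+
+Optionally, verify the resulting metadata URL before you deploy. The following minimal Python sketch assumes a hypothetical `contoso` tenant and `B2C_1_signupsignin1` user flow; it fetches the document and prints two standard OIDC metadata fields:
+
+```python
+# Sanity-check the well-known configuration endpoint.
+# Hypothetical tenant and user flow names; substitute your own values.
+import requests
+
+url = (
+    "https://contoso.b2clogin.com/contoso.onmicrosoft.com/"
+    "B2C_1_signupsignin1/v2.0/.well-known/openid-configuration"
+)
+metadata = requests.get(url, timeout=10).json()
+print(metadata["issuer"])                  # the token issuer for this policy
+print(metadata["authorization_endpoint"])  # where users are redirected to sign in
+```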
+
+## Step 4: Check the Azure Static Web App
+
+1. Navigate to `/.auth/login/aadb2c`. The `/.auth/login` path is the Azure Static Web App's login endpoint, and `aadb2c` refers to your [OpenID Connect identity provider](#31-add-an-openid-connect-identity-provider). The following URL demonstrates an Azure Static Web App login endpoint: `https://witty-island-11111111.azurestaticapps.net/.auth/login/aadb2c`.
+1. Complete the sign up or sign in process.
+1. In your browser debugger, [run the following JavaScript in the Console](/microsoft-edge/devtools-guide-chromium/console/console-javascript). The JavaScript code presents information about the signed-in user.
+
+ ```javascript
+ async function getUserInfo() {
+ const response = await fetch('/.auth/me');
+ const payload = await response.json();
+ const { clientPrincipal } = payload;
+ return clientPrincipal;
+ }
+
+ await getUserInfo();
+ ```
++
+> [!TIP]
+> If you can't run the JavaScript code above in your browser, navigate to `https://<YOUR_SITE>.azurestaticapps.net/.auth/me`. Replace `<YOUR_SITE>` with your Azure Static Web App name.
+
+## Next steps
+
+* After successful authentication, you can show the user's display name on the navigation bar. To view the claims that the Azure AD B2C token returns to your app, check out [Accessing user information in Azure Static Web Apps](../static-web-apps/user-information.md).
+* Learn how to [customize and enhance the Azure AD B2C authentication experience for your web app](enable-authentication-azure-static-app-options.md).
active-directory-b2c Configure Authentication In Azure Web App File Based https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/configure-authentication-in-azure-web-app-file-based.md
+
+ Title: Configure authentication in an Azure Web App configuration file by using Azure Active Directory B2C
+description: This article discusses how to use Azure Active Directory B2C to sign in and sign up users in an Azure Web App using a configuration file.
+ Last updated: 06/28/2022
+# Configure authentication in an Azure Web App configuration file by using Azure AD B2C
+
+This article explains how to add Azure Active Directory B2C (Azure AD B2C) authentication functionality to an Azure Web App. For more information, check out the [File-based configuration in Azure App Service authentication](../app-service/configure-authentication-file-based.md) article.
+
+## Overview
+
+OpenID Connect (OIDC) is an authentication protocol that's built on OAuth 2.0. Use OIDC to securely sign users in to an Azure Web App. The sign-in flow involves the following steps:
+
+1. Users go to the Azure Web App and select **Sign-in**.
+1. The Azure Web App initiates an authentication request and redirects users to Azure AD B2C.
+1. Users [sign up or sign in](add-sign-up-and-sign-in-policy.md) and [reset the password](add-password-reset-policy.md). Alternatively, they can sign in with a [social account](add-identity-provider.md).
+1. After users sign in successfully, Azure AD B2C returns an ID token to the Azure Web App.
+1. The Azure Web App validates the ID token, reads the claims, and returns a secure page to users.
+
+When the ID token expires or the app session is invalidated, the Azure Web App initiates a new authentication request and redirects users to Azure AD B2C. If the Azure AD B2C [SSO session](session-behavior.md) is active, Azure AD B2C issues an access token without prompting users to sign in again. If the Azure AD B2C session expires or becomes invalid, users are prompted to sign in again.
+
+## Prerequisites
+
+- If you haven't created an app yet, follow the guidance on how to create an [Azure Web App](../app-service/quickstart-dotnetcore.md).
+
+## Step 1: Configure your user flow
++
+## Step 2: Register a web application
+
+To enable your application to sign in with Azure AD B2C, register your app in the Azure AD B2C directory. The app that you register establishes a trust relationship between the app and Azure AD B2C.
+
+During app registration, you'll specify the *redirect URI*. The redirect URI is the endpoint to which users are redirected by Azure AD B2C after they authenticate with Azure AD B2C. The app registration process generates an *application ID*, also known as the *client ID*, that uniquely identifies your app. After your app is registered, Azure AD B2C uses both the application ID and the redirect URI to create authentication requests. You also create a client secret, which your app uses to securely acquire the tokens.
+
+### Step 2.1: Register the app
+
+To register your application, follow these steps:
+
+1. Sign in to the [Azure portal](https://portal.azure.com).
+1. Make sure you're using the directory that contains your Azure AD B2C tenant. Select the **Directories + subscriptions** icon in the portal toolbar.
+1. On the **Portal settings | Directories + subscriptions** page, find your Azure AD B2C directory in the **Directory name** list, and then select **Switch**.
+1. In the Azure portal, search for and select **Azure AD B2C**.
+1. Select **App registrations**, and then select **New registration**.
+1. Under **Name**, enter a name for the application (for example, *My Azure web app*).
+1. Under **Supported account types**, select **Accounts in any identity provider or organizational directory (for authenticating users with user flows)**.
+1. Under **Redirect URI**, select **Web** and then, in the URL box, enter `https://<YOUR_SITE>/.auth/login/aadb2c/callback`. Replace `<YOUR_SITE>` with your Azure Web App name. For example: `https://contoso.azurewebsites.net/.auth/login/aadb2c/callback`. If you configured a [custom domain for your Azure Web App](../app-service/app-service-web-tutorial-custom-domain.md), use the custom domain in the redirect URI. For example, `https://www.contoso.com/.auth/login/aadb2c/callback`.
+1. Under **Permissions**, select the **Grant admin consent to openid and offline access permissions** checkbox.
+1. Select **Register**.
+1. Select **Overview**.
+1. Record the **Application (client) ID** for later use, when you configure the web application.
+
+ ![Screenshot of the web app Overview page for recording your web application I D.](./media/configure-authentication-in-azure-web-app/get-azure-ad-b2c-app-id.png)
+
+### Step 2.2: Create a client secret
+
+1. On the **Azure AD B2C - App registrations** page, select the application you created (for example, *My Azure web app*).
+1. In the left menu, under **Manage**, select **Certificates & secrets**.
+1. Select **New client secret**.
+1. Enter a description for the client secret in the **Description** box. For example, *clientsecret1*.
+1. Under **Expires**, select a duration for which the secret is valid, and then select **Add**.
+1. Record the secret's **Value**. You use this value as the application secret in your application's code. The secret value is never displayed again after you leave this page.
+
+## Step 3: Configure the Azure Web App
+
+Once the application is registered with Azure AD B2C, store the app's client ID and client secret in the Azure Web App's [application settings](../app-service/configure-common.md#configure-app-settings). You can configure application settings via the Azure portal or with the Azure CLI. For more information, check out the [File-based configuration in Azure App Service authentication](../app-service/configure-authentication-file-based.md) article.
+
+Add the following keys to the app settings:
+
+| Setting Name | Value |
+| | |
+| `AADB2C_PROVIDER_CLIENT_ID` | The Web App (client) ID from [step 2.1](#step-21-register-the-app). |
+| `AADB2C_PROVIDER_CLIENT_SECRET` | The Web App (client) secret from [step 2.2](#step-22-create-a-client-secret). |
+
+> [!IMPORTANT]
+> Application secrets are sensitive security credentials. Don't share the secret with anyone, distribute it within a client application, or check it into source control.
+
+### 3.1 Add an OpenID Connect identity provider
+
+Once you've added the app ID and secret, use the following steps to add Azure AD B2C as an OpenID Connect identity provider:
+
+1. Add an `auth` section to the [configuration file](../app-service/configure-authentication-file-based.md#configuration-file-reference) with a configuration block for the OIDC providers and your provider definition.
+
+ ```json
+ {
+ "auth": {
+ "identityProviders": {
+ "customOpenIdConnectProviders": {
+ "aadb2c": {
+ "registration": {
+ "clientIdSettingName": "AADB2C_PROVIDER_CLIENT_ID",
+ "clientCredential": {
+ "clientSecretSettingName": "AADB2C_PROVIDER_CLIENT_SECRET"
+ },
+ "openIdConnectConfiguration": {
+ "wellKnownOpenIdConfiguration": "https://<TENANT_NAME>.b2clogin.com/<TENANT_NAME>.onmicrosoft.com/<POLICY_NAME>/v2.0/.well-known/openid-configuration"
+ }
+ },
+ "login": {
+ "nameClaimType": "http://schemas.xmlsoap.org/ws/2005/05/identity/claims/name",
+ "scopes": [],
+ "loginParameterNames": []
+ }
+ }
+ }
+ }
+ }
+ }
+ ```
+
+1. Replace `<TENANT_NAME>` with the first part of your Azure AD B2C [tenant name](tenant-management.md#get-your-tenant-name), so that the URL reads, for example, `https://contoso.b2clogin.com/contoso.onmicrosoft.com`.
+1. Replace `<POLICY_NAME>` with the user flow or custom policy you created in [step 1](#step-1-configure-your-user-flow).
+
+## Step 4: Check the Azure Web App
+
+1. Navigate to your Azure Web App.
+1. Complete the sign up or sign in process.
+1. In your browser, navigate to the following URL: `https://<app-name>.azurewebsites.net/.auth/me`. Replace `<app-name>` with your Azure Web App name.
+
+## Retrieve tokens in app code
+
+From your server code, the provider-specific tokens are injected into the request headers, so you can easily access them. The following table shows possible token header names; the sketch after the table shows how to read them:
++
+|Header name |Description |
+|||
+|X-MS-CLIENT-PRINCIPAL-NAME| The user's display name. |
+|X-MS-CLIENT-PRINCIPAL-ID| The ID token sub claim. |
+|X-MS-CLIENT-PRINCIPAL-IDP| The identity provider name, `aadb2c`.|
+|X-MS-TOKEN-AADB2C-ID-TOKEN| The ID token issued by Azure AD B2C. |
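+
+As an illustration, the following minimal Flask sketch reads two of the injected headers; Flask is an illustrative choice here, and any server framework that exposes request headers works the same way:
+
+```python
+# Minimal sketch: read the headers injected into each authenticated request.
+from flask import Flask, request
+
+app = Flask(__name__)
+
+@app.route("/profile")
+def profile():
+    # The headers are absent for anonymous requests, so provide defaults.
+    name = request.headers.get("X-MS-CLIENT-PRINCIPAL-NAME", "guest")
+    idp = request.headers.get("X-MS-CLIENT-PRINCIPAL-IDP", "none")
+    return f"Signed in as {name} via {idp}"
+```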
+
+## Next steps
+
+* After successful authentication, you can show the user's display name on the navigation bar. To view the claims that the Azure AD B2C token returns to your app, check out [Work with user identities in Azure App Service authentication](../app-service/configure-authentication-user-identities.md).
+* Learn how to [Work with OAuth tokens in Azure App Service authentication](../app-service/configure-authentication-oauth-tokens.md).
+
active-directory-b2c Configure Authentication In Azure Web App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/configure-authentication-in-azure-web-app.md
+
+ Title: Configure authentication in an Azure Web App by using Azure Active Directory B2C
+description: This article discusses how to use Azure Active Directory B2C to sign in and sign up users in an Azure Web App.
+ Last updated: 06/28/2022
+# Configure authentication in an Azure Web App by using Azure AD B2C
+
+This article explains how to add Azure Active Directory B2C (Azure AD B2C) authentication functionality to an Azure Web App. For more information, check out the [Configure your App Service or Azure Functions app to sign in using an OpenID Connect provider](../app-service/configure-authentication-provider-openid-connect.md) article.
+
+## Overview
+
+OpenID Connect (OIDC) is an authentication protocol that's built on OAuth 2.0. Use OIDC to securely sign users in to an Azure Web App. The sign-in flow involves the following steps:
+
+1. Users go to the Azure Web App and select **Sign-in**.
+1. The Azure Web App initiates an authentication request and redirects users to Azure AD B2C.
+1. Users [sign up or sign in](add-sign-up-and-sign-in-policy.md) and [reset the password](add-password-reset-policy.md). Alternatively, they can sign in with a [social account](add-identity-provider.md).
+1. After users sign in successfully, Azure AD B2C returns an ID token to the Azure Web App.
+1. The Azure Web App validates the ID token, reads the claims, and returns a secure page to users.
+
+When the ID token expires or the app session is invalidated, the Azure Web App initiates a new authentication request and redirects users to Azure AD B2C. If the Azure AD B2C [SSO session](session-behavior.md) is active, Azure AD B2C issues an access token without prompting users to sign in again. If the Azure AD B2C session expires or becomes invalid, users are prompted to sign in again.
+
+## Prerequisites
+
+- If you haven't created an app yet, follow the guidance on how to create an [Azure Web App](../app-service/quickstart-dotnetcore.md).
+
+## Step 1: Configure your user flow
++
+## Step 2: Register a web application
+
+To enable your application to sign in with Azure AD B2C, register your app in the Azure AD B2C directory. Registering your app establishes a trust relationship between the app and Azure AD B2C.
+
+During app registration, you'll specify the *redirect URI*. The redirect URI is the endpoint to which users are redirected by Azure AD B2C after they authenticate with Azure AD B2C. The app registration process generates an *application ID*, also known as the *client ID*, that uniquely identifies your app. After your app is registered, Azure AD B2C uses both the application ID and the redirect URI to create authentication requests. You also create a client secret, which your app uses to securely acquire the tokens.
+
+### Step 2.1: Register the app
+
+To register your application, follow these steps:
+
+1. Sign in to the [Azure portal](https://portal.azure.com).
+1. Make sure you're using the directory that contains your Azure AD B2C tenant. Select the **Directories + subscriptions** icon in the portal toolbar.
+1. On the **Portal settings | Directories + subscriptions** page, find your Azure AD B2C directory in the **Directory name** list, and then select **Switch**.
+1. In the Azure portal, search for and select **Azure AD B2C**.
+1. Select **App registrations**, and then select **New registration**.
+1. Under **Name**, enter a name for the application (for example, *My Azure web app*).
+1. Under **Supported account types**, select **Accounts in any identity provider or organizational directory (for authenticating users with user flows)**.
+1. Under **Redirect URI**, select **Web** and then, in the URL box, enter `https://<YOUR_SITE>/.auth/login/aadb2c/callback`. Replace `<YOUR_SITE>` with your Azure Web App name. For example: `https://contoso.azurewebsites.net/.auth/login/aadb2c/callback`. If you configured a [custom domain for your Azure Web App](../app-service/app-service-web-tutorial-custom-domain.md), use the custom domain in the redirect URI. For example, `https://www.contoso.com/.auth/login/aadb2c/callback`.
+1. Under **Permissions**, select the **Grant admin consent to openid and offline access permissions** checkbox.
+1. Select **Register**.
+1. Select **Overview**.
+1. Record the **Application (client) ID** for later use, when you configure the web application.
+
+ ![Screenshot of the web app Overview page for recording your web application I D.](./media/configure-authentication-in-azure-web-app/get-azure-ad-b2c-app-id.png)
+
+### Step 2.2: Create a client secret
+
+1. On the **Azure AD B2C - App registrations** page, select the application you created (for example, *My Azure web app*).
+1. In the left menu, under **Manage**, select **Certificates & secrets**.
+1. Select **New client secret**.
+1. Enter a description for the client secret in the **Description** box. For example, *clientsecret1*.
+1. Under **Expires**, select a duration for which the secret is valid, and then select **Add**.
+1. Record the secret's **Value**. You use this value as the application secret in your application's code. The secret value is never displayed again after you leave this page.
+
+## Step 3: Configure the Azure Web App
+
+1. Sign in to the [Azure portal](https://portal.azure.com).
+1. Make sure you're using the directory that contains your Azure AD tenant (not the Azure AD B2C tenant). Select the **Directories + subscriptions** icon in the portal toolbar.
+1. On the **Portal settings | Directories + subscriptions** page, find the Azure AD directory in the **Directory name** list, and then select **Switch**.
+1. Navigate to your Azure web app.
+1. Select **Authentication** in the menu on the left. Select **Add identity provider**.
+1. Select **OpenID Connect** in the identity provider dropdown.
+1. For **OpenID provider name**, enter `aadb2c`.
+1. For **Metadata entry**, select **Document URL**. Then, for **Document URL**, provide the following URL:
+
+ ```http
+ https://<TENANT_NAME>.b2clogin.com/<TENANT_NAME>.onmicrosoft.com/<POLICY_NAME>/v2.0/.well-known/openid-configuration
+ ```
+
+   1. Replace `<TENANT_NAME>` with the first part of your Azure AD B2C [tenant name](tenant-management.md#get-your-tenant-name), so that the URL reads, for example, `https://contoso.b2clogin.com/contoso.onmicrosoft.com`. If you have a [custom domain](custom-domain.md) configured, you can use that custom domain. You can also replace your B2C tenant name, contoso.onmicrosoft.com, in the authentication request URL with your tenant ID GUID. For example, you can change `https://fabrikamb2c.b2clogin.com/contoso.onmicrosoft.com/` to `https://account.contosobank.co.uk/<tenant ID GUID>/`.
+
+   1. Replace `<POLICY_NAME>` with the user flow or custom policy you created in [step 1](#step-1-configure-your-user-flow).
+
+1. For **Client ID**, provide the Web App (client) ID from [step 2.1](#step-21-register-the-app).
+1. For **Client Secret**, provide the Web App (client) secret from [step 2.2](#step-22-create-a-client-secret).
+
+ > [!TIP]
+   > Your client secret will be stored as an app setting to ensure secrets are stored in a secure fashion. You can update that setting later to use [Key Vault references](../app-service/app-service-key-vault-references.md) if you wish to manage the secret in Azure Key Vault.
+
+1. Keep the default values for the rest of the settings.
+1. Select **Add** to finish setting up the identity provider.
+
+## Step 4: Check the Azure Web app
+
+1. In your browser, navigate to your Azure Web App at `https://<app-name>.azurewebsites.net`. Replace `<app-name>` with your Azure Web App name.
+1. Complete the sign up or sign in process.
+1. In your browser, navigate to the following URL to see information about the signed-in user: `https://<app-name>.azurewebsites.net/.auth/me`. Replace `<app-name>` with your Azure Web App name.
+
+## Retrieve tokens in app code
+
+From your server code, the provider-specific tokens are injected into the request headers, so you can easily access them. The following table shows possible token header names; the sketch after the table shows how to decode the ID token's claims:
++
+|Header name |Description |
+|||
+|X-MS-CLIENT-PRINCIPAL-NAME| The user's display name. |
+|X-MS-CLIENT-PRINCIPAL-ID| The ID token sub claim. |
+|X-MS-CLIENT-PRINCIPAL-IDP| The identity provider name, `aadb2c`.|
+|X-MS-TOKEN-AADB2C-ID-TOKEN| The ID token issued by Azure AD B2C. |
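+
+As an illustration, the following Python sketch decodes the claims from the injected ID token without validating its signature; App Service validates the token before injecting the header, so this is suitable for debugging only. The header name comes from the table above.
+
+```python
+# Debugging sketch only: decode a JWT payload WITHOUT signature validation.
+import base64
+import json
+
+def decode_claims(id_token: str) -> dict:
+    payload = id_token.split(".")[1]      # a JWT is header.payload.signature
+    payload += "=" * (-len(payload) % 4)  # restore the stripped base64 padding
+    return json.loads(base64.urlsafe_b64decode(payload))
+
+# Example use inside a request handler:
+# claims = decode_claims(request.headers["X-MS-TOKEN-AADB2C-ID-TOKEN"])
+# print(claims.get("name"), claims.get("sub"))
+```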
+
+## Next steps
+
+* After successful authentication, you can show the user's display name on the navigation bar. To view the claims that the Azure AD B2C token returns to your app, check out [Work with user identities in Azure App Service authentication](../app-service/configure-authentication-user-identities.md).
+* Learn how to [Work with OAuth tokens in Azure App Service authentication](../app-service/configure-authentication-oauth-tokens.md).
+
active-directory-b2c Configure Authentication Sample Python Web App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/configure-authentication-sample-python-web-app.md
Previously updated: 06/08/2022; Last updated: 06/28/2022
The sign-in flow involves the following steps:
A computer that's running: * [Visual Studio Code](https://code.visualstudio.com/) or another code editor
-* [Python](https://nodejs.org/en/download/) 2.7+ or 3+
+* [Python](https://www.python.org/downloads/) 3.9 or above
## Step 1: Configure your user flow
During app registration, you'll specify the *Redirect URI*. The redirect URI is
### Step 2.1: Register the app
-To create the web app registration, do the following:
+To create the web app registration, follow these steps:
1. Sign in to the [Azure portal](https://portal.azure.com). 1. Make sure you're using the directory that contains your Azure AD B2C tenant. Select the **Directories + subscriptions** icon in the portal toolbar.
To create the web app registration, do the following:
1. Select **Overview**. 1. Record the **Application (client) ID** for later use, when you configure the web application.
- ![Screenshot of the web app Overview page for recording your web app ID.](./media/configure-authentication-sample-python-web-app/get-azure-ad-b2c-app-id.png)
+ ![Screenshot of the web app Overview page for recording your web app I D.](./media/configure-authentication-sample-python-web-app/get-azure-ad-b2c-app-id.png)
### Step 2.2: Create a web app client secret
Extract the sample file to a folder where the total length of the path is 260 or
## Step 4: Configure the sample web app
-In the project's root directory, do the following:
+In the project's root directory, follow these steps:
1. Rename the *app_config.py* file to *app_config.py.OLD*. 1. Rename the *app_config_b2c.py* file to *app_config.py*.
CLIENT_SECRET = "xxxxxxxxxxxxxxxxxxxxxxxx" # Placeholder - for use ONLY during t
``` 1. Install the required packages from PyPi and run the web app on your local machine by running the following commands:
- ```console
- pip install -r requirements.txt
- flask run --host localhost --port 5000
+ # [Linux](#tab/linux)
+
+ ```bash
+ python -m pip install -r requirements.txt
+ python -m flask run --host localhost --port 5000
+ ```
+
+ # [macOS](#tab/macos)
+
+ ```bash
+ python -m pip install -r requirements.txt
+ python -m flask run --host localhost --port 5000
+ ```
+
+ # [Windows](#tab/windows)
+
+ ```bash
+ py -m pip install -r requirements.txt
+ py -m flask run --host localhost --port 5000
```
+
+
The console window displays the port number of the locally running application:
CLIENT_SECRET = "xxxxxxxxxxxxxxxxxxxxxxxx" # Placeholder - for use ONLY during t
1. Select **Sign In**.
- ![Screenshot showing the sign-in with Azure AD B2C.](./media/configure-authentication-sample-python-web-app/web-app-sign-in.png)
+ ![Screenshot showing the sign-in flow.](./media/configure-authentication-sample-python-web-app/web-app-sign-in.png)
1. Complete the sign-up or sign-in process.
To enable your app to sign in with Azure AD B2C and call a web API, you must reg
The app registrations and the application architecture are described in the following diagrams:
-![Diagram describing a web app with web API, registrations, and tokens.](./media/configure-authentication-sample-python-web-app/web-app-with-api-architecture.png)
+![Diagram describing a web app with web A P I, registrations, and tokens.](./media/configure-authentication-sample-python-web-app/web-app-with-api-architecture.png)
[!INCLUDE [active-directory-b2c-app-integration-call-api](../../includes/active-directory-b2c-app-integration-call-api.md)]
SCOPE = ["https://contoso.onmicrosoft.com/api/demo.read", "https://contoso.onmic
1. Stop the app, and then rerun it. 1. Select **Call Microsoft Graph API**.
- ![Screenshot showing how to call a web API.](./media/configure-authentication-sample-python-web-app/call-web-api.png)
+ ![Screenshot showing how to call a web A P I.](./media/configure-authentication-sample-python-web-app/call-web-api.png)
## Step 7: Deploy your application
active-directory-b2c Enable Authentication Azure Static App Options https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/enable-authentication-azure-static-app-options.md
+
+ Title: Enable Azure Static Web App authentication options using Azure Active Directory B2C
+description: This article discusses several ways to enable Azure Static Web App authentication options.
+ Last updated: 06/28/2022
+# Enable authentication options in an Azure Static Web App by using Azure AD B2C
+
+This article describes how to enable, customize, and enhance the Azure Active Directory B2C (Azure AD B2C) authentication experience for your Azure Static Web Apps.
+
+Before you start, it's important to familiarize yourself with the [Configure authentication in an Azure Static Web App by using Azure AD B2C](configure-authentication-in-azure-static-app.md) article.
++
+To use a custom domain and your tenant ID in the authentication URL, follow the guidance in [Enable custom domains](custom-domain.md). Open the [configuration file](../static-web-apps/configuration.md). This file contains information about your Azure AD B2C identity provider.
+
+In the configuration file, follow these steps:
+
+1. Under `customOpenIdConnectProviders`, locate the `wellKnownOpenIdConfiguration` element.
+1. Update the URL of your Azure AD B2C well-known configuration endpoint with your custom domain and [tenant ID](tenant-management.md#get-your-tenant-id). For more information, see [Use tenant ID](custom-domain.md#optional-use-tenant-id).
+
+The following JSON shows the app settings before the change:
+
+```JSON
+"openIdConnectConfiguration": {
+ "wellKnownOpenIdConfiguration": "https://contoso.b2clogin.com/contoso.onmicrosoft.com/<POLICY_NAME>/v2.0/.well-known/openid-configuration"
+ }
+```
+
+The following JSON shows the app settings after the change:
+
+```JSON
+"openIdConnectConfiguration": {
+ "wellKnownOpenIdConfiguration": "https://login.contoso.com/00000000-0000-0000-0000-000000000000/<POLICY_NAME>/v2.0/.well-known/openid-configuration"
+ }
+```
+++
+1. Check the domain name of your external identity provider. For more information, see [Redirect sign-in to a social provider](direct-signin.md#redirect-sign-in-to-a-social-provider).
+1. Open the [configuration file](../static-web-apps/configuration.md).
+1. Under the `login` element, locate the `loginParameterNames`.
+1. Add the `domain_hint` parameter with its corresponding value, such as `facebook.com`.
+
+The following code snippet demonstrates how to pass the `domain_hint` parameter. It uses `facebook.com` as the attribute value.
+
+```json
+"login": {
+ "nameClaimType": "http://schemas.xmlsoap.org/ws/2005/05/identity/claims/name",
+ "scopes": [],
+ "loginParameterNames": ["domain_hint=facebook.com"]
+}
+```
+++
+1. [Configure language customization](language-customization.md).
+1. Open the [configuration file](../static-web-apps/configuration.md).
+1. Under the `login` element, locate the `loginParameterNames`.
+1. Add the `ui_locales` parameter with its corresponding value, such as `es-es`.
+
+The following code snippet demonstrates how to pass the `ui_locales` parameter. It uses `es-es` as the attribute value.
+
+```json
+"login": {
+ "nameClaimType": "http://schemas.xmlsoap.org/ws/2005/05/identity/claims/name",
+ "scopes": [],
+ "loginParameterNames": ["ui_locales=es-es"]
+}
+```
++
+1. Configure the [ContentDefinitionParameters](customize-ui-with-html.md#configure-dynamic-custom-page-content-uri) element.
+1. Open the [configuration file](../static-web-apps/configuration.md).
+1. Under the `login` element, locate the `loginParameterNames`.
+1. Add the custom parameter, such as `campaignId`.
+
+The following code snippet demonstrates how to pass the `campaignId` custom query string parameter. It uses `germany-promotion` as the attribute value.
+
+```json
+"login": {
+ "nameClaimType": "http://schemas.xmlsoap.org/ws/2005/05/identity/claims/name",
+ "scopes": [],
+ "loginParameterNames": ["campaignId=germany-promotion"]
+}
+```
+
+## Next steps
+
+- Check out the [Azure Static Web Apps configuration overview](../static-web-apps/configuration-overview.md) article.
active-directory-b2c Enable Authentication Python Web App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/enable-authentication-python-web-app.md
+
+ Title: Enable authentication in your own Python web application using Azure Active Directory B2C
+description: This article explains how to enable authentication in your own Python web application using Azure AD B2C
+ Last updated: 06/28/2022
+# Enable authentication in your own Python web application using Azure Active Directory B2C
+
+In this article, you'll learn how to add Azure Active Directory B2C (Azure AD B2C) authentication to your own Python web application. You'll enable users to sign in, sign out, update their profile, and reset their password by using Azure AD B2C user flows. This article uses the [Microsoft Authentication Library (MSAL) for Python](https://github.com/AzureAD/microsoft-authentication-library-for-python/tree/main) to simplify adding authentication to your Python web application.
+
+The aim of this article is to help you substitute your own Python application for the sample application you used in [Configure authentication in a sample Python web application by using Azure AD B2C](configure-authentication-sample-python-web-app.md).
+
+This article uses [Python 3.9+](https://www.python.org/) and [Flask 2.1](https://flask.palletsprojects.com/en/2.1.x/) to create a basic web app. The application's views use [Jinja2 templates](https://flask.palletsprojects.com/en/2.1.x/templating/).
+
+## Prerequisites
+
+- Complete the steps in [Configure authentication in a sample Python web application by using Azure AD B2C](configure-authentication-sample-python-web-app.md). You'll create Azure AD B2C user flows and register a web application in the Azure portal.
+- Install [Python](https://www.python.org/downloads/) 3.9 or above
+- [Visual Studio Code](https://code.visualstudio.com/) or another code editor
+- Install the [Python extension](https://marketplace.visualstudio.com/items?itemName=ms-python.python) for Visual Studio Code
+
+## Step 1: Create the Python project
+
+1. On your file system, create a project folder for this tutorial, such as `my-python-web-app`.
+1. In your terminal, change directory into your Python app folder, such as `cd my-python-web-app`.
+1. Run the following command to create and activate a virtual environment named `.venv` based on your current interpreter.
+
+ # [Linux](#tab/linux)
+
+ ```bash
+ sudo apt-get install python3-venv # If needed
+ python3 -m venv .venv
+ source .venv/bin/activate
+ ```
+
+ # [macOS](#tab/macos)
+
+ ```bash
+ python3 -m venv .venv
+ source .venv/bin/activate
+ ```
+
+ # [Windows](#tab/windows)
+
+ ```bash
+ py -3 -m venv .venv
+ .venv\scripts\activate
+ ```
+
+
+1. Update pip in the virtual environment by running the following command in the terminal:
+
+ ```bash
+ python -m pip install --upgrade pip
+ ```
+
+1. To enable the Flask debug features, set the Flask environment to `development` mode. For more information about debugging Flask apps, check out the [Flask documentation](https://flask.palletsprojects.com/en/2.1.x/config/#environment-and-debug-features).
+
+ # [Linux](#tab/linux)
+
+ ```bash
+ export FLASK_ENV=development
+ ```
+
+ # [macOS](#tab/macos)
+
+ ```bash
+ export FLASK_ENV=development
+ ```
+
+ # [Windows](#tab/windows)
+
+ ```bash
+ set FLASK_ENV=development
+ ```
+
+
+1. Open the project folder in VS Code by running the `code .` command, or by opening VS Code and selecting **File** > **Open Folder**.
++
+## Step 2: Install app dependencies
+
+Under your web app root folder, create the `requirements.txt` file. The requirements file [lists the packages](https://pip.pypa.io/en/stable/user_guide/) to be installed using `pip install`. Add the following content to the `requirements.txt` file:
++
+```
+Flask>=2
+werkzeug>=2
+
+flask-session>=0.3.2,<0.5
+requests>=2,<3
+msal>=1.7,<2
+```
+
+In your terminal, install the dependencies by running the following commands:
+
+# [Linux](#tab/linux)
+
+```bash
+python -m pip install -r requirements.txt
+```
+
+# [macOS](#tab/macos)
+
+```bash
+python -m pip install -r requirements.txt
+```
+
+# [Windows](#tab/windows)
+
+```bash
+py -m pip install -r requirements.txt
+```
+++
+## Step 3: Build app UI components
+
+Flask is a lightweight Python framework for web applications that provides the basics for URL routing and page rendering. It leverages Jinja2 as its template engine to render the content of your app. For more information, check out the [template designer documentation](https://jinja.palletsprojects.com/en/3.1.x/templates/). In this section, you add the required templates that provide the basic functionality of your web app.
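+
+As a standalone illustration of that routing-and-rendering model (separate from the app you build in this article), the following minimal Flask sketch maps a URL to a view function and renders an inline Jinja2 template:
+
+```python
+# Standalone illustration of Flask routing plus Jinja2 rendering.
+from flask import Flask, render_template_string
+
+app = Flask(__name__)
+
+@app.route("/hello/<name>")
+def hello(name):
+    # Jinja2 substitutes {{ name }} with the value passed in the context.
+    return render_template_string("<h1>Hello, {{ name }}!</h1>", name=name)
+
+if __name__ == "__main__":
+    app.run()
+```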
+
+### Step 3.1 Create a base template
+
+A base page template in Flask contains all the shared parts of a set of pages, including references to CSS files, script files, and so forth. Base templates also define one or more block tags that templates extending the base are expected to override. A block tag is delineated by `{% block <name> %}` and `{% endblock %}` in both the base template and the extended template.
++
+In the root folder of your web app, create the `templates` folder. In the templates folder, create a file named `base.html`, and then add the contents below:
+
+```html
+<!DOCTYPE html>
+<html lang="en">
+
+<head>
+ <meta charset="UTF-8">
+ {% block metadata %}{% endblock %}
+
+ <title>{% block title %}{% endblock %}</title>
+ <!-- Bootstrap CSS file reference -->
+ <link href="https://cdn.jsdelivr.net/npm/bootstrap@5.2.0-beta1/dist/css/bootstrap.min.css" rel="stylesheet"
+ integrity="sha384-0evHe/X+R7YkIZDRvuzKMRqM+OrBnVFBL6DOitfPri4tjfHxaWutUpFmBp4vmVor" crossorigin="anonymous">
+</head>
+
+<body>
+ <nav class="navbar navbar-expand-lg navbar-dark bg-dark">
+ <div class="container-fluid">
+ <a class="navbar-brand" href="{{ url_for('index')}}">Python Flask demo</a>
+ <button class="navbar-toggler" type="button" data-bs-toggle="collapse"
+ data-bs-target="#navbarSupportedContent" aria-controls="navbarSupportedContent" aria-expanded="false"
+ aria-label="Toggle navigation">
+ <span class="navbar-toggler-icon"></span>
+ </button>
+ <div class="collapse navbar-collapse" id="navbarSupportedContent">
+ <ul class="navbar-nav me-auto mb-2 mb-lg-0">
+ <li class="nav-item">
+ <a class="nav-link active" aria-current="page" href="{{ url_for('index')}}">Home</a>
+ </li>
+ <li class="nav-item">
+ <a class="nav-link" href="{{ url_for('graphcall')}}">Graph API</a>
+ </li>
+ </ul>
+ </div>
+ </div>
+ </nav>
+
+ <div class="container body-content">
+ <br />
+ {% block content %}
+ {% endblock %}
+
+ <hr />
+ <footer>
+ <p>Powered by MSAL Python {{ version }}</p>
+ </footer>
+ </div>
+</body>
+
+</html>
+```
+
+### Step 3.2 Create the web app templates
+
+Add the following templates under the templates folder. These templates extend the `base.html` template:
+
+- **https://docsupdatetracker.net/index.html**: the home page of the web app. The template uses the following logic: if the user isn't signed in, it renders the sign-in button. If the user is signed in, it renders the ID token's claims, a link to edit the profile, and a link to call a Graph API.
+
+ ```html
+ {% extends "base.html" %}
+ {% block title %}Home{% endblock %}
+ {% block content %}
+
+ <h1>Microsoft Identity Python Web App</h1>
+
+ {% if user %}
+ <h2>Claims:</h2>
+ <pre>{{ user |tojson(indent=4) }}</pre>
+
+
+ {% if config.get("ENDPOINT") %}
+ <li><a href='/graphcall'>Call Microsoft Graph API</a></li>
+ {% endif %}
+
+ {% if config.get("B2C_PROFILE_AUTHORITY") %}
+ <li><a href='{{_build_auth_code_flow(authority=config["B2C_PROFILE_AUTHORITY"])["auth_uri"]}}'>Edit Profile</a></li>
+ {% endif %}
+
+ <li><a href="/logout">Logout</a></li>
+
+ {% else %}
+ <li><a href='{{ auth_url }}'>Sign In</a></li>
+ {% endif %}
+
+ {% endblock %}
+ ```
+
+- **graph.html**: Demonstrates how to call a REST API.
+
+ ```html
+ {% extends "base.html" %}
+ {% block title %}Graph API{% endblock %}
+ {% block content %}
+ <a href="javascript:window.history.go(-1)">Back</a>
+ <!-- Displayed on top of a potentially large JSON response, so it will remain visible -->
+ <h1>Graph API Call Result</h1>
+ <pre>{{ result |tojson(indent=4) }}</pre> <!-- Just a generic json viewer -->
+ {% endblock %}
+ ```
+
+- **auth_error.html**: Handles authentication errors.
+
+ ```html
+ {% extends "base.html" %}
+ {% block title%}Error{% endblock%}
+
+ {% block metadata %}
+ {% if config.get("B2C_RESET_PASSWORD_AUTHORITY") and "AADB2C90118" in result.get("error_description") %}
+ <!-- See also https://docs.microsoft.com/en-us/azure/active-directory-b2c/active-directory-b2c-reference-policies#linking-user-flows -->
+ <meta http-equiv="refresh"
+ content='0;{{_build_auth_code_flow(authority=config["B2C_RESET_PASSWORD_AUTHORITY"])["auth_uri"]}}'>
+ {% endif %}
+ {% endblock %}
+
+ {% block content %}
+ <h2>Login Failure</h2>
+ <dl>
+ <dt>{{ result.get("error") }}</dt>
+ <dd>{{ result.get("error_description") }}</dd>
+ </dl>
+
+ <a href="{{ url_for('index') }}">Homepage</a>
+ {% endblock %}
+ ```
+
+## Step 4: Configure your web app
+
+In the root folder of your web app, create a file named `app_config.py`. This file contains information about your Azure AD B2C identity provider. The web app uses this information to establish a trust relationship with Azure AD B2C, sign users in and out, acquire tokens, and validate them. Add the following contents into the file:
+
+```python
+import os
+
+b2c_tenant = "fabrikamb2c"
+signupsignin_user_flow = "B2C_1_signupsignin1"
+editprofile_user_flow = "B2C_1_profileediting1"
+
+resetpassword_user_flow = "B2C_1_passwordreset1" # Note: Legacy setting.
+
+authority_template = "https://{tenant}.b2clogin.com/{tenant}.onmicrosoft.com/{user_flow}"
+
+CLIENT_ID = "Enter_the_Application_Id_here" # Application (client) ID of app registration
+
+CLIENT_SECRET = "Enter_the_Client_Secret_Here" # Application secret.
+
+AUTHORITY = authority_template.format(
+ tenant=b2c_tenant, user_flow=signupsignin_user_flow)
+B2C_PROFILE_AUTHORITY = authority_template.format(
+ tenant=b2c_tenant, user_flow=editprofile_user_flow)
+
+B2C_RESET_PASSWORD_AUTHORITY = authority_template.format(
+ tenant=b2c_tenant, user_flow=resetpassword_user_flow)
+
+REDIRECT_PATH = "/getAToken"
+
+# This is the API resource endpoint
+ENDPOINT = '' # Application ID URI of app registration in Azure portal
+
+# These are the scopes you've exposed in the web API app registration in the Azure portal
+SCOPE = [] # Example with two exposed scopes: ["demo.read", "demo.write"]
+
+SESSION_TYPE = "filesystem" # Specifies the token cache should be stored in server-side session
+```
+
+Update the code above with your Azure AD B2C environment settings as explained in the [Configure the sample web app](configure-authentication-sample-python-web-app.md#step-4-configure-the-sample-web-app) section of the [Configure authentication in a sample Python web app](configure-authentication-sample-python-web-app.md) article.
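+
+As a quick sanity check, you can print the authority URLs computed from these settings. With the default placeholder values above, they resolve as shown in the comments:
+
+```python
+# Quick check of the computed authorities (uses the fabrikamb2c placeholders).
+import app_config
+
+print(app_config.AUTHORITY)
+# https://fabrikamb2c.b2clogin.com/fabrikamb2c.onmicrosoft.com/B2C_1_signupsignin1
+print(app_config.B2C_PROFILE_AUTHORITY)
+# https://fabrikamb2c.b2clogin.com/fabrikamb2c.onmicrosoft.com/B2C_1_profileediting1
+```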
+
+## Step 5: Add the web app code
+
+In this section, you add the Flask view functions and the MSAL library authentication methods. Under the root folder of your project, add a file named `app.py` with the following code:
+
+```python
+import uuid
+import requests
+from flask import Flask, render_template, session, request, redirect, url_for
+from flask_session import Session # https://pythonhosted.org/Flask-Session
+import msal
+import app_config
++
+app = Flask(__name__)
+app.config.from_object(app_config)
+Session(app)
+
+# This section is needed for url_for("foo", _external=True) to automatically
+# generate http scheme when this sample is running on localhost,
+# and to generate https scheme when it is deployed behind a reverse proxy.
+# See also https://flask.palletsprojects.com/en/1.0.x/deploying/wsgi-standalone/#proxy-setups
+from werkzeug.middleware.proxy_fix import ProxyFix
+app.wsgi_app = ProxyFix(app.wsgi_app, x_proto=1, x_host=1)
++
+@app.route("/anonymous")
+def anonymous():
+ return "anonymous page"
+
+@app.route("/")
+def index():
+ #if not session.get("user"):
+ # return redirect(url_for("login"))
+
+ if not session.get("user"):
+ session["flow"] = _build_auth_code_flow(scopes=app_config.SCOPE)
+        return render_template('https://docsupdatetracker.net/index.html', auth_url=session["flow"]["auth_uri"], version=msal.__version__)
+    else:
+        return render_template('https://docsupdatetracker.net/index.html', user=session["user"], version=msal.__version__)
+
+@app.route("/login")
+def login():
+ # Technically we could use empty list [] as scopes to do just sign in,
+ # here we choose to also collect end user consent upfront
+ session["flow"] = _build_auth_code_flow(scopes=app_config.SCOPE)
+ return render_template("login.html", auth_url=session["flow"]["auth_uri"], version=msal.__version__)
+
+@app.route(app_config.REDIRECT_PATH) # Its absolute URL must match your app's redirect_uri set in AAD
+def authorized():
+ try:
+ cache = _load_cache()
+ result = _build_msal_app(cache=cache).acquire_token_by_auth_code_flow(
+ session.get("flow", {}), request.args)
+ if "error" in result:
+ return render_template("auth_error.html", result=result)
+ session["user"] = result.get("id_token_claims")
+ _save_cache(cache)
+ except ValueError: # Usually caused by CSRF
+ pass # Simply ignore them
+ return redirect(url_for("index"))
+
+@app.route("/logout")
+def logout():
+ session.clear() # Wipe out user and its token cache from session
+ return redirect( # Also logout from your tenant's web session
+ app_config.AUTHORITY + "/oauth2/v2.0/logout" +
+ "?post_logout_redirect_uri=" + url_for("index", _external=True))
+
+@app.route("/graphcall")
+def graphcall():
+ token = _get_token_from_cache(app_config.SCOPE)
+ if not token:
+ return redirect(url_for("login"))
+ graph_data = requests.get( # Use token to call downstream service
+ app_config.ENDPOINT,
+ headers={'Authorization': 'Bearer ' + token['access_token']},
+ ).json()
+ return render_template('graph.html', result=graph_data)
++
+def _load_cache():
+ cache = msal.SerializableTokenCache()
+ if session.get("token_cache"):
+ cache.deserialize(session["token_cache"])
+ return cache
+
+def _save_cache(cache):
+ if cache.has_state_changed:
+ session["token_cache"] = cache.serialize()
+
+def _build_msal_app(cache=None, authority=None):
+ return msal.ConfidentialClientApplication(
+ app_config.CLIENT_ID, authority=authority or app_config.AUTHORITY,
+ client_credential=app_config.CLIENT_SECRET, token_cache=cache)
+
+def _build_auth_code_flow(authority=None, scopes=None):
+ return _build_msal_app(authority=authority).initiate_auth_code_flow(
+ scopes or [],
+ redirect_uri=url_for("authorized", _external=True))
+
+def _get_token_from_cache(scope=None):
+ cache = _load_cache() # This web app maintains one cache per session
+ cca = _build_msal_app(cache=cache)
+ accounts = cca.get_accounts()
+ if accounts: # So all account(s) belong to the current signed-in user
+ result = cca.acquire_token_silent(scope, account=accounts[0])
+ _save_cache(cache)
+ return result
+
+app.jinja_env.globals.update(_build_auth_code_flow=_build_auth_code_flow) # Used in template
+
+if __name__ == "__main__":
+ app.run()
+
+```
+
+## Step 6: Run your web app
+
+In the terminal, run the app by entering the following command, which runs the Flask development server. The development server looks for `app.py` by default. Then, open your browser and navigate to the web app URL: <http://localhost:5000>.
+
+# [Linux](#tab/linux)
+
+```bash
+python -m flask run --host localhost --port 5000
+```
+
+# [macOS](#tab/macos)
+
+```bash
+python -m flask run --host localhost --port 5000
+```
+
+# [Windows](#tab/windows)
+
+```bash
+py -m flask run --host localhost --port 5000
+```
+++
+## [Optional] Debug your app
+
+The debugging feature gives you the opportunity to pause a running program on a particular line of code. When you pause the program, you can examine variables, run code in the Debug Console panel, and otherwise take advantage of the features described on [Debugging](https://code.visualstudio.com/docs/python/debugging). To use the Visual Studio Code debugger, check out the [VS Code documentation](https://code.visualstudio.com/docs/python/tutorial-flask#_create-multiple-templates-that-extend-a-base-template).
+
+To change the host name and/or port number, use the `args` array in the `launch.json` file. The following example demonstrates how to configure the host name to `localhost` and the port number to `5001`. Note that if you change the host name or the port number, you must update the redirect URI of your application. For more information, check out the [Register a web application](configure-authentication-sample-python-web-app.md#step-2-register-a-web-application) step.
+
+```json
+{
+ // Use IntelliSense to learn about possible attributes.
+ // Hover to view descriptions of existing attributes.
+ // For more information, visit: https://go.microsoft.com/fwlink/?linkid=830387
+ "version": "0.2.0",
+ "configurations": [
+ {
+ "name": "Python: Flask",
+ "type": "python",
+ "request": "launch",
+ "module": "flask",
+ "env": {
+ "FLASK_APP": "app.py",
+ "FLASK_ENV": "development"
+ },
+ "args": [
+ "run",
+ "--host=localhost",
+ "--port=5001"
+ ],
+ "jinja": true,
+ "justMyCode": true
+ }
+ ]
+}
+```
+++
+## Next steps
+
+- Learn how to [customize and enhance the Azure AD B2C authentication experience for your web app](enable-authentication-python-web-app-options.md)
active-directory Concept Condition Filters For Devices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/concept-condition-filters-for-devices.md
There are multiple scenarios that organizations can now enable using filter for
- Policy 2: Select users and groups and include group that contains service accounts only, accessing all cloud apps, excluding a filter for devices using rule expression device.extensionAttribute2 not equals TeamsPhoneDevice and for Access controls, Block. > [!NOTE]
-> Azure AD uses device authentication to evaluate device filter rules. For devices that are unregistered with Azure AD, all device properties are considered as null values.
+> Azure AD uses device authentication to evaluate device filter rules. For a device that is unregistered with Azure AD, all device properties are considered null values, and the device attributes can't be determined because the device doesn't exist in the directory. The best way to target policies for unregistered devices is to use a negative operator, because the configured filter rule then applies. If you use a positive operator, the filter rule applies only when a device exists in the directory and the configured rule matches the attribute on the device.
## Create a Conditional Access policy
active-directory Howto Configure Publisher Domain https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/howto-configure-publisher-domain.md
# Configure an application's publisher domain
-An applicationΓÇÖs publisher domain is displayed to users on the [applicationΓÇÖs consent prompt](application-consent-experience.md) to let users know where their information is being sent. Multi-tenant applications that are registered after May 21, 2019 that don't have a publisher domain show up as **unverified**. Multi-tenant applications are applications that support accounts outside of a single organizational directory; for example, support all Azure AD accounts, or support all Azure AD accounts and personal Microsoft accounts.
+An application's publisher domain informs users where their information is being sent and acts as an input and prerequisite for [publisher verification](publisher-verification-overview.md). Depending on when the app was registered and its verified publisher status, the publisher domain may be displayed directly to the user on the [application's consent prompt](application-consent-experience.md). [Multi-tenant applications](/azure/architecture/guide/multitenant/overview) that are registered after May 21, 2019, and that don't have a publisher domain show up as **unverified**. Multi-tenant applications are applications that support accounts outside of a single organizational directory; for example, they support all Azure AD accounts, or all Azure AD accounts and personal Microsoft accounts.
## New applications
The following table summarizes the default behavior of the publisher domain valu
| *.onmicrosoft.com | *.onmicrosoft.com | | - *.onmicrosoft.com<br/>- domain1.com<br/>- domain2.com (primary) | domain2.com |
-If a multi-tenant application's publisher domain isn't set, or if it's set to a domain that ends in .onmicrosoft.com, the app's consent prompt will show **unverified** in place of the publisher domain.
-
+1. If your multi-tenant application was registered between **May 21, 2019 and November 30, 2020**:
+   - If the application's publisher domain isn't set, or if it's set to a domain that ends in .onmicrosoft.com, the app's consent prompt shows **unverified** in place of the publisher domain.
+   - If the application has a verified app domain, the consent prompt shows the verified domain.
+   - If the application is publisher verified, it shows a [blue "verified" badge](publisher-verification-overview.md) indicating the same.
+2. If your multi-tenant application was registered after **November 30, 2020**:
+   - If the application isn't publisher verified, the app shows as **unverified** in the consent prompt (that is, no publisher domain information is shown).
+   - If the application is publisher verified, it shows a [blue "verified" badge](publisher-verification-overview.md) indicating the same.
## Grandfathered applications
-If your app was registered before May 21, 2019, your application's consent prompt will not show **unverified** if you have not set a publisher domain. We recommend that you set the publisher domain value so that users can see this information on your app's consent prompt.
+If your app was registered before May 21, 2019, your application's consent prompt will not show **unverified** even if you have not set a publisher domain. We recommend that you set the publisher domain value so that users can see this information on your app's consent prompt.
## Configure publisher domain using the Azure portal
Configuring the publisher domain has an impact on what users see on the app consent prompt.
The following table describes the behavior for applications created before May 21, 2019.
-![Consent prompt for apps created before May 21, 2019](./media/howto-configure-publisher-domain/old-app-behavior-table.png)
+![Table that shows consent prompt behavior for apps created before May 21, 2019.](./media/howto-configure-publisher-domain/old-app-behavior-table.png)
+
+The behavior for applications created between May 21, 2019 and November 30, 2020 will depend on the publisher domain and the type of application. The following table describes what is shown on the consent prompt with the different combinations of configurations.
+
![Table that shows consent prompt behavior for apps created between May 21, 2019 and Nov 30, 2020.](./media/howto-configure-publisher-domain/new-app-behavior-table.png)
-The behavior for new applications created after May 21, 2019 will depend on the publisher domain and the type of application. The following table describes the changes you should expect to see with the different combinations of configurations.
+For multi-tenant applications created after November 30, 2020, only the publisher verification status is surfaced in the consent prompt. The following table describes what is shown on the consent prompt depending on whether an app is verified or not. The consent prompt for single-tenant applications remains the same as described above.
-![Consent prompt for apps created after May 21, 2019](./media/howto-configure-publisher-domain/new-app-behavior-table.png)
+![Table that shows consent prompt behavior for apps created after Nov 30, 2020.](./media/howto-configure-publisher-domain/new-app-behavior-publisher-verification-table.png)
## Implications on redirect URIs
active-directory Publisher Verification Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/publisher-verification-overview.md
Publisher verification provides the following benefits:
## Requirements
There are a few pre-requisites for publisher verification, some of which will have already been completed by many Microsoft partners. They are:
-- An MPN ID for a valid [Microsoft Partner Network](https://partner.microsoft.com/membership) account that has completed the [verification](/partner-center/verification-responses) process. This MPN account must be the [Partner global account (PGA)](/partner-center/account-structure#the-top-level-is-the-partner-global-account-pga) for your organization.
+- An MPN ID for a valid [Microsoft Partner Network](https://partner.microsoft.com/membership) account that has completed the [verification](/partner-center/verification-responses) process. This MPN account must be the [Partner global account (PGA)](/partner-center/account-structure#the-top-level-is-the-partner-global-account-pga) for your organization. (**NOTE**: It can't be the Partner Location MPN ID; Location MPN IDs aren't currently supported.)
- The application to be publisher verified must be registered using an Azure AD account. Applications registered using a Microsoft personal account aren't supported for publisher verification.
There are a few pre-requisites for publisher verification, some of which will have already been completed by many Microsoft partners.
- In Azure AD this user must be a member of one of the following [roles](../roles/permissions-reference.md): Application Admin, Cloud Application Admin, or Global Admin.
- - In Partner Center this user must have of the following [roles](/partner-center/permissions-overview): MPN Admin, Accounts Admin, or a Global Admin (this is a shared role mastered in Azure AD).
+ - In Partner Center this user must have one of the following [roles](/partner-center/permissions-overview): MPN Partner Admin, Account Admin, or Global Admin (this is a shared role mastered in Azure AD).
- The user performing verification must sign in using [multi-factor authentication](../authentication/howto-mfa-getstarted.md).
active-directory Scenario Web App Sign User App Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/scenario-web-app-sign-user-app-configuration.md
To add authentication with the Microsoft identity platform (formerly Azure AD v2
}).AddMicrosoftIdentityUI();
```
-3. In the `Configure` method in *Startup.cs*, enable authentication with a call to `app.UseAuthentication();`
+3. In the `Configure` method in *Startup.cs*, enable authentication with a call to `app.UseAuthentication();`, and map the controller routes with a call to `app.MapControllers();`.
```c#
// This method gets called by the runtime. Use this method to configure the HTTP request pipeline.
To add authentication with the Microsoft identity platform (formerly Azure AD v2
// more code here
app.UseAuthentication();
app.UseAuthorization();
+
+ app.MapRazorPages();
+ app.MapControllers();
// more code here
}
```
active-directory Troubleshoot Hybrid Join Windows Current https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/devices/troubleshoot-hybrid-join-windows-current.md
Possible reasons for failure:
| Error code | Reason | Resolution |
| --- | --- | --- |
-| **DSREG_AUTOJOIN_ADCONFIG_READ_FAILED** (0x801c001d/-2145648611) | Unable to read the service connection point (SCP) object and get the Azure AD tenant information. | Refer to the [Configure a service connection point](hybrid-azuread-join-federated-domains.md#configure-hybrid-azure-ad-join) section. |
+| **DSREG_AUTOJOIN_ADCONFIG_READ_FAILED** (0x801c001d/-2145648611) | Unable to read the service connection point (SCP) object and get the Azure AD tenant information. | Refer to the [Configure a service connection point](hybrid-azuread-join-manual.md#configure-a-service-connection-point) section. |
| **DSREG_AUTOJOIN_DISC_FAILED** (0x801c0021/-2145648607) | Generic discovery failure. Failed to get the discovery metadata from the data replication service (DRS). | To investigate further, find the sub-error in the next sections. |
| **DSREG_AUTOJOIN_DISC_WAIT_TIMEOUT** (0x801c001f/-2145648609) | Operation timed out while performing discovery. | Ensure that `https://enterpriseregistration.windows.net` is accessible in the system context. For more information, see the [Network connectivity requirements](hybrid-azuread-join-managed-domains.md#prerequisites) section. |
| **DSREG_AUTOJOIN_USERREALM_DISCOVERY_FAILED** (0x801c003d/-2145648579) | Generic realm discovery failure. Failed to determine domain type (managed/federated) from STS. | To investigate further, find the sub-error in the next sections. |
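
On an affected Windows device, a quick way to surface these error codes is the built-in device registration diagnostic; the join status, the error code, and any server error messages appear in its output:

```
dsregcmd /status
```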
active-directory Clean Up Unmanaged Azure Ad Accounts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/clean-up-unmanaged-azure-ad-accounts.md
+
+ Title: Clean up unmanaged Azure AD accounts - Azure Active Directory | Microsoft Docs
+description: Clean up unmanaged accounts using email OTP and PowerShell modules in Azure Active Directory
++++ Last updated : 06/28/2022++++++++
+# Clean up unmanaged Azure Active Directory accounts
+
+Azure Active Directory (Azure AD) supports self-service sign-up for email-verified users. Users can create Azure AD accounts if they can verify email ownership. To learn more, see [What is self-service sign-up for Azure Active Directory?](https://docs.microsoft.com/azure/active-directory/enterprise-users/directory-self-service-signup)
+
+However, if a user creates an account and the domain isn't verified in an Azure AD tenant, the user is created in an unmanaged, or viral, tenant. The user can create an account with an organization's domain that is outside the lifecycle management of the organization's IT, and access can persist after the user leaves the organization.
+
+## Remove unmanaged Azure AD accounts
+
+You can remove unmanaged Azure AD accounts from your Azure AD tenants
+and prevent these types of accounts from redeeming future invitations.
+
+1. Read how to enable [email one-time passcodes](https://docs.microsoft.com/azure/active-directory/external-identities/one-time-passcode#enable-email-one-time-passcode) (OTP).
+
+2. Use the sample application in [Azure-Samples/Remove-Unmanaged-Guests](https://github.com/Azure-Samples/Remove-Unmanaged-Guests), or use the [AzureAD/MSIdentityTools](https://github.com/AzureAD/MSIdentityTools/wiki/) PowerShell module, to identify viral users in an Azure AD tenant and reset their redemption status.
+
+Once the above steps are complete, users with unmanaged Azure AD accounts who try to access your tenant will re-redeem their invitations. However, because email OTP is enabled, Azure AD will prevent users from redeeming with an existing unmanaged Azure AD account, and they'll redeem with another account type. Google federation and SAML/WS-Fed aren't enabled by default, so by default these users will redeem with either an MSA or email OTP, with MSA taking precedence. For a full explanation of the B2B redemption precedence, see the [redemption precedence flow chart](https://docs.microsoft.com/azure/active-directory/external-identities/redemption-experience#invitation-redemption-flow).
+
+## Overtaken tenants and domains
+
+Some tenants created as unmanaged tenants can be taken over and converted to a managed tenant. See [Take over an unmanaged directory as administrator in Azure AD](https://docs.microsoft.com/azure/active-directory/enterprise-users/domains-admin-takeover).
+
+In some cases, overtaken domains might not be updated; for example, a missing DNS TXT record can cause them to be flagged as unmanaged. The implications are:
+
+- For guest users who belong to formerly unmanaged tenants, redemption status is reset and one consent prompt appears. Redemption occurs with the same account as before.
+
+- After unmanaged user redemption status is reset, the tool might identify unmanaged users that are false positives.
+
+## Reset redemption using a sample application
+
+To identify and reset unmanaged Azure AD account redemption status:
+
+1. Ensure email OTP is enabled.
+
+2. Use the sample application on [Azure-Samples/Remove-Unmanaged-Guests](https://github.com/Azure-Samples/Remove-Unmanaged-Guests).
+
+## Reset redemption using MSIdentityTools PowerShell Module
+
+MSIdentityTools PowerShell Module is a collection of cmdlets and
+scripts. They are for use in the Microsoft identity platform and Azure
+AD; they augment capabilities in the PowerShell SDK. See, [Microsoft
+Graph PowerShell
+SDK](https://github.com/microsoftgraph/msgraph-sdk-powershell).
+
+Run the following cmdlets:
+
+- `Install-Module Microsoft.Graph -Scope CurrentUser`
+
+- `Install-Module MSIdentityTools`
+
+- `Import-Module MSIdentityTools, Microsoft.Graph`
+
+To identify unmanaged Azure AD accounts, run:
+
+- `Connect-MgGraph -Scopes User.Read.All`
+
+- `Get-MsIdUnmanagedExternalUser`
+
+To reset unmanaged Azure AD account redemption status, run:
+
+- `Connect-MgGraph -Scopes User.ReadWrite.All`
+
+- `Get-MsIdUnmanagedExternalUser | Reset-MsIdExternalUser`
+
+To delete unmanaged Azure AD accounts, run:
+
+- `Connect-MgGraph -Scopes User.ReadWrite.All`
+
+- `Get-MsIdUnmanagedExternalUser | Remove-MgUser`
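+
+Putting the cmdlets together, a reset run might look like the following. This is a minimal sketch, assuming the modules install cleanly and you can consent to the requested Graph scopes; review the identified users before resetting or removing anything:
+
+```powershell
+# One-time setup: install and load the modules.
+Install-Module Microsoft.Graph -Scope CurrentUser
+Install-Module MSIdentityTools -Scope CurrentUser
+Import-Module MSIdentityTools, Microsoft.Graph
+
+# Sign in with permission to read and update users.
+Connect-MgGraph -Scopes User.ReadWrite.All
+
+# List the unmanaged (viral) external users and review them first.
+$unmanaged = Get-MsIdUnmanagedExternalUser
+$unmanaged
+
+# Reset their redemption status so they re-redeem with a
+# supported account type on their next sign-in.
+$unmanaged | Reset-MsIdExternalUser
+```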
+
+## Next steps
+
+Examples of using [Get-MsIdUnmanagedExternalUser](https://github.com/AzureAD/MSIdentityTools/wiki/Get-MsIdUnmanagedExternalUser)
active-directory 1 Secure Access Posture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/1-secure-access-posture.md
Title: Determine your security posture for external collaboration with Azure Active Directory description: Before you can execute an external access security plan, you must determine what you are trying to achieve. -+ Last updated 12/18/2020-+
active-directory 4 Secure Access Groups https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/4-secure-access-groups.md
Title: Secure external access with groups in Azure Active Directory and Microsoft 365 description: Azure Active Directory and Microsoft 365 Groups can be used to increase security when external users access your resources. -+ Last updated 12/18/2020-+
active-directory 6 Secure Access Entitlement Managment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/6-secure-access-entitlement-managment.md
Title: Manage external access with Azure Active Directory Entitlement Management description: How to use Azure Active Directory Entitlement Management as a part of your overall external access security plan. -+ Last updated 12/18/2020-+
active-directory 7 Secure Access Conditional Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/7-secure-access-conditional-access.md
Title: Manage external access with Azure Active Directory Conditional Access description: How to use Azure Active Directory Conditional Access policies to secure external access to resources. -+ Last updated 01/25/2022-+
active-directory 8 Secure Access Sensitivity Labels https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/8-secure-access-sensitivity-labels.md
Title: Control external access to resources in Azure Active Directory with sensitivity labels. description: Use sensitivity labels as a part of your overall security plan for external access. -+ Last updated 12/18/2020-+
active-directory 9 Secure Access Teams Sharepoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/9-secure-access-teams-sharepoint.md
Title: Secure external access to Microsoft Teams, SharePoint, and OneDrive with Azure Active Directory description: Secure access to Microsoft 365 services as a part of your overall external access security. -+ Last updated 12/18/2020-+
active-directory Auth Header Based https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/auth-header-based.md
Title: Header-based authentication with Azure Active Directory description: Architectural guidance on achieving header-based authentication with Azure Active Directory. -+
Last updated 10/10/2020-+
active-directory Auth Kcd https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/auth-kcd.md
Title: Kerberos constrained delegation with Azure Active Directory description: Architectural guidance on achieving Kerberos constrained delegation with Azure Active Directory. -+
Last updated 10/10/2020-+
active-directory Auth Ldap https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/auth-ldap.md
Title: LDAP authentication with Azure Active Directory description: Architectural guidance on achieving LDAP authentication with Azure Active Directory. -+
Last updated 10/10/2020-+
active-directory Auth Oauth2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/auth-oauth2.md
Title: OAUTH 2.0 authentication with Azure Active Directory description: Architectural guidance on achieving OAUTH 2.0 authentication with Azure Active Directory. -+
Last updated 10/10/2020-+
active-directory Auth Oidc https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/auth-oidc.md
Title: OpenID Connect authentication with Azure Active Directory description: Architectural guidance on achieving OpenID Connect authentication with Azure Active Directory. -+
Last updated 10/10/2020-+
active-directory Auth Password Based Sso https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/auth-password-based-sso.md
Title: Password-based authentication with Azure Active Directory description: Architectural guidance on achieving password-based authentication with Azure Active Directory. -+
Last updated 10/10/2020-+
active-directory Auth Radius https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/auth-radius.md
Title: RADIUS authentication with Azure Active Directory description: Architectural guidance on achieving RADIUS authentication with Azure Active Directory. -+
Last updated 10/10/2020-+
active-directory Auth Remote Desktop Gateway https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/auth-remote-desktop-gateway.md
Title: Remote Desktop Gateway Services with Azure Active Directory description: Architectural guidance on achieving Remote Desktop Gateway Services with Azure Active Directory. -+
Last updated 10/10/2020-+
active-directory Auth Saml https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/auth-saml.md
Title: SAML authentication with Azure Active Directory description: Architectural guidance on achieving SAML authentication with Azure Active Directory -+
Last updated 10/10/2020-+
active-directory Auth Ssh https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/auth-ssh.md
Title: SSH authentication with Azure Active Directory description: Architectural guidance on achieving SSH integration with Azure Active Directory -+
Last updated 06/22/2022-+
active-directory Auth Sync Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/auth-sync-overview.md
Title: Azure Active Directory authentication and synchronization protocol overview description: Architectural guidance on integrating Azure AD with legacy authentication protocols and sync patterns -+
Last updated 10/10/2020-+
active-directory Certificate Authorities https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/certificate-authorities.md
Title: Azure Active Directory certificate authorities description: Listing of trusted certificates used in Azure -+
Last updated 10/10/2020-+
active-directory Monitor Sign In Health For Resilience https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/monitor-sign-in-health-for-resilience.md
Title: Monitor application sign-in health for resilience in Azure Active Directory description: Create queries and notifications to monitor the sign-in health of your applications. -+ Last updated 03/17/2021-+
active-directory Multi Tenant Common Considerations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/multi-tenant-common-considerations.md
Title: Common considerations for multi-tenant user management in Azure Active Directory description: Learn about the common design considerations for user access across Azure Active Directory tenants with guest accounts -+ Last updated 10/19/2021-+
active-directory Multi Tenant Common Solutions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/multi-tenant-common-solutions.md
Title: Common solutions for multi-tenant user management in Azure Active Directory description: Learn about common solutions used to configure user access across Azure Active Directory tenants with guest accounts -+ Last updated 09/25/2021-+
active-directory Multi Tenant User Management Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/multi-tenant-user-management-introduction.md
Title: Configuring multi-tenant user management in Azure Active Directory description: Learn about the different patterns used to configure user access across Azure Active Directory tenants with guest accounts -+ Last updated 09/25/2021-+
active-directory Multi Tenant User Management Scenarios https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/multi-tenant-user-management-scenarios.md
Title: Common scenarios for using multi-tenant user management in Azure Active Directory description: Learn about common scenarios where guest accounts can be used to configure user access across Azure Active Directory tenants -+ Last updated 09/25/2021-+
active-directory Protect M365 From On Premises Attacks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/protect-m365-from-on-premises-attacks.md
Title: Protecting Microsoft 365 from on-premises attacks description: Learn how to configure your systems to help protect your Microsoft 365 cloud environment from on-premises compromise. -+ Last updated 04/29/2022-+ - it-pro
active-directory Recover From Deletions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/recover-from-deletions.md
Title: Recover from deletions in Azure Active Directory description: Learn how to recover from unintended deletions. -+ Last updated 04/20/2022-+
active-directory Recover From Misconfigurations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/recover-from-misconfigurations.md
Title: Recover from misconfigurations in Azure Active Directory description: Learn how to recover from misconfigurations. -+ Last updated 04/20/2022-+
active-directory Recoverability Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/recoverability-overview.md
Title: Recoverability best practices in Azure Active Directory description: Learn the best practices for increasing recoverability. -+ Last updated 04/20/2022-+
active-directory Resilience B2b Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/resilience-b2b-authentication.md
Title: Build resilience in external user authentication with Azure Active Directory description: A guide for IT admins and architects to building resilient authentication for external users -+ Last updated 11/30/2020-+
active-directory Resilience In Credentials https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/resilience-in-credentials.md
Title: Build resilience with credential management in Azure Active Directory
description: A guide for architects and IT administrators on building a resilient credential strategy. -+ Last updated 11/30/2020-+
active-directory Resilience In Hybrid https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/resilience-in-hybrid.md
Title: Build more resilient hybrid authentication in Azure Active Directory description: A guide for architects and IT administrators on building a resilient hybrid infrastructure. -+ Last updated 11/30/2020-+
active-directory Resilience In Infrastructure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/resilience-in-infrastructure.md
Title: Build resilience in your IAM infrastructure with Azure Active Directory description: A guide for architects and IT administrators on building resilience to disruption of their IAM infrastructure. -+ Last updated 11/30/2020-+
active-directory Resilience On Premises Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/resilience-on-premises-access.md
Title: Build resilience in application access with Application Proxy description: A guide for architects and IT administrators on using Application Proxy for resilient access to on-premises applications -+ Last updated 11/30/2020-+
active-directory Resilience Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/resilience-overview.md
Title: Resilience in identity and access management with Azure Active Directory description: Learn how to build resilience into identity and access management. Resilience helps endure disruption to system components and recover with minimal effort. -+
Last updated 04/29/2022-+ - it-pro
active-directory Resilience With Continuous Access Evaluation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/resilience-with-continuous-access-evaluation.md
Title: Build resilience by using Continuous Access Evaluation in Azure Active Directory description: A guide for architects and IT administrators on using CAE -+
Last updated 11/30/2020-+
active-directory Resilience With Device States https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/resilience-with-device-states.md
Title: Build resilience by using device states in Azure Active Directory description: A guide for architects and IT administrators to building resilience by using device states -+ Last updated 11/30/2020-+
active-directory Security Operations Applications https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/security-operations-applications.md
Title: Azure Active Directory security operations for applications description: Learn how to monitor and alert on applications to identify security threats. -+ Last updated 07/15/2021-+
active-directory Security Operations Devices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/security-operations-devices.md
Title: Azure Active Directory security operations for devices description: Learn to establish baselines, and monitor and report on devices to identity potential security risks with devices. -+ Last updated 07/15/2021-+
active-directory Security Operations Infrastructure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/security-operations-infrastructure.md
Title: Azure Active Directory security operations for infrastructure description: Learn how to monitor and alert on infrastructure components to identify security threats. -+ Last updated 07/15/2021-+
active-directory Security Operations Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/security-operations-introduction.md
Title: Azure Active Directory security operations guide description: Learn to monitor, identify, and alert on security issues with accounts, applications, devices, and infrastructure in Azure Active Directory. -+ Last updated 04/29/2022-+ - it-pro - seodec18
active-directory Security Operations Privileged Identity Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/security-operations-privileged-identity-management.md
Title: Azure Active Directory security operations for Privileged Identity Management description: Guidance to establish baselines and use Azure Active Directory Privileged Identity Management (PIM) to monitor and alert on potential issues with accounts that are governed by PIM. -+ Last updated 07/15/2021-+
active-directory Security Operations User Accounts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/security-operations-user-accounts.md
Title: Azure Active Directory security operations for user accounts description: Guidance to establish baselines and how to monitor and alert on potential security issues with user accounts. -+ Last updated 07/15/2021-+
active-directory Service Accounts Computer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/service-accounts-computer.md
Title: Secure computer accounts | Azure Active Directory description: A guide to helping secure on-premises computer accounts. -+ Last updated 2/15/2021-+
active-directory Service Accounts Govern On Premises https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/service-accounts-govern-on-premises.md
Title: Govern on-premises service accounts | Azure Active Directory description: Use this guide to create and run an account lifecycle process for service accounts. -+ Last updated 2/15/2021-+
active-directory Service Accounts Governing Azure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/service-accounts-governing-azure.md
Title: Governing Azure Active Directory service accounts description: Principles and procedures for managing the lifecycle of service accounts in Azure Active Directory. -+ Last updated 3/1/2021-+
active-directory Service Accounts Group Managed https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/service-accounts-group-managed.md
Title: Secure group managed service accounts | Azure Active Directory description: A guide to securing group managed service account (gMSA) computer accounts. -+ Last updated 2/15/2021-+
active-directory Service Accounts Introduction Azure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/service-accounts-introduction-azure.md
Title: Introduction to securing Azure Active Directory service accounts description: Explanation of the types of service accounts available in Azure Active Directory. -+ Last updated 04/21/2022-+
active-directory Service Accounts Managed Identities https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/service-accounts-managed-identities.md
Title: Securing managed identities in Azure Active Directory description: Explanation of how to find, assess, and increase the security of managed identities. -+ Last updated 3/1/2021-+
active-directory Service Accounts On Premises https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/service-accounts-on-premises.md
Title: Introduction to Active Directory service accounts description: An introduction to the types of service accounts in Active Directory, and how to secure them. -+ Last updated 04/21/2022-+
active-directory Service Accounts Principal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/service-accounts-principal.md
Title: Securing service principals in Azure Active Directory description: Find, assess, and secure service principals. -+ Last updated 2/15/2021-+
active-directory Service Accounts Standalone Managed https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/service-accounts-standalone-managed.md
Title: Secure standalone managed service accounts | Azure Active Directory description: A guide to securing standalone managed service accounts. -+ Last updated 2/15/2021-+
active-directory Service Accounts User On Premises https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/service-accounts-user-on-premises.md
Title: Secure user-based service accounts | Azure Active Directory description: A guide to securing user-based service accounts. -+ Last updated 2/15/2021-+
active-directory Sync Directory https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/sync-directory.md
Title: Directory synchronization with Azure Active Directory description: Architectural guidance on achieving directory synchronization with Azure Active Directory. -+
Last updated 10/10/2020-+
active-directory Sync Ldap https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/sync-ldap.md
Title: LDAP synchronization with Azure Active Directory description: Architectural guidance on achieving LDAP synchronization with Azure Active Directory. -+
Last updated 10/10/2020-+
active-directory Sync Scim https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/sync-scim.md
Title: SCIM synchronization with Azure Active Directory description: Architectural guidance on achieving SCIM synchronization with Azure Active Directory. -+
Last updated 10/10/2020-+
active-directory Migrate From Federation To Cloud Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/migrate-from-federation-to-cloud-authentication.md
Last updated 04/15/2022 --++
Although this deployment changes no other relying parties in your AD FS farm, yo
## Plan the project
-When technology projects fail, itΓÇÖs typically because of mismatched expectations on impact, outcomes, and responsibilities. To avoid these pitfalls, [ensure that youΓÇÖre engaging the right stakeholders](../fundamentals/active-directory-deployment-plans.md#include-the-right-stakeholders) and that stakeholder roles in the project are well understood.
+When technology projects fail, it's typically because of mismatched expectations on impact, outcomes, and responsibilities. To avoid these pitfalls, [ensure that you're engaging the right stakeholders](../fundamentals/active-directory-deployment-plans.md#include-the-right-stakeholders) and that stakeholder roles in the project are well understood.
### Plan communications
Proactively communicate with your users how their experience will change, when i
After the domain conversion, Azure AD might continue to send some legacy authentication requests from Exchange Online to your AD FS servers for up to four hours. The delay is because the Exchange Online cache for [legacy applications authentication](../fundamentals/concept-fundamentals-block-legacy-authentication.md) can take up to four hours to become aware of the cutover from federation to cloud authentication.
-During this four-hour window, you may prompt users for credentials repeatedly when reauthenticating to applications that use legacy authentication. Although the user can still successfully authenticate against AD FS, Azure AD no longer accepts the userΓÇÖs issued token because that federation trust is now removed.
+During this four-hour window, users may be prompted for credentials repeatedly when they reauthenticate to applications that use legacy authentication. Although the user can still successfully authenticate against AD FS, Azure AD no longer accepts the user's issued token because that federation trust is now removed.
-Existing Legacy clients (Exchange ActiveSync, Outlook 2010/2013) aren't affected because Exchange Online keeps a cache of their credentials for a set period of time. The cache is used to silently reauthenticate the user. The user doesnΓÇÖt have to return to AD FS. Credentials stored on the device for these clients are used to silently reauthenticate themselves after the cached is cleared. Users arenΓÇÖt expected to receive any password prompts as a result of the domain conversion process.
+Existing legacy clients (Exchange ActiveSync, Outlook 2010/2013) aren't affected because Exchange Online keeps a cache of their credentials for a set period of time. The cache is used to silently reauthenticate the user, so the user doesn't have to return to AD FS. Credentials stored on the device for these clients are used to silently reauthenticate after the cache is cleared. Users aren't expected to receive any password prompts as a result of the domain conversion process.
Modern authentication clients (Office 2016 and Office 2013, iOS, and Android apps) use a valid refresh token to obtain new access tokens for continued access to resources instead of returning to AD FS. These clients are immune to any password prompts resulting from the domain conversion process. The clients will continue to function without extra configuration.
You can [customize the Azure AD sign-in page](../fundamentals/customize-branding
### Plan for conditional access policies
-Evaluate if youΓÇÖre currently using conditional access for authentication, or if you use access control policies in AD FS.
+Evaluate if you're currently using conditional access for authentication, or if you use access control policies in AD FS.
Consider replacing AD FS access control policies with the equivalent Azure AD [Conditional Access policies](../conditional-access/overview.md) and [Exchange Online Client Access Rules](/exchange/clients-and-mobile-in-exchange-online/client-access-rules/client-access-rules). You can use either Azure AD or on-premises groups for conditional access.
You have two options for enabling this change:
- **Option B:** Switch using Azure AD Connect and PowerShell
- *Available if you didnΓÇÖt initially configure your federated domains by using Azure AD Connect or if you're using third-party federation services*.
+ *Available if you didn't initially configure your federated domains by using Azure AD Connect or if you're using third-party federation services*.
To choose one of these options, you must know what your current settings are.
Sign in to the [Azure AD portal](https://aad.portal.azure.com/), select **Azure
![View AD FS configuration](media/deploy-cloud-user-authentication/federation-configuration.png)
- If AD FS isnΓÇÖt listed in the current settings, you must manually convert your domains from federated identity to managed identity by using PowerShell.
+ If AD FS isn't listed in the current settings, you must manually convert your domains from federated identity to managed identity by using PowerShell.
#### Option A
Sign in to the [Azure AD portal](https://aad.portal.azure.com/), select **Azure
Domain Administrator account credentials are required to enable seamless SSO. The process completes the following actions, which require these elevated permissions:
 - A computer account named AZUREADSSO (which represents Azure AD) is created in your on-premises Active Directory instance.
- - The computer accountΓÇÖs Kerberos decryption key is securely shared with Azure AD.
+ - The computer account's Kerberos decryption key is securely shared with Azure AD.
 - Two Kerberos service principal names (SPNs) are created to represent two URLs that are used during Azure AD sign-in.

The domain administrator credentials aren't stored in Azure AD Connect or Azure AD; they're discarded when the process finishes successfully. They're used only to turn on this feature.
Sign in to the [Azure AD portal](https://aad.portal.azure.com/), select **Azure
##### Deploy more authentication agents for PTA
>[!NOTE]
-> PTA requires deploying lightweight agents on the Azure AD Connect server and on your on-premises computer thatΓÇÖs running Windows server. To reduce latency, install the agents as close as possible to your Active Directory domain controllers.
+> PTA requires deploying lightweight agents on the Azure AD Connect server and on your on-premises computer that's running Windows Server. To reduce latency, install the agents as close as possible to your Active Directory domain controllers.
For most customers, two or three authentication agents are sufficient to provide high availability and the required capacity. A tenant can have a maximum of 12 agents registered. The first agent is always installed on the Azure AD Connect server itself. To learn about agent limitations and agent deployment options, see [Azure AD pass-through authentication: Current limitations](how-to-connect-pta-current-limitations.md).
For most customers, two or three authentication agents are sufficient to provide high availability and the required capacity.
**Switch from federation to the new sign-in method by using Azure AD Connect and PowerShell**
-*Available if you didnΓÇÖt initially configure your federated domains by using Azure AD Connect or if you're using third-party federation services.*
+*Available if you didn't initially configure your federated domains by using Azure AD Connect or if you're using third-party federation services.*
On your Azure AD Connect server, follow steps 1 through 5 in [Option A](#option-a). You will notice that on the User sign-in page, the **Do not configure** option is pre-selected.
On your Azure AD Connect server, follow the steps 1- 5 in [Option A](#option-a).
![Pass-through authentication settings](media/deploy-cloud-user-authentication/pass-through-authentication-settings.png)
- If the authentication agent isnΓÇÖt active, complete these [troubleshooting steps](tshoot-connect-pass-through-authentication.md) before you continue with the domain conversion process in the next step. You risk causing an authentication outage if you convert your domains before you validate that your PTA agents are successfully installed and that their status is **Active** in the Azure portal.
+ If the authentication agent isn't active, complete these [troubleshooting steps](tshoot-connect-pass-through-authentication.md) before you continue with the domain conversion process in the next step. You risk causing an authentication outage if you convert your domains before you validate that your PTA agents are successfully installed and that their status is **Active** in the Azure portal.
3. [Deploy more authentication agents](#deploy-more-authentication-agents-for-pta).
On your Azure AD Connect server, follow the steps 1- 5 in [Option A](#option-a).
**At this point, federated authentication is still active and operational for your domains**. To continue with the deployment, you must convert each domain from federated identity to managed identity.
>[!IMPORTANT]
-> You donΓÇÖt have to convert all domains at the same time. You might choose to start with a test domain on your production tenant or start with your domain that has the lowest number of users.
+> You don't have to convert all domains at the same time. You might choose to start with a test domain on your production tenant or start with your domain that has the lowest number of users.
**Complete the conversion by using the Azure AD PowerShell module:**
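
A sketch of the conversion with the MSOnline (Azure AD) PowerShell module follows; `contoso.com` is a placeholder for your own federated domain name:

```powershell
# Connect with the MSOnline (Azure AD) PowerShell module.
Connect-MsolService

# Convert the federated domain to managed authentication.
Set-MsolDomainAuthentication -Authentication Managed -DomainName contoso.com

# Confirm that the domain's authentication type now shows Managed.
Get-MsolDomain
```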
Complete the following tasks to verify the sign-in method and to finish the conversion.
### Test the new sign-in method
-When your tenant used federated identity, users were redirected from the Azure AD sign-in page to your AD FS environment. Now that the tenant is configured to use the new sign-in method instead of federated authentication, users arenΓÇÖt redirected to AD FS.
+When your tenant used federated identity, users were redirected from the Azure AD sign-in page to your AD FS environment. Now that the tenant is configured to use the new sign-in method instead of federated authentication, users aren't redirected to AD FS.
**Instead, users sign in directly on the Azure AD sign-in page.**
If you used staged rollout, you should remember to turn off the staged rollout features.
Historically, updates to the **UserPrincipalName** attribute, which uses the sync service from the on-premises environment, are blocked unless both of these conditions are true:
 - The user is in a managed (non-federated) identity domain.
- - The user hasnΓÇÖt been assigned a license.
+ - The user hasn't been assigned a license.
To learn how to verify or turn on this feature, see [Sync userPrincipalName updates](how-to-connect-syncservice-features.md).
Your support team should understand how to troubleshoot any authentication issue
Migration requires assessing how the application is configured on-premises, and then mapping that configuration to Azure AD.
-If you plan to keep using AD FS with on-premises & SaaS Applications using SAML / WS-FED or Oauth protocol, youΓÇÖll use both AD FS and Azure AD after you convert the domains for user authentication. In this case, you can protect your on-premises applications and resources with Secure Hybrid Access (SHA) through [Azure AD Application Proxy](../app-proxy/what-is-application-proxy.md) or one of [Azure AD partner integrations](../manage-apps/secure-hybrid-access.md). Using Application Proxy or one of our partners can provide secure remote access to your on-premises applications. Users benefit by easily connecting to their applications from any device after a [single sign-on](../manage-apps/add-application-portal-setup-sso.md).
+If you plan to keep using AD FS with on-premises and SaaS applications that use the SAML/WS-Fed or OAuth protocols, you'll use both AD FS and Azure AD after you convert the domains for user authentication. In this case, you can protect your on-premises applications and resources with Secure Hybrid Access (SHA) through [Azure AD Application Proxy](../app-proxy/what-is-application-proxy.md) or one of the [Azure AD partner integrations](../manage-apps/secure-hybrid-access.md). Using Application Proxy or one of our partners can provide secure remote access to your on-premises applications. Users benefit by easily connecting to their applications from any device after a [single sign-on](../manage-apps/add-application-portal-setup-sso.md).
You can move SaaS applications that are currently federated with AD FS to Azure AD. Reconfigure them to authenticate with Azure AD either via a built-in connector from the [Azure App gallery](https://azuremarketplace.microsoft.com/marketplace/apps/category/azure-active-directory-apps), or by [registering the application in Azure AD](../develop/quickstart-register-app.md).
For more information, see –
### Remove relying party trust
-If you have Azure AD Connect Health, you can [monitor usage](how-to-connect-health-adfs.md) from the Azure portal. In case the usage shows no new auth req and you validate that all users and clients are successfully authenticating via Azure AD, itΓÇÖs safe to remove the Microsoft 365 relying party trust.
+If you have Azure AD Connect Health, you can [monitor usage](how-to-connect-health-adfs.md) from the Azure portal. If usage shows no new authentication requests and you've validated that all users and clients are successfully authenticating via Azure AD, it's safe to remove the Microsoft 365 relying party trust.
-If you donΓÇÖt use AD FS for other purposes (that is, for other relying party trusts), you can decommission AD FS at this point.
+If you don't use AD FS for other purposes (that is, for other relying party trusts), you can decommission AD FS at this point.
## Next steps
active-directory How Manage User Assigned Managed Identities https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/managed-identities-azure-resources/how-manage-user-assigned-managed-identities.md
Title: Manage user-assigned managed identities - Azure AD
description: Create user-assigned managed identities. -+ editor:
active-directory How Managed Identities Work Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/managed-identities-azure-resources/how-managed-identities-work-vm.md
description: Description of managed identities for Azure resources work with Azu
documentationcenter: -+ editor: ms.assetid: 0232041d-b8f5-4bd2-8d11-27999ad69370
active-directory How To Managed Identity Regional Move https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/managed-identities-azure-resources/how-to-managed-identity-regional-move.md
Title: Move managed identities to another region - Azure AD
description: Steps involved in getting a managed identity recreated in another region -+
active-directory How To Use Vm Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/managed-identities-azure-resources/how-to-use-vm-sdk.md
description: Code samples for using Azure SDKs with an Azure VM that has managed
documentationcenter: -+ editor:
active-directory How To Use Vm Sign In https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/managed-identities-azure-resources/how-to-use-vm-sign-in.md
description: Step-by-step instructions and examples for using an Azure VM-manage
documentationcenter: -+ editor:
active-directory How To Use Vm Token https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/managed-identities-azure-resources/how-to-use-vm-token.md
description: Step-by-step instructions and examples for using managed identities
documentationcenter: -+ editor:
active-directory How To View Managed Identity Service Principal Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/managed-identities-azure-resources/how-to-view-managed-identity-service-principal-portal.md
description: Step-by-step instructions for viewing the service principal of a ma
documentationcenter: '' -+ editor: ''
active-directory How To View Managed Identity Service Principal Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/managed-identities-azure-resources/how-to-view-managed-identity-service-principal-powershell.md
description: Step-by-step instructions for viewing the service principal of a ma
documentationcenter: '' -+ editor: ''
active-directory Howto Assign Access Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/managed-identities-azure-resources/howto-assign-access-cli.md
description: Step-by-step instructions for assigning a managed identity on one r
documentationcenter: -+ editor:
active-directory Howto Assign Access Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/managed-identities-azure-resources/howto-assign-access-portal.md
description: Step-by-step instructions for assigning a managed identity on one r
documentationcenter: -+ editor:
active-directory Howto Assign Access Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/managed-identities-azure-resources/howto-assign-access-powershell.md
description: Step-by-step instructions for assigning a managed identity on one r
documentationcenter: -+ editor:
active-directory Known Issues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/managed-identities-azure-resources/known-issues.md
description: Known issues with managed identities for Azure resources.
documentationcenter: -+ editor: ms.assetid: 2097381a-a7ec-4e3b-b4ff-5d2fb17403b6
active-directory Managed Identities Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/managed-identities-azure-resources/managed-identities-faq.md
description: Frequently asked questions about managed identities
documentationcenter: -+ editor:
active-directory Managed Identities Status https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/managed-identities-azure-resources/managed-identities-status.md
Last updated 01/10/2022
-+
active-directory Managed Identity Best Practice Recommendations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/managed-identities-azure-resources/managed-identity-best-practice-recommendations.md
description: Recommendations on when to use user-assigned versus system-assigned
documentationcenter: -+ editor:
active-directory Msi Tutorial Linux Vm Access Arm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/managed-identities-azure-resources/msi-tutorial-linux-vm-access-arm.md
description: A tutorial that walks you through the process of using a user-assig
documentationcenter: '' -+ editor: daveba
active-directory Overview For Developers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/managed-identities-azure-resources/overview-for-developers.md
description: An overview how developers can use managed identities for Azure res
documentationcenter: -+ editor: ms.assetid: 0232041d-b8f5-4bd2-8d11-27999ad69370
active-directory Qs Configure Cli Windows Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/managed-identities-azure-resources/qs-configure-cli-windows-vm.md
Title: Configure managed identities on Azure VM using Azure CLI - Azure AD description: Step-by-step instructions for configuring system and user-assigned managed identities on an Azure VM using Azure CLI. -+
active-directory Qs Configure Cli Windows Vmss https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/managed-identities-azure-resources/qs-configure-cli-windows-vmss.md
description: Step-by-step instructions for configuring system and user-assigned
documentationcenter: -+ editor:
active-directory Qs Configure Portal Windows Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/managed-identities-azure-resources/qs-configure-portal-windows-vm.md
description: Step-by-step instructions for configuring managed identities for Az
documentationcenter: '' -+ editor: ''
active-directory Qs Configure Portal Windows Vmss https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/managed-identities-azure-resources/qs-configure-portal-windows-vmss.md
description: Step-by-step instructions for configuring managed identities for Az
documentationcenter: '' -+ editor: ''
active-directory Qs Configure Rest Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/managed-identities-azure-resources/qs-configure-rest-vm.md
description: Step-by-step instructions for configuring a system and user-assigne
documentationcenter: -+ editor:
active-directory Qs Configure Rest Vmss https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/managed-identities-azure-resources/qs-configure-rest-vmss.md
description: Step-by-step instructions for configuring a system and user-assigne
documentationcenter: -+ editor:
active-directory Qs Configure Sdk Windows Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/managed-identities-azure-resources/qs-configure-sdk-windows-vm.md
description: Step-by-step instructions for configuring and using managed identit
documentationcenter: '' -+ editor: ''
active-directory Qs Configure Template Windows Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/managed-identities-azure-resources/qs-configure-template-windows-vm.md
description: Step-by-step instructions for configuring managed identities for Az
documentationcenter: '' -+ editor: ''
active-directory Qs Configure Template Windows Vmss https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/managed-identities-azure-resources/qs-configure-template-windows-vmss.md
description: Step-by-step instructions for configuring managed identities for Az
documentationcenter: '' -+ editor: ''
active-directory Services Azure Active Directory Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/managed-identities-azure-resources/services-azure-active-directory-support.md
Last updated 02/01/2022
-+ # Azure services that support Azure AD authentication
active-directory Tutorial Linux Vm Access Arm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/managed-identities-azure-resources/tutorial-linux-vm-access-arm.md
description: A quickstart that walks you through the process of using a Linux VM
documentationcenter: '' -+ editor: bryanla
active-directory Tutorial Linux Vm Access Cosmos Db https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/managed-identities-azure-resources/tutorial-linux-vm-access-cosmos-db.md
description: A tutorial that walks you through the process of using a Linux VM s
documentationcenter: -+ editor:
active-directory Tutorial Linux Vm Access Datalake https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/managed-identities-azure-resources/tutorial-linux-vm-access-datalake.md
description: A tutorial that shows you how to use a Linux VM system-assigned man
documentationcenter: -+ editor:
active-directory Tutorial Linux Vm Access Nonaad https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/managed-identities-azure-resources/tutorial-linux-vm-access-nonaad.md
description: A tutorial that walks you through the process of using a Linux VM s
documentationcenter: '' -+ editor: daveba
active-directory Tutorial Linux Vm Access Storage Access Key https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/managed-identities-azure-resources/tutorial-linux-vm-access-storage-access-key.md
description: A tutorial that walks you through the process of using a Linux VM s
documentationcenter: '' -+ editor: daveba
active-directory Tutorial Linux Vm Access Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/managed-identities-azure-resources/tutorial-linux-vm-access-storage.md
description: A tutorial that walks you through the process of using a Linux VM s
documentationcenter: -+ editor:
active-directory Tutorial Vm Windows Access Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/managed-identities-azure-resources/tutorial-vm-windows-access-storage.md
description: A tutorial that walks you through the process of using a Windows VM
documentationcenter: '' -+ editor: daveba
active-directory Tutorial Windows Vm Access Arm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/managed-identities-azure-resources/tutorial-windows-vm-access-arm.md
description: A tutorial that walks you through the process of using a Windows VM
documentationcenter: '' -+ editor: daveba
active-directory Tutorial Windows Vm Access Cosmos Db https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/managed-identities-azure-resources/tutorial-windows-vm-access-cosmos-db.md
description: A tutorial that walks you through the process of using a system-ass
documentationcenter: '' -+ editor:
active-directory Tutorial Windows Vm Access Datalake https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/managed-identities-azure-resources/tutorial-windows-vm-access-datalake.md
description: A tutorial that shows you how to use a Windows VM system-assigned m
documentationcenter: -+ editor:
active-directory Tutorial Windows Vm Access Nonaad https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/managed-identities-azure-resources/tutorial-windows-vm-access-nonaad.md
description: A tutorial that walks you through the process of using a Windows VM
documentationcenter: '' -+ editor: daveba
active-directory Tutorial Windows Vm Access Sql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/managed-identities-azure-resources/tutorial-windows-vm-access-sql.md
description: A tutorial that walks you through the process of using a Windows VM
documentationcenter: '' -+
active-directory Tutorial Windows Vm Access Storage Sas https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/managed-identities-azure-resources/tutorial-windows-vm-access-storage-sas.md
description: A tutorial that shows you how to use a Windows VM system-assigned m
documentationcenter: '' -+ editor: daveba
active-directory Tutorial Windows Vm Ua Arm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/managed-identities-azure-resources/tutorial-windows-vm-ua-arm.md
description: A tutorial that walks you through the process of using a user-assig
documentationcenter: '' -+ editor:
active-directory Delegate By Task https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/roles/delegate-by-task.md
You can further restrict permissions by assigning roles at smaller scopes or by
> | Read all configuration | [Security Reader](permissions-reference.md#security-reader) | | > | Read users flagged for risk | [Security Reader](permissions-reference.md#security-reader) | |
-## Temporary Access Pass (Preview)
+## Temporary Access Pass
> [!div class="mx-tableFixed"] > | Task | Least privileged role | Additional roles |
You can further restrict permissions by assigning roles at smaller scopes or by
- [Assign Azure AD roles to users](manage-roles-portal.md) - [Assign Azure AD roles at different scopes](assign-roles-different-scopes.md) - [Create and assign a custom role in Azure Active Directory](custom-create.md)-- [Azure AD built-in roles](permissions-reference.md)
+- [Azure AD built-in roles](permissions-reference.md)
active-directory Articulate360 Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/articulate360-tutorial.md
+
+ Title: 'Tutorial: Azure AD SSO integration with Articulate 360'
+description: Learn how to configure single sign-on between Azure Active Directory and Articulate 360.
++++++++ Last updated : 06/27/2022++++
+# Tutorial: Azure AD SSO integration with Articulate 360
+
+In this tutorial, you'll learn how to integrate Articulate 360 with Azure Active Directory (Azure AD). When you integrate Articulate 360 with Azure AD, you can:
+
+* Control in Azure AD who has access to Articulate 360.
+* Enable your users to be automatically signed-in to Articulate 360 with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
+
+## Prerequisites
+
+To get started, you need the following items:
+
+* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+* Articulate 360 single sign-on (SSO) enabled subscription.
+* Along with Cloud Application Administrator, Application Administrator can also add or manage applications in Azure AD.
+For more information, see [Azure built-in roles](../roles/permissions-reference.md).
+
+## Scenario description
+
+In this tutorial, you configure and test Azure AD SSO in a test environment.
+
+* Articulate 360 supports **SP** and **IDP** initiated SSO.
+
+## Add Articulate 360 from the gallery
+
+To configure the integration of Articulate 360 into Azure AD, you need to add Articulate 360 from the gallery to your list of managed SaaS apps.
+
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
+1. On the left navigation pane, select the **Azure Active Directory** service.
+1. Navigate to **Enterprise Applications** and then select **All Applications**.
+1. To add a new application, select **New application**.
+1. In the **Add from the gallery** section, type **Articulate 360** in the search box.
+1. Select **Articulate 360** from the results panel and then add the app. Wait a few seconds while the app is added to your tenant.
+
+## Configure and test Azure AD SSO for Articulate 360
+
+Configure and test Azure AD SSO with Articulate 360 using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in Articulate 360.
+
+To configure and test Azure AD SSO with Articulate 360, perform the following steps:
+
+1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
+ 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
+ 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
+1. **[Configure Articulate 360 SSO](#configure-articulate-360-sso)** - to configure the single sign-on settings on application side.
+ 1. **[Create Articulate 360 test user](#create-articulate-360-test-user)** - to have a counterpart of B.Simon in Articulate 360 that is linked to the Azure AD representation of user.
+1. **[Test SSO](#test-sso)** - to verify whether the configuration works.
+
+## Configure Azure AD SSO
+
+Follow these steps to enable Azure AD SSO in the Azure portal.
+
+1. In the Azure portal, on the **Articulate 360** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
+
+ ![Screenshot shows to edit Basic S A M L Configuration.](common/edit-urls.png "Basic Configuration")
+
+1. On the **Basic SAML Configuration** section, perform the following steps:
+
+ a. In the **Identifier** textbox, type a URL using the following pattern:
+ `https://www.okta.com/saml2/service-provider/<SAMPLE>`
+
+ b. In the **Reply URL** textbox, type the URL:
+ `https://id.articulate.com/sso/saml2`
+
+1. Click **Set additional URLs** and perform the following step if you wish to configure the application in **SP** initiated mode:
+
+ In the **Sign-on URL** text box, type the URL:
+ `https://id.articulate.com/`
+
+ > [!Note]
+ > The Identifier value is not real. Update this value with the actual Identifier. Contact [Articulate 360 support team](mailto:enterprise@articulate.com) to get the value. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
+
+1. The Articulate 360 application expects the SAML assertions in a specific format, which requires you to add custom attribute mappings to your SAML token attributes configuration. The following screenshot shows the list of default attributes.
+
+ ![Screenshot shows the image of attributes.](common/default-attributes.png "Attributes")
+
+1. In addition to the above, the Articulate 360 application expects a few more attributes to be passed back in the SAML response; these are shown below. These attributes are also prepopulated, but you can review them per your requirements.
+
+ | Name | Source Attribute|
+ | | |
+ | firstName | user.givenname |
+ | lastName | user.surname |
+ | email | user.mail |
+
+1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Federation Metadata XML** and select **Download** to download the certificate and save it on your computer. (A quick way to inspect the downloaded file is sketched at the end of this section.)
+
+ ![Screenshot shows the Certificate download link.](common/metadataxml.png "Certificate")
+
+1. On the **Set up Articulate 360** section, copy the appropriate URL(s) based on your requirement.
+
+ ![Screenshot shows to copy configuration appropriate U R L.](common/copy-configuration-urls.png "Attributes")
+
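If you want to sanity-check the downloaded Federation Metadata XML before forwarding it, a minimal Python sketch such as the following can parse it. The file name is a placeholder; the XML namespace is the standard XML-DSig one used by SAML 2.0 metadata.

```python
# Minimal sketch: parse the downloaded Federation Metadata XML and print the
# entity ID and signing certificate. "federation-metadata.xml" is a
# placeholder path.
import xml.etree.ElementTree as ET

NS = {"ds": "http://www.w3.org/2000/09/xmldsig#"}

root = ET.parse("federation-metadata.xml").getroot()
print("entityID:", root.get("entityID"))

# Base64 signing certificate(s) advertised by the identity provider
for cert in root.findall(".//ds:X509Certificate", NS):
    print("Signing certificate (truncated):", cert.text.strip()[:40], "...")
```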
+### Create an Azure AD test user
+
+In this section, you'll create a test user in the Azure portal called B.Simon.
+
+1. From the left pane in the Azure portal, select **Azure Active Directory**, select **Users**, and then select **All users**.
+1. Select **New user** at the top of the screen.
+1. In the **User** properties, follow these steps:
+ 1. In the **Name** field, enter `B.Simon`.
+ 1. In the **User name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`.
+ 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box.
+ 1. Click **Create**.
+
+### Assign the Azure AD test user
+
+In this section, you'll enable B.Simon to use Azure single sign-on by granting access to Articulate 360.
+
+1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
+1. In the applications list, select **Articulate 360**.
+1. In the app's overview page, find the **Manage** section and select **Users and groups**.
+1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
+1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
+1. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you see "Default Access" role selected.
+1. In the **Add Assignment** dialog, click the **Assign** button.
+
+## Configure Articulate 360 SSO
+
+To configure single sign-on on the **Articulate 360** side, you need to send the downloaded **Federation Metadata XML** and the appropriate copied URLs from the Azure portal to the [Articulate 360 support team](mailto:enterprise@articulate.com). They configure this setting so that the SAML SSO connection is set up properly on both sides.
+
+### Create Articulate 360 test user
+
+In this section, a user called B.Simon is created in Articulate 360. Articulate 360 supports just-in-time user provisioning, which is enabled by default. There is no action item for you in this section. If a user doesn't already exist in Articulate 360, a new one is created after authentication.
+
+## Test SSO
+
+In this section, you test your Azure AD single sign-on configuration with the following options.
+
+#### SP initiated:
+
+* Click **Test this application** in the Azure portal. This redirects to the Articulate 360 Sign-On URL, where you can initiate the login flow.
+
+* Go to the Articulate 360 Sign-On URL directly and initiate the login flow from there.
+
+#### IDP initiated:
+
+* Click **Test this application** in the Azure portal, and you should be automatically signed in to the Articulate 360 instance for which you set up SSO.
+
+You can also use Microsoft My Apps to test the application in any mode. When you click the Articulate 360 tile in My Apps, if the app is configured in SP mode you are redirected to the application sign-on page to initiate the login flow; if it's configured in IDP mode, you should be automatically signed in to the Articulate 360 instance for which you set up SSO. For more information about My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md).
+
+## Next steps
+
+Once you configure Articulate 360, you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
active-directory Infrascale Cloud Backup Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/infrascale-cloud-backup-tutorial.md
+
+ Title: 'Tutorial: Azure AD SSO integration with Infrascale Cloud Backup'
+description: Learn how to configure single sign-on between Azure Active Directory and Infrascale Cloud Backup.
++++++++ Last updated : 06/24/2022++++
+# Tutorial: Azure AD SSO integration with Infrascale Cloud Backup
+
+In this tutorial, you'll learn how to integrate Infrascale Cloud Backup with Azure Active Directory (Azure AD). When you integrate Infrascale Cloud Backup with Azure AD, you can:
+
+* Control in Azure AD who has access to Infrascale Cloud Backup.
+* Enable your users to be automatically signed-in to Infrascale Cloud Backup with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
+
+## Prerequisites
+
+To get started, you need the following items:
+
+* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+* Infrascale Cloud Backup single sign-on (SSO) enabled subscription.
+* Along with Cloud Application Administrator, Application Administrator can also add or manage applications in Azure AD. For more information, see [Azure built-in roles](../roles/permissions-reference.md).
+
+## Scenario description
+
+In this tutorial, you configure and test Azure AD SSO in a test environment.
+
+* Infrascale Cloud Backup supports **SP** initiated SSO.
+
+## Add Infrascale Cloud Backup from the gallery
+
+To configure the integration of Infrascale Cloud Backup into Azure AD, you need to add Infrascale Cloud Backup from the gallery to your list of managed SaaS apps.
+
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
+1. On the left navigation pane, select the **Azure Active Directory** service.
+1. Navigate to **Enterprise Applications** and then select **All Applications**.
+1. To add a new application, select **New application**.
+1. In the **Add from the gallery** section, type **Infrascale Cloud Backup** in the search box.
+1. Select **Infrascale Cloud Backup** from the results panel and then add the app. Wait a few seconds while the app is added to your tenant.
+
+## Configure and test Azure AD SSO for Infrascale Cloud Backup
+
+Configure and test Azure AD SSO with Infrascale Cloud Backup using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in Infrascale Cloud Backup.
+
+To configure and test Azure AD SSO with Infrascale Cloud Backup, perform the following steps:
+
+1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
+ 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
+ 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
+1. **[Configure Infrascale Cloud Backup SSO](#configure-infrascale-cloud-backup-sso)** - to configure the single sign-on settings on application side.
+ 1. **[Create Infrascale Cloud Backup test user](#create-infrascale-cloud-backup-test-user)** - to have a counterpart of B.Simon in Infrascale Cloud Backup that is linked to the Azure AD representation of user.
+1. **[Test SSO](#test-sso)** - to verify whether the configuration works.
+
+## Configure Azure AD SSO
+
+Follow these steps to enable Azure AD SSO in the Azure portal.
+
+1. In the Azure portal, on the **Infrascale Cloud Backup** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
+
+ ![Screenshot shows to edit Basic S A M L Configuration.](common/edit-urls.png "Basic Configuration")
+
+1. On the **Basic SAML Configuration** section, perform the following steps:
+
+ a. In the **Identifier** textbox, type a URL using the following pattern:
+ `https://dashboard.sosonlinebackup.com/<ID>`
+
+ b. In the **Reply URL** textbox, type the URL:
+ `https://dashboard.managedoffsitebackup.net/Account/AssertionConsumerService`
+
+ c. In the **Sign-on URL** text box, type one of the following URLs:
+
+    | **Sign-on URL** |
+    ||
+    | `https://dashboard.avgonlinebackup.com/Account/SingleSignOn` |
+    | `https://dashboard.infrascale.com/Account/SingleSignOn` |
+    | `https://dashboard.managedoffsitebackup.net/Account/SingleSignOn` |
+    | `https://dashboard.sosonlinebackup.com/Account/SingleSignOn` |
+    | `https://dashboard.trustboxbackup.com/Account/SingleSignOn` |
+    | `https://radialpoint-dashboard.managedoffsitebackup.net/Account/SingleSignOn` |
+    | `https://dashboard-cw.infrascale.com/Account/SingleSignOn` |
+    | `https://dashboard.digicelcloudbackup.com/Account/SingleSignOn` |
+    | `https://dashboard-cw.sosonlinebackup.com/Account/SingleSignOn` |
+    | `https://dashboard.my-data.dk/Account/SingleSignOn` |
+    | `https://dashboard.beesafe.nu/Account/SingleSignOn` |
+    | `https://dashboard.bekcloud.com/Account/SingleSignOn` |
+    | `https://dashboard.alltimesecure.com/Account/SingleSignOn` |
+    | `https://dashboard-ec1.sosonlinebackup.com/Account/SingleSignOn` |
+    | `https://dashboard.glcsecurecloud.com/Account/SingleSignOn` |
+    | `https://dashboard.infrascalecloudbackup.com/Account/SingleSignOn` |
+
+ > [!Note]
+ > The Identifier value is not real. Update this value with the actual Identifier URL. Contact [Infrascale Cloud Backup support team](mailto:support@infrascale.com) to get the value. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
+
+1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, select the copy button to copy the **App Federation Metadata Url**, and save it on your computer. (A quick check of the copied URL is sketched at the end of this section.)
+
+ ![Screenshot shows the Certificate download link.](common/copy-metadataurl.png "Certificate")
+
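To quickly confirm that the copied **App Federation Metadata Url** resolves before handing it to Infrascale, a short sketch like the following works. The tenant ID and app ID below are placeholders to replace with your own values; the URL shape is the one Azure AD typically serves for per-app federation metadata.

```python
# Minimal sketch: check that the App Federation Metadata Url returns SAML
# metadata. Replace the placeholder URL with the value copied from the portal.
import urllib.request

metadata_url = (
    "https://login.microsoftonline.com/<TENANT_ID>"
    "/federationmetadata/2007-06/federationmetadata.xml?appid=<APP_ID>"
)

with urllib.request.urlopen(metadata_url) as resp:
    body = resp.read().decode("utf-8")

# Federation metadata is an XML EntityDescriptor document
assert "EntityDescriptor" in body, "response does not look like SAML metadata"
print("metadata OK:", len(body), "bytes")
```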
+### Create an Azure AD test user
+
+In this section, you'll create a test user in the Azure portal called B.Simon.
+
+1. From the left pane in the Azure portal, select **Azure Active Directory**, select **Users**, and then select **All users**.
+1. Select **New user** at the top of the screen.
+1. In the **User** properties, follow these steps:
+ 1. In the **Name** field, enter `B.Simon`.
+ 1. In the **User name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`.
+ 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box.
+ 1. Click **Create**.
+
+### Assign the Azure AD test user
+
+In this section, you'll enable B.Simon to use Azure single sign-on by granting access to Infrascale Cloud Backup.
+
+1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
+1. In the applications list, select **Infrascale Cloud Backup**.
+1. In the app's overview page, find the **Manage** section and select **Users and groups**.
+1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
+1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
+1. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you see "Default Access" role selected.
+1. In the **Add Assignment** dialog, click the **Assign** button.
+
+## Configure Infrascale Cloud Backup SSO
+
+1. Log in to your Infrascale Cloud Backup company site as an administrator.
+
+1. Go to **Settings** > **Single Sign-On** and select **Enable Single Sign-On (SSO)**.
+
+1. In the **Single Sign-On Settings** page, perform the following steps:
+
+ ![Screenshot that shows the Configuration Settings.](./media/infrascale-cloud-backup-tutorial/settings.png "Configuration")
+
+    a. Copy the **Service Provider EntityID** value and paste it into the **Identifier** text box in the **Basic SAML Configuration** section in the Azure portal.
+
+    b. Copy the **Reply URL** value and paste it into the **Reply URL** text box in the **Basic SAML Configuration** section in the Azure portal.
+
+    c. Select the **Via metadata URL** button under the **Identity Provider Settings** section.
+
+    d. Copy the **App Federation Metadata Url** from the Azure portal and paste it into the **Metadata URL** textbox.
+
+    e. Click **Save**.
+
+### Create Infrascale Cloud Backup test user
+
+In this section, you create a user called Britta Simon in Infrascale Cloud Backup. Work with the [Infrascale Cloud Backup support team](mailto:support@infrascale.com) to add the users to the Infrascale Cloud Backup platform. Users must be created and activated before you use single sign-on.
+
+## Test SSO
+
+In this section, you test your Azure AD single sign-on configuration with the following options.
+
+* Click **Test this application** in the Azure portal. This redirects to the Infrascale Cloud Backup Sign-On URL, where you can initiate the login flow.
+
+* Go to the Infrascale Cloud Backup Sign-On URL directly and initiate the login flow from there.
+
+* You can use Microsoft My Apps. When you click the Infrascale Cloud Backup tile in My Apps, you're redirected to the Infrascale Cloud Backup Sign-On URL. For more information about My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md).
+
+## Next steps
+
+Once you configure Infrascale Cloud Backup, you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
active-directory Lines Elibrary Advance Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/lines-elibrary-advance-tutorial.md
+
+ Title: 'Tutorial: Azure AD SSO integration with Lines eLibrary Advance'
+description: Learn how to configure single sign-on between Azure Active Directory and Lines eLibrary Advance.
++++++++ Last updated : 06/27/2022++++
+# Tutorial: Azure AD SSO integration with Lines eLibrary Advance
+
+In this tutorial, you'll learn how to integrate Lines eLibrary Advance with Azure Active Directory (Azure AD). When you integrate Lines eLibrary Advance with Azure AD, you can:
+
+* Control in Azure AD who has access to Lines eLibrary Advance.
+* Enable your users to be automatically signed-in to Lines eLibrary Advance with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
+
+## Prerequisites
+
+To get started, you need the following items:
+
+* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+* Lines eLibrary Advance single sign-on (SSO) enabled subscription.
+* Along with Cloud Application Administrator, Application Administrator can also add or manage applications in Azure AD.
+For more information, see [Azure built-in roles](../roles/permissions-reference.md).
+
+## Scenario description
+
+In this tutorial, you configure and test Azure AD SSO in a test environment.
+
+* Lines eLibrary Advance supports **SP** and **IDP** initiated SSO.
+
+## Add Lines eLibrary Advance from the gallery
+
+To configure the integration of Lines eLibrary Advance into Azure AD, you need to add Lines eLibrary Advance from the gallery to your list of managed SaaS apps.
+
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
+1. On the left navigation pane, select the **Azure Active Directory** service.
+1. Navigate to **Enterprise Applications** and then select **All Applications**.
+1. To add a new application, select **New application**.
+1. In the **Add from the gallery** section, type **Lines eLibrary Advance** in the search box.
+1. Select **Lines eLibrary Advance** from the results panel and then add the app. Wait a few seconds while the app is added to your tenant.
+
+## Configure and test Azure AD SSO for Lines eLibrary Advance
+
+Configure and test Azure AD SSO with Lines eLibrary Advance using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user at Lines eLibrary Advance.
+
+To configure and test Azure AD SSO with Lines eLibrary Advance, perform the following steps:
+
+1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
+ 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
+ 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
+1. **[Configure Lines eLibrary Advance SSO](#configure-lines-elibrary-advance-sso)** - to configure the single sign-on settings on application side.
+ 1. **[Create Lines eLibrary Advance test user](#create-lines-elibrary-advance-test-user)** - to have a counterpart of B.Simon in Lines eLibrary Advance that is linked to the Azure AD representation of user.
+1. **[Test SSO](#test-sso)** - to verify whether the configuration works.
+
+## Configure Azure AD SSO
+
+Follow these steps to enable Azure AD SSO in the Azure portal.
+
+1. In the Azure portal, on the **Lines eLibrary Advance** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
+
+ ![Screenshot shows to edit Basic SAML Configuration.](common/edit-urls.png "Basic Configuration")
+
+1. On the **Basic SAML Configuration** section, perform the following steps:
+
+ a. In the **Identifier** textbox, type a value using one of the following patterns:
+
+ | **Identifier** |
+ |--|
+ | `https://ela.education.ne.jp/students/gsso/metadata/gsuite/<SSOID>` |
+ | `https://ela.kodomo.ne.jp/students/gsso/metadata/gsuite/<SSOID>` |
+ | `https://ela.education.ne.jp/teachers/gsso/metadata/gsuite/<SSOID>` |
+    | `https://ela.kodomo.ne.jp/teachers/gsso/metadata/gsuite/<SSOID>` |
+
+ b. In the **Reply URL** textbox, type a URL using one of the following patterns:
+
+ | **Reply URL** |
+ |-|
+ | `https://ela.education.ne.jp/students/gsso/acs/gsuite/<SSOID>` |
+ | `https://ela.kodomo.ne.jp/students/gsso/acs/gsuite/<SSOID>` |
+ | `https://ela.education.ne.jp/teachers/gsso/acs/gsuite/<SSOID>` |
+ | `https://ela.kodomo.ne.jp/teachers/gsso/acs/gsuite/<SSOID>` |
+
+1. Click **Set additional URLs** and perform the following step, if you wish to configure the application in **SP** initiated mode:
+
+ In the **Sign-on URL** text box, type a URL using one of the following patterns:
+
+ | **Sign-on URL** |
+ |--|
+ | `https://fms.live.fm.ks.irdeto.com/` |
+ | `https://ela.education.ne.jp/students/gsso/login/azure/<SSOID>` |
+ | `https://ela.education.ne.jp/teachers/gsso/login/azure/<SSOID>` |
+ | `https://ela.kodomo.ne.jp/students/gsso/login/azure/<SSOID>` |
+ | `https://ela.kodomo.ne.jp/teachers/gsso/login/azure/<SSOID>` |
+
+ > [!Note]
+ > These values are not real. Update these values with the actual Identifier, Reply URL and Sign-on URL. Contact [Lines eLibrary Advance support team](mailto:tech@education.jp) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
+
+1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Certificate (Base64)** and select **Download** to download the certificate and save it on your computer. (A quick way to inspect the file is sketched at the end of this section.)
+
+ ![Screenshot shows the Certificate download link.](common/certificatebase64.png "Certificate")
+
+1. On the **Set up Lines eLibrary Advance** section, copy the appropriate URL(s) based on your requirement.
+
+ ![Screenshot shows to copy configuration appropriate URL.](common/copy-configuration-urls.png "Metadata")
+
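Optionally, you can inspect the downloaded **Certificate (Base64)** file before sending it. A minimal sketch, assuming the third-party `cryptography` package is installed (`pip install cryptography`) and using a placeholder file name:

```python
# Minimal sketch: print the subject and expiry of the downloaded
# Certificate (Base64) file. The file name is a placeholder.
from cryptography import x509

with open("lines-elibrary-advance.cer", "rb") as f:
    cert = x509.load_pem_x509_certificate(f.read())

print("Subject:", cert.subject.rfc4514_string())
print("Expires:", cert.not_valid_after)
```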
+### Create an Azure AD test user
+
+In this section, you'll create a test user in the Azure portal called B.Simon.
+
+1. From the left pane in the Azure portal, select **Azure Active Directory**, select **Users**, and then select **All users**.
+1. Select **New user** at the top of the screen.
+1. In the **User** properties, follow these steps:
+ 1. In the **Name** field, enter `B.Simon`.
+ 1. In the **User name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`.
+ 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box.
+ 1. Click **Create**.
+
+### Assign the Azure AD test user
+
+In this section, you'll enable B.Simon to use Azure single sign-on by granting access to Lines eLibrary Advance.
+
+1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
+1. In the applications list, select **Lines eLibrary Advance**.
+1. In the app's overview page, find the **Manage** section and select **Users and groups**.
+1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
+1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
+1. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you see "Default Access" role selected.
+1. In the **Add Assignment** dialog, click the **Assign** button.
+
+## Configure Lines eLibrary Advance SSO
+
+To configure single sign-on on the **Lines eLibrary Advance** side, you need to send the downloaded **Certificate (Base64)** and the appropriate copied URLs from the Azure portal to the [Lines eLibrary Advance support team](mailto:tech@education.jp). They configure this setting so that the SAML SSO connection is set up properly on both sides.
+
+### Create Lines eLibrary Advance test user
+
+In this section, you create a user called Britta Simon in Lines eLibrary Advance. Work with the [Lines eLibrary Advance support team](mailto:tech@education.jp) to add the users to the Lines eLibrary Advance platform. Users must be created and activated before you use single sign-on.
+
+## Test SSO
+
+In this section, you test your Azure AD single sign-on configuration with the following options.
+
+#### SP initiated:
+
+* Click **Test this application** in the Azure portal. This redirects to the Lines eLibrary Advance Sign-On URL, where you can initiate the login flow.
+
+* Go to the Lines eLibrary Advance Sign-On URL directly and initiate the login flow from there.
+
+#### IDP initiated:
+
+* Click **Test this application** in the Azure portal, and you should be automatically signed in to the Lines eLibrary Advance instance for which you set up SSO.
+
+You can also use Microsoft My Apps to test the application in any mode. When you click the Lines eLibrary Advance tile in My Apps, if the app is configured in SP mode you are redirected to the application sign-on page to initiate the login flow; if it's configured in IDP mode, you should be automatically signed in to the Lines eLibrary Advance instance for which you set up SSO. For more information about My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md).
+
+## Next steps
+
+Once you configure Lines eLibrary Advance, you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
active-directory Linkedin Learning Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/linkedin-learning-provisioning-tutorial.md
- Title: 'Tutorial: Configure LinkedIn Learning for automatic user provisioning with Azure Active Directory | Microsoft Docs'
-description: Learn how to automatically provision and de-provision user accounts from Azure AD to LinkedIn Learning.
--
-writer: twimmers
----- Previously updated : 06/30/2020---
-# Tutorial: Configure LinkedIn Learning for automatic user provisioning
-
-This tutorial describes the steps you need to perform in both LinkedIn Learning and Azure Active Directory (Azure AD) to configure automatic user provisioning. When configured, Azure AD automatically provisions and de-provisions users and groups to [LinkedIn Learning](https://learning.linkedin.com/) using the Azure AD Provisioning service. For important details on what this service does, how it works, and frequently asked questions, see [Automate user provisioning and deprovisioning to SaaS applications with Azure Active Directory](../app-provisioning/user-provisioning.md).
--
-## Capabilities supported
-> [!div class="checklist"]
-> * Create users in LinkedIn Learning
-> * Remove users in LinkedIn Learning when they do not require access anymore
-> * Keep user attributes synchronized between Azure AD and LinkedIn Learning
-> * Provision groups and group memberships in LinkedIn Learning
-> * [Single sign-on](linkedinlearning-tutorial.md) to LinkedIn Learning (recommended)
-
-## Prerequisites
-
-The scenario outlined in this tutorial assumes that you already have the following prerequisites:
-
-* [An Azure AD tenant](../develop/quickstart-create-new-tenant.md)
-* A user account in Azure AD with [permission](../roles/permissions-reference.md) to configure provisioning (for example, Application Administrator, Cloud Application Administrator, Application Owner, or Global Administrator).
-* Approval and SCIM enabled for LinkedIn Learning (contact by email).
-
-## Step 1. Plan your provisioning deployment
-1. Learn about [how the provisioning service works](../app-provisioning/user-provisioning.md).
-2. Determine who will be in [scope for provisioning](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-3. Determine what data to [map between Azure AD and LinkedIn Learning](../app-provisioning/customize-application-attributes.md).
-
-## Step 2. Configure LinkedIn Learning to support provisioning with Azure AD
-1. Sign into [LinkedIn Learning Settings](https://www.linkedin.com/learning-admin/settings/global). Select **SCIM Setup** then select **Add new SCIM configuration**.
-
- ![SCIM Setup configuration](./media/linkedin-learning-provisioning-tutorial/learning-scim-settings.png)
-
-2. Enter a name for the configuration, and set **Auto-assign licenses** to On. Then click **Generate token**.
-
- ![SCIM configuration name](./media/linkedin-learning-provisioning-tutorial/learning-scim-configuration.png)
-
-3. After the configuration is created, an **Access token** is generated. Copy it and save it for later; a quick way to test the token is sketched at the end of this step.
-
- ![SCIM access token](./media/linkedin-learning-provisioning-tutorial/learning-scim-token.png)
-
-4. You may reissue any existing configurations (which will generate a new token) or remove them.
-
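If you want to verify the generated token outside the portal, a minimal sketch such as the following issues a SCIM request against the Tenant URL used later in this tutorial (`https://api.linkedin.com/scim`). It assumes the standard SCIM 2.0 `/Users` endpoint relative to that URL (RFC 7644); the token value is a placeholder.

```python
# Minimal sketch: test the SCIM access token outside the portal. Assumes the
# standard SCIM 2.0 /Users endpoint relative to the Tenant URL (RFC 7644).
import json
import urllib.request

TENANT_URL = "https://api.linkedin.com/scim"
ACCESS_TOKEN = "<access-token-from-step-2>"  # placeholder

req = urllib.request.Request(
    f"{TENANT_URL}/Users?count=1",
    headers={
        "Authorization": f"Bearer {ACCESS_TOKEN}",
        "Accept": "application/scim+json",
    },
)
with urllib.request.urlopen(req) as resp:
    payload = json.load(resp)

print("totalResults:", payload.get("totalResults"))
```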
-## Step 3. Add LinkedIn Learning from the Azure AD application gallery
-
-Add LinkedIn Learning from the Azure AD application gallery to start managing provisioning to LinkedIn Learning. If you have previously set up LinkedIn Learning for SSO, you can use the same application. However, it is recommended that you create a separate app when initially testing out the integration. Learn more about adding an application from the gallery [here](../manage-apps/add-application-portal.md).
-
-## Step 4. Define who will be in scope for provisioning
-
-The Azure AD provisioning service allows you to scope who will be provisioned based on assignment to the application, based on attributes of the user or group, or both. If you choose to scope who will be provisioned to your app based on assignment, you can use the following [steps](../manage-apps/assign-user-or-group-access-portal.md) to assign users and groups to the application. If you choose to scope who will be provisioned based solely on attributes of the user or group, you can use a scoping filter as described [here](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-
-* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-
-* If you need additional roles, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add new roles.
--
-## Step 5. Configure automatic user provisioning to LinkedIn Learning
-
-This section guides you through the steps to configure the Azure AD provisioning service to create, update, and disable users and/or groups in LinkedIn Learning based on user and/or group assignments in Azure AD.
-
-### To configure automatic user provisioning for LinkedIn Learning in Azure AD:
-
-1. Sign in to the [Azure portal](https://portal.azure.com). Select **Enterprise Applications**, then select **All applications**.
-
- ![Enterprise applications blade](common/enterprise-applications.png)
-
-2. In the applications list, select **LinkedIn Learning**.
-
- ![The LinkedIn Learning link in the Applications list](common/all-applications.png)
-
-3. Select the **Provisioning** tab.
-
- ![Screenshot of the Manage options with the Provisioning option called out.](common/provisioning.png)
-
-4. Set the **Provisioning Mode** to **Automatic**.
-
- ![Screenshot of the Provisioning Mode dropdown list with the Automatic option called out.](common/provisioning-automatic.png)
-
-5. Under the **Admin Credentials** section, enter `https://api.linkedin.com/scim` in the **Tenant URL** field. Enter the access token value retrieved earlier in the **Secret Token** field. Click **Test Connection** to ensure that Azure AD can connect to LinkedIn Learning. If the connection fails, ensure that your LinkedIn Learning account has Admin permissions and try again.
-
- ![Screenshot shows the Admin Credentials dialog box, where you can enter your Tenant U R L and Secret Token.](./media/linkedin-learning-provisioning-tutorial/provisioning.png)
-
-6. In the **Notification Email** field, enter the email address of a person or group who should receive the provisioning error notifications and select the **Send an email notification when a failure occurs** check box.
-
- ![Notification Email](common/provisioning-notification-email.png)
-
-7. Select **Save**.
-
-8. Under the **Mappings** section, select **Provision Azure Active Directory Users**.
-
-9. Review the user attributes that are synchronized from Azure AD to LinkedIn Learning in the **Attribute-Mapping** section. The attributes selected as **Matching** properties are used to match the user accounts in LinkedIn Learning for update operations. If you choose to change the [matching target attribute](../app-provisioning/customize-application-attributes.md), you will need to ensure that the LinkedIn Learning API supports filtering users based on that attribute; a sample filter query is sketched after this procedure. Select the **Save** button to commit any changes.
-
- |Attribute|Type|Supported for filtering|
- ||||
- |externalId|String|&check;|
- |userName|String|
- |name.givenName|String|
- |name.familyName|String|
- |displayName|String|
- |addresses[type eq "work"].locality|String|
- |title|String|
- |emails[type eq "work"].value|String|
- |urn:ietf:params:scim:schemas:extension:enterprise:2.0:User:manager|Reference|
- |urn:ietf:params:scim:schemas:extension:enterprise:2.0:User:department|String|
-
-10. Under the **Mappings** section, select **Provision Azure Active Directory Groups**.
-
-11. Review the group attributes that are synchronized from Azure AD to LinkedIn Learning in the **Attribute-Mapping** section. The attributes selected as **Matching** properties are used to match the groups in LinkedIn Learning for update operations. Select the **Save** button to commit any changes.
-
- |Attribute|Type|Supported for filtering|
- ||||
- |displayName|String|&check;|
- |members|Reference|
- |externalId|String|
-
-12. To configure scoping filters, refer to the instructions in the [Scoping filter tutorial](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-
-13. To enable the Azure AD provisioning service for LinkedIn Learning, change the **Provisioning Status** to **On** in the **Settings** section.
-
- ![Provisioning Status Toggled On](common/provisioning-toggle-on.png)
-
-14. Define the users and/or groups that you would like to provision to LinkedIn Learning by choosing the desired values in **Scope** in the **Settings** section.
-
- ![Provisioning Scope](common/provisioning-scope.png)
-
-15. When you are ready to provision, click **Save**.
-
- ![Saving Provisioning Configuration](common/provisioning-configuration-save.png)
-
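As a sketch of the filtering requirement mentioned in step 9, the following shows the kind of SCIM filter query (RFC 7644 filter grammar) used to match a user on `externalId`. The endpoint and token are the same assumptions and placeholders as in the earlier sketch.

```python
# Minimal sketch: a SCIM filter query matching a user on externalId
# (RFC 7644 filter grammar). Endpoint and token are assumptions/placeholders.
import urllib.parse
import urllib.request

TENANT_URL = "https://api.linkedin.com/scim"
ACCESS_TOKEN = "<secret-token>"  # placeholder

filter_expr = 'externalId eq "b.simon@contoso.com"'  # example matching value
url = f"{TENANT_URL}/Users?filter={urllib.parse.quote(filter_expr)}"

req = urllib.request.Request(
    url, headers={"Authorization": f"Bearer {ACCESS_TOKEN}"}
)
with urllib.request.urlopen(req) as resp:
    print(resp.read().decode("utf-8"))
```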
-This operation starts the initial synchronization cycle of all users and groups defined in **Scope** in the **Settings** section. The initial cycle takes longer to perform than subsequent cycles, which occur approximately every 40 minutes as long as the Azure AD provisioning service is running.
-
-## Step 6. Monitor your deployment
-Once you've configured provisioning, use the following resources to monitor your deployment:
-
-1. Use the [provisioning logs](../reports-monitoring/concept-provisioning-logs.md) to determine which users have been provisioned successfully or unsuccessfully
-2. Check the [progress bar](../app-provisioning/application-provisioning-when-will-provisioning-finish-specific-user.md) to see the status of the provisioning cycle and how close it is to completion
-3. If the provisioning configuration seems to be in an unhealthy state, the application will go into quarantine. Learn more about quarantine states [here](../app-provisioning/application-provisioning-quarantine-status.md).
-
-## Additional resources
-
-* [Managing user account provisioning for Enterprise Apps](../app-provisioning/configure-automatic-user-provisioning-portal.md)
-* [What is application access and single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)
-
-## Next steps
-
-* [Learn how to review logs and get reports on provisioning activity](../app-provisioning/check-status-user-account-provisioning.md)
active-directory Linkedinlearning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/linkedinlearning-tutorial.md
In this tutorial, you configure and test Azure AD SSO in a test environment.
* LinkedIn Learning supports **SP and IDP** initiated SSO. * LinkedIn Learning supports **Just In Time** user provisioning.
-* LinkedIn Learning supports [Automated user provisioning](linkedin-learning-provisioning-tutorial.md).
## Add LinkedIn Learning from the gallery
Follow these steps to enable Azure AD SSO in the Azure portal.
> [!NOTE] > These values are not real. You will update these values with the actual Identifier, Reply URL and Sign on URL which is explained later in the **Configure LinkedIn Learning SSO** section of tutorial.
-1. LinkedIn Learning application expects the SAML assertions in a specific format, which requires you to add custom attribute mappings to your SAML token attributes configuration. The following screenshot shows the list of default attributes, where as **nameidentifier** is mapped with **user.userprincipalname**. LinkedIn Learning application expects **nameidentifier** to be mapped with **user.mail**, so you need to edit the attribute mapping by clicking on **Edit** icon and change the attribute mapping.
+1. LinkedIn Learning application expects the SAML assertions in a specific format, which requires you to add custom attribute mappings to your SAML token attributes configuration. The following screenshot shows the list of default attributes, whereas **nameidentifier** is mapped with **user.userprincipalname**. LinkedIn Learning application expects **nameidentifier** to be mapped with **user.mail**, so you need to edit the attribute mapping by clicking on **Edit** icon and change the attribute mapping.
![image](common/edit-attribute.png)
active-directory Lms And Education Management System Leaf Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/lms-and-education-management-system-leaf-tutorial.md
+
+ Title: 'Tutorial: Azure AD SSO integration with LMS and Education Management System Leaf'
+description: Learn how to configure single sign-on between Azure Active Directory and LMS and Education Management System Leaf.
++++++++ Last updated : 06/27/2022++++
+# Tutorial: Azure AD SSO integration with LMS and Education Management System Leaf
+
+In this tutorial, you'll learn how to integrate LMS and Education Management System Leaf with Azure Active Directory (Azure AD). When you integrate LMS and Education Management System Leaf with Azure AD, you can:
+
+* Control in Azure AD who has access to LMS and Education Management System Leaf.
+* Enable your users to be automatically signed-in to LMS and Education Management System Leaf with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
+
+## Prerequisites
+
+To get started, you need the following items:
+
+* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+* LMS and Education Management System Leaf single sign-on (SSO) enabled subscription.
+* Along with Cloud Application Administrator, Application Administrator can also add or manage applications in Azure AD.
+For more information, see [Azure built-in roles](../roles/permissions-reference.md).
+
+## Scenario description
+
+In this tutorial, you configure and test Azure AD SSO in a test environment.
+
+* LMS and Education Management System Leaf supports **SP** initiated SSO.
+* LMS and Education Management System Leaf supports **Just In Time** user provisioning.
+
+## Add LMS and Education Management System Leaf from the gallery
+
+To configure the integration of LMS and Education Management System Leaf into Azure AD, you need to add LMS and Education Management System Leaf from the gallery to your list of managed SaaS apps.
+
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
+1. On the left navigation pane, select the **Azure Active Directory** service.
+1. Navigate to **Enterprise Applications** and then select **All Applications**.
+1. To add a new application, select **New application**.
+1. In the **Add from the gallery** section, type **LMS and Education Management System Leaf** in the search box.
+1. Select **LMS and Education Management System Leaf** from the results panel and then add the app. Wait a few seconds while the app is added to your tenant.
+
+## Configure and test Azure AD SSO for LMS and Education Management System Leaf
+
+Configure and test Azure AD SSO with LMS and Education Management System Leaf using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in LMS and Education Management System Leaf.
+
+To configure and test Azure AD SSO with LMS and Education Management System Leaf, perform the following steps:
+
+1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
+ 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
+ 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
+1. **[Configure LMS and Education Management System Leaf SSO](#configure-lms-and-education-management-system-leaf-sso)** - to configure the single sign-on settings on application side.
+ 1. **[Create LMS and Education Management System Leaf test user](#create-lms-and-education-management-system-leaf-test-user)** - to have a counterpart of B.Simon in LMS and Education Management System Leaf that is linked to the Azure AD representation of user.
+1. **[Test SSO](#test-sso)** - to verify whether the configuration works.
+
+## Configure Azure AD SSO
+
+Follow these steps to enable Azure AD SSO in the Azure portal.
+
+1. In the Azure portal, on the **LMS and Education Management System Leaf** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
+
+ ![Screenshot shows to edit Basic S A M L Configuration.](common/edit-urls.png "Basic Configuration")
+
+1. On the **Basic SAML Configuration** section, perform the following steps:
+
+ a. In the **Identifier** textbox, type a URL using the following pattern:
+ `https://<SUBDOMAIN>.leaf-hrm.jp/`
+
+ b. In the **Reply URL** textbox, type a URL using the following pattern:
+ `https://<SUBDOMAIN>.leaf-hrm.jp/loginusers/acs`
+
+ c. In the **Sign on URL** text box, type a URL using the following pattern:
+ `https://<SUBDOMAIN>.leaf-hrm.jp/`
+
+ > [!Note]
+ > These values are not real. Update these values with the actual Identifier, Reply URL and Sign on URL. Contact [LMS and Education Management System Leaf support team](mailto:leaf-jimukyoku@insource.co.jp) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
+
+1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Federation Metadata XML** and select **Download** to download the certificate and save it on your computer.
+
+ ![Screenshot shows the Certificate download link.](common/metadataxml.png "Certificate")
+
+1. On the **Set up LMS and Education Management System Leaf** section, copy the appropriate URL(s) based on your requirement.
+
+ ![Screenshot shows to copy configuration appropriate U R L.](common/copy-configuration-urls.png "Attributes")
+
+### Create an Azure AD test user
+
+In this section, you'll create a test user in the Azure portal called B.Simon.
+
+1. From the left pane in the Azure portal, select **Azure Active Directory**, select **Users**, and then select **All users**.
+1. Select **New user** at the top of the screen.
+1. In the **User** properties, follow these steps:
+ 1. In the **Name** field, enter `B.Simon`.
+ 1. In the **User name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`.
+ 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box.
+ 1. Click **Create**.
+
+### Assign the Azure AD test user
+
+In this section, you'll enable B.Simon to use Azure single sign-on by granting access to LMS and Education Management System Leaf.
+
+1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
+1. In the applications list, select **LMS and Education Management System Leaf**.
+1. In the app's overview page, find the **Manage** section and select **Users and groups**.
+1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
+1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
+1. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you see "Default Access" role selected.
+1. In the **Add Assignment** dialog, click the **Assign** button.
+
+## Configure LMS and Education Management System Leaf SSO
+
+To configure single sign-on on the **LMS and Education Management System Leaf** side, you need to send the downloaded **Federation Metadata XML** and the appropriate copied URLs from the Azure portal to the [LMS and Education Management System Leaf support team](mailto:leaf-jimukyoku@insource.co.jp). They configure this setting so that the SAML SSO connection is set up properly on both sides.
+
+### Create LMS and Education Management System Leaf test user
+
+In this section, a user called B.Simon is created in LMS and Education Management System Leaf. LMS and Education Management System Leaf supports just-in-time user provisioning, which is enabled by default. There is no action item for you in this section. If a user doesn't already exist in LMS and Education Management System Leaf, a new one is created after authentication.
+
+## Test SSO
+
+In this section, you test your Azure AD single sign-on configuration with the following options.
+
+* Click **Test this application** in the Azure portal. This redirects to the LMS and Education Management System Leaf Sign-on URL, where you can initiate the login flow.
+
+* Go to the LMS and Education Management System Leaf Sign-on URL directly and initiate the login flow from there.
+
+* You can use Microsoft My Apps. When you click the LMS and Education Management System Leaf tile in My Apps, you're redirected to the LMS and Education Management System Leaf Sign-on URL. For more information about My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md).
+
+## Next steps
+
+Once you configure LMS and Education Management System Leaf, you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
active-directory Risecom Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/risecom-tutorial.md
+
+ Title: 'Tutorial: Azure AD SSO integration with Rise.com'
+description: Learn how to configure single sign-on between Azure Active Directory and Rise.com.
++++++++ Last updated : 06/24/2022++++
+# Tutorial: Azure AD SSO integration with Rise.com
+
+In this tutorial, you'll learn how to integrate Rise.com with Azure Active Directory (Azure AD). When you integrate Rise.com with Azure AD, you can:
+
+* Control in Azure AD who has access to Rise.com.
+* Enable your users to be automatically signed-in to Rise.com with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
+
+## Prerequisites
+
+To get started, you need the following items:
+
+* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+* Rise.com single sign-on (SSO) enabled subscription.
+* Along with Cloud Application Administrator, Application Administrator can also add or manage applications in Azure AD.
+For more information, see [Azure built-in roles](../roles/permissions-reference.md).
+
+## Scenario description
+
+In this tutorial, you configure and test Azure AD SSO in a test environment.
+
+* Rise.com supports **SP** and **IDP** initiated SSO.
+* Rise.com supports **Just In Time** user provisioning.
+
+> [!NOTE]
+> The Identifier of this application is a fixed string value, so only one instance can be configured in one tenant.
+
+## Add Rise.com from the gallery
+
+To configure the integration of Rise.com into Azure AD, you need to add Rise.com from the gallery to your list of managed SaaS apps.
+
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
+1. On the left navigation pane, select the **Azure Active Directory** service.
+1. Navigate to **Enterprise Applications** and then select **All Applications**.
+1. To add a new application, select **New application**.
+1. In the **Add from the gallery** section, type **Rise.com** in the search box.
+1. Select **Rise.com** from the results panel and then add the app. Wait a few seconds while the app is added to your tenant.
+
+## Configure and test Azure AD SSO for Rise.com
+
+Configure and test Azure AD SSO with Rise.com using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in Rise.com.
+
+To configure and test Azure AD SSO with Rise.com, perform the following steps:
+
+1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
+ 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
+ 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
+1. **[Configure Rise.com SSO](#configure-risecom-sso)** - to configure the single sign-on settings on the application side.
+ 1. **[Create Rise.com test user](#create-risecom-test-user)** - to have a counterpart of B.Simon in Rise.com that is linked to the Azure AD representation of the user.
+1. **[Test SSO](#test-sso)** - to verify whether the configuration works.
+
+## Configure Azure AD SSO
+
+Follow these steps to enable Azure AD SSO in the Azure portal.
+
+1. In the Azure portal, on the **Rise.com** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
+
+ ![Screenshot shows to edit Basic S A M L Configuration.](common/edit-urls.png "Basic Configuration")
+
+1. In the **Basic SAML Configuration** section, you don't need to perform any steps because the app is already pre-integrated with Azure.
+
+1. Click **Set additional URLs** and perform the following step if you wish to configure the application in **SP** initiated mode:
+
+ a. In the **Sign-on URL** text box, type the URL:
+ `https://id.rise.com/sso/saml2`
+
+ b. In the **Relay State** text box, type a URL using the following pattern:
+ `https://<CustomerDomainName>.rise.com`
+
+ > [!Note]
+ > This value is not real. Update this value with the actual Relay State URL. Contact [Rise.com support team](mailto:Enterprise@rise.com) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
+
+1. The Rise.com application expects the SAML assertions in a specific format, which requires you to add custom attribute mappings to your SAML token attributes configuration. The following screenshot shows the list of default attributes.
+
+ ![Screenshot shows the image of attributes.](common/default-attributes.png "Attributes")
+
+1. In addition to the above, the Rise.com application expects a few more attributes to be passed back in the SAML response; these are shown below. These attributes are also prepopulated, but you can review them according to your requirements.
+
+ | Name | Source Attribute|
+ | | |
+ | firstName | user.givenname |
+ | lastName | user.surname |
+ | email | user.mail |
+
+1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Federation Metadata XML** and select **Download** to download the certificate and save it on your computer.
+
+ ![Screenshot shows the Certificate download link.](common/metadataxml.png "Certificate")
+
+1. On the **Set up Rise.com** section, copy the appropriate URL(s) based on your requirement.
+
+ ![Screenshot shows to copy configuration appropriate U R L.](common/copy-configuration-urls.png "Metadata")
+
+### Create an Azure AD test user
+
+In this section, you'll create a test user in the Azure portal called B.Simon. An equivalent Azure CLI sketch follows these steps.
+
+1. From the left pane in the Azure portal, select **Azure Active Directory**, select **Users**, and then select **All users**.
+1. Select **New user** at the top of the screen.
+1. In the **User** properties, follow these steps:
+ 1. In the **Name** field, enter `B.Simon`.
+ 1. In the **User name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`.
+ 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box.
+ 1. Click **Create**.
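+
+If you prefer to script this step, the following is a minimal Azure CLI sketch that creates the same test user. The password is a placeholder you must replace, and the UPN domain must be a domain verified in your tenant.
+
+```azurecli
+# Create the B.Simon test user (placeholder values; adjust to your tenant).
+az ad user create \
+    --display-name "B.Simon" \
+    --user-principal-name "B.Simon@contoso.com" \
+    --password "<strong-password>"
+```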
+
+### Assign the Azure AD test user
+
+In this section, you'll enable B.Simon to use Azure single sign-on by granting access to Rise.com.
+
+1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
+1. In the applications list, select **Rise.com**.
+1. In the app's overview page, find the **Manage** section and select **Users and groups**.
+1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
+1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
+1. If you expect a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, the **Default Access** role is selected.
+1. In the **Add Assignment** dialog, click the **Assign** button.
+
+## Configure Rise.com SSO
+
+To configure single sign-on on the **Rise.com** side, you need to send the downloaded **Federation Metadata XML** and the appropriate copied URLs from the Azure portal to the [Rise.com support team](mailto:Enterprise@rise.com). They configure this setting so that the SAML SSO connection is set properly on both sides.
+
+### Create Rise.com test user
+
+In this section, a user called B.Simon is created in Rise.com. Rise.com supports just-in-time user provisioning, which is enabled by default. There is no action item for you in this section. If a user doesn't already exist in Rise.com, a new one is created after authentication.
+
+## Test SSO
+
+In this section, you test your Azure AD single sign-on configuration with the following options.
+
+#### SP initiated:
+
+* Click **Test this application** in the Azure portal. This redirects to the Rise.com Sign-On URL, where you can initiate the login flow.
+
+* Go to the Rise.com Sign-On URL directly and initiate the login flow from there.
+
+#### IDP initiated:
+
+* Click **Test this application** in the Azure portal, and you should be automatically signed in to the Rise.com instance for which you set up SSO.
+
+You can also use Microsoft My Apps to test the application in any mode. When you click the Rise.com tile in My Apps, if the application is configured in SP mode you're redirected to the application sign-on page to initiate the login flow, and if it's configured in IDP mode you're automatically signed in to the Rise.com instance for which you set up SSO. For more information about My Apps, see [Introduction to My Apps](../user-help/my-apps-portal-end-user-access.md).
+
+## Next steps
+
+Once you configure Rise.com, you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
active-directory Rootly Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/rootly-tutorial.md
+
+ Title: 'Tutorial: Azure AD SSO integration with Rootly'
+description: Learn how to configure single sign-on between Azure Active Directory and Rootly.
++++++++ Last updated : 06/27/2022++++
+# Tutorial: Azure AD SSO integration with Rootly
+
+In this tutorial, you'll learn how to integrate Rootly with Azure Active Directory (Azure AD). When you integrate Rootly with Azure AD, you can:
+
+* Control in Azure AD who has access to Rootly.
+* Enable your users to be automatically signed-in to Rootly with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
+
+## Prerequisites
+
+To get started, you need the following items:
+
+* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+* Rootly single sign-on (SSO) enabled subscription.
+* Along with Cloud Application Administrator, Application Administrator can also add or manage applications in Azure AD.
+For more information, see [Azure built-in roles](../roles/permissions-reference.md).
+
+## Scenario description
+
+In this tutorial, you configure and test Azure AD SSO in a test environment.
+
+* Rootly supports **SP** and **IDP** initiated SSO.
+* Rootly supports **Just In Time** user provisioning.
+
+> [!NOTE]
+> The Identifier of this application is a fixed string value, so only one instance can be configured in one tenant.
+
+## Add Rootly from the gallery
+
+To configure the integration of Rootly into Azure AD, you need to add Rootly from the gallery to your list of managed SaaS apps.
+
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
+1. On the left navigation pane, select the **Azure Active Directory** service.
+1. Navigate to **Enterprise Applications** and then select **All Applications**.
+1. To add a new application, select **New application**.
+1. In the **Add from the gallery** section, type **Rootly** in the search box.
+1. Select **Rootly** from the results panel and then add the app. Wait a few seconds while the app is added to your tenant.
+
+## Configure and test Azure AD SSO for Rootly
+
+Configure and test Azure AD SSO with Rootly using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in Rootly.
+
+To configure and test Azure AD SSO with Rootly, perform the following steps:
+
+1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
+ 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
+ 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
+1. **[Configure Rootly SSO](#configure-rootly-sso)** - to configure the single sign-on settings on the application side.
+ 1. **[Create Rootly test user](#create-rootly-test-user)** - to have a counterpart of B.Simon in Rootly that is linked to the Azure AD representation of the user.
+1. **[Test SSO](#test-sso)** - to verify whether the configuration works.
+
+## Configure Azure AD SSO
+
+Follow these steps to enable Azure AD SSO in the Azure portal.
+
+1. In the Azure portal, on the **Rootly** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
+
+ ![Screenshot shows to edit Basic S A M L Configuration.](common/edit-urls.png "Basic Configuration")
+
+1. In the **Basic SAML Configuration** section, you don't need to perform any steps because the app is already pre-integrated with Azure.
+
+1. Click **Set additional URLs** and perform the following step if you wish to configure the application in **SP** initiated mode:
+
+ In the **Sign-on URL** text box, type the URL:
+ `https://rootly.com/sso`
+
+1. The Rootly application expects the SAML assertions in a specific format, which requires you to add custom attribute mappings to your SAML token attributes configuration. The following screenshot shows the list of default attributes.
+
+ ![Screenshot shows the image of Rootly application.](common/default-attributes.png "Attributes")
+
+1. In addition to the above, the Rootly application expects a few more attributes to be passed back in the SAML response; these are shown below. These attributes are also prepopulated, but you can review them according to your requirements.
+
+ | Name | Source Attribute|
+ | | |
+ | firstname | user.displayname |
+
+1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Certificate (PEM)** and select **Download** to download the certificate and save it on your computer.
+
+ ![Screenshot shows the Certificate download link.](common/certificate-base64-download.png "Certificate")
+
+1. On the **Set up Rootly** section, copy the appropriate URL(s) based on your requirement.
+
+ ![Screenshot shows to copy configuration appropriate U R L.](common/copy-configuration-urls.png "Metadata")
+
+### Create an Azure AD test user
+
+In this section, you'll create a test user in the Azure portal called B.Simon.
+
+1. From the left pane in the Azure portal, select **Azure Active Directory**, select **Users**, and then select **All users**.
+1. Select **New user** at the top of the screen.
+1. In the **User** properties, follow these steps:
+ 1. In the **Name** field, enter `B.Simon`.
+ 1. In the **User name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`.
+ 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box.
+ 1. Click **Create**.
+
+### Assign the Azure AD test user
+
+In this section, you'll enable B.Simon to use Azure single sign-on by granting access to Rootly.
+
+1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
+1. In the applications list, select **Rootly**.
+1. In the app's overview page, find the **Manage** section and select **Users and groups**.
+1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
+1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
+1. If you expect a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, the **Default Access** role is selected.
+1. In the **Add Assignment** dialog, click the **Assign** button.
+
+## Configure Rootly SSO
+
+To configure single sign-on on the **Rootly** side, you need to send the downloaded **Certificate (PEM)** and the appropriate copied URLs from the Azure portal to the [Rootly support team](mailto:support@rootly.com). They configure this setting so that the SAML SSO connection is set properly on both sides. For more information, see [this link](https://docs.rootly.com/integrations/sso#sv-installation).
+
+### Create Rootly test user
+
+In this section, a user called B.Simon is created in Rootly. Rootly supports just-in-time user provisioning, which is enabled by default. There is no action item for you in this section. If a user doesn't already exist in Rootly, a new one is created after authentication.
+
+## Test SSO
+
+In this section, you test your Azure AD single sign-on configuration with the following options.
+
+#### SP initiated:
+
+* Click **Test this application** in the Azure portal. This redirects to the Rootly Sign-On URL, where you can initiate the login flow.
+
+* Go to the Rootly Sign-On URL directly and initiate the login flow from there.
+
+#### IDP initiated:
+
+* Click **Test this application** in the Azure portal, and you should be automatically signed in to the Rootly instance for which you set up SSO.
+
+You can also use Microsoft My Apps to test the application in any mode. When you click the Rootly tile in My Apps, if the application is configured in SP mode you're redirected to the application sign-on page to initiate the login flow, and if it's configured in IDP mode you're automatically signed in to the Rootly instance for which you set up SSO. For more information about My Apps, see [Introduction to My Apps](../user-help/my-apps-portal-end-user-access.md).
+
+## Next steps
+
+Once you configure Rootly, you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
active-directory Tableau Online Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/tableau-online-provisioning-tutorial.md
The Azure AD provisioning service allows you to scope who will be provisioned ba
+* If you need additional roles, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add new roles. A sketch of one manifest `appRoles` entry follows.
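+
+  For illustration only, a single `appRoles` entry in the application manifest might look like the following sketch. The `id` must be a unique GUID that you generate, and `value` must match a valid Tableau site role:
+
+  ```json
+  {
+    "allowedMemberTypes": [ "User" ],
+    "description": "Tableau Creator site role",
+    "displayName": "Creator",
+    "id": "<unique-guid>",
+    "isEnabled": true,
+    "value": "Creator"
+  }
+  ```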
+### Recommendations
+Tableau Cloud stores only the highest-privileged role that is assigned to a user. In other words, if a user is assigned to two groups, the user's role reflects the highest-privileged role.
++
+To keep track of role assignments, you can create purpose-specific groups for role assignments. For example, you can create groups such as Tableau – Creator and Tableau – Explorer. Assignment would then look like the following (a CLI sketch for creating such groups appears after this list):
+* Tableau – Creator: Creator
+* Tableau – Explorer: Explorer
+* And so on.
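+
+As a minimal Azure CLI sketch (the group names and mail nicknames are hypothetical placeholders):
+
+```azurecli
+# Create one Azure AD group per Tableau site role.
+az ad group create --display-name "Tableau - Creator" --mail-nickname "tableau-creator"
+az ad group create --display-name "Tableau - Explorer" --mail-nickname "tableau-explorer"
+```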
+
+Once provisioning is set up, you should make role changes directly in Azure Active Directory. Otherwise, you may end up with role inconsistencies between Tableau Cloud and Azure Active Directory.
+
+### Valid Tableau site role values
+On the **Select a Role** page in your Azure Active Directory portal, the valid Tableau Site Role values are the following: **Creator, SiteAdministratorCreator, Explorer, SiteAdministratorExplorer, ExplorerCanPublish, Viewer, or Unlicensed**.
++
+If you select a role that is not in the above list, such as a legacy (pre-v2018.1) role, you will experience an error.
+ ## Step 5. Configure automatic user provisioning to Tableau Cloud
This section guides you through the steps to configure the Azure AD provisioning
This operation starts the initial synchronization cycle of all users and groups defined in **Scope** in the **Settings** section. The initial cycle takes longer to complete than next cycles, which occur approximately every 40 minutes as long as the Azure AD provisioning service is running.
-### Recommendations
-Tableau Cloud will only store the highest privileged role that is assigned to a user. In other words, if a user is assigned to two groups, the user's role will reflect the highest privileged role.
--
-To keep track of role assignments, you can create two purpose-specific groups for role assignments. For example, you can create groups such as Tableau – Creator, and Tableau – Explorer, etc. Assignment would then look like:
-* Tableau – Creator: Creator
-* Tableau – Explorer: Explorer
-* Etc.
-
-Once provisioning is set up, you will want to edit role changes directly in Azure Active Directory. Otherwise, you may end up with role inconsistencies between Tableau Cloud and Azure Active Directory.
-
-### Valid Tableau site role values
-On the **Select a Role** page in your Azure Active Directory portal, the Tableau Site Role values that are valid include the following: **Creator, SiteAdministratorCreator, Explorer, SiteAdministratorExplorer, ExplorerCanPublish, Viewer, or Unlicensed**.
--
-If you select a role that is not in the above list, such as a legacy (pre-v2018.1) role, you will experience an error.
### Update a Tableau Cloud application to use the Tableau Cloud SCIM 2.0 endpoint
active-directory Zdiscovery Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/zdiscovery-tutorial.md
+
+ Title: 'Tutorial: Azure AD SSO integration with ZDiscovery'
+description: Learn how to configure single sign-on between Azure Active Directory and ZDiscovery.
++++++++ Last updated : 06/27/2022++++
+# Tutorial: Azure AD SSO integration with ZDiscovery
+
+In this tutorial, you'll learn how to integrate ZDiscovery with Azure Active Directory (Azure AD). When you integrate ZDiscovery with Azure AD, you can:
+
+* Control in Azure AD who has access to ZDiscovery.
+* Enable your users to be automatically signed-in to ZDiscovery with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
+
+## Prerequisites
+
+To get started, you need the following items:
+
+* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+* ZDiscovery single sign-on (SSO) enabled subscription.
+* Along with Cloud Application Administrator, Application Administrator can also add or manage applications in Azure AD.
+For more information, see [Azure built-in roles](../roles/permissions-reference.md).
+
+## Scenario description
+
+In this tutorial, you configure and test Azure AD SSO in a test environment.
+
+* ZDiscovery supports **SP** and **IDP** initiated SSO.
+
+## Add ZDiscovery from the gallery
+
+To configure the integration of ZDiscovery into Azure AD, you need to add ZDiscovery from the gallery to your list of managed SaaS apps.
+
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
+1. On the left navigation pane, select the **Azure Active Directory** service.
+1. Navigate to **Enterprise Applications** and then select **All Applications**.
+1. To add a new application, select **New application**.
+1. In the **Add from the gallery** section, type **ZDiscovery** in the search box.
+1. Select **ZDiscovery** from the results panel and then add the app. Wait a few seconds while the app is added to your tenant.
+
+## Configure and test Azure AD SSO for ZDiscovery
+
+Configure and test Azure AD SSO with ZDiscovery using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in ZDiscovery.
+
+To configure and test Azure AD SSO with ZDiscovery, perform the following steps:
+
+1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
+ 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
+ 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
+1. **[Configure ZDiscovery SSO](#configure-zdiscovery-sso)** - to configure the single sign-on settings on the application side.
+ 1. **[Create ZDiscovery test user](#create-zdiscovery-test-user)** - to have a counterpart of B.Simon in ZDiscovery that is linked to the Azure AD representation of the user.
+1. **[Test SSO](#test-sso)** - to verify whether the configuration works.
+
+## Configure Azure AD SSO
+
+Follow these steps to enable Azure AD SSO in the Azure portal.
+
+1. In the Azure portal, on the **ZDiscovery** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
+
+ ![Screenshot shows to edit Basic S A M L Configuration.](common/edit-urls.png "Basic Configuration")
+
+1. In the **Basic SAML Configuration** section, perform the following steps:
+
+ a. In the **Identifier** textbox, type a value using the following pattern:
+ `urn:auth0:<AUTH0_TENANT>:<CONNECTION_NAME>`
+
+ b. In the **Reply URL** textbox, type a URL using one of the following patterns:
+
+ | **Reply URL** |
+ |--|
+ | `https://zapproved.auth0.com/login/callback?connection=<YOUR_AUTH0_CONNECTION_NAME>` |
+ | `https://zapproved-sandbox.auth0.com/login/callback?connection=<YOUR_AUTH0_CONNECTION_NAME>` |
+ | `https://zapproved-preview.us.auth0.com/login/callback?connection=<YOUR_AUTH0_CONNECTION_NAME>` |
+
+1. Click **Set additional URLs** and perform the following step if you wish to configure the application in **SP** initiated mode:
+
+ In the **Sign-on URL** text box, type a URL using one of the following patterns:
+
+ | **Sign-on URL** |
+ ||
+ | `https://zdiscovery.io/<CustomerName>/` |
+ | `https://zdiscovery-sandbox.io/<CustomerName>` |
+ | `https://zdiscovery-preview.io/<CustomerName>` |
+
+ > [!Note]
+> These values are not real. Update these values with the actual Identifier, Reply URL, and Sign-on URL. Contact the [ZDiscovery support team](mailto:support@zapproved.com) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
+
+1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Certificate (PEM)** and select **Download** to download the certificate and save it on your computer.
+
+ ![Screenshot shows the Certificate download link.](common/certificate-base64-download.png "Certificate")
+
+1. On the **Set up ZDiscovery** section, copy the appropriate URL(s) based on your requirement.
+
+ ![Screenshot shows to copy configuration appropriate U R L.](common/copy-configuration-urls.png "Metadata")
+
+### Create an Azure AD test user
+
+In this section, you'll create a test user in the Azure portal called B.Simon.
+
+1. From the left pane in the Azure portal, select **Azure Active Directory**, select **Users**, and then select **All users**.
+1. Select **New user** at the top of the screen.
+1. In the **User** properties, follow these steps:
+ 1. In the **Name** field, enter `B.Simon`.
+ 1. In the **User name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`.
+ 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box.
+ 1. Click **Create**.
+
+### Assign the Azure AD test user
+
+In this section, you'll enable B.Simon to use Azure single sign-on by granting access to ZDiscovery.
+
+1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
+1. In the applications list, select **ZDiscovery**.
+1. In the app's overview page, find the **Manage** section and select **Users and groups**.
+1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
+1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
+1. If you expect a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, the **Default Access** role is selected.
+1. In the **Add Assignment** dialog, click the **Assign** button.
+
+## Configure ZDiscovery SSO
+
+To configure single sign-on on the **ZDiscovery** side, you need to send the downloaded **Certificate (PEM)** and the appropriate copied URLs from the Azure portal to the [ZDiscovery support team](mailto:support@zapproved.com). They configure this setting so that the SAML SSO connection is set properly on both sides.
+
+### Create ZDiscovery test user
+
+In this section, you create a user called Britta Simon in ZDiscovery. Work with the [ZDiscovery support team](mailto:support@zapproved.com) to add the users in the ZDiscovery platform. Users must be created and activated before you use single sign-on.
+
+## Test SSO
+
+In this section, you test your Azure AD single sign-on configuration with the following options.
+
+#### SP initiated:
+
+* Click **Test this application** in the Azure portal. This redirects to the ZDiscovery Sign-On URL, where you can initiate the login flow.
+
+* Go to the ZDiscovery Sign-On URL directly and initiate the login flow from there.
+
+#### IDP initiated:
+
+* Click **Test this application** in the Azure portal, and you should be automatically signed in to the ZDiscovery instance for which you set up SSO.
+
+You can also use Microsoft My Apps to test the application in any mode. When you click the ZDiscovery tile in My Apps, if the application is configured in SP mode you're redirected to the application sign-on page to initiate the login flow, and if it's configured in IDP mode you're automatically signed in to the ZDiscovery instance for which you set up SSO. For more information about My Apps, see [Introduction to My Apps](../user-help/my-apps-portal-end-user-access.md).
+
+## Next steps
+
+Once you configure ZDiscovery, you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
aks Quick Kubernetes Deploy Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/learn/quick-kubernetes-deploy-cli.md
Title: 'Quickstart: Deploy an AKS cluster by using Azure CLI'
description: Learn how to quickly create a Kubernetes cluster, deploy an application, and monitor performance in Azure Kubernetes Service (AKS) using the Azure CLI. Previously updated : 04/29/2022 Last updated : 06/28/2022 #Customer intent: As a developer or cluster operator, I want to quickly create an AKS cluster and deploy an application so that I can see how to run and monitor applications using the managed Kubernetes service in Azure.
To learn more about creating a Windows Server node pool, see [Create an AKS clus
- This article requires version 2.0.64 or later of the Azure CLI. If using Azure Cloud Shell, the latest version is already installed. -- The identity you are using to create your cluster has the appropriate minimum permissions. For more details on access and identity for AKS, see [Access and identity options for Azure Kubernetes Service (AKS)](../concepts-identity.md).
+- The identity you are using to create your cluster has the appropriate minimum permissions. For more details on access and identity for AKS, see [Access and identity options for Azure Kubernetes Service (AKS)][aks-identity-concepts].
- If you have multiple Azure subscriptions, select the appropriate subscription ID in which the resources should be billed using the
-[az account](/cli/azure/account) command.
+[az account][az-account] command.
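+
+  For example (the subscription ID is a placeholder):
+
+  ```azurecli
+  az account set --subscription "00000000-0000-0000-0000-000000000000"
+  ```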
-- Verify *Microsoft.OperationsManagement* and *Microsoft.OperationalInsights* are registered on your subscription. To check the registration status:
+- Verify *Microsoft.OperationsManagement* and *Microsoft.OperationalInsights* providers are registered on your subscription. These are Azure resource providers required to support [Container insights][azure-monitor-containers]. To check the registration status, run the following commands:
- ```azurecli-interactive
+ ```azurecli
az provider show -n Microsoft.OperationsManagement -o table az provider show -n Microsoft.OperationalInsights -o table ```
- If they are not registered, register *Microsoft.OperationsManagement* and *Microsoft.OperationalInsights* using:
+ If they are not registered, register *Microsoft.OperationsManagement* and *Microsoft.OperationalInsights* using the following commands:
- ```azurecli-interactive
+ ```azurecli
az provider register --namespace Microsoft.OperationsManagement az provider register --namespace Microsoft.OperationalInsights ```
To learn more about creating a Windows Server node pool, see [Create an AKS clus
## Create a resource group
-An [Azure resource group](../../azure-resource-manager/management/overview.md) is a logical group in which Azure resources are deployed and managed. When you create a resource group, you are prompted to specify a location. This location is:
+An [Azure resource group][azure-resource-group] is a logical group in which Azure resources are deployed and managed. When you create a resource group, you are prompted to specify a location. This location is:
* The storage location of your resource group metadata. * Where your resources will run in Azure if you don't specify another region during resource creation.
The following output example resembles successful creation of the resource group
## Create AKS cluster
-Create an AKS cluster using the [az aks create][az-aks-create] command with the *--enable-addons monitoring* parameter to enable [Container insights][azure-monitor-containers]. The following example creates a cluster named *myAKSCluster* with one node:
+Create an AKS cluster using the [az aks create][az-aks-create] command with the *--enable-addons monitoring* parameter to enable [Container insights][azure-monitor-containers]. The following example creates a cluster named *myAKSCluster* with one node and enables a system-assigned managed identity:
```azurecli-interactive
-az aks create --resource-group myResourceGroup --name myAKSCluster --node-count 1 --enable-addons monitoring --generate-ssh-keys
+az aks create -g myResourceGroup -n myAKSCluster --enable-managed-identity --node-count 1 --enable-addons monitoring
``` After a few minutes, the command completes and returns JSON-formatted information about the cluster.
To manage a Kubernetes cluster, use the Kubernetes command-line client, [kubectl
az aks install-cli ```
-2. Configure `kubectl` to connect to your Kubernetes cluster using the [az aks get-credentials][az-aks-get-credentials] command. The following command:
- * Downloads credentials and configures the Kubernetes CLI to use them.
- * Uses `~/.kube/config`, the default location for the [Kubernetes configuration file][kubeconfig-file]. Specify a different location for your Kubernetes configuration file using *--file* argument.
+2. Configure `kubectl` to connect to your Kubernetes cluster using the [az aks get-credentials][az-aks-get-credentials] command. The following command:
+
+ * Downloads credentials and configures the Kubernetes CLI to use them.
+ * Uses `~/.kube/config`, the default location for the [Kubernetes configuration file][kubeconfig-file]. Specify a different location for your Kubernetes configuration file using the *--file* argument, as in the sketch below.
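+
+For example, a minimal sketch that writes the credentials to a custom path (the path is a placeholder):
+
+```azurecli
+az aks get-credentials --resource-group myResourceGroup --name myAKSCluster --file ./aks-kubeconfig
+```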
```azurecli-interactive az aks get-credentials --resource-group myResourceGroup --name myAKSCluster
Two [Kubernetes Services][kubernetes-service] are also created:
* An internal service for the Redis instance. * An external service to access the Azure Vote application from the internet.
-1. Create a file named `azure-vote.yaml`.
- * If you use the Azure Cloud Shell, this file can be created using `code`, `vi`, or `nano` as if working on a virtual or physical system
-1. Copy in the following YAML definition:
+1. Create a file named `azure-vote.yaml` and copy in the following manifest.
+
+ * If you use the Azure Cloud Shell, this file can be created using `code`, `vi`, or `nano` as if working on a virtual or physical system.
```yaml apiVersion: apps/v1
This quickstart is for introductory purposes. For guidance on a creating full so
<!-- LINKS - internal --> [kubernetes-concepts]: ../concepts-clusters-workloads.md [aks-monitor]: ../../azure-monitor/containers/container-insights-onboard.md
+[aks-identity-concepts]: ../concepts-identity.md
[aks-tutorial]: ../tutorial-kubernetes-prepare-app.md
+[azure-resource-group]: ../../azure-resource-manager/management/overview.md
+[az-account]: /cli/azure/account
[az-aks-browse]: /cli/azure/aks#az-aks-browse [az-aks-create]: /cli/azure/aks#az-aks-create [az-aks-get-credentials]: /cli/azure/aks#az-aks-get-credentials
aks Rdp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/rdp.md
This article shows you how to create an RDP connection with an AKS node using th
## Before you begin
+### [Azure CLI](#tab/azure-cli)
This article assumes that you have an existing AKS cluster with a Windows Server node. If you need an AKS cluster, see the article on [creating an AKS cluster with a Windows container using the Azure CLI][aks-quickstart-windows-cli]. You need the Windows administrator username and password for the Windows Server node you want to troubleshoot. You also need an RDP client such as [Microsoft Remote Desktop][rdp-mac]. If you need to reset the password, you can use `az aks update` to change the password, as in the sketch below.
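
A minimal sketch (the password variable is a placeholder and must meet Windows Server password requirements):

```azurecli
az aks update \
    --resource-group myResourceGroup \
    --name myAKSCluster \
    --windows-admin-password $WINDOWS_ADMIN_PASSWORD
```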
If you need to reset both the username and password, see [Reset Remote Desktop S
You also need the Azure CLI version 2.0.61 or later installed and configured. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI][install-azure-cli].
+### [Azure PowerShell](#tab/azure-powershell)
+
+This article assumes that you have an existing AKS cluster with a Windows Server node. If you need an AKS cluster, see the article on [creating an AKS cluster with a Windows container using the Azure PowerShell][aks-quickstart-windows-powershell]. You need the Windows administrator username and password for the Windows Server node you want to troubleshoot. You also need an RDP client such as [Microsoft Remote Desktop][rdp-mac].
+
+If you need to reset the password you can use `Set-AzAksCluster` to change the password.
+
+```azurepowershell-interactive
+$cluster = Get-AzAksCluster -ResourceGroupName myResourceGroup -Name myAKSCluster
+$cluster.WindowsProfile.AdminPassword = $WINDOWS_ADMIN_PASSWORD
+$cluster | Set-AzAksCluster
+```
+
+If you need to reset both the username and password, see [Reset Remote Desktop Services or its administrator password in a Windows VM
+](/troubleshoot/azure/virtual-machines/reset-rdp).
+
+You also need the Azure PowerShell version 7.5.0 or later installed and configured. Run `Get-InstalledModule -Name Az` to find the version. If you need to install or upgrade, see [Install Azure PowerShell][install-azure-powershell].
+++ ## Deploy a virtual machine to the same subnet as your cluster The Windows Server nodes of your AKS cluster don't have externally accessible IP addresses. To make an RDP connection, you can deploy a virtual machine with a publicly accessible IP address to the same subnet as your Windows Server nodes. The following example creates a virtual machine named *myVM* in the *myResourceGroup* resource group.
-First, get the subnet used by your Windows Server node pool. To get the subnet id, you need the name of the subnet. To get the name of the subnet, you need the name of the vnet. Get the vnet name by querying your cluster for its list of networks. To query the cluster, you need its name. You can get all of these by running the following in the Azure Cloud Shell:
+### [Azure CLI](#tab/azure-cli)
+
+First, get the subnet used by your Windows Server node pool. To get the subnet ID, you need the name of the subnet. To get the name of the subnet, you need the name of the VNet. Get the VNet name by querying your cluster for its list of networks. To query the cluster, you need its name. You can get all of these by running the following in the Azure Cloud Shell:
```azurecli-interactive CLUSTER_RG=$(az aks show -g myResourceGroup -n myAKSCluster --query nodeResourceGroup -o tsv)
The following example output shows the VM has been successfully created and disp
Record the public IP address of the virtual machine. You will use this address in a later step.
+### [Azure PowerShell](#tab/azure-powershell)
+
+First, get the subnet used by your Windows Server node pool. You need the name of the subnet and its address prefix. To get the name of the subnet, you need the name of the VNet. Get the VNet name by querying your cluster for its list of networks. To query the cluster, you need its name. You can get all of these by running the following in the Azure Cloud Shell:
+
+```azurepowershell-interactive
+$CLUSTER_RG = (Get-AzAksCluster -ResourceGroupName myResourceGroup -Name myAKSCluster).nodeResourceGroup
+$VNET = Get-AzVirtualNetwork -ResourceGroupName $CLUSTER_RG
+$VNET_NAME = $VNET.Name
+$ADDRESS_PREFIX = $VNET.AddressSpace | Select-Object -ExpandProperty AddressPrefixes
+$SUBNET_NAME = $VNET.Subnets[0].Name
+$SUBNET_ADDRESS_PREFIX = $VNET.Subnets[0] | Select-Object -ExpandProperty AddressPrefix
+```
+
+Now that you have the VNet and subnet details, run the following commands in the same Azure Cloud Shell window to create the public IP address and VM:
+
+```azurepowershell-interactive
+$ipParams = @{
+ Name = 'myPublicIP'
+ ResourceGroupName = 'myResourceGroup'
+ Location = 'eastus'
+ AllocationMethod = 'Dynamic'
+ IpAddressVersion = 'IPv4'
+}
+New-AzPublicIpAddress @ipParams
+
+$vmParams = @{
+ ResourceGroupName = 'myResourceGroup'
+ Name = 'myVM'
+ Image = 'win2019datacenter'
+ Credential = Get-Credential azureuser
+ VirtualNetworkName = $VNET_NAME
+ AddressPrefix = $ADDRESS_PREFIX
+ SubnetName = $SUBNET_NAME
+ SubnetAddressPrefix = $SUBNET_ADDRESS_PREFIX
+ PublicIpAddressName = 'myPublicIP'
+}
+New-AzVM @vmParams
+
+(Get-AzPublicIpAddress -ResourceGroupName myResourceGroup -Name myPublicIP).IpAddress
+```
+
+The following example output shows the VM has been successfully created and displays the public IP address of the virtual machine.
+
+```console
+13.62.204.18
+```
+
+Record the public IP address of the virtual machine. You will use this address in a later step.
+++ ## Allow access to the virtual machine AKS node pool subnets are protected with NSGs (Network Security Groups) by default. To get access to the virtual machine, you'll have to enabled access in the NSG.
AKS node pool subnets are protected with NSGs (Network Security Groups) by defau
> The NSGs are controlled by the AKS service. Any change you make to the NSG will be overwritten at any time by the control plane. >
-First, get the resource group and nsg name of the nsg to add the rule to:
+### [Azure CLI](#tab/azure-cli)
+
+First, get the resource group and name of the NSG to add the rule to:
```azurecli-interactive CLUSTER_RG=$(az aks show -g myResourceGroup -n myAKSCluster --query nodeResourceGroup -o tsv)
Then, create the NSG rule:
az network nsg rule create --name tempRDPAccess --resource-group $CLUSTER_RG --nsg-name $NSG_NAME --priority 100 --destination-port-range 3389 --protocol Tcp --description "Temporary RDP access to Windows nodes" ```
+### [Azure PowerShell](#tab/azure-powershell)
+
+First, get the resource group and name of the NSG to add the rule to:
+
+```azurepowershell-interactive
+$CLUSTER_RG = (Get-AzAksCluster -ResourceGroupName myResourceGroup -Name myAKSCluster).nodeResourceGroup
+$NSG_NAME = (Get-AzNetworkSecurityGroup -ResourceGroupName $CLUSTER_RG).Name
+```
+
+Then, create the NSG rule:
+
+```azurepowershell-interactive
+$nsgRuleParams = @{
+ Name = 'tempRDPAccess'
+ Access = 'Allow'
+ Direction = 'Inbound'
+ Priority = 100
+ SourceAddressPrefix = 'Internet'
+ SourcePortRange = '*'
+ DestinationAddressPrefix = '*'
+ DestinationPortRange = '3389'
+ Protocol = 'Tcp'
+ Description = 'Temporary RDP access to Windows nodes'
+}
+Get-AzNetworkSecurityGroup -Name $NSG_NAME -ResourceGroupName $CLUSTER_RG | Add-AzNetworkSecurityRuleConfig @nsgRuleParams | Set-AzNetworkSecurityGroup
+```
+++ ## Get the node address
+### [Azure CLI](#tab/azure-cli)
+ To manage a Kubernetes cluster, you use [kubectl][kubectl], the Kubernetes command-line client. If you use Azure Cloud Shell, `kubectl` is already installed. To install `kubectl` locally, use the [az aks install-cli][az-aks-install-cli] command:
-```azurecli-interactive
+```azurecli
az aks install-cli ```
To configure `kubectl` to connect to your Kubernetes cluster, use the [az aks ge
az aks get-credentials --resource-group myResourceGroup --name myAKSCluster ```
+### [Azure PowerShell](#tab/azure-powershell)
+
+To manage a Kubernetes cluster, you use [kubectl][kubectl], the Kubernetes command-line client. If you use Azure Cloud Shell, `kubectl` is already installed. To install `kubectl` locally, use the [Install-AzAksKubectl][install-azakskubectl] cmdlet:
+
+```azurepowershell
+Install-AzAksKubectl
+```
+
+To configure `kubectl` to connect to your Kubernetes cluster, use the [Import-AzAksCredential][import-azakscredential] cmdlet. This command downloads credentials and configures the Kubernetes CLI to use them.
+
+```azurepowershell-interactive
+Import-AzAksCredential -ResourceGroupName myResourceGroup -Name myAKSCluster
+```
+++ List the internal IP address of the Windows Server nodes using the [kubectl get][kubectl-get] command: ```console kubectl get nodes -o wide ```
-The follow example output shows the internal IP addresses of all the nodes in the cluster, including the Windows Server nodes.
+The following example output shows the internal IP addresses of all the nodes in the cluster, including the Windows Server nodes.
```console $ kubectl get nodes -o wide
You can now run any troubleshooting commands in the *cmd* window. Since Windows
## Remove RDP access
+### [Azure CLI](#tab/azure-cli)
+ When done, exit the RDP connection to the Windows Server node then exit the RDP session to the virtual machine. After you exit both RDP sessions, delete the virtual machine with the [az vm delete][az-vm-delete] command: ```azurecli-interactive
NSG_NAME=$(az network nsg list -g $CLUSTER_RG --query [].name -o tsv)
az network nsg rule delete --resource-group $CLUSTER_RG --nsg-name $NSG_NAME --name tempRDPAccess ```
+### [Azure PowerShell](#tab/azure-powershell)
+
+When done, exit the RDP connection to the Windows Server node then exit the RDP session to the virtual machine. After you exit both RDP sessions, delete the virtual machine with the [Remove-AzVM][remove-azvm] command:
+
+```azurepowershell-interactive
+Remove-AzVM -ResourceGroupName myResourceGroup -Name myVM
+```
+
+Next, remove the temporary NSG rule. First, get the resource group and name of the NSG:
+
+```azurepowershell-interactive
+$CLUSTER_RG = (Get-AzAksCluster -ResourceGroupName myResourceGroup -Name myAKSCluster).nodeResourceGroup
+$NSG_NAME = (Get-AzNetworkSecurityGroup -ResourceGroupName $CLUSTER_RG).Name
+```
+
+Then remove the rule:
+
+```azurepowershell-interactive
+Get-AzNetworkSecurityGroup -Name $NSG_NAME -ResourceGroupName $CLUSTER_RG | Remove-AzNetworkSecurityRuleConfig -Name tempRDPAccess | Set-AzNetworkSecurityGroup
+```
+++ ## Next steps If you need additional troubleshooting data, you can [view the Kubernetes master node logs][view-master-logs] or [Azure Monitor][azure-monitor-containers].
If you need additional troubleshooting data, you can [view the Kubernetes master
<!-- INTERNAL LINKS --> [aks-quickstart-windows-cli]: ./learn/quick-windows-container-deploy-cli.md
+[aks-quickstart-windows-powershell]: ./learn/quick-windows-container-deploy-powershell.md
[az-aks-install-cli]: /cli/azure/aks#az_aks_install_cli
+[install-azakskubectl]: /powershell/module/az.aks/install-azakskubectl
[az-aks-get-credentials]: /cli/azure/aks#az_aks_get_credentials
+[import-azakscredential]: /powershell/module/az.aks/import-azakscredential
[az-vm-delete]: /cli/azure/vm#az_vm_delete
+[remove-azvm]: /powershell/module/az.compute/remove-azvm
[azure-monitor-containers]: ../azure-monitor/containers/container-insights-overview.md [install-azure-cli]: /cli/azure/install-azure-cli
+[install-azure-powershell]: /powershell/azure/install-az-ps
[ssh-steps]: ssh.md
-[view-master-logs]: view-master-logs.md
+[view-master-logs]: view-master-logs.md
aks Windows Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/windows-faq.md
Instead of service principals, use managed identities, which are essentially wra
## How do I change the administrator password for Windows Server nodes on my cluster?
+### [Azure CLI](#tab/azure-cli)
When you create your AKS cluster, you specify the `--windows-admin-password` and `--windows-admin-username` parameters to set the administrator credentials for any Windows Server nodes on the cluster. If you didn't specify administrator credentials when you created a cluster by using the Azure portal or when setting `--vm-set-type VirtualMachineScaleSets` and `--network-plugin azure` by using the Azure CLI, the username defaults to *azureuser* and the password is randomized. To change the administrator password, use the `az aks update` command:
az aks update \
> > When you're changing `--windows-admin-password`, the new password must be at least 14 characters and meet [Windows Server password requirements][windows-server-password].
+### [Azure PowerShell](#tab/azure-powershell)
+
+When you create your AKS cluster, you specify the `-WindowsProfileAdminUserPassword` and `-WindowsProfileAdminUserName` parameters to set the administrator credentials for any Windows Server nodes on the cluster. If you didn't specify administrator credentials when you created a cluster by using the Azure portal or when setting `-NodeVmSetType VirtualMachineScaleSets` and `-NetworkPlugin azure` by using Azure PowerShell, the username defaults to *azureuser* and the password is randomized.
+
+To change the administrator password, use the `Set-AzAksCluster` command:
+
+```azurepowershell
+$cluster = Get-AzAksCluster -ResourceGroupName $RESOURCE_GROUP -Name $CLUSTER_NAME
+$cluster.WindowsProfile.AdminPassword = $NEW_PW
+$cluster | Set-AzAksCluster
+```
+
+> [!IMPORTANT]
+> Performing the `Set-AzAksCluster` operation upgrades only Windows Server node pools. Linux node pools are not affected.
+>
+> When you're changing the Windows administrator password, the new password must be at least 14 characters and meet [Windows Server password requirements][windows-server-password].
+++ ## How many node pools can I create? The AKS cluster can have a maximum of 100 node pools. You can have a maximum of 1,000 nodes across those node pools. For more information, see [Node pool limitations][nodepool-limitations].
api-management Api Management Howto Use Managed Service Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-howto-use-managed-service-identity.md
To configure an access policy using the portal:
### <a name="use-ssl-tls-certificate-from-azure-key-vault"></a>Obtain a custom TLS/SSL certificate for the API Management instance from Azure Key Vault You can use the system-assigned identity of an API Management instance to retrieve custom TLS/SSL certificates stored in Azure Key Vault. You can then assign these certificates to custom domains in the API Management instance. Keep these considerations in mind: -- The content type of the secret must be *application/x-pkcs12*.
+- The content type of the secret must be *application/x-pkcs12*. Learn more about custom domain [certificate requirements](configure-custom-domain.md?tabs=key-vault#domain-certificate-options).
- Use the Key Vault certificate secret endpoint, which contains the secret. > [!Important] > If you don't provide the object version of the certificate, API Management will automatically obtain the newer version of the certificate within four hours after it's updated in Key Vault.
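
For example, you can look up the versioned secret identifier with the Azure CLI and drop the version segment, so that API Management picks up rotated certificates (a sketch; the vault and certificate names are placeholders):

```azurecli
# Returns an ID like https://contoso-kv.vault.azure.net/secrets/contosogatewaycertificate/<version>;
# omit the trailing <version> segment when you configure API Management.
az keyvault secret show --vault-name contoso-kv --name contosogatewaycertificate --query id -o tsv
```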
-The following example shows an Azure Resource Manager template that contains the following steps:
+The following example shows an Azure Resource Manager template that uses the system-assigned managed identity of an API Management service instance to retrieve a custom domain certificate from Key Vault.
+
+#### Prerequisites
+
+* An API Management service instance configured with a system-assigned managed identity. To create the instance, you can use an [Azure Quickstart Template](https://azure.microsoft.com/resources/templates/api-management-create-with-msi/).
+* An Azure Key Vault instance in the same resource group, hosting a certificate that will be used as a custom domain certificate in API Management.
+
+The following template contains the following steps.
+
+1. Update the access policies of the Azure Key Vault instance and allow the API Management instance to obtain secrets from it.
+1. Update the API Management instance by setting a custom domain name through the certificate from the Key Vault instance.
-1. Create an API Management instance with a managed identity.
-2. Update the access policies of an Azure Key Vault instance and allow the API Management instance to obtain secrets from it.
-3. Update the API Management instance by setting a custom domain name through a certificate from the Key Vault instance.
+When you run the template, provide parameter values appropriate for your environment.
```json {
- "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
- "contentVersion": "1.0.0.0",
- "parameters": {
- "publisherEmail": {
- "type": "string",
- "minLength": 1,
- "metadata": {
- "description": "The email address of the owner of the service"
- }
- },
- "publisherName": {
+ "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
+ "contentVersion": "1.0.0.0",
+ "parameters": {
+ "apiManagementServiceName": {
"type": "string",
- "defaultValue": "Contoso",
- "minLength": 1,
- "metadata": {
- "description": "The name of the owner of the service"
- }
- },
- "sku": {
- "type": "string",
- "allowedValues": ["Developer",
- "Standard",
- "Premium"],
- "defaultValue": "Developer",
- "metadata": {
- "description": "The pricing tier of this API Management instance"
- }
- },
- "skuCount": {
- "type": "int",
- "defaultValue": 1,
- "metadata": {
- "description": "The instance size of this API Management instance."
+ "minLength": 8,
+ "metadata":{
+ "description": "The name of the API Management service"
} },
+ "publisherEmail": {
+ "type": "string",
+ "minLength": 1,
+ "metadata": {
+ "description": "The email address of the owner of the service"
+ }
+ },
+ "publisherName": {
+ "type": "string",
+ "minLength": 1,
+ "metadata": {
+ "description": "The name of the owner of the service"
+ }
+ },
+ "sku": {
+ "type": "string",
+ "allowedValues": ["Developer",
+ "Standard",
+ "Premium"],
+ "defaultValue": "Developer",
+ "metadata": {
+ "description": "The pricing tier of this API Management service"
+ }
+ },
+ "skuCount": {
+ "type": "int",
+ "defaultValue": 1,
+ "metadata": {
+ "description": "The instance size of this API Management service."
+ }
+ },
"keyVaultName": { "type": "string", "metadata": {
- "description": "Name of the vault"
- }
- },
- "proxyCustomHostname1": {
- "type": "string",
- "metadata": {
- "description": "Gateway custom hostname."
+ "description": "Name of the key vault"
} },
- "keyVaultIdToCertificate": {
- "type": "string",
- "metadata": {
- "description": "Reference to the Key Vault certificate. https://contoso.vault.azure.net/secrets/contosogatewaycertificate."
- }
- }
- },
- "variables": {
- "apiManagementServiceName": "[concat('apiservice', uniqueString(resourceGroup().id))]",
- "apimServiceIdentityResourceId": "[concat(resourceId('Microsoft.ApiManagement/service', variables('apiManagementServiceName')),'/providers/Microsoft.ManagedIdentity/Identities/default')]"
- },
- "resources": [{
+ "proxyCustomHostname1": {
+ "type": "string",
+ "metadata": {
+ "description": "Gateway custom hostname 1. Example: api.contoso.com"
+ }
+ },
+ "keyVaultIdToCertificate": {
+ "type": "string",
+ "metadata": {
+ "description": "Reference to the key vault certificate. Example: https://contoso.vault.azure.net/secrets/contosogatewaycertificate"
+ }
+ }
+ },
+ "variables": {
+ "apimServiceIdentityResourceId": "[concat(resourceId('Microsoft.ApiManagement/service', parameters('apiManagementServiceName')),'/providers/Microsoft.ManagedIdentity/Identities/default')]"
+ },
+ "resources": [
+ {
"apiVersion": "2021-08-01",
- "name": "[variables('apiManagementServiceName')]",
+ "name": "[parameters('apiManagementServiceName')]",
"type": "Microsoft.ApiManagement/service", "location": "[resourceGroup().location]", "tags": {
The following example shows an Azure Resource Manager template that contains the
{ "type": "Microsoft.KeyVault/vaults/accessPolicies", "name": "[concat(parameters('keyVaultName'), '/add')]",
- "apiVersion": "2015-06-01",
- "dependsOn": [
- "[resourceId('Microsoft.ApiManagement/service', variables('apiManagementServiceName'))]"
- ],
+ "apiVersion": "2018-02-14",
"properties": { "accessPolicies": [{
- "tenantId": "[reference(variables('apimServiceIdentityResourceId'), '2015-08-31-PREVIEW').tenantId]",
- "objectId": "[reference(variables('apimServiceIdentityResourceId'), '2015-08-31-PREVIEW').principalId]",
+ "tenantId": "[reference(variables('apimServiceIdentityResourceId'), '2018-11-30').tenantId]",
+ "objectId": "[reference(variables('apimServiceIdentityResourceId'), '2018-11-30').principalId]",
"permissions": { "secrets": ["get", "list"] } }] } },
- {
- "apiVersion": "2017-05-10",
+ {
+ "apiVersion": "2021-04-01",
+ "type": "Microsoft.Resources/deployments",
"name": "apimWithKeyVault",
- "type": "Microsoft.Resources/deployments",
- "dependsOn": [
- "[resourceId('Microsoft.ApiManagement/service', variables('apiManagementServiceName'))]"
+ "dependsOn": [
+ "[resourceId('Microsoft.ApiManagement/service', parameters('apiManagementServiceName'))]"
], "properties": { "mode": "incremental",
- "templateLink": {
- "uri": "https://raw.githubusercontent.com/solankisamir/arm-templates/master/basicapim.keyvault.json",
- "contentVersion": "1.0.0.0"
- },
- "parameters": {
- "publisherEmail": { "value": "[parameters('publisherEmail')]"},
- "publisherName": { "value": "[parameters('publisherName')]"},
- "sku": { "value": "[parameters('sku')]"},
- "skuCount": { "value": "[parameters('skuCount')]"},
- "proxyCustomHostname1": {"value" : "[parameters('proxyCustomHostname1')]"},
- "keyVaultIdToCertificate": {"value" : "[parameters('keyVaultIdToCertificate')]"}
- }
- }
- }]
+ "template": {
+ "$schema": "http://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
+ "contentVersion": "1.0.0.0",
+ "parameters": {},
+ "resources": [{
+ "apiVersion": "2021-08-01",
+ "name": "[parameters('apiManagementServiceName')]",
+ "type": "Microsoft.ApiManagement/service",
+ "location": "[resourceGroup().location]",
+ "tags": {
+ },
+ "sku": {
+ "name": "[parameters('sku')]",
+ "capacity": "[parameters('skuCount')]"
+ },
+ "properties": {
+ "publisherEmail": "[parameters('publisherEmail')]",
+ "publisherName": "[parameters('publisherName')]",
+ "hostnameConfigurations": [{
+ "type": "Proxy",
+ "hostName": "[parameters('proxyCustomHostname1')]",
+ "keyVaultId": "[parameters('keyVaultIdToCertificate')]"
+ }]
+ },
+ "identity": {
+ "type": "systemAssigned"
+ }
+ }]
+ }
+ }
+ }
+]
}
```
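For orientation, a template like the one above can be deployed with a single CLI call. This is a minimal sketch with placeholder values, assuming the template is saved locally as `apim-keyvault.json` (a hypothetical file name):

```azurecli
az deployment group create \
  --resource-group <resource-group-name> \
  --template-file apim-keyvault.json \
  --parameters apiManagementServiceName=<apim-name> publisherEmail=<email> publisherName=<publisher> \
    keyVaultName=<key-vault-name> proxyCustomHostname1=<gateway-hostname> \
    keyVaultIdToCertificate=<key-vault-certificate-uri>
```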
API Management is a trusted Microsoft service to the following resources. This a
|Azure Key Vault | [Trusted-access-to-azure-key-vault](../key-vault/general/overview-vnet-service-endpoints.md#trusted-services)|
|Azure Storage | [Trusted-access-to-azure-storage](../storage/common/storage-network-security.md?tabs=azure-portal#trusted-access-based-on-system-assigned-managed-identity)|
|Azure Service Bus | [Trusted-access-to-azure-service-bus](../service-bus-messaging/service-bus-ip-filtering.md#trusted-microsoft-services)|
-|Azure Event Hub | [Trused-access-to-azure-event-hub](../event-hubs/event-hubs-ip-filtering.md#trusted-microsoft-services)|
+|Azure Event Hubs | [Trusted-access-to-azure-event-hub](../event-hubs/event-hubs-ip-filtering.md#trusted-microsoft-services)|
## Create a user-assigned managed identity
Keep these considerations in mind:
For the complete template, see [API Management with Key Vault based SSL using User Assigned Identity](https://github.com/Azure/azure-quickstart-templates/blob/master/quickstarts/microsoft.apimanagement/api-management-key-vault-create/azuredeploy.json).
-In this template, you will deploy:
+In this template, you'll deploy:
* Azure API Management instance
* Azure user-assigned managed identity
* Azure Key Vault for storing the SSL/TLS certificate
-To run the deployment automatically, click the following button:
+To run the deployment automatically, select the following button:
[![Deploy to Azure](../media/template-deployments/deploy-to-azure.svg)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2Fquickstarts%2Fmicrosoft.apimanagement%2Fapi-management-key-vault-create%2Fazuredeploy.json)
api-management Graphql Policies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/graphql-policies.md
This article provides a reference for API Management policies to validate and re
The `validate-graphql-request` policy validates the GraphQL request and authorizes access to specific query paths. An invalid query is a "request error". Authorization is only done for valid requests.
-- **Permissions**
Because GraphQL queries use a flattened schema:
* Permissions may be applied at any leaf node of an output type:
Because GraphQL queries use a flattened schema:
* Fragments
* Unions
* Interfaces
- * The schema element
+ * The schema element
**Authorize element**
Configure the `authorize` element to set an appropriate authorization rule for one or more paths.
Configure the `authorize` element to set an appropriate authorization rule for o
**Introspection system**
The policy for path=`/__*` is the [introspection](https://graphql.org/learn/introspection/) system. You can use it to reject introspection requests (`__schema`, `__type`, etc.).
+
### Policy statement

```xml
The `set-graphql-resolver` policy retrieves or sets data for a GraphQL field in
<set-graphql-resolver parent-type="type" field="field">
<http-data-source>
<http-request>
- <set-method>HTTP method</set-method>
+ <set-method>...set-method policy configuration...</set-method>
<set-url>URL</set-url>
- [...]
+ <set-header>...set-header policy configuration...</set-header>
+ <set-body>...set-body policy configuration...</set-body>
+ <authentication-certificate>...authentication-certificate policy configuration...</authentication-certificate>
</http-request>
<http-response>
- [...]
+ <json-to-xml>...json-to-xml policy configuration...</json-to-xml>
+ <xml-to-json>...xml-to-json policy configuration...</xml-to-json>
+ <find-and-replace>...find-and-replace policy configuration...</find-and-replace>
</http-response>
</http-data-source>
</set-graphql-resolver>
type User {
| Name | Description | Required |
| -- | -- | -- |
| `set-graphql-resolver` | Root element. | Yes |
| `http-data-source` | Configures the HTTP request and optionally the HTTP response that are used to resolve data for the given `parent-type` and `field`. | Yes |
-| `http-request` | Specifies a URL and child policies to configure the resolver's HTTP request. Each of the following policies can be specified at most once in the element. <br/><br/>Required policy: [set-method](api-management-advanced-policies.md#SetRequestMethod)<br/><br/>Optional policies: [set-header](api-management-transformation-policies.md#SetHTTPheader), [set-body](api-management-transformation-policies.md#SetBody), [authentication-certificate](api-management-authentication-policies.md#ClientCertificate) | Yes |
-| `set-url` | The URL of the resolver's HTTP request. | Yes |
-| `http-response` | Optionally specifies child policies to configure the resolver's HTTP response. If not specified, the response is returned as a raw string. Each of the following policies can be specified at most once. <br/><br/>Optional policies: [set-body](api-management-transformation-policies.md#SetBody), [json-to-xml](api-management-transformation-policies.md#ConvertJSONtoXML), [xml-to-json](api-management-transformation-policies.md#ConvertXMLtoJSON), [find-and-replace](api-management-transformation-policies.md#Findandreplacestringinbody) | No |
+| `http-request` | Specifies a URL and child policies to configure the resolver's HTTP request. Each child element can be specified at most once. | Yes |
+| `set-method` | Method of the resolver's HTTP request, configured using the [set-method](api-management-advanced-policies.md#SetRequestMethod) policy. | Yes |
+| `set-url` | URL of the resolver's HTTP request. | Yes |
+| `set-header` | Header set in the resolver's HTTP request, configured using the [set-header](api-management-transformation-policies.md#SetHTTPheader) policy. | No |
+| `set-body` | Body set in the resolver's HTTP request, configured using the [set-body](api-management-transformation-policies.md#SetBody) policy. | No |
+| `authentication-certificate` | Client certificate presented in the resolver's HTTP request, configured using the [authentication-certificate](api-management-authentication-policies.md#ClientCertificate) policy. | No |
+| `http-response` | Optionally specifies child policies to configure the resolver's HTTP response. If not specified, the response is returned as a raw string. Each child element can be specified at most once. | No |
+| `json-to-xml` | Transforms the resolver's HTTP response using the [json-to-xml](api-management-transformation-policies.md#ConvertJSONtoXML) policy. | No |
+| `xml-to-json` | Transforms the resolver's HTTP response using the [xml-to-json](api-management-transformation-policies.md#ConvertXMLtoJSON) policy. | No |
+| `find-and-replace` | Transforms the resolver's HTTP response using the [find-and-replace](api-management-transformation-policies.md#Findandreplacestringinbody) policy. | No |
+ ### Attributes
api-management How To Deploy Self Hosted Gateway Docker https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/how-to-deploy-self-hosted-gateway-docker.md
This article provides the steps for deploying self-hosted gateway component of A
docker run -d -p 80:8080 -p 443:8081 --name <gateway-name> --env-file env.conf mcr.microsoft.com/azure-api-management/gateway:<tag>
```
-9. Execute the command. The command instructs your Docker environment to run the container using a [container image](https://aka.ms/apim/sputnik/registry-portal) from the Microsoft Artifact Registry, and to map the container's HTTP (8080) and HTTPS (8081) ports to ports 80 and 443 on the host.
+9. Execute the command. The command instructs your Docker environment to run the container using a [container image](https://aka.ms/apim/shgw/registry-portal) from the Microsoft Artifact Registry, and to map the container's HTTP (8080) and HTTPS (8081) ports to ports 80 and 443 on the host.
10. Run the following command to check if the gateway container is running:
```console
docker ps
```
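For context, the `env.conf` file referenced by `--env-file` holds the gateway's configuration endpoint and access token. A minimal sketch with placeholder values; copy the actual values from the gateway's **Deployment** settings in the portal:

```
config.service.endpoint=https://<apim-name>.configuration.azure-api.net
config.service.auth=GatewayKey <gateway-access-token>
```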
api-management How To Deploy Self Hosted Gateway Kubernetes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/how-to-deploy-self-hosted-gateway-kubernetes.md
This article describes the steps for deploying the self-hosted gateway component
6. Select the **\<gateway-name\>.yml** file link and download the YAML file.
7. Select the **copy** icon at the lower-right corner of the **Deploy** text box to save the `kubectl` commands to the clipboard.
8. Paste commands to the terminal (or command) window. The first command creates a Kubernetes secret that contains the access token generated in step 4. The second command applies the configuration file downloaded in step 6 to the Kubernetes cluster and expects the file to be in the current directory.
-9. Run the commands to create the necessary Kubernetes objects in the [default namespace](https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/) and start self-hosted gateway pods from the [container image](https://aka.ms/apim/sputnik/registry-portal) downloaded from the Microsoft Artifact Registry.
+9. Run the commands to create the necessary Kubernetes objects in the [default namespace](https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/) and start self-hosted gateway pods from the [container image](https://aka.ms/apim/shgw/registry-portal) downloaded from the Microsoft Artifact Registry.
10. Run the following command to check if the deployment succeeded. Note that it might take a little time for all the objects to be created and for the pods to initialize.
```console
api-management Self Hosted Gateway Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/self-hosted-gateway-overview.md
Deploying self-hosted gateways into the same environments where the backend API
## Packaging and features
-The self-hosted gateway is a containerized, functionally equivalent version of the managed gateway deployed to Azure as part of every API Management service. The self-hosted gateway is available as a Linux-based Docker [container image](https://aka.ms/apim/sputnik/registry-portal) from the Microsoft Artifact Registry. It can be deployed to Docker, Kubernetes, or any other container orchestration solution running on a server cluster on premises, cloud infrastructure, or for evaluation and development purposes, on a personal computer. You can also deploy the self-hosted gateway as a cluster extension to an [Azure Arc-enabled Kubernetes cluster](./how-to-deploy-self-hosted-gateway-azure-arc.md).
+The self-hosted gateway is a containerized, functionally equivalent version of the managed gateway deployed to Azure as part of every API Management service. The self-hosted gateway is available as a Linux-based Docker [container image](https://aka.ms/apim/shgw/registry-portal) from the Microsoft Artifact Registry. It can be deployed to Docker, Kubernetes, or any other container orchestration solution running on a server cluster on premises, cloud infrastructure, or for evaluation and development purposes, on a personal computer. You can also deploy the self-hosted gateway as a cluster extension to an [Azure Arc-enabled Kubernetes cluster](./how-to-deploy-self-hosted-gateway-azure-arc.md).
### Known limitations
api-management Set Edit Policies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/set-edit-policies.md
To configure a policy:
</on-error>
</policies>
```
+ > [!NOTE]
+ > Set a policy's elements and child elements in the order provided in the policy statement.
+ 1. Select **Save** to propagate changes to the API Management gateway immediately. The **ip-filter** policy now appears in the **Inbound processing** section.
app-service Manage Backup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/manage-backup.md
There are two types of backups in App Service. Automatic backups made for your a
| [Storage account](../storage/index.yml) required | No. | Yes. |
| Backup frequency | Hourly, not configurable. | Configurable. |
| Retention | 30 days, not configurable. | 0-30 days or indefinite. |
-| Donwloadable | No. | Yes, as Azure Storage blobs. |
+| Downloadable | No. | Yes, as Azure Storage blobs. |
| Partial backups | Not supported. | Supported. |
<!-
app-service Manage Move Across Regions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/manage-move-across-regions.md
Certain resources, such as imported certificates or hybrid connections, contain
1. [Create a backup of the source app](manage-backup.md).
1. [Create an app in a new App Service plan, in the target region](app-service-plan-manage.md#create-an-app-service-plan).
2. [Restore the backup in the target app](manage-backup.md).
-2. If you use a custom domain, [bind it preemptively to the target app](manage-custom-dns-migrate-domain.md#bind-the-domain-name-preemptively) with `awverify.` and [enable the domain in the target app](manage-custom-dns-migrate-domain.md#enable-the-domain-for-your-app).
+2. If you use a custom domain, [bind it preemptively to the target app](manage-custom-dns-migrate-domain.md#bind-the-domain-name-preemptively) with `asuid.` and [enable the domain in the target app](manage-custom-dns-migrate-domain.md#enable-the-domain-for-your-app).
3. Configure everything else in your target app to be the same as the source app and verify your configuration.
4. When you're ready for the custom domain to point to the target app, [remap the domain name](manage-custom-dns-migrate-domain.md#remap-the-active-dns-name).
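As an optional check after step 2, you can confirm that the verification record resolves before remapping DNS; the hostname below is a placeholder:

```console
nslookup -type=TXT asuid.www.contoso.com
```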
app-service Quickstart Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/quickstart-python.md
Having issues? [Let us know](https://aka.ms/PythonAppServiceQuickstartFeedback).
To host your application in Azure, you need to create an Azure App Service web app. You can create a web app using the [Azure portal](https://portal.azure.com/), [VS Code](https://code.visualstudio.com/) using the [Azure Tools extension pack](https://marketplace.visualstudio.com/items?itemName=ms-vscode.vscode-node-azure-pack), or the Azure CLI.
-### [Azure portal](#tab/azure-portal)
-
-Sign in to the [Azure portal](https://portal.azure.com/) and follow these steps to create your Azure App Service resources.
+### [Azure CLI](#tab/azure-cli)
-| Instructions | Screenshot |
-|:-|--:|
-| [!INCLUDE [Create app service step 1](<./includes/quickstart-python/create-app-service-azure-portal-1.md>)] | :::image type="content" source="./media/quickstart-python/create-app-service-azure-portal-1-240px.png" alt-text="A screenshot of how to use the search box in the top tool bar to find App Services in Azure." lightbox="./media/quickstart-python/create-app-service-azure-portal-1.png"::: |
-| [!INCLUDE [Create app service step 1](<./includes/quickstart-python/create-app-service-azure-portal-2.md>)] | :::image type="content" source="./media/quickstart-python/create-app-service-azure-portal-2-240px.png" alt-text="A screenshot of the location of the Create button on the App Services page in the Azure portal." lightbox="./media/quickstart-python/create-app-service-azure-portal-2.png"::: |
-| [!INCLUDE [Create app service step 1](<./includes/quickstart-python/create-app-service-azure-portal-3.md>)] | :::image type="content" source="./media/quickstart-python/create-app-service-azure-portal-3-240px.png" alt-text="A screenshot of how to fill out the form to create a new App Service in the Azure portal." lightbox="./media/quickstart-python/create-app-service-azure-portal-3.png"::: |
-| [!INCLUDE [Create app service step 1](<./includes/quickstart-python/create-app-service-azure-portal-4.md>)] | :::image type="content" source="./media/quickstart-python/create-app-service-azure-portal-4-240px.png" alt-text="A screenshot of how to select the basic app service plan in the Azure portal." lightbox="./media/quickstart-python/create-app-service-azure-portal-4.png"::: |
-| [!INCLUDE [Create app service step 1](<./includes/quickstart-python/create-app-service-azure-portal-5.md>)] | :::image type="content" source="./media/quickstart-python/create-app-service-azure-portal-5-240px.png" alt-text="A screenshot of the location of the Review plus Create button in the Azure portal." lightbox="./media/quickstart-python/create-app-service-azure-portal-5.png"::: |
### [VS Code](#tab/vscode-aztools)
code .
| [!INCLUDE [Create app service step 8](<./includes/quickstart-python/create-app-service-visual-studio-code-8.md>)] | :::image type="content" source="./media/quickstart-python/create-app-service-visual-studio-code-8-240-px.png" alt-text="A screenshot of a dialog box in VS Code asking if you want to update your workspace to run build commands." lightbox="./media/quickstart-python/create-app-service-visual-studio-code-8.png"::: |
| [!INCLUDE [Create app service step 9](<./includes/quickstart-python/create-app-service-visual-studio-code-9.md>)] | :::image type="content" source="./media/quickstart-python/create-app-service-visual-studio-code-9-240-px.png" alt-text="A screenshot showing the confirmation dialog when the app code has been deployed to Azure." lightbox="./media/quickstart-python/create-app-service-visual-studio-code-9.png"::: |
-### [Azure CLI](#tab/azure-cli)
+### [Azure portal](#tab/azure-portal)
+Sign in to the [Azure portal](https://portal.azure.com/) and follow these steps to create your Azure App Service resources.
+
+| Instructions | Screenshot |
+|:-|--:|
+| [!INCLUDE [Create app service step 1](<./includes/quickstart-python/create-app-service-azure-portal-1.md>)] | :::image type="content" source="./media/quickstart-python/create-app-service-azure-portal-1-240px.png" alt-text="A screenshot of how to use the search box in the top tool bar to find App Services in Azure." lightbox="./media/quickstart-python/create-app-service-azure-portal-1.png"::: |
+| [!INCLUDE [Create app service step 1](<./includes/quickstart-python/create-app-service-azure-portal-2.md>)] | :::image type="content" source="./media/quickstart-python/create-app-service-azure-portal-2-240px.png" alt-text="A screenshot of the location of the Create button on the App Services page in the Azure portal." lightbox="./media/quickstart-python/create-app-service-azure-portal-2.png"::: |
+| [!INCLUDE [Create app service step 1](<./includes/quickstart-python/create-app-service-azure-portal-3.md>)] | :::image type="content" source="./media/quickstart-python/create-app-service-azure-portal-3-240px.png" alt-text="A screenshot of how to fill out the form to create a new App Service in the Azure portal." lightbox="./media/quickstart-python/create-app-service-azure-portal-3.png"::: |
+| [!INCLUDE [Create app service step 1](<./includes/quickstart-python/create-app-service-azure-portal-4.md>)] | :::image type="content" source="./media/quickstart-python/create-app-service-azure-portal-4-240px.png" alt-text="A screenshot of how to select the basic app service plan in the Azure portal." lightbox="./media/quickstart-python/create-app-service-azure-portal-4.png"::: |
+| [!INCLUDE [Create app service step 1](<./includes/quickstart-python/create-app-service-azure-portal-5.md>)] | :::image type="content" source="./media/quickstart-python/create-app-service-azure-portal-5-240px.png" alt-text="A screenshot of the location of the Review plus Create button in the Azure portal." lightbox="./media/quickstart-python/create-app-service-azure-portal-5.png"::: |
applied-ai-services Try Sample Label Tool https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/quickstarts/try-sample-label-tool.md
Train a custom model to analyze and extract data from forms and documents specif
### Prerequisites for training a custom form model
-* An Azure Storage blob container that contains a set of training data. Make sure all the training documents are of the same format. If you have forms in multiple formats, organize them into subfolders based on common format. For this project, you can use our [sample data set](https://github.com/Azure-Samples/cognitive-services-REST-api-samples/blob/master/curl/form-recognizer/sample_data_without_labels.zip). If you don't know how to create an Azure storage account with a container, follow the [Azure Storage quickstart for Azure portal](../../../storage/blobs/storage-quickstart-blobs-portal.md).
+* An Azure Storage blob container that contains a set of training data. Make sure all the training documents are of the same format. If you have forms in multiple formats, organize them into subfolders based on common format. For this project, you can use our [sample data set](https://github.com/Azure-Samples/cognitive-services-REST-api-samples/blob/master/curl/form-recognizer/sample_data_without_labels.zip).
+
+* If you don't know how to create an Azure storage account with a container, follow the [Azure Storage quickstart for Azure portal](../../../storage/blobs/storage-quickstart-blobs-portal.md).
* Configure CORS
applied-ai-services Try V3 Form Recognizer Studio https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/quickstarts/try-v3-form-recognizer-studio.md
In the following example, we use the General Documents feature. The steps to use
1. This step is a one-time process unless you've already selected the service resource from prior use. Select your Azure subscription, resource group, and resource. (You can change the resources anytime in "Settings" in the top menu.) Review and confirm your selections.
-1. Select the Analyze command to run analysis on the sample document or try your document by using the Add command.
+1. Select the Analyze button to run analysis on the sample document or try your document by using the Add command.
1. Use the controls at the bottom of the screen to zoom in and out and rotate the document view.
applied-ai-services Try V3 Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/quickstarts/try-v3-rest-api.md
Previously updated : 06/06/2022 Last updated : 06/28/2022
To learn more about Form Recognizer features and development options, visit our
**Document Analysis**
-* 🆕 Read—Analyze and extract printed (typeface) and handwritten text lines, words, locations, and detected languages.
+* 🆕 Read—Analyze and extract printed (typeface) and handwritten text lines, words, locations, and detected languages.
* 🆕 General document—Analyze and extract text, tables, structure, key-value pairs, and named entities.
* Layout—Analyze and extract tables, lines, words, and selection marks from documents, without the need to train a model.

**Prebuilt Models**
-* 🆕 W-2—Analyze and extract fields from W-2 tax documents, using a pre-trained W-2 model.
+* 🆕 W-2—Analyze and extract fields from US W-2 tax documents (used to report income), using a pre-trained W-2 model.
* Invoices—Analyze and extract common fields from invoices, using a pre-trained invoice model.
* Receipts—Analyze and extract common fields from receipts, using a pre-trained receipt model.
* ID documents—Analyze and extract common fields from ID documents like passports or driver's licenses, using a pre-trained ID documents model.
To learn more about Form Recognizer features and development options, visit our
* Azure subscription - [Create one for free](https://azure.microsoft.com/free/cognitive-services)
-* [cURL](https://curl.haxx.se/windows/) installed.
+* curl command line tool installed.
+
+ * [Windows](https://curl.haxx.se/windows/)
+ * [Mac or Linux](https://learn2torials.com/thread/how-to-install-curl-on-mac-or-linux-(ubuntu)-or-windows)
* [PowerShell version 7.*+](/powershell/scripting/install/installing-powershell-on-windows?view=powershell-7.2&preserve-view=true), or a similar command-line application. To check your PowerShell version, type `Get-Host | Select-Object Version`.
-* A Cognitive Services or Form Recognizer resource. Once you have your Azure subscription, create a [single-service](https://portal.azure.com/#create/Microsoft.CognitiveServicesFormRecognizer) or [multi-service](https://portal.azure.com/#create/Microsoft.CognitiveServicesAllInOne) Form Recognizer resource in the Azure portal to get your key and endpoint. You can use the free pricing tier (`F0`) to try the service, and upgrade later to a paid tier for production.
+* A Form Recognizer (single-service) or Cognitive Services (multi-service) resource. Once you have your Azure subscription, create a [single-service](https://portal.azure.com/#create/Microsoft.CognitiveServicesFormRecognizer) or [multi-service](https://portal.azure.com/#create/Microsoft.CognitiveServicesAllInOne) Form Recognizer resource in the Azure portal to get your key and endpoint. You can use the free pricing tier (`F0`) to try the service, and upgrade later to a paid tier for production.
> [!TIP]
> Create a Cognitive Services resource if you plan to access multiple cognitive services under a single endpoint/key. For Form Recognizer access only, create a Form Recognizer resource. Please note that you'll need a single-service resource if you intend to use [Azure Active Directory authentication](../../../active-directory/authentication/overview-authentication.md).
To learn more about Form Recognizer features and development options, visit our
* After your resource deploys, select **Go to resource**. You need the key and endpoint from the resource you create to connect your application to the Form Recognizer API. You'll paste your key and endpoint into the code below later in the quickstart:

:::image type="content" source="../media/containers/keys-and-endpoint.png" alt-text="Screenshot: keys and endpoint location in the Azure portal.":::
-
+ ## Analyze documents and get results
- Form Recognizer v3.0 consolidates the analyze document (POST) and get result (GET) requests into single operations. The `modelId` is used for POST and `resultId` for GET operations.
+ A POST request is used to analyze documents with a prebuilt or custom model. A GET request is used to retrieve the result of a document analysis call. The `modelId` is used with POST and `resultId` with GET operations.
### Analyze document (POST Request)
curl -v -i POST "{endpoint}/formrecognizer/documentModels/{modelID}:analyze?api-
| ID Documents | prebuilt-idDocument | [Sample ID document](https://raw.githubusercontent.com/Azure-Samples/cognitive-services-REST-api-samples/master/curl/form-recognizer/rest-api/identity_documents.png) | | Business Cards | prebuilt-businessCard | [Sample business card](https://raw.githubusercontent.com/Azure-Samples/cognitive-services-REST-api-samples/de5e0d8982ab754823c54de47a47e8e499351523/curl/form-recognizer/rest-api/business_card.jpg) |
-#### Operation-Location
+#### POST response
-You'll receive a `202 (Success)` response that includes an **Operation-Location** header. The value of this header contains a `resultID` that can be queried to get the status of the asynchronous operation:
+You'll receive a `202 (Success)` response that includes an **Operation-location** header. The value of this header contains a `resultID` that can be queried to get the status of the asynchronous operation:
:::image type="content" source="../media/quickstarts/operation-location-result-id.png" alt-text="{alt-text}":::
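For orientation, the header value has the same shape as the GET URL used later in this quickstart: `{endpoint}/formrecognizer/documentModels/{modelID}/analyzeResults/{resultId}?api-version=2022-06-30-preview`.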
You'll receive a `202 (Success)` response that includes an **Operation-Location*
After you've called the [**Analyze document**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-06-30-preview/operations/AnalyzeDocument) API, call the [**Get analyze result**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-06-30-preview/operations/GetAnalyzeDocumentResult) API to get the status of the operation and the extracted data. Before you run the command, make these changes:
+1. Replace `{POST response}` with the `Operation-location` header value from the [POST response](#post-response).
-1. Replace `{endpoint}` with the endpoint value from your Form Recognizer instance in the Azure portal.
1. Replace `{key}` with the key value from your Form Recognizer instance in the Azure portal.
-1. Replace `{modelID}` with the same modelID you used to analyze your document.
-1. Replace `{resultID}` with the result ID from the [Operation-Location](#operation-location) header.
+
<!-- markdownlint-disable MD024 -->

#### GET request

```bash
-curl -v -X GET "{endpoint}/formrecognizer/documentModels/{modelID}/analyzeResults/{resultId}?api-version=2022-06-30-preview" -H "Ocp-Apim-Subscription-Key: {key}"
+curl -v -X GET "{POST response}" -H "Ocp-Apim-Subscription-Key: {key}"
```

#### Examine the response
automation Source Control Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/source-control-integration.md
Use this procedure to configure source control using the Azure portal.
|Publish Runbook | Setting of On if runbooks are automatically published after synchronization from source control, and Off otherwise. |
|Description | Text specifying additional details about the source control. |
- <sup>1</sup> To enable Auto Sync when configuring source control integration with Azure DevOps, you must be a Project Administrator.
+ <sup>1</sup> To enable Auto Sync when configuring source control integration with Azure DevOps, you must be a Project Administrator.<br/>
+ Auto Sync does not work with Automation Private Link. If you enable Private Link, source control webhook invocations fail because they originate outside the private network.
:::image type="content" source="./media/source-control-integration/source-control-summary-inline.png" alt-text="Screenshot that describes the Source control summary." lightbox="./media/source-control-integration/source-control-summary-expanded.png":::

> [!NOTE]
-> The login for your source control repository might be different from your login for the Azure portal. Ensure that you are logged in with the correct account for your source control repository when configuring source control. If there is a doubt, open a new tab in your browser, log out from **dev.azure.com**, **visualstudio.com**, or **github.com**, and try reconnecting to source control.
+> The login for your source control repository might be different from your login for the Azure portal. Ensure that you are logged in with the correct account for your source control repository when configuring source control. If there is a doubt, open a new tab in your browser, log out from **dev.azure.com**, **visualstudio.com**, or **github.com**, and try reconnecting to source control.
### Configure source control in PowerShell
azure-arc Create Data Controller Using Kubernetes Native Tools https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/create-data-controller-using-kubernetes-native-tools.md
Title: Create a Data Controller using Kubernetes tools
-description: Create a Data Controller using Kubernetes tools
+ Title: Create a data controller using Kubernetes tools
+description: Create a data controller using Kubernetes tools
Last updated 11/03/2021
-# Create Azure Arc data controller using Kubernetes tools
+# Create Azure Arc-enabled data controller using Kubernetes tools
+A data controller manages Azure Arc-enabled data services for a Kubernetes cluster. This article describes how to use Kubernetes tools to create a data controller.
## Prerequisites Review the topic [Plan an Azure Arc-enabled data services deployment](plan-azure-arc-data-services.md) for overview information.
-To create the Azure Arc data controller using Kubernetes tools you will need to have the Kubernetes tools installed. The examples in this article will use `kubectl`, but similar approaches could be used with other Kubernetes tools such as the Kubernetes dashboard, `oc`, or `helm` if you are familiar with those tools and Kubernetes yaml/json.
+To create the data controller using Kubernetes tools you will need to have the Kubernetes tools installed. The examples in this article will use `kubectl`, but similar approaches could be used with other Kubernetes tools such as the Kubernetes dashboard, `oc`, or `helm` if you are familiar with those tools and Kubernetes yaml/json.
[Install the kubectl tool](https://kubernetes.io/docs/tasks/tools/install-kubectl/) > [!NOTE]
-> Some of the steps to create the Azure Arc data controller that are indicated below require Kubernetes cluster administrator permissions. If you are not a Kubernetes cluster administrator, you will need to have the Kubernetes cluster administrator perform these steps on your behalf.
+> Some of the steps to create the data controller that are indicated below require Kubernetes cluster administrator permissions. If you are not a Kubernetes cluster administrator, you will need to have the Kubernetes cluster administrator perform these steps on your behalf.
### Cleanup from past installations
-If you installed the Azure Arc data controller in the past on the same cluster and deleted the Azure Arc data controller, there may be some cluster level objects that would still need to be deleted.
+If you installed the data controller in the past on the same cluster and deleted the data controller, there may be some cluster level objects that would still need to be deleted.
For some of the tasks, you'll need to replace `{namespace}` with the value for your namespace. Substitute the name of the namespace the data controller was deployed in into `{namespace}`. If unsure, get the name of the `mutatingwebhookconfiguration` using `kubectl get mutatingwebhookconfiguration`.
-Run the following commands to delete the Azure Arc data controller cluster level objects:
+Run the following commands to delete the data controller cluster level objects:
```console
# Cleanup azure arc data service artifacts
kubectl delete mutatingwebhookconfiguration arcdata.microsoft.com-webhook-{names
## Overview
-Creating the Azure Arc data controller has the following high level steps:
+Creating the data controller has the following high level steps:
- > [!IMPORTANT]
- > Some of the steps below require Kubernetes cluster administrator permissions.
-
-1. Create the custom resource definitions for the Arc data controller, Azure SQL managed instance, and PostgreSQL Hyperscale.
-1. Create a namespace in which the data controller will be created.
+1. Create a namespace in which the data controller will be created.
+1. Create the deployer service account.
1. Create the bootstrapper service including the replica set, service account, role, and role binding.
1. Create a secret for the data controller administrator username and password.
-1. Create the webhook deployment job, cluster role and cluster role binding.
1. Create the data controller.
-## Create the custom resource definitions
-
-Run the following command to create the custom resource definitions.
-
- > [!IMPORTANT]
- > Requires Kubernetes cluster administrator permissions.
-
-```console
-kubectl create -f https://raw.githubusercontent.com/microsoft/azure_arc/main/arc_data_services/deploy/yaml/custom-resource-definitions.yaml
-```
-
## Create a namespace in which the data controller will be created

Run a command similar to the following to create a new, dedicated namespace in which the data controller will be created. In this example and the remainder of the examples in this article, a namespace name of `arc` will be used. If you choose to use a different name, then use the same name throughout.
openshift.io/sa.scc.supplemental-groups: 1000700001/10000
openshift.io/sa.scc.uid-range: 1000700001/10000
```
-If other people will be using this namespace that are not cluster administrators, we recommend creating a namespace admin role and granting that role to those users through a role binding. The namespace admin should have full permissions on the namespace. More granular roles and example role bindings can be found on the [Azure Arc GitHub repository](https://github.com/microsoft/azure_arc/tree/main/arc_data_services/deploy/yaml/rbac).
+If other people who are not cluster administrators will be using this namespace, create a namespace admin role and grant that role to those users through a role binding. The namespace admin should have full permissions on the namespace. More granular roles and example role bindings can be found on the [Azure Arc GitHub repository](https://github.com/microsoft/azure_arc/tree/main/arc_data_services/deploy/yaml/rbac).
++
+## Create the deployer service account
+
+ > [!IMPORTANT]
+ > Requires Kubernetes permissions for creating service account, role binding, cluster role, cluster role binding, and all the RBAC permissions being granted to the service account.
+
+Save a copy of [arcdata-deployer.yaml](https://raw.githubusercontent.com/microsoft/azure_arc/release-arc-data/arc_data_services/arcdata-deployer.yaml), and replace the placeholder `{{NAMESPACE}}` in the file with the namespace created in the previous step, for example: `arc`. Run the following command to create the deployer service account with the edited file.
+
+```console
+kubectl apply --namespace arc -f arcdata-deployer.yaml
+```
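Optionally, verify that the objects were created; listing the service accounts in the namespace is a quick check (not required by the deployment):

```console
kubectl get serviceaccounts --namespace arc
```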
+ ## Create the bootstrapper service
-The bootstrapper service handles incoming requests for creating, editing, and deleting custom resources such as a data controller, SQL managed instances, or PostgreSQL Hyperscale server groups.
+The bootstrapper service handles incoming requests for creating, editing, and deleting custom resources such as a data controller.
-Run the following command to create a bootstrapper service, a service account for the bootstrapper service, and a role and role binding for the bootstrapper service account.
+Run the following command to create a "bootstrap" job to install the bootstrapper along with related cluster-scope and namespaced objects, such as custom resource definitions (CRDs), the service account and bootstrapper role.
```console
-kubectl create --namespace arc -f https://raw.githubusercontent.com/microsoft/azure_arc/main/arc_data_services/deploy/yaml/bootstrapper.yaml
+kubectl apply --namespace arc -f https://raw.githubusercontent.com/microsoft/azure_arc/release-arc-data/arc_data_services/deploy/yaml/bootstrap.yaml
```
-Verify that the bootstrapper pod is running using the following command. You may need to run it a few times until the status changes to `Running`.
+The [uninstall.yaml](https://raw.githubusercontent.com/microsoft/azure_arc/release-arc-data/arc_data_services/deploy/yaml/uninstall.yaml) file uninstalls the bootstrapper and related Kubernetes objects, except the CRDs.
+
+Verify that the bootstrapper pod is running using the following command.
```console
-kubectl get pod --namespace arc
+kubectl get pod --namespace arc -l app=bootstrapper
```
-The bootstrapper.yaml template file defaults to pulling the bootstrapper container image from the Microsoft Container Registry (MCR). If your environment does not have access directly to the Microsoft Container Registry, you can do the following:
+If the status is not _Running_, run the command a few times until the status is _Running_.
+
+The bootstrap.yaml template file defaults to pulling the bootstrapper container image from the Microsoft Container Registry (MCR). If your environment can't directly access the Microsoft Container Registry, you can do the following:
- Follow the steps to [pull the container images from the Microsoft Container Registry and push them to a private container registry](offline-deployment.md).
-- [Create an image pull secret](https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/#create-a-secret-by-providing-credentials-on-the-command-lin) for your private container registry.
-- Add an image pull secret to the bootstrapper container. See example below.
-- Change the image location for the bootstrapper image. See example below.
-
-The example below assumes that you created a image pull secret name `arc-private-registry`.
-
-```yaml
-#Just showing only the relevant part of the bootstrapper.yaml template file here
- spec:
- serviceAccountName: sa-bootstrapper
- nodeSelector:
- kubernetes.io/os: linux
- imagePullSecrets:
- - name: arc-private-registry #Create this image pull secret if you are using a private container registry
- containers:
- - name: bootstrapper
- image: mcr.microsoft.com/arcdata/arc-bootstrapper:v1.1.0_2021-11-02 #Change this registry location if you are using a private container registry.
- imagePullPolicy: Always
-```
+- [Create an image pull secret](https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/#create-a-secret-by-providing-credentials-on-the-command-line) named `arc-private-registry` for your private container registry.
+- Change the image URL for the bootstrapper image in the bootstrap.yaml file.
+- Replace `arc-private-registry` in the bootstrap.yaml file if a different name was used for the image pull secret.
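For example, the image pull secret might be created as follows; this is a sketch in which the registry server and credentials are placeholders:

```console
kubectl create secret docker-registry arc-private-registry \
  --namespace arc \
  --docker-server=<registry-server> \
  --docker-username=<username> \
  --docker-password=<password>
```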
## Create secrets for the metrics and logs dashboards
kubectl create --namespace arc -f C:\arc-data-services\controller-login-secret.y
Optionally, you can create SSL/TLS certificates for the logs and metrics dashboards. Follow the instructions at [Specify during Kubernetes native tools deployment](monitor-certificates.md).
-## Create the webhook deployment job, cluster role and cluster role binding
-
-First, create a copy of the [template file](https://raw.githubusercontent.com/microsoft/azure_arc/main/arc_data_services/deploy/yaml/web-hook.yaml) locally on your computer so that you can modify some of the settings.
-
-Edit the file and replace `{{namespace}}` in all places with the name of the namespace you created in the previous step. **Save the file.**
-
-Run the following command to create the cluster role and cluster role bindings.
-
- > [!IMPORTANT]
- > Requires Kubernetes cluster administrator permissions.
-
-```console
-kubectl create -n arc -f <path to the edited template file on your computer>
-```
-
## Create the data controller

Now you are ready to create the data controller itself.
-First, create a copy of the [template file](https://raw.githubusercontent.com/microsoft/azure_arc/main/arc_data_services/deploy/yaml/data-controller.yaml) locally on your computer so that you can modify some of the settings.
+First, create a copy of the [template file](https://raw.githubusercontent.com/microsoft/azure_arc/release-arc-data/arc_data_services/deploy/yaml/data-controller.yaml) locally on your computer so that you can modify some of the settings.
Edit the following as needed:
Edit the following as needed:
- **name**: The default name of the data controller is `arc`, but you can change it if you want.
- **displayName**: Set this to the same value as the name attribute at the top of the file.
- **registry**: The Microsoft Container Registry is the default. If you are pulling the images from the Microsoft Container Registry and [pushing them to a private container registry](offline-deployment.md), enter the IP address or DNS name of your registry here.
-- **dockerRegistry**: The image pull secret to use to pull the images from a private container registry if required.
+- **dockerRegistry**: The secret to use to pull the images from a private container registry if required.
- **repository**: The default repository on the Microsoft Container Registry is `arcdata`. If you are using a private container registry, enter the path to the folder/repository containing the Azure Arc-enabled data services container images.
- **imageTag**: The current latest version tag is defaulted in the template, but you can change it if you want to use an older version.
- **logsui-certificate-secret**: The name of the secret created on the Kubernetes cluster for the logs UI certificate.
azure-arc Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/overview.md
Currently, the following Azure Arc-enabled data services are available:
For an introduction to how Azure Arc-enabled data services supports your hybrid work environment, see this introductory video:
-> [!VIDEO https://docs.microsoft.com/Shows//Inside-Azure-for-IT/Choose-the-right-data-solution-for-your-hybrid-environment/player?format=ny]
+> [!VIDEO https://docs.microsoft.com/Shows/Inside-Azure-for-IT/Choose-the-right-data-solution-for-your-hybrid-environment/player?format=ny]
## Always current
azure-arc Preview Testing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/preview-testing.md
+
+ Title: Azure Arc-enabled data services - Pre-release testing
+description: Experience pre-release versions of Azure Arc-enabled data services
++++++ Last updated : 06/28/2022++
+#Customer intent: As a data professional, I want to validate upcoming releases.
++
+# Pre-release testing
+
+To provide an opportunity for customers and partners to provide pre-release feedback, pre-release versions of Azure Arc-enabled data services are made available on a predictable schedule. This article describes how to install pre-release versions of Azure Arc-enabled data services and provide feedback to Microsoft.
+
+## Pre-release testing schedule
+
+Each month, a new version of Azure Arc-enabled data services is released on the second Tuesday of the month, commonly known as "Patch Tuesday". The pre-release versions are made available on a predictable schedule in alignment with that release date.
+
+- 14 days before the release date, the *test* pre-release version is made available.
+- 7 days before the release date, the *preview* pre-release version is made available.
+
+The main difference between the test and preview pre-release versions is usually just quality and stability, but in some exceptional cases there may be new features introduced in between the test and preview releases.
+
+Normally, pre-release version binaries are available around 10:00 AM Pacific Time. Documentation follows later in the day.
+
+## Artifacts for a pre-release version
+
+Each pre-release version ships with a set of artifacts that are designed to work together:
+
+- Container images hosted on the Microsoft Container Registry (MCR)
+ - `mcr.microsoft.com/arcdata/preview` is the repository that hosts the **preview** pre-release builds
+ - `mcr.microsoft.com/arcdata/test` is the repository that hosts the **test** pre-release builds
+
+ > [!NOTE]
+ > `mcr.microsoft.com/arcdata/` will continue to be the repository that hosts the final release builds.
+
+- Azure CLI extension hosted on Azure Blob Storage
+- Azure Data Studio extension hosted on Azure Blob Storage
+
+In addition to the above installable artifacts, the following are updated in Azure as needed:
+
+- New version of ARM API (occasionally)
+- New Azure portal accessible via a special URL query string parameter (see below for details)
+- New Arc-enabled Kubernetes extension version for Arc-enabled data services (applies to direct connectivity mode only)
+- Documentation updates on this page describing the location and details of the above artifacts and the new features available and any pre-release "read me" documentation
+
+## Installing pre-release versions
+
+### Install prerequisite tools
+
+To install a pre-release version, complete these prerequisite steps:
+
+If you use the Azure CLI extension:
+
+- Uninstall the Azure CLI extension (`az extension remove -n arcdata`).
+- Download the latest pre-release Azure CLI extension `.whl` file from [https://aka.ms/az-cli-arcdata-ext](https://aka.ms/az-cli-arcdata-ext).
+- Install the latest pre-release Azure CLI extension (`az extension add -s <location of downloaded .whl file>`).
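Concretely, the swap looks like the following; the `.whl` file name is a hypothetical placeholder for the file you downloaded:

```azurecli
az extension remove -n arcdata
az extension add -s ./arcdata-<version>-py2.py3-none-any.whl
```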
+
+If you use the Azure Data Studio extension to install:
+
+- Uninstall the Azure Data Studio extension. Select the Extensions panel, select the **Azure Arc** extension, and then select **Uninstall**.
+- Download the latest pre-release Azure Data Studio extension .vsix file from [https://aka.ms/ads-arcdata-ext](https://aka.ms/ads-arcdata-ext).
+- Install the extension by choosing File -> Install Extension from VSIX package and then browsing to the download location of the .vsix file.
+
+### Install using Azure CLI
+
+> [!NOTE]
+> Deploying pre-release builds using direct connectivity mode from Azure CLI is not supported.
+
+#### Indirect connectivity mode
+
+If you install using the Azure CLI, follow the instructions to [create a custom configuration profile](create-custom-configuration-template.md). Once created, edit this custom configuration profile file and enter the `docker` property values as required, based on the information provided in the version history table on this page.
+
+For example:
+
+```json
+
+ "docker": {
+ "registry": "mcr.microsoft.com",
+ "repository": "arcdata/test",
+ "imageTag": "v1.8.0_2022-06-07_5ba6b837",
+ "imagePullPolicy": "Always"
+ },
+```
+
+Once the file is edited, use the command `az arcdata dc create` as explained in [create a custom configuration profile](create-custom-configuration-template.md).
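For orientation only, an indirect-mode create typically looks something like the sketch below; the flag names are an assumption based on common versions of the `arcdata` extension, so verify them with `az arcdata dc create --help` and the linked article:

```azurecli
# Hypothetical sketch; confirm flags for your arcdata extension version.
az arcdata dc create --path ./custom --k8s-namespace arc --name arc-dc \
  --connectivity-mode indirect --use-k8s
```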
+
+### Install using Azure Data Studio
+
+> [!NOTE]
+> Deploying pre-release builds using direct connectivity mode from Azure Data Studio is not supported.
+
+#### Indirect connectivity mode
+
+If you use Azure Data Studio to install, complete the data controller deployment wizard as normal, except select **Script to notebook** at the end instead of **Deploy**. In the generated notebook, edit the `Set variables` cell to *add* the following lines:
+
+```python
+# choose between arcdata/test or arcdata/preview as appropriate
+os.environ["AZDATA_DOCKER_REPOSITORY"] = "arcdata/test"
+os.environ["AZDATA_DOCKER_TAG"] = "v1.8.0_2022-06-07_5ba6b837"
+```
+
+Run the notebook by selecting **Run All**.
+
+### Install using Azure portal
+
+Follow the instructions to [Arc-enable the Kubernetes cluster](create-data-controller-direct-prerequisites.md) as normal.
+
+Open the Azure portal by using this special URL: [https://portal.azure.com/?Microsoft_Azure_HybridData_Platform=BugBash](https://portal.azure.com/?Microsoft_Azure_HybridData_Platform=BugBash).
+
+Follow the instructions to [Create the Azure Arc data controller from Azure portal - Direct connectivity mode](create-data-controller-direct-azure-portal.md) except that when choosing a deployment profile, select **Custom template** in the **Kubernetes configuration template** drop-down. Set the repository to either `arcdata/test` or `arcdata/preview` as appropriate and enter the desired tag in the **Image tag** field. Fill out the rest of the custom cluster configuration template fields as normal.
+
+Complete the rest of the wizard as normal.
+
+When you deploy with this method, the most recent pre-release version will always be used.
+
+## Current preview release information
++
+## Provide feedback
+
+At this time, pre-release testing is supported for certain customers and partners that have established agreements with Microsoft. Participants have points of contact on the product engineering team. Email your points of contact with any issues that are found during pre-release testing.
+
+## Next steps
+
+[Release notes - Azure Arc-enabled data services](release-notes.md)
azure-arc Upgrade Data Controller Indirect Kubernetes Tools https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/upgrade-data-controller-indirect-kubernetes-tools.md
Title: Upgrade indirectly connected Azure Arc data controller using Kubernetes tools
-description: Article describes how to upgrade an indirectly connected Azure Arc data controller using Kubernetes tools
+ Title: Upgrade indirectly connected data controller for Azure Arc using Kubernetes tools
+description: Article describes how to upgrade an indirectly connected data controller for Azure Arc using Kubernetes tools
Last updated 05/27/2022
-# Upgrade an indirectly connected Azure Arc data controller using Kubernetes tools
+# Upgrade an indirectly connected Azure Arc-enabled data controller using Kubernetes tools
This article explains how to upgrade an indirectly connected Azure Arc-enabled data controller with Kubernetes tools.
During a data controller upgrade, portions of the data control plane such as Cus
In this article, you'll apply a .yaml file to:
-1. Specify a service account.
-1. Set the cluster roles.
-1. Set the cluster role bindings.
-1. Set the job.
+1. Create the service account for running upgrade.
+1. Upgrade the bootstrapper.
+1. Upgrade the data controller.
> [!NOTE]
> Some of the data services tiers and modes are generally available and some are in preview.
In this article, you'll apply a .yaml file to:
## Prerequisites
-Prior to beginning the upgrade of the Azure Arc data controller, you'll need:
+Prior to beginning the upgrade of the data controller, you'll need:
- To connect and authenticate to a Kubernetes cluster
- An existing Kubernetes context selected
You need an indirectly connected data controller with the `imageTag: v1.0.0_2021
## Install tools
-To upgrade the Azure Arc data controller using Kubernetes tools, you need to have the Kubernetes tools installed.
+To upgrade the data controller using Kubernetes tools, you need to have the Kubernetes tools installed.
The examples in this article use `kubectl`, but similar approaches could be used with other Kubernetes tools such as the Kubernetes dashboard, `oc`, or helm if you're familiar with those tools and Kubernetes yaml/json.
Found 2 valid versions. The current datacontroller version is <current-version>
...
```
-## Create or download .yaml file
-
-To upgrade the data controller, you'll apply a yaml file to the Kubernetes cluster. The example file for the upgrade is available in GitHub at <https://github.com/microsoft/azure_arc/blob/main/arc_data_services/upgrade/yaml/upgrade-indirect-k8s.yaml>.
-
-You can download the file - and other Azure Arc related demonstration files - by cloning the repository. For example:
-
-```azurecli
-git clone https://github.com/microsoft/azure-arc
-```
-
-For more information, see [Cloning a repository](https://docs.github.com/en/repositories/creating-and-managing-repositories/cloning-a-repository) in the GitHub docs.
-
-The following steps use files from the repository.
-
-In the yaml file, you'll replace ```{{namespace}}``` with your namespace.
## Upgrade data controller

This section shows how to upgrade an indirectly connected data controller.
This section shows how to upgrade an indirectly connected data controller.
### Upgrade
-You'll need to connect and authenticate to a Kubernetes cluster and have an existing Kubernetes context selected prior to beginning the upgrade of the Azure Arc data controller.
-
-### Specify the service account
-
-The upgrade requires an elevated service account for the upgrade job.
-
-To specify the service account:
-
-1. Describe the service account in a .yaml file. The following example sets a name for `ServiceAccount` as `sa-arc-upgrade-worker`:
-
- :::code language="yaml" source="~/azure_arc_sample/arc_data_services/upgrade/yaml/upgrade-indirect-k8s.yaml" range="2-4":::
-
-1. Edit the file as needed.
+You'll need to connect and authenticate to a Kubernetes cluster and have an existing Kubernetes context selected prior to beginning the upgrade of the data controller.
-### Set the cluster roles
-A cluster role (`ClusterRole`) grants the service account permission to perform the upgrade.
+### Create the service account for running upgrade
-1. Describe the cluster role and rules in a .yaml file. The following example defines a cluster role for `arc:cr-upgrade-worker` and allows all API groups, resources, and verbs.
+ > [!IMPORTANT]
+ > This step requires Kubernetes permissions to create a service account, role binding, cluster role, and cluster role binding, as well as all the RBAC permissions being granted to the service account.
- :::code language="yaml" source="~/azure_arc_sample/arc_data_services/upgrade/yaml/upgrade-indirect-k8s.yaml" range="7-9":::
+Save a copy of [arcdata-deployer.yaml](https://raw.githubusercontent.com/microsoft/azure_arc/release-arc-data/arc_data_services/arcdata-deployer.yaml), and replace the placeholder `{{NAMESPACE}}` in the file with the namespace created in the previous step, for example: `arc`. Run the following command to create the deployer service account with the edited file.
-1. Edit the file as needed.
-
-### Set the cluster role binding
-
-A cluster role binding (`ClusterRoleBinding`) links the service account and the cluster role.
-
-1. Describe the cluster role binding in a .yaml file. The following example describes a cluster role binding for the service account.
-
- :::code language="yaml" source="~/azure_arc_sample/arc_data_services/upgrade/yaml/upgrade-indirect-k8s.yaml" range="20-21":::
-
-1. Edit the file as needed.
-
-### Specify the job
+```console
+kubectl apply --namespace arc -f arcdata-deployer.yaml
+```
-A job creates a pod to execute the upgrade.
-1. Describe the job in a .yaml file. The following example creates a job called `arc-bootstrapper-upgrade-job`.
+### Upgrade the bootstrapper
- :::code language="yaml" source="~/azure_arc_sample/arc_data_services/upgrade/yaml/upgrade-indirect-k8s.yaml" range="31-48":::
+The following command creates a job for upgrading the bootstrapper and related Kubernetes objects.
-1. Edit the file for your environment.
+```console
+kubectl apply --namespace arc -f https://raw.githubusercontent.com/microsoft/azure_arc/release-arc-data/arc_data_services/upgrade/yaml/bootstrapper-upgrade-job.yaml
+```
### Upgrade the data controller
-Specify the image tag to upgrade the data controller to.
-
- :::code language="yaml" source="~/azure_arc_sample/arc_data_services/upgrade/yaml/upgrade-indirect-k8s.yaml" range="50-56":::
-
-### Apply the resources
+The following command patches the image tag to upgrade the data controller.
-Run the following kubectl command to apply the resources to your cluster.
-
-``` bash
-kubectl apply -n <namespace> -f upgrade-indirect-k8s.yaml
+```console
+kubectl apply --namespace arc -f https://raw.githubusercontent.com/microsoft/azure_arc/release-arc-data/arc_data_services/upgrade/yaml/data-controller-upgrade.yaml
```

## Monitor the upgrade status

You can monitor the progress of the upgrade with kubectl.
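As an illustrative sketch (assuming the `arc` namespace used in the examples above), you can watch the data controller's state while the upgrade job runs:

```console
# Watch the data controller custom resource until the upgrade reaches the Ready state.
kubectl get datacontrollers --namespace arc -w
```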
azure-arc Troubleshoot Resource Bridge https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/resource-bridge/troubleshoot-resource-bridge.md
Title: Troubleshoot Azure Arc resource bridge (preview) issues description: This article tells how to troubleshoot and resolve issues with the Azure Arc resource bridge (preview) when trying to deploy or connect to the service. Previously updated : 06/21/2022 Last updated : 06/27/2022
This article provides information on troubleshooting and resolving issues that may occur while attempting to deploy, use, or remove the Azure Arc resource bridge (preview). The resource bridge is a packaged virtual machine, which hosts a *management* Kubernetes cluster. For general information, see [Azure Arc resource bridge (preview) overview](./overview.md).
-## Logs
+## General issues
+
+### Logs
For any issues encountered with the Azure Arc resource bridge, you can collect logs for further investigation. To collect the logs, use the Azure CLI [`az arcappliance logs`](/cli/azure/arcappliance/logs) command. This command needs to be run from the client machine from which you've deployed the Azure Arc resource bridge.
The `az arcappliance logs` command requires SSH to the Azure Arc resource bridge
$HOME\.KVA\.ssh\logkey.pub $HOME\.KVA\.ssh\logkey ```+
+To run the `az arcappliance logs` command, the path to the kubeconfig must be provided. The kubeconfig is generated after successful completion of the `az arcappliance deploy` command and is placed in the same directory as the CLI command in ./kubeconfig or as specified in `--outfile` (if the parameter was passed).
+
+If `az arcappliance deploy` didn't complete, the kubeconfig file may exist but may be empty or missing data, so it can't be used for log collection. In this case, the Appliance VM IP address can be used to collect logs instead. The Appliance VM IP is assigned when the `az arcappliance deploy` command is run, after Control Plane Endpoint reconciliation. For example, if the message displayed in the command window reads "Appliance IP is 10.97.176.27", the command to use for log collection would be:
+
+```azurecli
+az arcappliance logs hci --out-dir c:\logs --ip 10.97.176.27
+```
+ To view the logs, run the following command: ```azurecli
To specify the IP address of the Azure Arc resource bridge virtual machine, run:

```azurecli
az arcappliance logs <provider> --out-dir <path to specified output directory> --ip XXX.XXX.XXX.XXX
```
-## `az arcappliance prepare` fails when deploying to VMware
+### Remote PowerShell is not supported
-The `arcappliance` extension for Azure CLI enables a [prepare](/cli/azure/arcappliance/prepare) command, which enables you to download an OVA template to your vSphere environment. This OVA file is used to deploy the Azure Arc resource bridge. The `az arcappliance prepare` command uses the vSphere SDK and can result in the following error:
+If you run `az arcappliance` CLI commands for Arc Resource Bridge via remote PowerShell, you may experience various problems. For instance, you might see an [EOF error when using the `logs` command](#logs-command-fails-with-eof-error), or an [authentication handshake failure error when trying to install the resource bridge on an Azure Stack HCI cluster](#authentication-handshake-failure).
+
+Using `az arcappliance` commands from remote PowerShell is not currently supported. Instead, sign in to the node through Remote Desktop Protocol (RDP) or use a console session.
+
+### Resource bridge cannot be updated
+
+In this release, all the parameters are specified at time of creation. To update the Azure Arc resource bridge, you must delete it and redeploy it.
+
+For example, if you specified the wrong location or subscription during deployment, resource creation will fail later. If you only try to recreate the resource without redeploying the resource bridge VM, you'll see the status stuck at `WaitForHeartBeat`.
+
+To resolve this issue, delete the appliance and update the appliance YAML file. Then redeploy and create the resource bridge.
+
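As a sketch, the deletion step uses the `az arcappliance delete` command for your provider; the provider name and config file path below are placeholders:

```azurecli
az arcappliance delete <provider> --config-file <path to appliance config .yaml>
```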
+### Failure due to previous failed deployments
+
+If an Arc resource bridge deployment fails, subsequent deployments may fail due to residual cached folders remaining on the machine.
+
+To prevent this from happening, be sure to run the `az arcappliance delete` command after any failed deployment. This command must be run with the latest `arcappliance` Azure CLI extension. To ensure that you have the latest version installed on your machine, run the following command:
```azurecli
-$ az arcappliance prepare vmware --config-file <path to config>
+az extension update --name arcappliance
+```
-Error: Error in reading OVA file: failed to parse ovf: strconv.ParseInt: parsing "3670409216":
-value out of range.
+If the failed deployment is not successfully removed, residual cached folders may cause future Arc resource bridge deployments to fail. This may cause the error message `Unavailable desc = connection closed before server preface received` to surface when various `az arcappliance` commands are run, including `prepare` and `delete`.
+
+To resolve this error, delete the .wssd\python and .wssd\kva folders in the user profile directory on the machine where the Arc resource bridge CLI commands are being run. You can delete them manually by navigating to the user profile directory (typically C:\Users\<username>) and deleting the .wssd\python and/or .wssd\kva folders. After they're deleted, try the command again.
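For example, a PowerShell sketch that assumes the default user profile location:

```powershell
# Remove residual Arc resource bridge cache folders left by a failed deployment.
Remove-Item -Recurse -Force -ErrorAction SilentlyContinue "$env:USERPROFILE\.wssd\python"
Remove-Item -Recurse -Force -ErrorAction SilentlyContinue "$env:USERPROFILE\.wssd\kva"
```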
+
+### Token refresh error
+
+When you run the Azure CLI commands, the following error may be returned: *The refresh token has expired or is invalid due to sign-in frequency checks by conditional access.* The error occurs because when you sign in to Azure, the token has a maximum lifetime. When that lifetime is exceeded, you need to sign in to Azure again by using the `az login` command.
+
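For example, re-establishing your session is simply:

```azurecli
az login
```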
+### `logs` command fails with EOF error
+
+When running the `az arcappliance logs` Azure CLI command, you may see an error: `Appliance logs command failed with error: EOF when reading a line.` This may occur in scenarios similar to the following:
+
+```azurecli
+az arcappliance logs hci --kubeconfig .\kubeconfig --out-dir c:\temp --ip 192.168.200.127
++ CategoryInfo          : NotSpecified: (WARNING: Comman...s/CLI_refstatus:String) [], RemoteException
++ FullyQualifiedErrorId : NativeCommandError
+Please enter cloudservice FQDN/IP: Appliance logs command failed with error: EOF when reading a line
+[v-Host1]: PS C:\Users\AzureStackAdminD\Documents> az arcappliance logs hci --kubeconfig .\kubeconfig --out-dir c:\temp --ip 192.168.200.127
++ CategoryInfo          : NotSpecified: (WARNING: Comman...s/CLI_refstatus:String) [], RemoteException
++ FullyQualifiedErrorId : NativeCommandError
+Please enter cloudservice FQDN/IP: Appliance logs command failed with error: EOF when reading a line
```
-### Cause
+The `az arcappliance logs` CLI command runs in interactive mode, meaning that it prompts the user for parameters. If the command is run in a scenario where it can't prompt the user for parameters, this error will occur. This is especially common when trying to use remote PowerShell to run the command.
-This error occurs when you run the Azure CLI commands in a 32-bit context, which is the default behavior. The vSphere SDK only supports running in a 64-bit context. The specific error returned from the vSphere SDK is `Unable to import ova of size 6GB using govc`. When you install the Azure CLI, it's a 32-bit Windows Installer package. However, the Azure CLI `az arcappliance` extension needs to run in a 64-bit context.
+To avoid this error, use Remote Desktop Protocol (RDP) or a console session to sign directly in to the node and locally run the `logs` command (or any `az arcappliance` command). Remote PowerShell is not currently supported by Azure Arc resource bridge.
-### Resolution
+You can also avoid this error by pre-populating the values that the `logs` command prompts for. The example below provides these values in a variable that's then piped to the `logs` command. Be sure to replace the values in `$loginValues` with your cloudservice IP address and the full path to your token credentials.
-Perform the following steps to configure your client machine with the Azure CLI 64-bit version.
+```powershell
+$loginValues="192.168.200.2
+C:\kvatoken.tok"
-1. Uninstall the current version of the Azure CLI on Windows following these [steps](/cli/azure/install-azure-cli-windows#uninstall).
-1. Install version 3.6 or higher of [Python](https://www.python.org/downloads/windows/) (64-bit).
+$user_in = ""
+foreach ($val in $loginValues) { $user_in = $user_in + $val + "`n" }
- > [!NOTE]
- > It is important after installing Python to confirm that its path is added to the PATH environmental variable.
+$user_in | az arcappliance logs hci --kubeconfig C:\Users\AzureStackAdminD\.kube\config
+```
-1. Install the [pip](https://pypi.org/project/pip/) package installer for Python.
-1. Verify Python is installed correctly by running `py` in a Command Prompt.
-1. From an elevated PowerShell console, run `pip install azure-cli` to install the Azure CLI from PyPI.
+### Default host resource pools are unavailable for deployment
+
+When using the `az arcappliance createConfig` or `az arcappliance run` command, there's an interactive experience that shows the list of VMware entities where you can select to deploy the virtual appliance. This list shows all user-created resource pools along with default cluster resource pools, but the default host resource pools aren't listed.
+
+When the appliance is deployed to a host resource pool, there's no high availability if the host hardware fails. Because of this, we recommend that you don't deploy the appliance in a host resource pool.
-After you complete these steps, in a new PowerShell console you can get started using the Azure Arc appliance CLI extension.
+## Networking issues
-## Azure Arc resource bridge (preview) is unreachable
+### Azure Arc resource bridge is unreachable
Azure Arc resource bridge (preview) runs a Kubernetes cluster, and its control plane requires a static IP address. The IP address is specified in the `infra.yaml` file. If the IP address is assigned from a DHCP server, the address can change if not reserved. Rebooting the Azure Arc resource bridge (preview) or VM can trigger an IP address change, resulting in failing services. Intermittently, the resource bridge (preview) can lose the reserved IP configuration. This is due to the behavior described in [loss of VIPs when systemd-networkd is restarted](https://github.com/acassen/keepalived/issues/1385). When the IP address isn't assigned to the Azure Arc resource bridge (preview) VM, any call to the resource bridge API server will fail. As a result, you can't create any new resource through the resource bridge (preview): connecting to the Azure Arc private cloud, creating a custom location, creating a VM, and similar operations will all fail.
-Another possible cause is slow disk access. Azure Arc resource bridge uses etcd which requires 10ms latency or less per [recommendation](https://docs.openshift.com/container-platform/4.6/scalability_and_performance/recommended-host-practices.html#recommended-etcd-practices_). If the underlying disk has low performance, it can impact the operations, and causing failures.
+Another possible cause is slow disk access. Azure Arc resource bridge uses etcd, which requires 10 ms latency or less per [recommendation](https://docs.openshift.com/container-platform/4.6/scalability_and_performance/recommended-host-practices.html#recommended-etcd-practices_). If the underlying disk has low performance, it can impact operations and cause failures.
-### Resolution
+To resolve this issue, reboot the resource bridge (preview) VM, and it should recover its IP address. If the address is assigned from a DHCP server, reserve the IP address associated with the resource bridge (preview).
-Reboot the resource bridge (preview) VM and it should recover its IP address. If the address is assigned from a DHCP server, reserve the IP address associated with the resource bridge (preview).
+### SSL proxy configuration issues
-## Resource bridge cannot be updated
+Azure Arc resource bridge must be configured to use a proxy so that it can connect to the Azure services. This configuration is handled automatically. However, the Azure Arc resource bridge doesn't configure the proxy settings of the client machine.
-In this release, all the parameters are specified at time of creation. To update the Azure Arc resource bridge, you must delete it and redeploy it again.
+There are only two certificates that should be relevant when deploying the Arc resource bridge behind an SSL proxy: the SSL certificate for your SSL proxy (so that the host and guest trust your proxy FQDN and can establish an SSL connection to it), and the SSL certificate of the Microsoft download servers. The second certificate must be trusted by your proxy server itself, because the proxy establishes the final connection and needs to trust the endpoint. Non-Windows machines may not trust this second certificate by default, so you may need to ensure that it's trusted.
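As a minimal sketch of configuring the client machine separately, the Azure CLI honors the standard proxy environment variables; the proxy URL below is a placeholder:

```powershell
# Illustrative only: point CLI traffic on the client machine at your SSL proxy.
$env:HTTP_PROXY  = "http://proxy.contoso.com:3128"
$env:HTTPS_PROXY = "http://proxy.contoso.com:3128"
$env:NO_PROXY    = "localhost,127.0.0.1"
```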
-For example, if you specified the wrong location, or subscription during deployment, later the resource creation fails. If you only try to recreate the resource without redeploying the resource bridge VM, you'll see the status stuck at `WaitForHeartBeat`.
+## Azure Arc-enabled VMs on Azure Stack HCI issues
+
+For general help resolving issues related to Azure Arc-enabled VMs on Azure Stack HCI, see [Troubleshoot Azure Arc-enabled virtual machines](/azure-stack/hci/manage/troubleshoot-arc-enabled-vms).
+
+### Authentication handshake failure
+
+When running an `az arcappliance` command, you may see a connection error: `authentication handshake failed: x509: certificate signed by unknown authority`
+
+This is usually caused when trying to run commands from remote PowerShell, which is not supported by Azure Arc resource bridge.
+
+To install Azure Arc resource bridge on an Azure Stack HCI cluster, `az arcappliance` commands must be run locally on a node in the cluster. Sign in to the node through Remote Desktop Protocol (RDP) or use a console session to run these commands.
+
+## Azure Arc-enabled VMware vCenter issues
+
+### `az arcappliance prepare` failure
+
+The `arcappliance` extension for Azure CLI enables a [prepare](/cli/azure/arcappliance/prepare) command, which enables you to download an OVA template to your vSphere environment. This OVA file is used to deploy the Azure Arc resource bridge. The `az arcappliance prepare` command uses the vSphere SDK and can result in the following error:
+
+```azurecli
+$ az arcappliance prepare vmware --config-file <path to config>
+
+Error: Error in reading OVA file: failed to parse ovf: strconv.ParseInt: parsing "3670409216":
+value out of range.
+```
+
+This error occurs when you run the Azure CLI commands in a 32-bit context, which is the default behavior. The vSphere SDK only supports running in a 64-bit context. The specific error returned from the vSphere SDK is `Unable to import ova of size 6GB using govc`. When you install the Azure CLI, it's a 32-bit Windows Installer package. However, the Azure CLI `az arcappliance` extension needs to run in a 64-bit context.
+
+To resolve this issue, perform the following steps to configure your client machine with the Azure CLI 64-bit version:
-### Resolution
+1. Uninstall the current version of the Azure CLI on Windows following these [steps](/cli/azure/install-azure-cli-windows#uninstall).
+1. Install version 3.6 or higher of [Python](https://www.python.org/downloads/windows/) (64-bit).
-Delete the appliance, update the appliance YAML file, then redeploy and create the resource bridge.
+ > [!IMPORTANT]
+ > After you install Python, make sure to confirm that its path is added to the PATH environmental variable.
-## Token refresh error
+1. Install the [pip](https://pypi.org/project/pip/) package installer for Python.
+1. Verify Python is installed correctly by running `py` in a Command Prompt.
+1. From an elevated PowerShell console, run `pip install azure-cli` to install the Azure CLI from PyPI.
-When you run the Azure CLI commands, the following error may be returned: *The refresh token has expired or is invalid due to sign-in frequency checks by conditional access.* The error occurs because when you sign into Azure, the token has a maximum lifetime. When that lifetime is exceeded, you need to sign in to Azure again.
+After you complete these steps, you can get started using the Azure Arc appliance CLI extension in a new PowerShell console.
-### Resolution
+### Error during host configuration
-Sign into Azure again using the `az login` command.
+When you deploy the resource bridge on VMware vCenter, if you have been using the same template to deploy and delete the appliance multiple times, you may encounter the following error:
+
+`Appliance cluster deployment failed with error:
+Error: An error occurred during host configuration`
+
+To resolve this issue, delete the existing template manually. Then run [`az arcappliance prepare`](/cli/azure/arcappliance/prepare) to download a new template for deployment.
+
+### Unable to find folders
+
+When deploying the resource bridge on VMware vCenter, you specify the folder in which the template and VM will be created. The folder must be of the VM and template folder type. Other folder types, such as storage folders, network folders, or host and cluster folders, can't be used by the resource bridge deployment.
+
+### Insufficient permissions
+
+When deploying the resource bridge on VMware vCenter, you may get an error saying that you have insufficient permissions. To resolve this issue, make sure that your user account has all of the following privileges in VMware vCenter, and then try again.
+
+```
+"Datastore.AllocateSpace"
+"Datastore.Browse"
+"Datastore.DeleteFile"
+"Datastore.FileManagement"
+"Folder.Create"
+"Folder.Delete"
+"Folder.Move"
+"Folder.Rename"
+"InventoryService.Tagging.CreateTag"
+"Sessions.ValidateSession"
+"Network.Assign"
+"Resource.ApplyRecommendation"
+"Resource.AssignVMToPool"
+"Resource.HotMigrate"
+"Resource.ColdMigrate"
+"StorageViews.View"
+"System.Anonymous"
+"System.Read"
+"System.View"
+"VirtualMachine.Config.AddExistingDisk"
+"VirtualMachine.Config.AddNewDisk"
+"VirtualMachine.Config.AddRemoveDevice"
+"VirtualMachine.Config.AdvancedConfig"
+"VirtualMachine.Config.Annotation"
+"VirtualMachine.Config.CPUCount"
+"VirtualMachine.Config.ChangeTracking"
+"VirtualMachine.Config.DiskExtend"
+"VirtualMachine.Config.DiskLease"
+"VirtualMachine.Config.EditDevice"
+"VirtualMachine.Config.HostUSBDevice"
+"VirtualMachine.Config.ManagedBy"
+"VirtualMachine.Config.Memory"
+"VirtualMachine.Config.MksControl"
+"VirtualMachine.Config.QueryFTCompatibility"
+"VirtualMachine.Config.QueryUnownedFiles"
+"VirtualMachine.Config.RawDevice"
+"VirtualMachine.Config.ReloadFromPath"
+"VirtualMachine.Config.RemoveDisk"
+"VirtualMachine.Config.Rename"
+"VirtualMachine.Config.ResetGuestInfo"
+"VirtualMachine.Config.Resource"
+"VirtualMachine.Config.Settings"
+"VirtualMachine.Config.SwapPlacement"
+"VirtualMachine.Config.ToggleForkParent"
+"VirtualMachine.Config.UpgradeVirtualHardware"
+"VirtualMachine.GuestOperations.Execute"
+"VirtualMachine.GuestOperations.Modify"
+"VirtualMachine.GuestOperations.ModifyAliases"
+"VirtualMachine.GuestOperations.Query"
+"VirtualMachine.GuestOperations.QueryAliases"
+"VirtualMachine.Hbr.ConfigureReplication"
+"VirtualMachine.Hbr.MonitorReplication"
+"VirtualMachine.Hbr.ReplicaManagement"
+"VirtualMachine.Interact.AnswerQuestion"
+"VirtualMachine.Interact.Backup"
+"VirtualMachine.Interact.ConsoleInteract"
+"VirtualMachine.Interact.CreateScreenshot"
+"VirtualMachine.Interact.CreateSecondary"
+"VirtualMachine.Interact.DefragmentAllDisks"
+"VirtualMachine.Interact.DeviceConnection"
+"VirtualMachine.Interact.DisableSecondary"
+"VirtualMachine.Interact.DnD"
+"VirtualMachine.Interact.EnableSecondary"
+"VirtualMachine.Interact.GuestControl"
+"VirtualMachine.Interact.MakePrimary"
+"VirtualMachine.Interact.Pause"
+"VirtualMachine.Interact.PowerOff"
+"VirtualMachine.Interact.PowerOn"
+"VirtualMachine.Interact.PutUsbScanCodes"
+"VirtualMachine.Interact.Record"
+"VirtualMachine.Interact.Replay"
+"VirtualMachine.Interact.Reset"
+"VirtualMachine.Interact.SESparseMaintenance"
+"VirtualMachine.Interact.SetCDMedia"
+"VirtualMachine.Interact.SetFloppyMedia"
+"VirtualMachine.Interact.Suspend"
+"VirtualMachine.Interact.TerminateFaultTolerantVM"
+"VirtualMachine.Interact.ToolsInstall"
+"VirtualMachine.Interact.TurnOffFaultTolerance"
+"VirtualMachine.Inventory.Create"
+"VirtualMachine.Inventory.CreateFromExisting"
+"VirtualMachine.Inventory.Delete"
+"VirtualMachine.Inventory.Move"
+"VirtualMachine.Inventory.Register"
+"VirtualMachine.Inventory.Unregister"
+"VirtualMachine.Namespace.Event"
+"VirtualMachine.Namespace.EventNotify"
+"VirtualMachine.Namespace.Management"
+"VirtualMachine.Namespace.ModifyContent"
+"VirtualMachine.Namespace.Query"
+"VirtualMachine.Namespace.ReadContent"
+"VirtualMachine.Provisioning.Clone"
+"VirtualMachine.Provisioning.CloneTemplate"
+"VirtualMachine.Provisioning.CreateTemplateFromVM"
+"VirtualMachine.Provisioning.Customize"
+"VirtualMachine.Provisioning.DeployTemplate"
+"VirtualMachine.Provisioning.DiskRandomAccess"
+"VirtualMachine.Provisioning.DiskRandomRead"
+"VirtualMachine.Provisioning.FileRandomAccess"
+"VirtualMachine.Provisioning.GetVmFiles"
+"VirtualMachine.Provisioning.MarkAsTemplate"
+"VirtualMachine.Provisioning.MarkAsVM"
+"VirtualMachine.Provisioning.ModifyCustSpecs"
+"VirtualMachine.Provisioning.PromoteDisks"
+"VirtualMachine.Provisioning.PutVmFiles"
+"VirtualMachine.Provisioning.ReadCustSpecs"
+"VirtualMachine.State.CreateSnapshot"
+"VirtualMachine.State.RemoveSnapshot"
+"VirtualMachine.State.RenameSnapshot"
+"VirtualMachine.State.RevertToSnapshot"
+```
## Next steps
azure-fluid-relay Customer Managed Keys https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-fluid-relay/concepts/customer-managed-keys.md
+
+ Title: Customer-managed keys for Azure Fluid Relay encryption
+description: Better understand data encryption with customer-managed keys (CMK)
++ Last updated : 10/08/2021++++
+# Customer-managed keys for Azure Fluid Relay encryption
+
+You can use your own encryption key to protect the data in your Azure Fluid Relay resource. When you specify a customer-managed key (CMK), that key is used to protect and control access to the key that encrypts your data. CMK offers greater flexibility to manage access controls.
+
+You must use one of the following Azure key stores to store your CMK:
+- [Azure Key Vault](../../key-vault/general/overview.md)
+- [Azure Key Vault Managed Hardware Security Module (HSM)](../../key-vault/managed-hsm/overview.md)
+
+You must create a new Azure Fluid Relay resource to enable CMK. You can't enable or disable CMK on an existing Fluid Relay resource.
+
+Also, CMK for Fluid Relay relies on managed identities: you need to assign a managed identity to the Fluid Relay resource when enabling CMK. Only a user-assigned identity is allowed for Fluid Relay resource CMK. For more information about managed identities, see [here](../../active-directory/managed-identities-azure-resources/overview.md#managed-identity-types).
+
+Configuring a Fluid Relay resource with CMK can't be done through the Azure portal yet.
+
+When you configure the Fluid Relay resource with CMK, the Azure Fluid Relay service configures the appropriate CMK encrypted settings on the Azure Storage account scope where your Fluid session artifacts are stored. For more information about CMK in Azure Storage, see [here](../../storage/common/customer-managed-keys-overview.md).
+
+To verify that a Fluid Relay resource is using CMK, send a GET request for the resource and check that it has a valid, non-empty `encryption.customerManagedKeyEncryption` property.
+
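For example, a sketch of that verification call, following the same URL pattern and api-version as the requests below:

```
GET https://management.azure.com/subscriptions/<subscription ID>/resourceGroups/<resource group name>/providers/Microsoft.FluidRelay/fluidRelayServers/<Fluid Relay resource name>?api-version=2022-06-01
```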
+## Prerequisites
+
+Before configuring CMK on your Azure Fluid Relay resource, the following prerequisites must be met:
+- Keys must be stored in an Azure Key Vault.
+- Keys must be RSA keys, not EC keys, because EC keys don't support WRAP and UNWRAP.
+- A user-assigned managed identity must be created with the necessary permissions (GET, WRAP, and UNWRAP) on the key vault. For more information, see [here](../../active-directory/managed-identities-azure-resources/tutorial-windows-vm-access-nonaad.md). Grant GET, WRAP, and UNWRAP under **Key Permissions** in Azure Key Vault (see the sketch after this list).
+- The Azure Key Vault, the user-assigned identity, and the Fluid Relay resource must be in the same region and in the same Azure Active Directory (Azure AD) tenant.
+
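As a sketch of granting those key permissions with the Azure CLI (the names are placeholders, and this assumes the vault uses access policies rather than Azure RBAC):

```azurecli
az keyvault set-policy --name <key-vault-name> --object-id <managed-identity-principal-id> --key-permissions get wrapKey unwrapKey
```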
+## Create a Fluid Relay resource with CMK
+
+```
+PUT https://management.azure.com/subscriptions/<subscription ID>/resourceGroups/<resource group name>/providers/Microsoft.FluidRelay/fluidRelayServers/<Fluid Relay resource name>?api-version=2022-06-01 @"<path to request payload>"
+```
+
+Request payload format:
+
+```
+{
+ "location": "<the region you selected for Fluid Relay resource>",
+ "identity": {
+ "type": "UserAssigned",
+ "userAssignedIdentities": {
+ "<User assigned identity resource ID>": {}
+ }
+ },
+ "properties": {
+ "encryption": {
+ "customerManagedKeyEncryption": {
+ "keyEncryptionKeyIdentity": {
+ "identityType": "UserAssigned",
+ "userAssignedIdentityResourceId": "<User assigned identity resource ID>"
+ },
+ "keyEncryptionKeyUrl": "<key identifier>"
+ }
+ }
+ }
+}
+```
+
+Example userAssignedIdentities and userAssignedIdentityResourceId:
+/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/testGroup/providers/Microsoft.ManagedIdentity/userAssignedIdentities/testUserAssignedIdentity
+
+Example keyEncryptionKeyUrl: https://test-key-vault.vault.azure.net/keys/testKey/testKeyVersionGuid
+
+Notes:
+- Identity.type must be UserAssigned. It's the identity type of the managed identity that's assigned to the Fluid Relay resource.
+- Properties.encryption.customerManagedKeyEncryption.keyEncryptionKeyIdentity.identityType must be UserAssigned. It's the identity type of the managed identity that should be used for CMK.
+- Although you can specify more than one identity in Identity.userAssignedIdentities, only the one user-assigned identity specified for CMK will be used to access the key vault for encryption.
+- Properties.encryption.customerManagedKeyEncryption.keyEncryptionKeyIdentity.userAssignedIdentityResourceId is the resource ID of the user-assigned identity that should be used for CMK. Note that it must be one of the identities in Identity.userAssignedIdentities (you must assign the identity to the Fluid Relay resource before it can be used for CMK), and it must have the necessary permissions on the key (provided by keyEncryptionKeyUrl).
+- Properties.encryption.customerManagedKeyEncryption.keyEncryptionKeyUrl is the key identifier used for CMK.
+
+## Update CMK settings of an existing Fluid Relay resource
+
+You can update the following CMK settings on an existing Fluid Relay resource:
+- Change the identity that is used for accessing the key encryption key.
+- Change the key encryption key identifier (key URL).
+- Change the key version of the key encryption key.
+
+Note that you can't disable CMK on an existing Fluid Relay resource once it's enabled.
+
+Request URL:
+
+```
+PATCH https://management.azure.com/subscriptions/<subscription id>/resourceGroups/<resource group name>/providers/Microsoft.FluidRelay/fluidRelayServers/<fluid relay server name>?api-version=2022-06-01 @"path to request payload"
+```
+
+Request payload example for updating key encryption key URL:
+
+```
+{
+ "properties": {
+ "encryption": {
+ "customerManagedKeyEncryption": {
+ "keyEncryptionKeyUrl": "https://test_key_vault.vault.azure.net/keys/testKey /xxxxxxxxxxxxxxxx"
+ }
+ }
+ }
+}
+```
+
+## See also
+
+- [Overview of Azure Fluid Relay architecture](architecture.md)
+- [Data storage in Azure Fluid Relay](../concepts/data-storage.md)
+- [Data encryption in Azure Fluid Relay](../concepts/data-encryption.md)
azure-fluid-relay Data Encryption https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-fluid-relay/concepts/data-encryption.md
Microsoft has a set of internal guidelines for encryption key rotation which Azu
### Can I use my own encryption keys?
-No, this feature is not available yet. Keep an eye out for more updates on this.
+Yes. For more information, refer to [Customer-managed keys for Azure Fluid Relay encryption](../concepts/customer-managed-keys.md).
### What regions have encryption turned on?
azure-functions Create First Function Cli Typescript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/create-first-function-cli-typescript.md
Before you use Core Tools to deploy your project to Azure, you create a producti
1. Use the following command to prepare your TypeScript project for deployment: ```console
- npm run build:production
+ npm run build
``` 1. With the necessary resources in place, you're now ready to deploy your local functions project to the function app in Azure by using the [func azure functionapp publish](functions-run-local.md#project-file-deployment) command. In the following example, replace `<APP_NAME>` with the name of your app.
azure-functions Durable Functions Task Hubs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/durable/durable-functions-task-hubs.md
Title: Task hubs in Durable Functions - Azure
description: Learn what a task hub is in the Durable Functions extension for Azure Functions. Learn how to configure task hubs. Previously updated : 05/10/2022 Last updated : 06/28/2022
A *task hub* in [Durable Functions](durable-functions-overview.md) is a logical
> > For more information on the various storage provider options and how they compare, see the [Durable Functions storage providers](durable-functions-storage-providers.md) documentation.
-If multiple function apps share a storage account, each function app *must* be configured with a separate task hub name. A storage account can contain multiple task hubs. This restriction generally applies to other storage providers as well.
+If multiple function apps share a storage account, each function app *must* be configured with a separate task hub name. This requirement also applies to staging slots: each staging slot must be configured with a unique task hub name. A single storage account can contain multiple task hubs. This restriction generally applies to other storage providers as well.
> [!NOTE]
> The exception to the task hub sharing rule is if you're configuring your app for regional disaster recovery. See the [disaster recovery and geo-distribution](durable-functions-disaster-recovery-geo-distribution.md) article for more information.
The task hub name will be set to the value of the `MyTaskHub` app setting. The f
} ```
+> [!NOTE]
+> When using deployment slots, it's a best practice to configure the task hub name using app settings. If you want to ensure that a particular slot always uses a particular task hub, use ["slot-sticky" app settings](../functions-deployment-slots.md#create-a-deployment-setting).
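For reference, a minimal host.json sketch (Durable Functions extension v2 schema) that resolves the task hub name from an app setting; the `MyTaskHub` setting name matches the example above:

```json
{
  "extensions": {
    "durableTask": {
      "hubName": "%MyTaskHub%"
    }
  }
}
```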
+ In addition to **host.json**, task hub names can also be configured in [orchestration client binding](durable-functions-bindings.md#orchestration-client) metadata. This is useful if you need to access orchestrations or entities that live in a separate function app. The following code demonstrates how to write a function that uses the [orchestration client binding](durable-functions-bindings.md#orchestration-client) to work with a task hub that is configured as an app setting:

# [C#](#tab/csharp)
azure-functions Functions Bindings Azure Sql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-azure-sql.md
You can add the preview extension bundle by adding or replacing the following co
## Functions runtime > [!NOTE]
-> Python language support for the SQL bindings extension is only available for v4 of the [functions runtime](./set-runtime-version.md#view-and-update-the-current-runtime-version) and requires runtime v4.5.0 or greater for deployment in Azure. Learn more about determining the runtime in the [functions runtime](./set-runtime-version.md#view-and-update-the-current-runtime-version) documentation. Please see the tracking [GitHub issue](https://github.com/Azure/azure-functions-sql-extension/issues/250) for the latest update on availability.
-
-The functions runtime required for local development and testing of Python functions isn't included in the current release of functions core tools and must be installed independently. The latest instructions on installing a preview version of functions core tools are available in the tracking [GitHub issue](https://github.com/Azure/azure-functions-sql-extension/issues/250).
-
-Alternatively, a VS Code [development container](https://code.visualstudio.com/docs/remote/containers) definition can be used to expedite your environment setup. The definition components are available in the SQL bindings [GitHub repository](https://github.com/Azure/azure-functions-sql-extension/tree/main/samples/samples-python/.devcontainer).
+> Python language support for the SQL bindings extension is available starting with v4.5.0 of the [functions runtime](./set-runtime-version.md#view-and-update-the-current-runtime-version). You may need to update your install of Azure Functions [Core Tools](functions-run-local.md) for local development. Learn more about determining the runtime in Azure regions from the [functions runtime](./set-runtime-version.md#view-and-update-the-current-runtime-version) documentation. Please see the tracking [GitHub issue](https://github.com/Azure/azure-functions-sql-extension/issues/250) for the latest update on availability.
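For example, you can confirm your locally installed Core Tools version before developing SQL bindings locally (the `func` CLI ships with Core Tools):

```console
func --version
```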
## Install bundle
azure-functions Functions Bindings Kafka Output https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-kafka-output.md
Title: Apache Kafka output binding for Azure Functions description: Use Azure Functions to write messages to an Apache Kafka stream.- Last updated 05/14/2022- zone_pivot_groups: programming-languages-set-functions-lang-workers
azure-functions Functions Bindings Kafka Trigger https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-kafka-trigger.md
Title: Apache Kafka trigger for Azure Functions description: Use Azure Functions to run your code based on events from an Apache Kafka stream.- Last updated 05/14/2022- zone_pivot_groups: programming-languages-set-functions-lang-workers
azure-functions Functions Bindings Kafka https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-kafka.md
Title: Apache Kafka bindings for Azure Functions description: Learn to integrate Azure Functions with an Apache Kafka stream.- Last updated 05/14/2022- zone_pivot_groups: programming-languages-set-functions-lang-workers
azure-functions Functions Create Function Linux Custom Image https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-create-function-linux-custom-image.md
A function app on Azure manages the execution of your functions in your hosting
With the image deployed to your function app in Azure, you can now invoke the function as before through HTTP requests. In your browser, navigate to the following URL: `https://<APP_NAME>.azurewebsites.net/api/HttpExample?name=Functions` ::: zone-end ::: zone pivot="programming-language-csharp"
azure-functions Functions Create Maven Eclipse https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-create-maven-eclipse.md
Title: Create an Azure function app with Java and Eclipse description: How-to guide to create and publish a simple HTTP triggered serverless app using Java and Eclipse to Azure Functions.- Last updated 07/01/2018- ms.devlang: java
azure-functions Functions Create Private Site Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-create-private-site-access.md
Title: Enable private site access to Azure Functions description: Learn to set up Azure virtual network private site access for Azure Functions.-- Last updated 06/17/2020
azure-functions Functions Debug Event Grid Trigger Local https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-debug-event-grid-trigger-local.md
Title: Azure Functions Event Grid local debugging description: Learn to locally debug Azure Functions triggered by an Event Grid event- Last updated 10/18/2018- # Azure Function Event Grid Trigger Local Debugging
azure-functions Functions Deployment Slots https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-deployment-slots.md
Title: Azure Functions deployment slots description: Learn to create and use deployment slots with Azure Functions- Last updated 03/02/2022- # Azure Functions deployment slots
azure-functions Functions Dotnet Dependency Injection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-dotnet-dependency-injection.md
Title: Use dependency injection in .NET Azure Functions description: Learn how to use dependency injection for registering and using services in .NET functions- ms.devlang: csharp Last updated 03/24/2021- # Use dependency injection in .NET Azure Functions
azure-functions Functions Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-get-started.md
Title: Getting started with Azure Functions description: Take the first steps toward working with Azure Functions.- Last updated 11/19/2020- zone_pivot_groups: programming-languages-set-functions-lang-workers
azure-functions Functions How To Github Actions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-how-to-github-actions.md
Title: Use GitHub Actions to make code updates in Azure Functions description: Learn how to use GitHub Actions to define a workflow to build and deploy Azure Functions projects in GitHub.- Last updated 10/07/2020-
azure-functions Functions Idempotent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-idempotent.md
Title: Designing Azure Functions for identical input description: Building Azure Functions to be idempotent-- Last updated 06/09/2022
azure-functions Functions Manually Run Non Http https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-manually-run-non-http.md
Title: Manually run a non HTTP-triggered Azure Functions description: Use an HTTP request to run a non-HTTP triggered Azure Functions- Last updated 04/23/2020- # Manually run a non HTTP-triggered function
azure-functions Functions Monitor Log Analytics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-monitor-log-analytics.md
Title: Monitoring Azure Functions with Azure Monitor Logs description: Learn how to use Azure Monitor Logs with Azure Functions to monitor function executions.- Last updated 04/15/2020- # Customer intent: As a developer, I want to monitor my functions so I can know if they're running correctly.
azure-functions Functions Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-overview.md
Title: Azure Functions Overview description: Learn how Azure Functions can help build robust serverless apps.- ms.assetid: 01d6ca9f-ca3f-44fa-b0b9-7ffee115acd4 Last updated 05/27/2022-
azure-functions Functions Reference Csharp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-reference-csharp.md
Title: Azure Functions C# script developer reference description: Understand how to develop Azure Functions using C# script.- Last updated 12/12/2017- # Azure Functions C# script (.csx) developer reference
azure-functions Functions Reliable Event Processing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-reliable-event-processing.md
Title: Azure Functions reliable event processing description: Avoid missing Event Hub messages in Azure Functions- Last updated 10/01/2020- # Azure Functions reliable event processing
azure-functions Functions Triggers Bindings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-triggers-bindings.md
Title: Triggers and bindings in Azure Functions description: Learn to use triggers and bindings to connect your Azure Function to online events and cloud-based services.- Last updated 05/25/2022-
azure-functions Functions Twitter Email https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-twitter-email.md
Title: Create a function that integrates with Azure Logic Apps description: Create a function integrate with Azure Logic Apps and Azure Cognitive Services. The resulting workflow categorizes tweet sentiments sends email notifications.- ms.assetid: 60495cc5-1638-4bf0-8174-52786d227734 Last updated 04/10/2021- ms.devlang: csharp
azure-functions Pricing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/pricing.md
Title: Azure Functions pricing description: Learn how billing works for Azure Functions.-- Last updated 11/20/2020
azure-functions Shift Expressjs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/shift-expressjs.md
Title: Shifting from Express.js to Azure Functions description: Learn to refactor Express.js endpoints to Azure Functions.- Last updated 07/31/2020- ms.devlang: javascript
azure-government Connect With Azure Pipelines https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-government/connect-with-azure-pipelines.md
description: Configure continuous deployment to your applications hosted in Azur
Previously updated : 03/02/2022
+recommendations: false
Last updated : 06/27/2022 # Deploy an app in Azure Government with Azure Pipelines
-This article helps you use Azure Pipelines to set up continuous integration (CI) and continuous deployment (CD) of your web app running in Azure Government. CI/CD automates the build of your code from a repo along with the deployment (release) of the built code artifacts to a service or set of services in Azure Government. In this tutorial, you'll build a web app and deploy it to an Azure Governments app service. This build and release process is triggered by a change to a code file in the repo.
+This how-to guide helps you use Azure Pipelines to set up continuous integration (CI) and continuous delivery (CD) of your web app running in Azure Government. CI/CD automates the build of your code from a repository along with the deployment (release) of the built code artifacts to a service or set of services in Azure Government. In this how-to guide, you'll build a web app and deploy it to an Azure Government App Service. The build and release process is triggered by a change to a code file in the repository.
-[Azure Pipelines](/azure/devops/pipelines/get-started/what-is-azure-pipelines) is used by teams to configure continuous deployment for applications hosted in Azure subscriptions. We can use this service for applications running in Azure Government by defining [service connections](/azure/devops/pipelines/library/service-endpoints) for Azure Government.
+> [!NOTE]
+> [Azure DevOps](/azure/devops/) isn't available on Azure Government. While this how-to guide shows how to configure the CI/CD capabilities of Azure Pipelines to deploy an app to a service inside Azure Government, be aware that Azure Pipelines runs its pipelines outside of Azure Government. Research your organization's security and service policies before using it as part of your deployment tools. For guidance on how to use Azure DevOps Server to create a DevOps experience inside a private network on Azure Government, see [Azure DevOps Server on Azure Government](https://devblogs.microsoft.com/azuregov/azure-devops-server-in-azure-government/).
+
+[Azure Pipelines](/azure/devops/pipelines/get-started/what-is-azure-pipelines) is used by development teams to configure continuous deployment for applications hosted in Azure subscriptions. We can use this service for applications running in Azure Government by defining [service connections](/azure/devops/pipelines/library/service-endpoints) for Azure Government.
[!INCLUDE [updated-for-az](../../includes/updated-for-az.md)] ## Prerequisites
-Before starting this tutorial, you must complete the following prerequisites:
+Before starting this how-to guide, you must complete the following prerequisites:
-+ [Create an organization in Azure DevOps](/azure/devops/organizations/accounts/create-organization)
-+ [Create and add a project to the Azure DevOps organization](/azure/devops/organizations/projects/create-project?;bc=%2fazure%2fdevops%2fuser-guide%2fbreadcrumb%2ftoc.json&tabs=new-nav&toc=%2fazure%2fdevops%2fuser-guide%2ftoc.json)
-+ Install and set up [Azure PowerShell](/powershell/azure/install-az-ps)
+- [Create an organization in Azure DevOps](/azure/devops/organizations/accounts/create-organization)
+- [Create and add a project to the Azure DevOps organization](/azure/devops/organizations/projects/create-project)
+- Install and set up [Azure PowerShell](/powershell/azure/install-az-ps)
If you don't have an active Azure Government subscription, create a [free account](https://azure.microsoft.com/global-infrastructure/government/request/) before you begin.
-## Create Azure Government app service
-
-[Create an App service in your Azure Government subscription](documentation-government-howto-deploy-webandmobile.md).
-The following steps will set up a CD process to deploy to this Web App.
-
-## Set up Build and Source control integration
-
-Follow through one of the quickstarts below to set up a Build for your specific type of app:
--- [ASP.NET 4 app](/azure/devops/pipelines/apps/aspnet/build-aspnet-4)-- [ASP.NET Core app](/azure/devops/pipelines/ecosystems/dotnet-core)-- [Node.js app with Gulp](/azure/devops/pipelines/ecosystems/javascript)-
-## Generate a service principal
-
-1. Download or copy and paste the [service principal creation](https://github.com/yujhongmicrosoft/spncreationn/blob/master/spncreation.ps1) PowerShell script into an IDE or editor.
-
- > [!NOTE]
- > This script will be updated to use the Azure Az PowerShell module instead of the deprecated AzureRM PowerShell module.
-
-2. Open up the file and navigate to the `param` parameter. Replace the `$environmentName` variable with
-AzureUSGovernment." This action sets the service principal to be created in Azure Government.
-
-3. Open your PowerShell window and run the following command. This command sets a policy that enables running local files.
+## Create Azure Government App Service app
+
+Follow [Tutorial: Deploy an Azure App Service app](./documentation-government-howto-deploy-webandmobile.md) to learn how to deploy an Azure App Service app to Azure Government. The following steps will set up a CD process to deploy to your web app.
+
+## Set up build and source control integration
+
+Review one of the following quickstarts to set up a build for your specific type of app:
+
+- [ASP.NET 4](/azure/devops/pipelines/apps/aspnet/build-aspnet-4)
+- [.NET Core](/azure/devops/pipelines/ecosystems/dotnet-core)
+- [Node.js](/azure/devops/pipelines/ecosystems/javascript)
+
+## Generate a service principal
+
+1. Copy and paste the following service principal creation PowerShell script into an IDE or editor, and then save the script. This code is compatible only with Azure Az PowerShell v7.0.0 or higher.
+
+ ```powershell
+ param
+ (
+ [Parameter(Mandatory=$true, HelpMessage="Enter Azure subscription name - you need to be subscription admin to execute the script")]
+ [string] $subscriptionName,
+
+ [Parameter(Mandatory=$false, HelpMessage="Provide SPN role assignment")]
+ [string] $spnRole = "owner",
+
+ [Parameter(Mandatory=$false, HelpMessage="Provide Azure environment name for your subscription")]
+ [string] $environmentName = "AzureUSGovernment"
+ )
+
+ # Initialize
+ $ErrorActionPreference = "Stop"
+ $VerbosePreference = "SilentlyContinue"
+ $userName = ($env:USERNAME).Replace(' ', '')
+ $newguid = [guid]::NewGuid()
+ $displayName = [String]::Format("AzDevOps.{0}.{1}", $userName, $newguid)
+ $homePage = "http://" + $displayName
+ $identifierUri = $homePage
+
+ # Check for Azure Az PowerShell module
+ $isAzureModulePresent = Get-Module -Name Az -ListAvailable
+ if ([String]::IsNullOrEmpty($isAzureModulePresent) -eq $true)
+ {
+    Write-Output "Script requires Azure PowerShell modules to be present. Obtain Azure PowerShell from https://docs.microsoft.com/powershell/azure/install-az-ps" -Verbose
+ return
+ }
+
+ Import-Module -Name Az.Accounts
+ Write-Output "Provide your credentials to access your Azure subscription $subscriptionName" -Verbose
+ Connect-AzAccount -Subscription $subscriptionName -Environment $environmentName
+ $azureSubscription = Get-AzSubscription -SubscriptionName $subscriptionName
+ $connectionName = $azureSubscription.Name
+ $tenantId = $azureSubscription.TenantId
+ $id = $azureSubscription.SubscriptionId
+
+ # Create new Azure AD application
+ Write-Output "Creating new application in Azure AD (App URI - $identifierUri)" -Verbose
+ $azureAdApplication = New-AzADApplication -DisplayName $displayName -HomePage $homePage -Verbose
+ $appId = $azureAdApplication.AppId
+ $objectId = $azureAdApplication.Id
+ Write-Output "Azure AD application creation completed successfully (Application Id: $appId) and (Object Id: $objectId)" -Verbose
+
+ # Add secret to Azure AD application
+ Write-Output "Creating new secret for Azure AD application"
+ $secret = New-AzADAppCredential -ObjectId $objectId -EndDate (Get-Date).AddYears(2)
+ Write-Output "Secret created successfully" -Verbose
+
+ # Create new SPN
+ Write-Output "Creating new SPN" -Verbose
+ $spn = New-AzADServicePrincipal -ApplicationId $appId
+ $spnName = $spn.DisplayName
+ Write-Output "SPN creation completed successfully (SPN Name: $spnName)" -Verbose
+
+ # Assign role to SPN
+ Write-Output "Waiting for SPN creation to reflect in directory before role assignment"
+ Start-Sleep 20
+ Write-Output "Assigning role ($spnRole) to SPN app ($appId)" -Verbose
+ New-AzRoleAssignment -RoleDefinitionName $spnRole -ApplicationId $spn.AppId
+ Write-Output "SPN role assignment completed successfully" -Verbose
+
+ # Print values
+ Write-Output "`nCopy and paste below values for service connection" -Verbose
+ Write-Output "***************************************************************************"
+ Write-Output "Connection Name: $connectionName(SPN)"
+ Write-Output "Environment: $environmentName"
+ Write-Output "Subscription Id: $id"
+ Write-Output "Subscription Name: $connectionName"
+ Write-Output "Service Principal Id: $appId"
+ Write-Output "Tenant Id: $tenantId"
+ Write-Output "***************************************************************************"
+ ```
+
+2. Open your PowerShell window and run the following command, which sets a policy that enables running local files:
`Set-ExecutionPolicy -Scope Process -ExecutionPolicy Bypass`
- When you're asked whether you want to change the execution policy, enter "A" (for "Yes to All").
+ When asked whether you want to change the execution policy, enter "A" (for "Yes to All").
-4. Navigate to the directory that has the edited script above.
+3. Navigate to the directory where you saved the service principal creation PowerShell script.
-5. Edit the following command with the name of your script and run:
+4. Edit the following command with the name of your script and run:
`./<name of script file you saved>`
-6. The "subscriptionName" parameter can be found by logging into your Azure Government subscription via `Connect-AzAccount -EnvironmentName AzureUSGovernment` and then running `Get-AzureSubscription`.
-
-7. When prompted for the "password" parameter, enter your desired password.
+5. The "subscriptionName" parameter can be found by logging into your Azure Government subscription via `Connect-AzAccount -EnvironmentName AzureUSGovernment` and then running `Get-AzureSubscription`.
-8. After providing your Azure Government subscription credentials, you should see the following message:
+6. After providing your Azure Government subscription credentials, you should see the following message:
- > [!NOTE]
- > The Environment variable should be `AzureUSGovernment`.
+ `The Environment variable should be AzureUSGovernment`
-9. After the script has run, you should see your service connection values. Copy these values as we'll need them when setting up our endpoint.
+7. After the script has run, you should see your service connection values. Copy these values as we'll need them when setting up our endpoint.
- ![ps4](./media/documentation-government-vsts-img11.png)
+ :::image type="content" source="./media/documentation-government-vsts-img11.png" alt-text="Service connection values displayed after running the PowerShell script." border="false":::
## Configure the Azure Pipelines service connection
-Follow the instructions in [Service connections for builds and releases](/azure/devops/pipelines/library/service-endpoints) to set up the Azure Pipelines service connection.
+Follow [Manage service connections](/azure/devops/pipelines/library/service-endpoints) to set up the Azure Pipelines service connection.
+
+Make one change specific to Azure Government:
-Make one change specific to Azure Government: In step #3 of [Service connections for builds and releases](/azure/devops/pipelines/library/service-endpoints), click on "use the full version of the service connection catalog" and set **Environment** to **AzureUSGovernment**.
+- In step #3 of [Manage service connections: Create a service connection](/azure/devops/pipelines/library/service-endpoints#create-a-service-connection), click on *Use the full version of the service connection catalog* and set **Environment** to **AzureUSGovernment**.
## Define a release process
-Follow [Deploy a web app to Azure App Services](/azure/devops/pipelines/apps/cd/deploy-webdeploy-webapps) instructions to set up your release pipeline and deploy to your application in Azure Government.
+Follow [Deploy an Azure Web App](/azure/devops/pipelines/targets/webapp) instructions to set up your release pipeline and deploy to your application in Azure Government.
## Q&A **Do I need a build agent?** <br/>
-You need at least one [agent](/azure/devops/pipelines/agents/agents) to run your deployments. By default, the build and deployment processes are configured to use the [hosted agents](/azure/devops/pipelines/agents/agents#microsoft-hosted-agents). Configuring a private agent would limit data sharing outside of Azure Government.
+You need at least one [agent](/azure/devops/pipelines/agents/agents) to run your deployments. By default, the build and deployment processes are configured to use [hosted agents](/azure/devops/pipelines/agents/agents#microsoft-hosted-agents). Configuring a private agent would limit data sharing outside of Azure Government.
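+
+As a rough sketch, a self-hosted Windows agent can be registered from PowerShell like this (the organization URL, personal access token, and pool name below are placeholders):
+
+```powershell
+# Run from the unpacked agent folder; see the Azure Pipelines agent docs for all options
+$pat = "<personal-access-token>"
+.\config.cmd --unattended `
+    --url "https://dev.azure.com/<your-organization>" `
+    --auth pat --token $pat `
+    --pool "Default" --agent $env:COMPUTERNAME `
+    --runAsService
+```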
-**I use Team Foundation Server on premises. Can I configure CD on my server to target Azure Government?** <br/>
-Currently, Team Foundation Server can't be used to deploy to an Azure Government Cloud.
+**Can I configure CD on Azure DevOps Server (formerly Team Foundation Server) to target Azure Government?** <br/>
+You can set up Azure DevOps Server in Azure Government. For guidance on how to use Azure DevOps Server to create a DevOps experience inside a private network on Azure Government, see [Azure DevOps Server on Azure Government](https://devblogs.microsoft.com/azuregov/azure-devops-server-in-azure-government/).
## Next steps
-- Subscribe to the [Azure Government blog](https://devblogs.microsoft.com/azuregov/)
-- Get help on Stack Overflow by using the "[azure-gov](https://stackoverflow.com/questions/tagged/azure-gov)" tag
+For more information, see the following resources:
+
+- [Sign up for Azure Government trial](https://azure.microsoft.com/global-infrastructure/government/request/?ReqType=Trial)
+- [Acquiring and accessing Azure Government](https://azure.microsoft.com/offers/azure-government/)
+- [Ask questions via the azure-gov tag on StackOverflow](https://stackoverflow.com/tags/azure-gov)
+- [Azure Government blog](https://devblogs.microsoft.com/azuregov/)
+- [What is Infrastructure as Code? - Azure DevOps](/devops/deliver/what-is-infrastructure-as-code)
+- [DevSecOps for infrastructure as code (IaC) - Azure Architecture Center](/azure/architecture/solution-ideas/articles/devsecops-infrastructure-as-code)
+- [Testing your application and Azure environment - Microsoft Azure Well-Architected Framework](/azure/architecture/framework/devops/release-engineering-testing)
+- [Azure Government overview](./documentation-government-welcome.md)
+- [Azure Government security](./documentation-government-plan-security.md)
+- [Compare Azure Government and global Azure](./compare-azure-government-global-azure.md)
+- [Azure Government compliance](./documentation-government-plan-compliance.md)
+- [Azure compliance](../compliance/index.yml)
azure-government Documentation Government Ase Disa Cap https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-government/documentation-government-ase-disa-cap.md
Title: ASE deployment with DISA CAP
-description: This document provides a comparison of features and guidance on developing applications for Azure Government
-
-cloud: gov
-
+description: This article explains the baseline App Service Environment configuration for customers who use DISA CAP to connect to Azure Government.
- Previously updated : 11/29/2018-
+recommendations: false
Last updated : 06/27/2022

# App Service Environment reference for DoD customers connected to the DISA CAP
-This article explains the baseline configuration of an App Service Environment (ASE) with an internal load balancer (ILB) for customers who use the DISA CAP to connect to Azure Government.
+This article explains the baseline configuration of an App Service Environment (ASE) with an internal load balancer (ILB) for customers who use the Defense Information Systems Agency (DISA) Cloud Access Point (CAP) to connect to Azure Government.
## Environment configuration

### Assumptions
-The customer has deployed an ASE with an ILB and has implemented an ExpressRoute connection to the DISA Cloud Access Point (CAP).
+You've deployed an ASE with an ILB and have implemented an ExpressRoute connection to the DISA CAP.
### Route table
-When creating the ASE via the portal, a route table with a default route of 0.0.0.0/0 and next hop "Internet" is created.
-However, since DISA advertises a default route out the ExpressRoute circuit, the User Defined Route (UDR) should either be deleted, or remove the default route to internet.
+When you create the ASE via the Azure Government portal, a route table with a default route of 0.0.0.0/0 and next hop "Internet" is created. However, since DISA advertises a default route out of the ExpressRoute circuit, you should either delete the User Defined Route (UDR) or remove its default route to the internet.
-You will need to create new routes in the UDR for the management addresses in order to keep the ASE healthy. For Azure Government ranges, see [App Service Environment management addresses](../app-service/environment/management-addresses.md).
+You'll need to create new routes in the UDR for the management addresses to keep the ASE healthy; a hedged PowerShell sketch follows the list below. For Azure Government ranges, see [App Service Environment management addresses](../app-service/environment/management-addresses.md).
-- 23.97.29.209/32 --> Internet
-- 13.72.53.37/32 --> Internet
-- 13.72.180.105/32 --> Internet
-- 52.181.183.11/32 --> Internet
-- 52.227.80.100/32 --> Internet
-- 52.182.93.40/32 --> Internet
-- 52.244.79.34/32 --> Internet
-- 52.238.74.16/32 --> Internet
+- 23.97.29.209/32 -> Internet
+- 13.72.53.37/32 -> Internet
+- 13.72.180.105/32 -> Internet
+- 52.181.183.11/32 -> Internet
+- 52.227.80.100/32 -> Internet
+- 52.182.93.40/32 -> Internet
+- 52.244.79.34/32 -> Internet
+- 52.238.74.16/32 -> Internet
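+
+As a hedged sketch (the route table and resource group names are placeholders), you could add these routes with Az PowerShell:
+
+```powershell
+# Get the existing route table (placeholder names)
+$rt = Get-AzRouteTable -Name "myAseRouteTable" -ResourceGroupName "myResourceGroup"
+
+# Add a route for each ASE management address, with next hop Internet
+$mgmtAddresses = @(
+    "23.97.29.209/32", "13.72.53.37/32", "13.72.180.105/32", "52.181.183.11/32",
+    "52.227.80.100/32", "52.182.93.40/32", "52.244.79.34/32", "52.238.74.16/32"
+)
+$i = 0
+foreach ($prefix in $mgmtAddresses) {
+    $null = Add-AzRouteConfig -RouteTable $rt -Name "AseMgmt$i" -AddressPrefix $prefix -NextHopType Internet
+    $i++
+}
+
+# Persist the changes to Azure
+$null = Set-AzRouteTable -RouteTable $rt
+```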
Make sure the UDR is applied to the subnet your ASE is deployed to.

### Network security group (NSG)
-The ASE will be created with inbound and outbound security rules as shown below. The inbound security rules MUST allow ports 454-455 with an ephemeral source port range (*).
-
-The images below describe the default NSG rules created during the ASE creation. For more information, see [Networking considerations for an App Service Environment](../app-service/environment/network-info.md#network-security-groups)
+The ASE will be created with the following inbound and outbound security rules. The inbound security rules **must** allow ports 454-455 with an ephemeral source port range (*). The following images describe the default NSG rules generated during the ASE creation. For more information, see [Networking considerations for an App Service Environment](../app-service/environment/network-info.md#network-security-groups).
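+
+As a hedged example of the required inbound rule (the rule name, priority, NSG name, and location are placeholders; `AppServiceManagement` is the service tag for ASE management traffic):
+
+```powershell
+# Allow inbound ASE management traffic on ports 454-455
+# (source port range * covers ephemeral ports)
+$rule = New-AzNetworkSecurityRuleConfig -Name "Inbound-ASE-Management" `
+    -Priority 100 -Direction Inbound -Access Allow -Protocol Tcp `
+    -SourceAddressPrefix "AppServiceManagement" -SourcePortRange "*" `
+    -DestinationAddressPrefix "*" -DestinationPortRange "454-455"
+
+# Attach the rule when creating or updating the NSG
+$nsg = New-AzNetworkSecurityGroup -Name "myAseNsg" -ResourceGroupName "myResourceGroup" `
+    -Location "usgovvirginia" -SecurityRules $rule
+```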
-![Default inbound NSG security rules for an ILB ASE](media/documentation-government-ase-disacap-inbound-route-table.png)
-![Default outbound NSG security rules for an ILB ASE](media/documentation-government-ase-disacap-outbound-route-table.png)
-### Service Endpoints
+### Service endpoints
-Depending on the storage you use, you will be required to enable Service Endpoints for SQL and Azure Storage to access them without going back down to the DISA BCAP. You also need to enable EventHub Service Endpoint for ASE logs. [Learn more](../app-service/environment/network-info.md#service-endpoints).
+Depending on the storage you use, you need to enable service endpoints for Azure SQL Database and Azure Storage to access them without going back to the DISA CAP. You also need to enable the Event Hubs service endpoint for ASE logs. For more information, see [Networking considerations for App Service Environment: Service endpoints](../app-service/environment/network-info.md#service-endpoints).
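+
+A minimal sketch for enabling these service endpoints on the ASE subnet (the VNet, subnet, and resource group names are placeholders):
+
+```powershell
+# Get the VNet and the ASE subnet configuration (placeholder names)
+$vnet = Get-AzVirtualNetwork -Name "myAseVnet" -ResourceGroupName "myResourceGroup"
+$subnet = Get-AzVirtualNetworkSubnetConfig -Name "myAseSubnet" -VirtualNetwork $vnet
+
+# Enable the SQL, Storage, and Event Hubs service endpoints on the subnet
+$null = Set-AzVirtualNetworkSubnetConfig -VirtualNetwork $vnet -Name "myAseSubnet" `
+    -AddressPrefix $subnet.AddressPrefix `
+    -ServiceEndpoint "Microsoft.Sql", "Microsoft.Storage", "Microsoft.EventHub"
+
+# Persist the change
+$null = $vnet | Set-AzVirtualNetwork
+```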
## FAQs
-Some configuration changes may take some time to take effect. Allow for several hours for changes to routing, NSGs, ASE Health, etc. to propagate and take effect, or optionally you can reboot the ASE.
+**How long will it take for configuration changes to take effect?** <br/>
+Some configuration changes may take time to become effective. Allow several hours for changes to routing, NSGs, ASE health, and so on, to propagate and take effect. Alternatively, you can reboot the ASE.
-## Resource manager template sample
+## Azure Resource Manager template sample
> [!NOTE]
-> In order to deploy non-RFC 1918 IP addresses in the portal you must pre-stage the VNet and Subnet for the ASE. You can use a Resource Manager Template to deploy the ASE with non-RFC1918 IPs as well.
-
+> To deploy non-RFC 1918 IP addresses in the portal, you must pre-stage the VNet and subnet for the ASE. You can use an Azure Resource Manager template to deploy the ASE with non-RFC1918 IPs as well.
+
+<br/>
+ <a href="https://portal.azure.us/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2FApp-Service-Environment-AzFirewall%2Fazuredeploy.json" target="_blank"> <img src="https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/1-CONTRIBUTION-GUIDE/images/deploytoazuregov.png" alt="Button to deploy to Azure Gov" /> </a>
-This template deploys an **ILB ASE** into the Azure Government or Azure DoD regions.
+This template deploys an **ILB ASE** into the Azure Government or DoD regions.
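+
+If you prefer to script the deployment instead of using the button, here's a hedged sketch with Az PowerShell (the resource group name and location are placeholders; the template prompts for its required parameters):
+
+```powershell
+# Sign in to Azure Government and create a resource group for the ASE
+Connect-AzAccount -EnvironmentName AzureUSGovernment
+New-AzResourceGroup -Name "myAseRg" -Location "usgovvirginia"
+
+# Deploy the quickstart template linked above
+New-AzResourceGroupDeployment -ResourceGroupName "myAseRg" `
+    -TemplateUri "https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/App-Service-Environment-AzFirewall/azuredeploy.json"
+```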
## Next steps
-[Azure Government overview](documentation-government-welcome.md)
+
+- [Sign up for Azure Government trial](https://azure.microsoft.com/global-infrastructure/government/request/?ReqType=Trial)
+- [Acquiring and accessing Azure Government](https://azure.microsoft.com/offers/azure-government/)
+- [Ask questions via the azure-gov tag on StackOverflow](https://stackoverflow.com/tags/azure-gov)
+- [Azure Government blog](https://devblogs.microsoft.com/azuregov/)
+- [Azure Government overview](./documentation-government-welcome.md)
+- [Azure Government security](./documentation-government-plan-security.md)
+- [Azure Government compliance](./documentation-government-plan-compliance.md)
+- [Secure Azure computing architecture](./compliance/secure-azure-computing-architecture.md)
+- [Azure Policy overview](../governance/policy/overview.md)
+- [Azure Policy regulatory compliance built-in initiatives](../governance/policy/samples/index.md#regulatory-compliance)
+- [Azure Government services by audit scope](./compliance/azure-services-in-fedramp-auditscope.md#azure-government-services-by-audit-scope)
+- [Azure Government isolation guidelines for Impact Level 5 workloads](./documentation-government-impact-level-5.md)
+- [Azure Government DoD overview](./documentation-government-overview-dod.md)
azure-monitor Asp Net Troubleshoot No Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/asp-net-troubleshoot-no-data.md
- Title: Troubleshooting no data - Application Insights for .NET
-description: Not seeing data in Azure Application Insights? Try here.
-- Previously updated : 05/21/2020----
-# Troubleshooting no data - Application Insights for .NET/.NET Core
--
-## Some of my telemetry is missing
-*In Application Insights, I only see a fraction of the events that are being generated by my app.*
-
-* If you're consistently seeing the same fraction, it's probably because of adaptive [sampling](../../azure-monitor/app/sampling.md). To confirm this, open Search (from the **Overview** in the portal on the left) and look at an instance of a Request or other event. To see the full property details, select the ellipsis (**...**) at the bottom of the **Properties** section. If Request Count > 1, sampling is in operation.
-* It's possible that you're hitting a [data rate limit](../service-limits.md#application-insights) for your pricing plan. These limits are applied per minute.
-
-*I'm randomly experiencing data loss.*
-
-* Check whether you're experiencing data loss at [Telemetry Channel](telemetry-channels.md#does-the-application-insights-channel-guarantee-telemetry-delivery-if-not-what-are-the-scenarios-in-which-telemetry-can-be-lost).
-
-* Check for any known issues in Telemetry Channel [GitHub repo](https://github.com/Microsoft/ApplicationInsights-dotnet/issues).
-
-*I'm experiencing data loss in Console App or on Web App when app is about to stop.*
-
-* The SDK channel keeps telemetry in a buffer and sends it in batches. If the application is shutting down, you might need to explicitly call [Flush()](api-custom-events-metrics.md#flushing-data). The behavior of `Flush()` depends on the actual [channel](telemetry-channels.md#built-in-telemetry-channels) used.
-
-* As described in [.NET Core/.NET Framework Console application](worker-service.md#net-corenet-framework-console-application), console apps require explicitly calling Flush() followed by a short sleep.
-
-## Request count collected by Application Insights SDK doesn't match the IIS log count for my application
-
-Internet Information Services (IIS) logs record all requests that reach IIS, which can inherently differ from the total requests that reach an application. Because of this behavior, the request count collected by the SDKs isn't guaranteed to match the total IIS log count.
-
-## No data from my server
-*I installed my app on my web server, and now I don't see any telemetry from it. It worked OK on my dev machine.*
-* A firewall issue is most likely the cause. [Set firewall exceptions for Application Insights to send data](../../azure-monitor/app/ip-addresses.md).
-
-*I [installed Azure Monitor Application Insights Agent](./status-monitor-v2-overview.md) on my web server to monitor existing apps. I don't see any results.*
-
-* See [Troubleshooting Status Monitor](./status-monitor-v2-troubleshoot.md).
-
-## <a id="SSL"></a>Check TLS/SSL client settings (ASP.NET)
-
-If you have an ASP.NET application hosted in Azure App Service or in IIS on a virtual machine, your application could fail to connect to the Snapshot Debugger service due to a missing SSL security protocol.
-
-[The Snapshot Debugger endpoint requires TLS version 1.2](snapshot-debugger-upgrade.md). The set of SSL security protocols is one of the quirks enabled by the httpRuntime targetFramework value in the system.web section of web.config.
-If the httpRuntime targetFramework is 4.5.2 or lower, then TLS 1.2 isn't included by default.
-
-> [!NOTE]
-> The httpRuntime targetFramework value is independent of the target framework used when building your application.
-
-To check the setting, open your web.config file and find the system.web section. Ensure that the `targetFramework` for `httpRuntime` is set to 4.6 or above.
-
- ```xml
- <system.web>
- ...
- <httpRuntime targetFramework="4.7.2" />
- ...
- </system.web>
- ```
-
-> [!NOTE]
-> Modifying the httpRuntime targetFramework value changes the runtime quirks applied to your application and can cause other, subtle behavior changes. Be sure to test your application thoroughly after making this change. For a full list of compatibility changes, see [Retargeting changes](/dotnet/framework/migration-guide/application-compatibility#retargeting-changes).
-
-> [!NOTE]
-> If the targetFramework is 4.7 or above then Windows determines the available protocols. In Azure App Service, TLS 1.2 is available. However, if you are using your own virtual machine, you may need to enable TLS 1.2 in the OS.
--
-## FileNotFoundException: "Could not load file or assembly Microsoft.AspNet TelemetryCorrelation"
-
-For more information on this error, see [GitHub issue 1610](https://github.com/microsoft/ApplicationInsights-dotnet/issues/1610).
-
-When upgrading from SDKs older than 2.4, make sure the following changes are applied to `web.config` and `ApplicationInsights.config`:
-
-1. Two HTTP modules instead of one. In `web.config`, you should have both of the following modules; order is important for some scenarios:
-
- ``` xml
- <system.webServer>
- <modules>
- <add name="TelemetryCorrelationHttpModule" type="Microsoft.AspNet.TelemetryCorrelation.TelemetryCorrelationHttpModule, Microsoft.AspNet.TelemetryCorrelation" preCondition="integratedMode,managedHandler" />
- <add name="ApplicationInsightsHttpModule" type="Microsoft.ApplicationInsights.Web.ApplicationInsightsHttpModule, Microsoft.AI.Web" preCondition="managedHandler" />
- </modules>
- </system.webServer>
- ```
-
-2. In `ApplicationInsights.config`, in addition to `RequestTrackingTelemetryModule`, you should have the following telemetry module:
-
- ``` xml
- <TelemetryModules>
- <Add Type="Microsoft.ApplicationInsights.Web.AspNetDiagnosticTelemetryModule, Microsoft.AI.Web"/>
- </TelemetryModules>
- ```
-
-***Failure to upgrade properly may lead to unexpected exceptions or telemetry not being collected.***
--
-## <a name="q01"></a>No 'Add Application Insights' option in Visual Studio
-*When I right-click an existing project in Solution Explorer, I don't see any Application Insights options.*
-
-* Not all types of .NET project are supported by the tools. Web and WCF projects are supported. For other project types such as desktop or service applications, you can still [add an Application Insights SDK to your project manually](./windows-desktop.md).
-* Make sure you have [Visual Studio 2013 Update 3 or later](/visualstudio/releasenotes/vs2013-update3-rtm-vs). It comes pre-installed with Developer Analytics tools, which provide the Application Insights SDK.
-* Select **Tools**, **Extensions and Updates** and check that **Developer Analytics Tools** is installed and enabled. If so, select **Updates** to see if there's an update available.
-* Open the New Project dialog and choose ASP.NET Web application. If you see the Application Insights option there, then the tools are installed. If not, try uninstalling and then reinstalling the Developer Analytics Tools.
-
-## <a name="q02"></a>Adding Application Insights failed
-*When I try to add Application Insights to an existing project, I see an error message.*
-
-Likely causes:
-
-* Communication with the Application Insights portal failed; or
-* There's a problem with your Azure account;
-* You only have [read access to the subscription or group where you were trying to create the new resource](./resources-roles-access-control.md).
-
-Fix:
-
-* Check that you provided sign-in credentials for the right Azure account.
-* In your browser, check that you have access to the [Azure portal](https://portal.azure.com). Open Settings and see if there's any restriction.
-* [Add Application Insights to your existing project](./asp-net.md): In Solution Explorer, right-click your project and choose "Add Application Insights."
-
-## <a name="NuGetBuild"></a> "NuGet package(s) are missing" on my build server
-*Everything builds OK when I'm debugging on my development machine, but I get a NuGet error on the build server.*
-
-See [NuGet Package Restore](https://docs.nuget.org/Consume/Package-Restore)
-and [Automatic Package Restore](https://docs.nuget.org/Consume/package-restore/migrating-to-automatic-package-restore).
-
-## Missing menu command to open Application Insights from Visual Studio
-*When I right-click my project Solution Explorer, I don't see any Application Insights commands, or I don't see an Open Application Insights command.*
-
-Likely causes:
-
-* You created the Application Insights resource manually.
-* The project is of a type that isn't supported by the Application Insights tools.
-* The Developer Analytics tools are disabled in your Visual Studio.
-* Your Visual Studio is older than 2013 Update 3.
-
-Fix:
-
-* Make sure your Visual Studio version is 2013 update 3 or later.
-* Select **Tools**, **Extensions and Updates** and check that **Developer Analytics tools** is installed and enabled. If so, select **Updates** to see if there's an update available.
-* Right-click your project in Solution Explorer. If you see the command **Application Insights > Configure Application Insights**, use it to connect your project to the resource in the Application Insights service.
-
-Otherwise, your project type isn't directly supported by the Developer Analytics tools. To see your telemetry, sign in to the [Azure portal](https://portal.azure.com), choose Application Insights on the left navigation bar, and select your application.
-
-## 'Access denied' on opening Application Insights from Visual Studio
-*The 'Open Application Insights' menu command takes me to the Azure portal, but I get an 'access denied' error.*
-
-The Microsoft sign-in that you last used on your default browser doesn't have access to [the resource that was created when Application Insights was added to this app](./asp-net.md). There are two likely reasons:
-
-* More than one Microsoft account - maybe a work and a personal Microsoft account? The sign-in that you last used on your default browser was for a different account than the one that has access to [add Application Insights to the project](./asp-net.md).
- * Fix: Select your name at top right of the browser window, and sign out. Then sign in with the account that has access. Then on the left navigation bar, select Application Insights and select your app.
-* Someone else added Application Insights to the project, and they forgot to give you [access to the resource group](./resources-roles-access-control.md) in which it was created.
- * Fix: If they used an organizational account, they can add you to the team; or they can grant you individual access to the resource group.
-
-## 'Asset not found' on opening Application Insights from Visual Studio
-*The 'Open Application Insights' menu command takes me to the Azure portal, but I get an 'asset not found' error.*
-
-Likely causes:
-
-* The Application Insights resource for your application has been deleted; or
-* The [connection string](./sdk-connection-string.md) was set or changed in ApplicationInsights.config by editing it directly, without updating the project file.
-
-The [connection string](./sdk-connection-string.md) in ApplicationInsights.config controls where the telemetry is sent. A line in the project file controls which resource is opened when you use the command in Visual Studio.
-
-Fix:
-
-* In Solution Explorer, right-click the project and choose Application Insights, Configure Application Insights. In the dialog, you can either choose to send telemetry to an existing resource, or create a new one. Or:
-* Open the resource directly. Sign in to [the Azure portal](https://portal.azure.com), select Application Insights on the left navigation bar, and then select your app.
-
-## Where do I find my telemetry?
-*I signed in to the [Microsoft Azure portal](https://portal.azure.com), and I'm looking at the Azure home dashboard. So where do I find my Application Insights data?*
-
-* On the left navigation bar, select Application Insights, then your app name. If you don't have any projects there, you need to [add or configure Application Insights in your web project](./asp-net.md).
- There you'll see some summary charts. You can click through them to see more detail.
-* In Visual Studio, while you're debugging your app, select the Application Insights button.
-
-## <a name="q03"></a> No server data (or no data at all)
-*I ran my app and then opened the Application Insights service in Microsoft Azure, but all the charts show 'Learn how to collect...' or 'Not configured.'* Or, *only Page View and user data, but no server data.*
-
-* Run your application in debug mode in Visual Studio (F5). Use the application to generate some telemetry. Check that you can see events logged in the Visual Studio output window.
- ![Screenshot that shows running your application in debug mode in Visual Studio.](./media/asp-net-troubleshoot-no-data/output-window.png)
-* In the Application Insights portal, open [Diagnostic Search](./diagnostic-search.md). Data usually appears here first.
-* Select the Refresh button. The blade refreshes itself periodically, but you can also do it manually. The refresh interval is longer for larger time ranges.
-* Verify the [connection strings](./sdk-connection-string.md) match. On the main blade for your app in the Application Insights portal, in the **Essentials** drop-down, look at **Connection string**. Then, in your project in Visual Studio, open ApplicationInsights.config and find the `<ConnectionString>`. Check that the two strings are equal. If not:
- * In the portal, select Application Insights and look for the app resource with the right string; or
- * In Visual Studio Solution Explorer, right-click the project and choose Application Insights, Configure. Reset the app to send telemetry to the right resource.
- * If you can't find the matching strings, check that you're using the same sign-in credentials in Visual Studio as in the portal.
-* In the [Microsoft Azure home dashboard](https://portal.azure.com), look at the Service Health map. If there are some alert indications, wait until they've returned to OK and then close and reopen your Application Insights application blade.
-* Check also [our status blog](https://techcommunity.microsoft.com/t5/azure-monitor-status/bg-p/AzureMonitorStatusBlog).
-* Did you write any code for the [server-side SDK](./api-custom-events-metrics.md) that might change the [connection string](./sdk-connection-string.md) in `TelemetryClient` instances or in `TelemetryContext`? Or did you write a [filter or sampling configuration](./api-filtering-sampling.md) that might be filtering out too much?
-* If you edited ApplicationInsights.config, carefully check the configuration of [TelemetryInitializers and TelemetryProcessors](./api-filtering-sampling.md). An incorrectly named type or parameter can cause the SDK to send no data.
-
-## <a name="q04"></a>No data on Page Views, Browsers, Usage
-*I see data in Server Response Time and Server Requests charts, but no data in Page View Load time, or in the Browser or Usage blades.*
-
-The data comes from scripts in the web pages.
-
-* If you added Application Insights to an existing web project, [you have to add the scripts by hand](./javascript.md).
-* Make sure Internet Explorer isn't displaying your site in Compatibility mode.
-* Use the browser's debug feature (F12 on some browsers, then choose Network) to verify that data is being sent to `dc.services.visualstudio.com`.
-
-## No dependency or exception data
-See [dependency telemetry](./asp-net-dependencies.md) and [exception telemetry](asp-net-exceptions.md).
-
-## No performance data
-Performance data (CPU, IO rate, and so on) is available for [Java web services](java-2x-collectd.md), [Windows desktop apps](./windows-desktop.md), [IIS web apps and services if you install Application Insights Agent](./status-monitor-v2-overview.md), and [Azure Cloud Services](./app-insights-overview.md). You'll find it under Settings, Servers.
-
-## No (server) data since I published the app to my server
-* Check that you copied all the Microsoft.ApplicationInsights DLLs to the server, together with Microsoft.Diagnostics.Instrumentation.Extensions.Intercept.dll.
-* In your firewall, you might have to [open some TCP ports](./ip-addresses.md).
-* If you have to use a proxy to send out of your corporate network, set [defaultProxy](/previous-versions/dotnet/netframework-1.1/aa903360(v=vs.71)) in Web.config
-* Windows Server 2008: Make sure you've installed the following updates: [KB2468871](https://support.microsoft.com/kb/2468871), [KB2533523](https://support.microsoft.com/kb/2533523), [KB2600217](https://www.microsoft.com/download/details.aspx?id=28936).
-
-## I used to see data, but it has stopped
-* Have you hit your monthly quota of data points? Open the Settings/Quota and Pricing to find out. If so, you can upgrade your plan, or pay for more capacity. See the [pricing scheme](https://azure.microsoft.com/pricing/details/application-insights/).
-
-## I don't see all the data I'm expecting
-If your application sends considerable data and you're using the Application Insights SDK for ASP.NET version 2.0.0-beta3 or later, the [adaptive sampling](./sampling.md) feature may operate and send only a percentage of your telemetry.
-
-You can disable it, but doing so isn't recommended. Sampling is designed so that related telemetry is correctly transmitted, for diagnostic purposes.
-
-## Client IP address is 0.0.0.0
-
-On February 5, 2018, we announced that we removed logging of the client IP address. This change doesn't affect geolocation.
-
-> [!NOTE]
-> If you need the first 3 octets of the IP address, you can use a [telemetry initializer](./api-filtering-sampling.md#addmodify-properties-itelemetryinitializer) to add a custom attribute.
-> This does not affect data collected prior to February 5, 2018.
-
-## Wrong geographical data in user telemetry
-The city, region, and country dimensions are derived from IP addresses and aren't always accurate. These IP addresses are processed for location first and then changed to 0.0.0.0 to be stored.
-
-## Exception "method not found" on running in Azure Cloud Services
-Did you build for .NET [LTS](https://dotnet.microsoft.com/platform/support/policy/dotnet-core)? Earlier versions aren't automatically supported in Azure Cloud Services roles. [Install LTS on each role](../../cloud-services/cloud-services-dotnet-install-dotnet.md) before running your app.
-
-## Troubleshooting Logs
-
-Follow these instructions to capture troubleshooting logs for your framework.
-
-### .NET Framework
-
-> [!NOTE]
-> Starting in version 2.14, the [Microsoft.AspNet.ApplicationInsights.HostingStartup](https://www.nuget.org/packages/Microsoft.AspNet.ApplicationInsights.HostingStartup) package is no longer necessary. SDK logs are now collected with the [Microsoft.ApplicationInsights](https://www.nuget.org/packages/Microsoft.ApplicationInsights/) package, so no additional package is required.
-
-1. Modify your applicationinsights.config file to include the following XML:
-
- ```xml
- <TelemetryModules>
- <Add Type="Microsoft.ApplicationInsights.Extensibility.Implementation.Tracing.FileDiagnosticsTelemetryModule, Microsoft.ApplicationInsights">
- <Severity>Verbose</Severity>
- <LogFileName>mylog.txt</LogFileName>
- <LogFilePath>C:\\SDKLOGS</LogFilePath>
- </Add>
- </TelemetryModules>
- ```
- Your application must have Write permissions to the configured location.
-
-2. Restart the process so that the SDK picks up these new settings.
-
-3. Revert these changes when you're finished.
-
-### .NET Core
-
-1. Install the [Application Insights SDK NuGet package for ASP.NET Core](https://nuget.org/packages/Microsoft.ApplicationInsights.AspNetCore) from NuGet. The version you install must match the currently installed version of `Microsoft.ApplicationInsights`.
-
- The latest version of Microsoft.ApplicationInsights.AspNetCore is 2.14.0, and it refers to Microsoft.ApplicationInsights version 2.14.0. Hence the version of Microsoft.ApplicationInsights.AspNetCore to be installed should be 2.14.0.
-
-2. Modify the `ConfigureServices` method in your `Startup.cs` class:
-
- ```csharp
- services.AddSingleton<ITelemetryModule, FileDiagnosticsTelemetryModule>();
- services.ConfigureTelemetryModule<FileDiagnosticsTelemetryModule>( (module, options) => {
- module.LogFilePath = "C:\\SDKLOGS";
- module.LogFileName = "mylog.txt";
- module.Severity = "Verbose";
- } );
- ```
- Your application must have Write permissions to the configured location.
-
-3. Restart the process so that the SDK picks up these new settings.
-
-4. Revert these changes when you're finished.
--
-## <a name="PerfView"></a> Collect logs with PerfView
-[PerfView](https://github.com/Microsoft/perfview) is a free tool that helps isolate CPU, memory, and other issues.
-
-The Application Insights SDK logs EventSource self-troubleshooting events that PerfView can capture.
-
-To collect logs, download PerfView and run this command:
-```cmd
-PerfView.exe collect -MaxCollectSec:300 -NoGui /onlyProviders=*Microsoft-ApplicationInsights-Core,*Microsoft-ApplicationInsights-Data,*Microsoft-ApplicationInsights-WindowsServer-TelemetryChannel,*Microsoft-ApplicationInsights-Extensibility-AppMapCorrelation-Dependency,*Microsoft-ApplicationInsights-Extensibility-AppMapCorrelation-Web,*Microsoft-ApplicationInsights-Extensibility-DependencyCollector,*Microsoft-ApplicationInsights-Extensibility-HostingStartup,*Microsoft-ApplicationInsights-Extensibility-PerformanceCollector,*Microsoft-ApplicationInsights-Extensibility-EventCounterCollector,*Microsoft-ApplicationInsights-Extensibility-PerformanceCollector-QuickPulse,*Microsoft-ApplicationInsights-Extensibility-Web,*Microsoft-ApplicationInsights-Extensibility-WindowsServer,*Microsoft-ApplicationInsights-WindowsServer-Core,*Microsoft-ApplicationInsights-LoggerProvider,*Microsoft-ApplicationInsights-Extensibility-EventSourceListener,*Microsoft-ApplicationInsights-AspNetCore,*Redfield-Microsoft-ApplicationInsights-Core,*Redfield-Microsoft-ApplicationInsights-Data,*Redfield-Microsoft-ApplicationInsights-WindowsServer-TelemetryChannel,*Redfield-Microsoft-ApplicationInsights-Extensibility-AppMapCorrelation-Dependency,*Redfield-Microsoft-ApplicationInsights-Extensibility-AppMapCorrelation-Web,*Redfield-Microsoft-ApplicationInsights-Extensibility-DependencyCollector,*Redfield-Microsoft-ApplicationInsights-Extensibility-PerformanceCollector,*Redfield-Microsoft-ApplicationInsights-Extensibility-EventCounterCollector,*Redfield-Microsoft-ApplicationInsights-Extensibility-PerformanceCollector-QuickPulse,*Redfield-Microsoft-ApplicationInsights-Extensibility-Web,*Redfield-Microsoft-ApplicationInsights-Extensibility-WindowsServer,*Redfield-Microsoft-ApplicationInsights-LoggerProvider,*Redfield-Microsoft-ApplicationInsights-Extensibility-EventSourceListener,*Redfield-Microsoft-ApplicationInsights-AspNetCore
-```
-
-You can modify these parameters as needed:
-- **MaxCollectSec**. Set this parameter to prevent PerfView from running indefinitely and affecting the performance of your server.
-- **OnlyProviders**. Set this parameter to only collect logs from the SDK. You can customize this list based on your specific investigations.
-- **NoGui**. Set this parameter to collect logs without the GUI.
-
-For more information, see:
-- [Recording performance traces with PerfView](https://github.com/dotnet/roslyn/wiki/Recording-performance-traces-with-PerfView)
-- [Application Insights Event Sources](https://github.com/microsoft/ApplicationInsights-dotnet/tree/develop/troubleshooting/ETW)
-## Collect logs with dotnet-trace
-
-Alternatively, you can use [`dotnet-trace`](/dotnet/core/diagnostics/dotnet-trace), a cross-platform .NET Core tool, to collect logs that can further help in troubleshooting. This tool may be especially helpful in Linux-based environments.
-
-After installing [`dotnet-trace`](/dotnet/core/diagnostics/dotnet-trace), run the following command in bash:
-
-```bash
-dotnet-trace collect --process-id <PID> --providers Microsoft-ApplicationInsights-Core,Microsoft-ApplicationInsights-Data,Microsoft-ApplicationInsights-WindowsServer-TelemetryChannel,Microsoft-ApplicationInsights-Extensibility-AppMapCorrelation-Dependency,Microsoft-ApplicationInsights-Extensibility-AppMapCorrelation-Web,Microsoft-ApplicationInsights-Extensibility-DependencyCollector,Microsoft-ApplicationInsights-Extensibility-HostingStartup,Microsoft-ApplicationInsights-Extensibility-PerformanceCollector,Microsoft-ApplicationInsights-Extensibility-EventCounterCollector,Microsoft-ApplicationInsights-Extensibility-PerformanceCollector-QuickPulse,Microsoft-ApplicationInsights-Extensibility-Web,Microsoft-ApplicationInsights-Extensibility-WindowsServer,Microsoft-ApplicationInsights-WindowsServer-Core,Microsoft-ApplicationInsights-LoggerProvider,Microsoft-ApplicationInsights-Extensibility-EventSourceListener,Microsoft-ApplicationInsights-AspNetCore,Redfield-Microsoft-ApplicationInsights-Core,Redfield-Microsoft-ApplicationInsights-Data,Redfield-Microsoft-ApplicationInsights-WindowsServer-TelemetryChannel,Redfield-Microsoft-ApplicationInsights-Extensibility-AppMapCorrelation-Dependency,Redfield-Microsoft-ApplicationInsights-Extensibility-AppMapCorrelation-Web,Redfield-Microsoft-ApplicationInsights-Extensibility-DependencyCollector,Redfield-Microsoft-ApplicationInsights-Extensibility-PerformanceCollector,Redfield-Microsoft-ApplicationInsights-Extensibility-EventCounterCollector,Redfield-Microsoft-ApplicationInsights-Extensibility-PerformanceCollector-QuickPulse,Redfield-Microsoft-ApplicationInsights-Extensibility-Web,Redfield-Microsoft-ApplicationInsights-Extensibility-WindowsServer,Redfield-Microsoft-ApplicationInsights-LoggerProvider,Redfield-Microsoft-ApplicationInsights-Extensibility-EventSourceListener,Redfield-Microsoft-ApplicationInsights-AspNetCore
-```
-
-## How to remove Application Insights
-
-Learn how to remove Application Insights in Visual Studio by following the steps provided in the [remove Application Insights article](./remove-application-insights.md).
-
-## Still not working...
-* [Microsoft Q&A question page for Application Insights](/answers/topics/azure-monitor.html)
azure-monitor Asp Net https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/asp-net.md
For the template-based ASP.NET MVC app from this article, the file that you need
## Troubleshooting
+See the dedicated [troubleshooting article](https://docs.microsoft.com/troubleshoot/azure/azure-monitor/app-insights/asp-net-troubleshoot-no-data).
+ There's a known issue in the current version of Visual Studio 2019: storing the instrumentation key or connection string in a user secret is broken for .NET Framework-based apps. The key ultimately has to be hardcoded into the *applicationinsights.config* file to work around this bug. This article is designed to avoid this issue entirely, by not using user secrets.

## Open-source SDK
For the latest updates and bug fixes, [consult the release notes](./release-note
## Next steps

* Add synthetic transactions to test that your website is available from all over the world with [availability monitoring](monitor-web-app-availability.md).
-* [Configure sampling](sampling.md) to help reduce telemetry traffic and data storage costs.
--
+* [Configure sampling](sampling.md) to help reduce telemetry traffic and data storage costs.
azure-monitor Availability Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/availability-overview.md
You can create up to 100 availability tests per Application Insights resource.
## Troubleshooting
-See the dedicated [troubleshooting article](troubleshoot-availability.md).
+See the dedicated [troubleshooting article](https://docs.microsoft.com/troubleshoot/azure/azure-monitor/app-insights/troubleshoot-availability).
## Next steps
azure-monitor Data Retention Privacy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/data-retention-privacy.md
Yes, certain Telemetry Channels will persist data locally if an endpoint cannot
Telemetry channels that utilize local storage create temp files in the TEMP or APPDATA directories, which are restricted to the specific account running your application. This may happen when an endpoint was temporarily unavailable or you hit the throttling limit. Once this issue is resolved, the telemetry channel will resume sending all the new and persisted data.
-This persisted data is not encrypted locally. If this is a concern, review the data and restrict the collection of private data. (For more information, see [How to export and delete private data](../logs/personal-data-mgmt.md#how-to-export-and-delete-private-data).)
+This persisted data is not encrypted locally. If this is a concern, review the data and restrict the collection of private data. (For more information, see [How to export and delete private data](../logs/personal-data-mgmt.md#exporting-and-deleting-personal-data).)
If a customer needs to configure this directory with specific security requirements, it can be configured per framework. Please make sure that the process running your application has write access to this directory, but also make sure this directory is protected to avoid telemetry being read by unintended users.
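One hedged way to do this on Windows is to lock the folder's ACL down to the identity that runs the application; the path and app pool name in this sketch are placeholders:

```powershell
# Create the telemetry storage folder (placeholder path)
$path = "C:\TelemetryStorage"
New-Item -ItemType Directory -Path $path -Force | Out-Null

# Drop inherited ACEs, then grant the app identity Modify rights
$acl = Get-Acl -Path $path
$acl.SetAccessRuleProtection($true, $false)
$rule = [System.Security.AccessControl.FileSystemAccessRule]::new(
    "IIS AppPool\MyAppPool", "Modify", "ContainerInherit,ObjectInherit", "None", "Allow")
$acl.AddAccessRule($rule)
Set-Acl -Path $path -AclObject $acl
```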
AzureLogHandler(
## How do I send data to Application Insights using TLS 1.2?
-To insure the security of data in transit to the Application Insights endpoints, we strongly encourage customers to configure their application to use at least Transport Layer Security (TLS) 1.2. Older versions of TLS/Secure Sockets Layer (SSL) have been found to be vulnerable and while they still currently work to allow backwards compatibility, they are **not recommended**, and the industry is quickly moving to abandon support for these older protocols.
+To ensure the security of data in transit to the Application Insights endpoints, we strongly encourage customers to configure their application to use at least Transport Layer Security (TLS) 1.2. Older versions of TLS/Secure Sockets Layer (SSL) have been found to be vulnerable and while they still currently work to allow backwards compatibility, they are **not recommended**, and the industry is quickly moving to abandon support for these older protocols.
The [PCI Security Standards Council](https://www.pcisecuritystandards.org/) has set a [deadline of June 30, 2018](https://www.pcisecuritystandards.org/pdfs/PCI_SSC_Migrating_from_SSL_and_Early_TLS_Resource_Guide.pdf) to disable older versions of TLS/SSL and upgrade to more secure protocols. Once Azure drops legacy support, if your application/clients cannot communicate over at least TLS 1.2 you would not be able to send data to Application Insights. The approach you take to test and validate your application's TLS support will vary depending on the operating system/platform as well as the language/framework your application uses.
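As one illustration for .NET Framework apps on Windows, you can opt in to strong cryptography (which enables TLS 1.2 for outgoing connections) via the registry; this sketch assumes a 64-bit OS, requires elevation, and should be tested before production use:

```powershell
# Opt .NET Framework apps in to strong cryptography (TLS 1.2+);
# the WOW6432Node key covers 32-bit processes on a 64-bit OS
$keys = @(
    "HKLM:\SOFTWARE\Microsoft\.NETFramework\v4.0.30319",
    "HKLM:\SOFTWARE\WOW6432Node\Microsoft\.NETFramework\v4.0.30319"
)
foreach ($key in $keys) {
    Set-ItemProperty -Path $key -Name "SchUseStrongCrypto" -Value 1 -Type DWord
}
```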
azure-monitor Java 2X Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/java-2x-get-started.md
Application Insights can test your website at regular intervals to check that it
[Learn more about how to set up availability web tests.][availability]
-## Questions? Problems?
-[Troubleshooting Java](java-2x-troubleshoot.md)
+## Troubleshooting
+
+See the dedicated [troubleshooting article](https://docs.microsoft.com/troubleshoot/azure/azure-monitor/app-insights/java-2x-troubleshoot).
## Next steps * [Monitor dependency calls](java-2x-agent.md)
azure-monitor Java 2X Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/java-2x-troubleshoot.md
- Title: Troubleshoot Application Insights in a Java web project
-description: Troubleshooting guide - monitoring live Java apps with Application Insights.
- Previously updated : 03/14/2019----
-# Troubleshooting and Q and A for Application Insights for Java SDK
-
-> [!CAUTION]
-> This document applies to Application Insights Java 2.x which is no longer recommended.
->
-> Documentation for the latest version can be found at [Application Insights Java 3.x](./java-in-process-agent.md).
-
-Questions or problems with [Azure Application Insights in Java][java]? Here are some tips.
-
-## Build errors
-**In Eclipse or Intellij Idea, when adding the Application Insights SDK via Maven or Gradle, I get build or checksum validation errors.**
-
-* If the dependency `<version>` element is using a pattern with wildcard characters (e.g. (Maven) `<version>[2.0,)</version>` or (Gradle) `version:'2.+'`), try specifying a specific version instead like `2.6.4`.
-
-## No data
-**I added Application Insights successfully and ran my app, but I've never seen data in the portal.**
-
-* Wait a minute and click Refresh. The charts refresh themselves periodically, but you can also refresh manually. The refresh interval depends on the time range of the chart.
-* Check that you have an instrumentation key defined in the ApplicationInsights.xml file (in the resources folder in your project) or configured as Environment variable.
-* Verify that there is no `<DisableTelemetry>true</DisableTelemetry>` node in the xml file.
-* In your firewall, you might have to open TCP ports 80 and 443 for outgoing traffic to dc.services.visualstudio.com. See the [full list of firewall exceptions](./ip-addresses.md)
-* In the Microsoft Azure start board, look at the service status map. If there are some alert indications, wait until they have returned to OK and then close and re-open your Application Insights application blade.
-* [Turn on logging](#debug-data-from-the-sdk) by adding an `<SDKLogger />` element under the root node in the ApplicationInsights.xml file (in the resources folder in your project), and check for entries prefaced with AI: INFO/WARN/ERROR for any suspicious logs.
-* Make sure that the correct ApplicationInsights.xml file has been successfully loaded by the Java SDK, by looking at the console's output messages for a "Configuration file has been successfully found" statement.
-* If the config file is not found, check the output messages to see where the config file is being searched for, and make sure that the ApplicationInsights.xml is located in one of those search locations. As a rule of thumb, you can place the config file near the Application Insights SDK JARs. For example: in Tomcat, this would mean the WEB-INF/classes folder. During development you can place ApplicationInsights.xml in resources folder of your web project.
-* Please also look at [GitHub issues page](https://github.com/microsoft/ApplicationInsights-Java/issues) for known issues with the SDK.
-* Please use the same version of the Application Insights core, web, agent, and logging appenders to avoid version conflict issues.
--
-#### I used to see data, but it has stopped
-* Have you hit your monthly quota of data points? Open Settings/Quota and Pricing to find out. If so, you can upgrade your plan, or pay for additional capacity. See the [pricing scheme](https://azure.microsoft.com/pricing/details/application-insights/).
-* Have you recently upgraded your SDK? Ensure that only unique SDK JARs are present inside the project directory; there should not be two different versions of the SDK present.
-* Are you looking at the correct AI resource? Please match the iKey of your application to the resource where you are expecting telemetry. They should be the same.
-
-#### I don't see all the data I'm expecting
-* Open the Usage and estimated cost page and check whether [sampling](./sampling.md) is in operation. (100% transmission means that sampling isn't in operation.) The Application Insights service can be set to accept only a fraction of the telemetry that arrives from your app. This helps you keep within your monthly quota of telemetry.
-* Do you have SDK Sampling turned on? If yes, data would be sampled at the rate specified for all the applicable types.
-* Are you running an older version of the Java SDK? Starting with version 2.0.1, we introduced a fault tolerance mechanism to handle intermittent network and backend failures, as well as data persistence on local drives.
-* Are you getting throttled due to excessive telemetry? If you turn on INFO logging, you will see a log message "App is throttled". Our current limit is 32k telemetry items/second.
-
-### Java Agent cannot capture dependency data
-* Have you configured the Java agent by following [Configure Java Agent](java-2x-agent.md)?
-* Make sure both the Java agent jar and the AI-Agent.xml file are placed in the same folder.
-* Make sure that the dependency you are trying to auto-collect is supported for auto collection. Currently we only support MySQL, MsSQL, Oracle DB and Azure Cache for Redis dependency collection.
-
-## No usage data
-**I see data about requests and response times, but no page view, browser, or user data.**
-
-You successfully set up your app to send telemetry from the server. Now your next step is to [set up your web pages to send telemetry from the web browser][usage].
-
-Alternatively, if your client is an app in a [phone or other device][platforms], you can send telemetry from there.
-
-Use the same instrumentation key to set up both your client and server telemetry. The data will appear in the same Application Insights resource, and you'll be able to correlate events from client and server.
-
-## Disabling telemetry
-**How can I disable telemetry collection?**
-
-In code:
-
-```Java
-
- TelemetryConfiguration config = TelemetryConfiguration.getActive();
- config.setTrackingIsDisabled(true);
-```
-
-**Or**
-
-Update ApplicationInsights.xml (in the resources folder in your project). Add the following under the root node:
-
-```xml
-
- <DisableTelemetry>true</DisableTelemetry>
-```
-
-Using the XML method, you have to restart the application when you change the value.
-
-## Changing the target
-**How can I change which Azure resource my project sends data to?**
-
-* [Get the instrumentation key of the new resource.][java]
-* If you added Application Insights to your project using the Azure Toolkit for Eclipse, right click your web project, select **Azure**, **Configure Application Insights**, and change the key.
-* If you configured the instrumentation key as an environment variable, update the value of the environment variable with the new iKey.
-* Otherwise, update the key in ApplicationInsights.xml in the resources folder in your project.
-
-## Debug data from the SDK
-
-**How can I find out what the SDK is doing?**
-
-To get more information about what's happening in the API, add `<SDKLogger/>` under the root node of the ApplicationInsights.xml configuration file.
-
-### ApplicationInsights.xml
-
-You can also instruct the logger to output to a file:
-
-```xml
- <SDKLogger type="FILE"><!-- or "CONSOLE" to print to stderr -->
- <Level>TRACE</Level>
- <UniquePrefix>AI</UniquePrefix>
- <BaseFolderPath>C:/agent/AISDK</BaseFolderPath>
-</SDKLogger>
-```
-
-### Spring Boot Starter
-
-To enable SDK logging with Spring Boot Apps using the Application Insights Spring Boot Starter, add the following to the `application.properties` file:
-
-```yaml
-azure.application-insights.logger.type=file
-azure.application-insights.logger.base-folder-path=C:/agent/AISDK
-azure.application-insights.logger.level=trace
-```
-
-or to print to standard error:
-
-```yaml
-azure.application-insights.logger.type=console
-azure.application-insights.logger.level=trace
-```
-
-### Java Agent
-
-To enable JVM Agent Logging update the [AI-Agent.xml file](java-2x-agent.md):
-
-```xml
-<AgentLogger type="FILE"><!-- or "CONSOLE" to print to stderr -->
- <Level>TRACE</Level>
- <UniquePrefix>AI</UniquePrefix>
- <BaseFolderPath>C:/agent/AIAGENT</BaseFolderPath>
-</AgentLogger>
-```
-
-### Java Command Line Properties
-_Since version 2.4.0_
-
-To enable logging using command line options, without changing configuration files:
-
-```
-java -Dapplicationinsights.logger.file.level=trace -Dapplicationinsights.logger.file.uniquePrefix=AI -Dapplicationinsights.logger.baseFolderPath="C:/my/log/dir" -jar MyApp.jar
-```
-
-or to print to standard error:
-
-```
-java -Dapplicationinsights.logger.console.level=trace -jar MyApp.jar
-```
-
-## The Azure start screen
-**I'm looking at [the Azure portal](https://portal.azure.com). Does the map tell me something about my app?**
-
-No, it shows the health of Azure servers around the world.
-
-*From the Azure start board (home screen), how do I find data about my app?*
-
-Assuming you [set up your app for Application Insights][java], click Browse, select Application Insights, and select the app resource you created for your app. To get there faster in future, you can pin your app to the start board.
-
-## Intranet servers
-**Can I monitor a server on my intranet?**
-
-Yes, provided your server can send telemetry to the Application Insights portal through the public internet.
-
-You may need to [open some outgoing ports in your server's firewall](./ip-addresses.md#outgoing-ports)
-to allow the SDK to send data to the portal.
-
-## Data retention
-**How long is data retained in the portal? Is it secure?**
-
-See [Data retention and privacy][data].
-
-## Debug logging
-Application Insights uses `org.apache.http`. This is relocated within Application Insights core jars under the namespace `com.microsoft.applicationinsights.core.dependencies.http`. This enables Application Insights to handle scenarios where different versions of the same `org.apache.http` exist in one code base.
-
->[!NOTE]
->If you enable DEBUG level logging for all namespaces in the app, it will be honored by all executing modules including `org.apache.http` renamed as `com.microsoft.applicationinsights.core.dependencies.http`. Application Insights will not be able to apply filtering for these calls because the log call is being made by the Apache library. DEBUG level logging produces a considerable amount of log data and is not recommended for live production instances.
-
-## Next steps
-**I set up Application Insights for my Java server app. What else can I do?**
-
-* [Monitor availability of your web pages][availability]
-* [Monitor web page usage][usage]
-* [Track usage and diagnose issues in your device apps][platforms]
-* [Write code to track usage of your app][track]
-* [Capture diagnostic logs][javalogs]
-
-## Get help
-* [Stack Overflow](https://stackoverflow.com/questions/tagged/ms-application-insights)
-* [File an issue on GitHub](https://github.com/microsoft/ApplicationInsights-Java/issues)
-
-<!--Link references-->
-
-[availability]: ./monitor-web-app-availability.md
-[data]: ./data-retention-privacy.md
-[java]: java-2x-get-started.md
-[javalogs]: java-2x-trace-logs.md
-[platforms]: ./platforms.md
-[track]: ./api-custom-events-metrics.md
-[usage]: javascript.md
-
azure-monitor Java In Process Agent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/java-in-process-agent.md
If you want to attach custom dimensions to your logs, use [Log4j 1.2 MDC](https:
## Troubleshooting
-For help with troubleshooting, see [Troubleshooting](java-standalone-troubleshoot.md).
+See the dedicated [troubleshooting article](https://docs.microsoft.com/troubleshoot/azure/azure-monitor/app-insights/java-standalone-troubleshoot).
## Release notes
azure-monitor Java Standalone Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/java-standalone-troubleshoot.md
- Title: Troubleshooting Azure Monitor Application Insights for Java
-description: Learn how to troubleshoot the Java agent for Azure Monitor Application Insights
- Previously updated : 11/30/2020---
-# Troubleshooting guide: Azure Monitor Application Insights for Java
-
-In this article, we cover some of the common issues that you might face while instrumenting a Java application by using the Java agent for Application Insights. We also cover the steps to resolve these issues. Application Insights is a feature of the Azure Monitor platform service.
-
-## Check the self-diagnostic log file
-
-By default, Application Insights Java 3.x produces a log file named `applicationinsights.log` in the same directory
-that holds the `applicationinsights-agent-3.3.0.jar` file.
-
-This log file is the first place to check for hints to any issues you might be experiencing.
-
-If no log file is generated, check that your Java application has write permission to the directory that holds the
-`applicationinsights-agent-3.3.0.jar` file.
-
-If still no log file is generated, check the stdout log from your Java application. Application Insights Java 3.x
-should log any errors to stdout that would prevent it from logging to its normal location.
-
-## JVM fails to start
-
-If the JVM fails to start with "Error opening zip file or JAR manifest missing",
-try re-downloading the agent jar file because it may have been corrupted during file transfer.
-
-## Upgrade from the Application Insights Java 2.x SDK
-
-If you're already using the Application Insights Java 2.x SDK in your application, you can keep using it.
-The Application Insights Java 3.x agent will detect it,
-and capture and correlate any custom telemetry you're sending via the 2.x SDK,
-while suppressing any auto-collection performed by the 2.x SDK to prevent duplicate telemetry.
-For more information, see [Upgrade from the Java 2.x SDK](./java-standalone-upgrade-from-2x.md).
-
-## Upgrade from Application Insights Java 3.0 Preview
-
-If you're upgrading from the Java 3.0 Preview agent, review all of the [configuration options](./java-standalone-config.md) carefully. The JSON structure has completely changed in the 3.0 general availability (GA) release.
-
-These changes include:
-
-- The configuration file name has changed from `ApplicationInsights.json` to `applicationinsights.json`.
-- The `instrumentationSettings` node is no longer present. All content in `instrumentationSettings` is moved to the root level.
-- Configuration nodes like `sampling`, `jmxMetrics`, `instrumentation`, and `heartbeat` are moved out of `preview` to the root level.
-
-## Some logging is not auto-collected
-
-Logging is only captured if it first meets the level that is configured for the logging framework,
-and second, also meets the level that is configured for Application Insights.
-
-For example, if your logging framework is configured to log `WARN` (and above) from package `com.example`,
-and Application Insights is configured to capture `INFO` (and above),
-then Application Insights will only capture `WARN` (and above) from package `com.example`.
-
-The best way to know whether a particular logging statement meets the logging framework's configured threshold
-is to confirm that it shows up in your normal application log (for example, file or console).
-
-Also note that if an exception object is passed to the logger, then the log message (and exception object details)
-will show up in the Azure portal under the `exceptions` table instead of the `traces` table.
-
-See the [auto-collected logging configuration](./java-standalone-config.md#auto-collected-logging) for more details.
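-
-For example, a minimal `applicationinsights.json` sketch that sets the Application Insights capture threshold (the logging framework's own threshold still applies first):
-
-```json
-{
-  "instrumentation": {
-    "logging": {
-      "level": "INFO"
-    }
-  }
-}
-```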
-
-## Import SSL certificates
-
-This section helps you to troubleshoot and possibly fix the exceptions related to SSL certificates when using the Java agent.
-
-There are two different paths below for resolving this issue:
-* If using a default Java keystore
-* If using a custom Java keystore
-
-If you aren't sure which path to follow, check to see if you have a JVM arg `-Djavax.net.ssl.trustStore=...`.
-If you _don't_ have such a JVM arg, then you are probably using the default Java keystore.
-If you _do_ have such a JVM arg, then you are probably using a custom keystore,
-and the JVM arg will point you to your custom keystore.
-
-### If using the default Java keystore:
-
-Typically, the default Java keystore already has all of the CA root certificates. However, there might be exceptions; for example, the ingestion endpoint certificate might be signed by a different root certificate. So we recommend the following three steps to resolve this issue:
-
-1. Check if the SSL certificate that was used to sign the Application Insights endpoint is already present in the default keystore. The trusted CA certificates, by default, are stored in `$JAVA_HOME/jre/lib/security/cacerts`. To list certificates in a Java keystore use the following command:
- > `keytool -list -v -keystore $PATH_TO_KEYSTORE_FILE`
-
-    You can redirect the output to a temp file like this (it will be easier to search later):
- > `keytool -list -v -keystore $JAVA_HOME/jre/lib/security/cacerts > temp.txt`
-
-2. Once you have the list of certificates, follow these [steps](#steps-to-download-ssl-certificate) to download the SSL certificate that was used to sign the Application Insights endpoint.
-
-    Once you have the certificate downloaded, print its SHA-1 fingerprint by using the following command:
- > `keytool -printcert -v -file "your_downloaded_ssl_certificate.cer"`
-
-    Copy the SHA-1 value and check whether this value is present in the "temp.txt" file you saved previously. If you can't find the SHA-1 value in the temp file, the downloaded SSL certificate is missing from the default Java keystore.
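-
-    For example, you can search the saved listing instead of scanning it by eye; the fingerprint value is a placeholder:
-    > `grep -i "<SHA-1 fingerprint from the keytool output>" temp.txt`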
--
-3. Import the SSL certificate to the default Java keystore using the following command:
- > `keytool -import -file "the cert file" -alias "some meaningful name" -keystore "path to cacerts file"`
-
-    In this case, the command is:
-
-    > `keytool -import -file "your downloaded ssl cert file" -alias "some meaningful name" -keystore $JAVA_HOME/jre/lib/security/cacerts`
--
-### If using a custom Java keystore:
-
-If you are using a custom Java keystore, you may need to import the Application Insights endpoint(s) SSL certificate(s) into it.
-We recommend the following two steps to resolve this issue:
-1. Follow these [steps](#steps-to-download-ssl-certificate) to download the SSL certificate from the Application Insights endpoint.
-2. Use the following command to import the SSL certificate to the custom Java keystore:
- > `keytool -importcert -alias your_ssl_certificate -file "your downloaded SSL certificate name.cer" -keystore "Your KeyStore name" -storepass "Your keystore password" -noprompt`
-
-### Steps to download SSL certificate
-
-1. Open your favorite browser and go to the URL from which you want to download the SSL certificate.
-
-2. Select the **View site information** (lock) icon in the browser, and then select the **Certificate** option.
-
- :::image type="content" source="media/java-ipa/troubleshooting/certificate-icon-capture.png" alt-text="Screenshot of the Certificate option in site information." lightbox="media/java-ipa/troubleshooting/certificate-icon-capture.png":::
-
-3. Select **Certificate Path**, select the root certificate, and then select **View Certificate**. A new certificate dialog opens, and you can download the certificate from it.
-
- :::image type="content" source="media/java-ipa/troubleshooting/root-certificate.png" alt-text="Screenshot of how to select the root certificate." lightbox="media/java-ipa/troubleshooting/root-certificate.png":::
-
-4. Go to the **Details** tab and select **Copy to file**.
-5. Select the **Next** button, select **Base-64 encoded X.509 (.CER)** format, and then select **Next** again.
-
- :::image type="content" source="media/java-ipa/troubleshooting/certificate-export-wizard.png" alt-text="Screenshot of the Certificate Export Wizard, with a format selected." lightbox="media/java-ipa/troubleshooting/certificate-export-wizard.png":::
-
-6. Specify the file where you want to save the SSL certificate. Then select **Next** > **Finish**. You should see a "The export was successful" message.
-
-> [!WARNING]
-> You'll need to repeat these steps to get the new certificate before the current certificate expires. You can find the expiration information on the **Details** tab of the **Certificate** dialog box.
->
-> :::image type="content" source="media/java-ipa/troubleshooting/certificate-details.png" alt-text="Screenshot that shows SSL certificate details." lightbox="media/java-ipa/troubleshooting/certificate-details.png":::
-
-## Understanding UnknownHostException
-
-If you see this exception after upgrading to a Java agent version greater than 3.2.0, updating your network to resolve the new endpoint shown in the exception might resolve it. The difference between Application Insights versions is that versions greater than 3.2.0 point to the new ingestion endpoint `v2.1/track`, compared with the older `v2/track`. The new ingestion endpoint automatically redirects you to the ingestion endpoint (the new endpoint shown in the exception) nearest to the storage for your Application Insights resource.
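-
-A quick check from a shell, substituting the host name from your exception message:
-
-```
-# Placeholder: use the host name shown in your UnknownHostException
-nslookup <endpoint-host-shown-in-the-exception>
-```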
-
-## Missing cipher suites
-
-If the Application Insights Java agent detects that you do not have any of the cipher suites that are supported by the endpoints it connects to, it will alert you and link you here.
-
-### Background on cipher suites:
-Cipher suites come into play before a client application and server exchange information over an SSL/TLS connection. The client application initiates an SSL handshake. Part of that process involves notifying the server which cipher suites it supports. The server receives that information and compares the cipher suites supported by the client application with the algorithms it supports. If it finds a match, the server notifies the client application and a secure connection is established. If it does not find a match, the server refuses the connection.
-
-#### How to determine client side cipher suites:
-In this case, the client is the JVM on which your instrumented application is running. Starting from 3.2.5, Application Insights Java will log a warning message if missing cipher suites could be causing connection failures to one of the service endpoints.
-
-If using an earlier version of Application Insights Java, compile and run the following Java program to get the list of supported cipher suites in your JVM:
-
-```java
-import javax.net.ssl.SSLServerSocketFactory;
-
-public class Ciphers {
-    public static void main(String[] args) {
-        // The JVM's default SSL socket factory exposes the default cipher suites
-        SSLServerSocketFactory ssf = (SSLServerSocketFactory) SSLServerSocketFactory.getDefault();
-        String[] defaultCiphers = ssf.getDefaultCipherSuites();
-        System.out.println("Default\tCipher");
-        // Print one line per supported cipher suite
-        for (int i = 0; i < defaultCiphers.length; ++i) {
-            System.out.print('*');
-            System.out.print('\t');
-            System.out.println(defaultCiphers[i]);
-        }
-    }
-}
-```
-Following are the cipher suites that are generally supported by the Application Insights endpoints:
-- TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384
-- TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256
-- TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384
-- TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256
-
-#### How to determine server side cipher suites:
-In this case, the server side is the Application Insights ingestion endpoint or the Application Insights Live metrics endpoint. You can use an online tool like [SSLLABS](https://www.ssllabs.com/ssltest/analyze.html) to determine the expected cipher suites based on the endpoint URL.
-
-#### How to add the missing cipher suites:
-
-If you use Java 9 or later, check that the JVM includes the `jdk.crypto.cryptoki` module in its jmods folder. If you're building a custom Java runtime by using `jlink`, make sure to include the same module, as shown in the sketch below.
-
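-A sketch of both checks from a shell; the output path is a placeholder:
-
-```
-# Verify that the module is present in the current JVM (Java 9+)
-java --list-modules | grep jdk.crypto.cryptoki
-
-# Include the module when building a custom runtime with jlink
-jlink --add-modules java.base,jdk.crypto.cryptoki --output /path/to/custom-runtime
-```
-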
-Otherwise, these cipher suites should already be part of modern Java 8+ distributions, so we recommend
-checking where you installed your Java distribution from and investigating why the security providers
-in that Java distribution's `java.security` configuration file differ from standard Java distributions.
azure-monitor Opencensus Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/opencensus-python.md
As shown, there are three different Azure Monitor exporters that support OpenCen
Each exporter accepts the same arguments for configuration, passed through the constructors. You can see details about each one here:

- `connection_string`: The connection string used to connect to your Azure Monitor resource. Takes priority over `instrumentation_key`.
+- `credential`: Credential class used by AAD authentication. See `Authentication` section below.
- `enable_standard_metrics`: Used for `AzureMetricsExporter`. Signals the exporter to send [performance counter](../essentials/app-insights-metrics.md#performance-counters) metrics automatically to Azure Monitor. Defaults to `True`.
-- `export_interval`: Used to specify the frequency in seconds of exporting.
+- `export_interval`: Used to specify the frequency in seconds of exporting. Defaults to 15s.
+- `grace_period`: Used to specify the timeout for shutdown of exporters in seconds. Defaults to 5s.
- `instrumentation_key`: The instrumentation key used to connect to your Azure Monitor resource.
-- `logging_sampling_rate`: Used for `AzureLogHandler`. Provides a sampling rate [0,1.0] for exporting logs. Defaults to 1.0.
+- `logging_sampling_rate`: Used for `AzureLogHandler` and `AzureEventHandler`. Provides a sampling rate [0,1.0] for exporting logs/events. Defaults to 1.0.
- `max_batch_size`: Specifies the maximum size of telemetry that's exported at once.
- `proxies`: Specifies a sequence of proxies to use for sending data to Azure Monitor. For more information, see [proxies](https://requests.readthedocs.io/en/latest/user/advanced/#proxies).
- `storage_path`: A path to where the local storage folder exists (unsent telemetry). As of `opencensus-ext-azure` v1.0.3, the default path is the OS temp directory + `opencensus-python` + `your-ikey`. Prior to v1.0.3, the default path is $USER + `.opencensus` + `.azure` + `python-file-name`.
+- `timeout`: Specifies the networking timeout to send telemetry to the ingestion service in seconds. Defaults to 10s.
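+
+A minimal sketch that passes several of these arguments through the `AzureLogHandler` constructor; the connection string below is a placeholder:
+
+```python
+import logging
+
+from opencensus.ext.azure.log_exporter import AzureLogHandler
+
+logger = logging.getLogger(__name__)
+logger.addHandler(AzureLogHandler(
+    connection_string="InstrumentationKey=00000000-0000-0000-0000-000000000000",
+    export_interval=15.0,        # seconds between exports
+    logging_sampling_rate=1.0,   # export 100% of log records
+))
+logger.warning("Hello from OpenCensus")
+```
+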
## Integrate with Azure Functions
azure-monitor Status Monitor V2 Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/status-monitor-v2-overview.md
Application Insights Agent is located here: https://www.powershellgallery.com/pa
- [Start-ApplicationInsightsMonitoringTrace](./status-monitor-v2-api-reference.md#start-applicationinsightsmonitoringtrace)

## Troubleshooting
-- [Troubleshooting](status-monitor-v2-troubleshoot.md)
-- [Known issues](status-monitor-v2-troubleshoot.md#known-issues)
+See the dedicated [troubleshooting article](https://docs.microsoft.com/troubleshoot/azure/azure-monitor/app-insights/status-monitor-v2-troubleshoot).
## FAQ
azure-monitor Status Monitor V2 Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/status-monitor-v2-troubleshoot.md
- Title: Azure Application Insights Agent troubleshooting and known issues | Microsoft Docs
-description: The known issues of Application Insights Agent and troubleshooting examples. Monitor website performance without redeploying the website. Works with ASP.NET web apps hosted on-premises, in VMs, or on Azure.
- Previously updated : 04/23/2019---
-# Troubleshooting Application Insights Agent (formerly named Status Monitor v2)
-
-When you enable monitoring, you might experience issues that prevent data collection.
-This article lists all known issues and provides troubleshooting examples.
-
-## Known issues
-
-### Conflicting DLLs in an app's bin directory
-
-If any of these DLLs are present in the bin directory, monitoring might fail:
-
-- Microsoft.ApplicationInsights.dll
-- Microsoft.AspNet.TelemetryCorrelation.dll
-- System.Diagnostics.DiagnosticSource.dll
-
-Some of these DLLs are included in the Visual Studio default app templates, even if your app doesn't use them.
-You can use troubleshooting tools to see symptomatic behavior:
-
-- PerfView:
- ```
- ThreadID="7,500"
- ProcessorNumber="0"
- msg="Found 'System.Diagnostics.DiagnosticSource, Version=4.0.2.1, Culture=neutral, PublicKeyToken=cc7b13ffcd2ddd51' assembly, skipping attaching redfield binaries"
- ExtVer="2.8.13.5972"
- SubscriptionId=""
- AppName=""
- FormattedMessage="Found 'System.Diagnostics.DiagnosticSource, Version=4.0.2.1, Culture=neutral, PublicKeyToken=cc7b13ffcd2ddd51' assembly, skipping attaching redfield binaries"
- ```
-
-- IISReset and app load (without telemetry). Investigate with Sysinternals (Handle.exe and ListDLLs.exe):
- ```
- .\handle64.exe -p w3wp | findstr /I "InstrumentationEngine AI. ApplicationInsights"
- E54: File (R-D) C:\Program Files\WindowsPowerShell\Modules\Az.ApplicationMonitor\content\Runtime\Microsoft.ApplicationInsights.RedfieldIISModule.dll
-
- .\Listdlls64.exe w3wp | findstr /I "InstrumentationEngine AI ApplicationInsights"
- 0x0000000009be0000 0x127000 C:\Program Files\WindowsPowerShell\Modules\Az.ApplicationMonitor\content\Instrumentation64\MicrosoftInstrumentationEngine_x64.dll
- 0x0000000009b90000 0x4f000 C:\Program Files\WindowsPowerShell\Modules\Az.ApplicationMonitor\content\Instrumentation64\Microsoft.ApplicationInsights.ExtensionsHost_x64.dll
- 0x0000000004d20000 0xb2000 C:\Program Files\WindowsPowerShell\Modules\Az.ApplicationMonitor\content\Instrumentation64\Microsoft.ApplicationInsights.Extensions.Base_x64.dll
- ```
-
-### PowerShell Versions
-This product was written and tested using PowerShell v5.1.
-This module isn't compatible with PowerShell versions 6 or 7.
-We recommend using PowerShell v5.1 alongside newer versions.
-For more information, see [Using PowerShell 7 side by side with PowerShell 5.1](/powershell/scripting/install/migrating-from-windows-powershell-51-to-powershell-7#using-powershell-7-side-by-side-with-windows-powershell-51).
-
-### Conflict with IIS shared configuration
-
-If you have a cluster of web servers, you might be using a [shared configuration](/iis/web-hosting/configuring-servers-in-the-windows-web-platform/shared-configuration_211).
-The HttpModule can't be injected into this shared configuration.
-Run the Enable command on each web server to install the DLL into each server's GAC.
-
-After you run the Enable command, complete these steps:
-1. Go to the shared configuration directory and find the applicationHost.config file.
-2. Add this line to the modules section of your configuration:
- ```
- <modules>
- <!-- Registered global managed http module handler. The 'Microsoft.AppInsights.IIS.ManagedHttpModuleHelper.dll' must be installed in the GAC before this config is applied. -->
- <add name="ManagedHttpModuleHelper" type="Microsoft.AppInsights.IIS.ManagedHttpModuleHelper.ManagedHttpModuleHelper, Microsoft.AppInsights.IIS.ManagedHttpModuleHelper, Version=1.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35" preCondition="managedHandler,runtimeVersionv4.0" />
- </modules>
- ```
-
-### IIS Nested Applications
-
-We don't instrument nested applications in IIS in version 1.0.
-
-### Advanced SDK Configuration isn't available.
-
-The SDK configuration isn't exposed to the end user in version 1.0.
-
-
-
-## Troubleshooting
-
-### Troubleshooting PowerShell
-
-#### Determine which modules are available
-You can use the `Get-Module -ListAvailable` command to determine which modules are installed.
-
-#### Import a module into the current session
-If a module hasn't been loaded into a PowerShell session, you can manually load it by using the `Import-Module <path to psd1>` command.
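-
-A minimal sketch of both commands; the module path is an assumption based on the default install location shown elsewhere in this article:
-
-```powershell
-# List installed modules and confirm that Az.ApplicationMonitor appears
-Get-Module -ListAvailable
-
-# Manually load the module if it isn't loaded in the current session
-Import-Module "C:\Program Files\WindowsPowerShell\Modules\Az.ApplicationMonitor\Az.ApplicationMonitor.psd1"
-```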
--
-### Troubleshooting the Application Insights Agent module
-
-#### List the commands available in the Application Insights Agent module
-Run the command `Get-Command -Module Az.ApplicationMonitor` to get the available commands:
-
-```
-CommandType Name Version Source
------------     ----                                        -------    ------
-Cmdlet Disable-ApplicationInsightsMonitoring 0.4.0 Az.ApplicationMonitor
-Cmdlet Disable-InstrumentationEngine 0.4.0 Az.ApplicationMonitor
-Cmdlet Enable-ApplicationInsightsMonitoring 0.4.0 Az.ApplicationMonitor
-Cmdlet Enable-InstrumentationEngine 0.4.0 Az.ApplicationMonitor
-Cmdlet Get-ApplicationInsightsMonitoringConfig 0.4.0 Az.ApplicationMonitor
-Cmdlet Get-ApplicationInsightsMonitoringStatus 0.4.0 Az.ApplicationMonitor
-Cmdlet Set-ApplicationInsightsMonitoringConfig 0.4.0 Az.ApplicationMonitor
-Cmdlet Start-ApplicationInsightsMonitoringTrace 0.4.0 Az.ApplicationMonitor
-```
-
-#### Determine the current version of the Application Insights Agent module
-Run the `Get-ApplicationInsightsMonitoringStatus -PowerShellModule` command to display the following information about the module:
- - PowerShell module version
- - Application Insights SDK version
- - File paths of the PowerShell module
-
-Review the [API reference](status-monitor-v2-api-reference.md) for a detailed description of how to use this cmdlet.
--
-### Troubleshooting running processes
-
-You can inspect the processes on the instrumented computer to determine if all DLLs are loaded and environment variables are set.
-If monitoring is working, at least 12 DLLs should be loaded.
-
-* Use the `Get-ApplicationInsightsMonitoringStatus -InspectProcess` command to check the DLLs.
-* Use the `(Get-Process -id {PID}).StartInfo.EnvironmentVariables` command to check the environment variables. The following environment variables are set in the worker process or dotnet core process:
-
-```
-COR_ENABLE_PROFILING=1
-COR_PROFILER={324F817A-7420-4E6D-B3C1-143FBED6D855}
-COR_PROFILER_PATH_32=Path to MicrosoftInstrumentationEngine_x86.dll
-COR_PROFILER_PATH_64=Path to MicrosoftInstrumentationEngine_x64.dll
-MicrosoftInstrumentationEngine_Host={CA487940-57D2-10BF-11B2-A3AD5A13CBC0}
-MicrosoftInstrumentationEngine_HostPath_32=Path to Microsoft.ApplicationInsights.ExtensionsHost_x86.dll
-MicrosoftInstrumentationEngine_HostPath_64=Path to Microsoft.ApplicationInsights.ExtensionsHost_x64.dll
-MicrosoftInstrumentationEngine_ConfigPath32_Private=Path to Microsoft.InstrumentationEngine.Extensions.config
-MicrosoftInstrumentationEngine_ConfigPath64_Private=Path to Microsoft.InstrumentationEngine.Extensions.config
-MicrosoftAppInsights_ManagedHttpModulePath=Path to Microsoft.ApplicationInsights.RedfieldIISModule.dll
-MicrosoftAppInsights_ManagedHttpModuleType=Microsoft.ApplicationInsights.RedfieldIISModule.RedfieldIISModule
-ASPNETCORE_HOSTINGSTARTUPASSEMBLIES=Microsoft.ApplicationInsights.StartupBootstrapper
-DOTNET_STARTUP_HOOKS=Path to Microsoft.ApplicationInsights.StartupHook.dll
-```
-
-Review the [API reference](status-monitor-v2-api-reference.md) for a detailed description of how to use this cmdlet.
--
-### Collect ETW logs by using PerfView
-
-#### Setup
-
-1. Download PerfView.exe and PerfView64.exe from [GitHub](https://github.com/Microsoft/perfview/releases).
-2. Start PerfView64.exe.
-3. Expand **Advanced Options**.
-4. Clear these check boxes:
- - **Zip**
- - **Merge**
- - **.NET Symbol Collection**
-5. Set these **Additional Providers**: `61f6ca3b-4b5f-5602-fa60-759a2a2d1fbd,323adc25-e39b-5c87-8658-2c1af1a92dc5,925fa42b-9ef6-5fa7-10b8-56449d7a2040,f7d60e07-e910-5aca-bdd2-9de45b46c560,7c739bb9-7861-412e-ba50-bf30d95eae36,252e28f4-43f9-5771-197a-e8c7e750a984,f9c04365-1d1f-5177-1cdc-a0b0554b6903`
--
-#### Collecting logs
-
-1. In a command console with Admin privileges, run the `iisreset /stop` command to turn off IIS and all web apps.
-2. In PerfView, select **Start Collection**.
-3. In a command console with Admin privileges, run the `iisreset /start` command to start IIS.
-4. Try to browse to your app.
-5. After your app is loaded, return to PerfView and select **Stop Collection**.
-
-## Next steps
-
-- Review the [API reference](status-monitor-v2-overview.md#powershell-api-reference) to learn about parameters you might have missed.
azure-monitor Troubleshoot Availability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/troubleshoot-availability.md
- Title: Troubleshoot your Azure Application Insights availability tests
-description: Troubleshoot web tests in Azure Application Insights. Get alerts if a website becomes unavailable or responds slowly.
- Previously updated : 02/14/2021---
-# Troubleshooting
-
-This article will help you to troubleshoot common issues that may occur when using availability monitoring.
-
-## Troubleshooting report steps for ping tests
-
-The Troubleshooting Report allows you to easily diagnose common problems that cause your **ping tests** to fail.
-
-![Animation of navigating from the availability tab by selecting a failure to the end-to-end transaction details to view the troubleshooting report](./media/troubleshoot-availability/availability-to-troubleshooter.gif)
-
-1. On the availability tab of your Application Insights resource, select overall or one of the availability tests.
-2. Either select **Failed** then a test under "Drill into" on the left or select one of the points on the scatter plot.
-3. On the end-to-end transaction detail page, select an event then under "Troubleshooting report summary" select **[Go to step]** to see the troubleshooting report.
-
-> [!NOTE]
-> If the connection re-use step is present, then DNS resolution, connection establishment, and TLS transport steps will not be present.
-
-|Step | Error message | Possible cause |
-|--||-|
-| Connection reuse | n/a | The step reuses a previously established connection, so it depends on an earlier web test step. No DNS, connection, or SSL step is required. |
-| DNS resolution | The remote name could not be resolved: "your URL" | The DNS resolution process failed, most likely due to misconfigured DNS records or temporary DNS server failures. |
-| Connection establishment | A connection attempt failed because the connected party did not properly respond after a period of time. | In general, it means your server is not responding to the HTTP request. A common cause is that our test agents are being blocked by a firewall on your server. If you would like to test within an Azure Virtual Network, you should add the Availability service tag to your environment.|
-| TLS transport | The client and server cannot communicate because they do not possess a common algorithm.| Only TLS 1.0, 1.1, and 1.2 are supported. SSL is not supported. This step does not validate SSL certificates and only establishes a secure connection. This step only shows up when an error occurs. |
-| Receiving response header | Unable to read data from the transport connection. The connection was closed. | Your server committed a protocol error in the response header. For example, the connection was closed by your server before the response was fully sent. |
-| Receiving response body | Unable to read data from the transport connection: The connection was closed. | Your server committed a protocol error in the response body. For example, the connection was closed by your server before the response was fully read, or the chunk size was wrong in the chunked response body. |
-| Redirect limit validation | This webpage has too many redirects. This loop will be terminated here since this request exceeded the limit for auto redirects. | There's a limit of 10 redirects per test. |
-| Status code validation | `200 - OK` does not match the expected status `400 - BadRequest`. | The returned status code doesn't match the status code that's counted as a success. 200 is the code that indicates that a normal web page has been returned. |
-| Content validation | The required text 'hello' did not appear in the response. | The required string isn't an exact case-sensitive match in the response, for example the string "Welcome!". It must be a plain string, without wildcard characters (for example, an asterisk). If your page content changes, you might have to update the string. Only English characters are supported with content match. |
-
-## Common troubleshooting questions
-
-### Site looks okay but I see test failures? Why is Application Insights alerting me?
-
- * Does your test have **Parse dependent requests** enabled? That results in a strict check on resources such as scripts and images. These types of failures may not be noticeable in a browser. Check all the images, scripts, style sheets, and any other files loaded by the page. If any of them fails, the test is reported as failed, even if the main HTML page loads without issue. To desensitize the test to such resource failures, clear **Parse dependent requests** in the test configuration.
-
- * To reduce the odds of noise from transient network blips, ensure that the **Enable retries for test failures** configuration is checked. You can also test from more locations and manage the alert rule threshold accordingly to prevent location-specific issues from causing undue alerts.
-
- * Select any of the red dots from the Availability scatter plot, or any availability failure from the Search explorer, to see the details of why we reported the failure. The test result, along with the correlated server-side telemetry (if enabled), should help you understand why the test failed. Common causes of transient issues are network or connection issues.
-
- * Did the test time-out? We abort tests after 2 minutes. If your ping or multi-step test takes longer than 2 minutes, we will report it as a failure. Consider breaking the test into multiple ones that can complete in shorter durations.
-
- * Did all locations report failure, or only some of them? If only some reported failures, it may be due to network/CDN issues. Again, clicking on the red dots should help understand why the location reported failures.
-
-### I did not get an email when the alert triggered, resolved, or both?
-
-Check the alert's action group configuration to confirm your email is directly listed, or that a distribution list you're on is configured to receive notifications. If it is, check the distribution list configuration to confirm it can receive external emails. Also check whether your mail administrator has any policies configured that could cause this issue.
-
-### I did not receive the webhook notification?
-
-Check to ensure the application receiving the webhook notification is available, and successfully processes the webhook requests. See [this](../alerts/alerts-log-webhook.md) for more information.
-
-### I am getting 403 Forbidden errors, what does this mean?
-
-This error indicates that you need to add firewall exceptions to allow the availability agents to test your target url. For a full list of agent IP addresses to allow, consult the [IP exception article](./ip-addresses.md#availability-tests).
-
-### Intermittent test failure with a protocol violation error?
-
-The error ("protocol violation..CR must be followed by LF") indicates an issue with the server (or dependencies). This happens when malformed headers are set in the response. It can be caused by load balancers or CDNs. Specifically, some headers might not be using CRLF to indicate the end of line, which violates the HTTP specification and therefore fails validation at the .NET WebRequest level. Inspect the response to spot headers that might be in violation.
-
-> [!NOTE]
-> The URL may not fail on browsers that have a relaxed validation of HTTP headers. See this blog post for a detailed explanation of this issue: http://mehdi.me/a-tale-of-debugging-the-linkedin-api-net-and-http-protocol-violations/
-
-### I don't see any related server-side telemetry to diagnose test failures?
-
-If you have Application Insights set up for your server-side application, the missing telemetry may be because [sampling](./sampling.md) is in operation. Select a different availability result.
-
-### Can I call code from my web test?
-
-No. The steps of the test must be in the .webtest file, and you can't call other web tests or use loops. But there are several plug-ins that you might find helpful.
--
-### Is there a difference between "web tests" and "availability tests"?
-
-The two terms may be referenced interchangeably. Availability tests is a more generic term that includes the single URL ping tests in addition to the multi-step web tests.
-
-### I'd like to use availability tests on our internal server that runs behind a firewall.
-
- There are two possible solutions:
-
- * Configure your firewall to permit incoming requests from the [IP addresses
- of our web test agents](./ip-addresses.md).
- * Write your own code to periodically test your internal server. Run the code as a background process on a test server behind your firewall. Your test process can send its results to Application Insights by using the [TrackAvailability()](/dotnet/api/microsoft.applicationinsights.telemetryclient.trackavailability) API in the core SDK package, as in the sketch after this list. This requires your test server to have outgoing access to the Application Insights ingestion endpoint, but that is a much smaller security risk than the alternative of permitting incoming requests. The results will appear in the availability web tests blades, though the experience will be slightly simplified from what is available for tests created via the portal. Custom availability tests will also appear as availability results in Analytics, Search, and Metrics.
-
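-A minimal C# sketch of such a background test; the endpoint URL, test name, and run location are placeholders, and in practice the connection string would come from your own configuration:
-
-```csharp
-using System;
-using System.Diagnostics;
-using System.Net.Http;
-using Microsoft.ApplicationInsights;
-using Microsoft.ApplicationInsights.DataContracts;
-using Microsoft.ApplicationInsights.Extensibility;
-
-var config = TelemetryConfiguration.CreateDefault();
-config.ConnectionString = Environment.GetEnvironmentVariable("APPLICATIONINSIGHTS_CONNECTION_STRING");
-var client = new TelemetryClient(config);
-
-using var http = new HttpClient();
-var timer = Stopwatch.StartNew();
-bool success = false;
-try
-{
-    // Hypothetical internal endpoint behind your firewall
-    var response = await http.GetAsync("https://internal-server.contoso.local/health");
-    success = response.IsSuccessStatusCode;
-}
-catch (HttpRequestException) { /* endpoint unreachable: success stays false */ }
-timer.Stop();
-
-// Report the result as an availability test result
-client.TrackAvailability(new AvailabilityTelemetry
-{
-    Name = "internal-server-health",
-    RunLocation = "behind-firewall-test-server",
-    Success = success,
-    Duration = timer.Elapsed,
-    Timestamp = DateTimeOffset.UtcNow
-});
-client.Flush(); // make sure the result is sent before the process exits
-```
-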
-### Uploading a multi-step web test fails
-
-Some reasons this might happen:
- * There's a size limit of 300 K.
- * Loops aren't supported.
- * References to other web tests aren't supported.
- * Data sources aren't supported.
-
-### My multi-step test doesn't complete
-
-There's a limit of 100 requests per test. Also, the test is stopped if it runs longer than two minutes.
-
-### How can I run a test with client certificates?
-
-This is currently not supported.
-
-## Next steps
-
-* [Multi-step web testing](availability-multistep.md)
-* [URL ping tests](monitor-web-app-availability.md)
azure-monitor Usage Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/usage-troubleshoot.md
- Title: Troubleshoot user analytics tools - Application Insights
-description: Troubleshooting guide - analyzing site and app usage with Application Insights.
- Previously updated : 07/30/2021---
-# Troubleshoot user behavior analytics tools in Application Insights
-Have questions about the [user behavior analytics tools in Application Insights](usage-overview.md): [Users, Sessions, Events](usage-segmentation.md), [Funnels](usage-funnels.md), [User Flows](usage-flows.md), [Retention](usage-retention.md), or Cohorts? Here are some answers.
-
-## Counting Users
-**The user behavior analytics tools show that my app has one user/session, but I know my app has many users/sessions. How can I fix these incorrect counts?**
-
-All telemetry events in Application Insights have an [anonymous user ID](./data-model-context.md#anonymous-user-id) and a [session ID](./data-model-context.md#session-id) as two of their standard properties. By default, all of the usage analytics tools count users and sessions based on these IDs. If these standard properties aren't being populated with unique IDs for each user and session of your app, you'll see an incorrect count of users and sessions in the usage analytics tools.
-
-If you're monitoring a web app, the easiest solution is to add the [Application Insights JavaScript SDK](./javascript.md) to your app, and make sure the script snippet is loaded on each page you want to monitor. The JavaScript SDK automatically generates anonymous user and session IDs, then populates telemetry events with these IDs as they're sent from your app.
-
-If you're monitoring a web service (no user interface), [create a telemetry initializer that populates the anonymous user ID and session ID properties](./usage-overview.md) according to your service's notions of unique users and sessions.
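-
-A minimal C# sketch of such an initializer; the two helpers are hypothetical and stand in for however your service identifies a user and a session:
-
-```csharp
-using Microsoft.ApplicationInsights.Channel;
-using Microsoft.ApplicationInsights.Extensibility;
-
-public class UserSessionTelemetryInitializer : ITelemetryInitializer
-{
-    public void Initialize(ITelemetry telemetry)
-    {
-        if (string.IsNullOrEmpty(telemetry.Context.User.Id))
-        {
-            telemetry.Context.User.Id = GetUserIdForCurrentRequest();       // hypothetical helper
-        }
-        if (string.IsNullOrEmpty(telemetry.Context.Session.Id))
-        {
-            telemetry.Context.Session.Id = GetSessionIdForCurrentRequest(); // hypothetical helper
-        }
-    }
-
-    // Hypothetical helpers for illustration only
-    private static string GetUserIdForCurrentRequest() => "user-id-from-your-auth-context";
-    private static string GetSessionIdForCurrentRequest() => "session-id-from-your-request-scope";
-}
-```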
-
-If your app is sending [authenticated user IDs](./api-custom-events-metrics.md#authenticated-users), you can count based on authenticated user IDs in the Users tool. In the "Show" dropdown, choose "Authenticated users."
-
-The user behavior analytics tools don't currently support counting users or sessions based on properties other than anonymous user ID, authenticated user ID, or session ID.
-
-## Naming Events
-**My app has thousands of different page view and custom event names. It's hard to distinguish between them, and the user behavior analytics tools often become unresponsive. How can I fix these naming issues?**
-
-Page view and custom event names are used throughout the user behavior analytics tools. Naming events well is critical to getting value from these tools. The goal is a balance between having too few, overly generic names ("Button clicked") and having too many, overly specific names ("Edit button clicked on http:\//www.contoso.com/index").
-
-To make any changes to the page view and custom event names your app is sending, you need to change your app's source code and redeploy. **All telemetry data in Application Insights is stored for 90 days and cannot be deleted**, so changes you make to event names will take 90 days to fully manifest. For the 90 days after making name changes, both the old and new event names will show up in your telemetry, so adjust queries and communicate within your teams, accordingly.
-
-If your app is sending too many page view names, check whether these page view names are specified manually in code or if they're being sent automatically by the Application Insights JavaScript SDK:
-
-* If the page view names are manually specified in code using the [`trackPageView` API](https://github.com/Microsoft/ApplicationInsights-JS/blob/master/API-reference.md), change the name to be less specific. Avoid common mistakes like putting the URL in the name of the page view. Instead, use the URL parameter of the `trackPageView` API. Move other details from the page view name into custom properties.
-
-* If the Application Insights JavaScript SDK is automatically sending page view names, you can either change your pages' titles or switch to manually sending page view names. The SDK sends the [title](https://developer.mozilla.org/docs/Web/HTML/Element/title) of each page as the page view name, by default. You could change your titles to be more general, but be mindful of SEO and other impacts this change could have. Manually specifying page view names with the `trackPageView` API overrides the automatically collected names, so you could send more general names in telemetry without changing page titles.
-
-If your app is sending too many custom event names, change the name in the code to be less specific. Again, avoid putting URLs and other per-page or dynamic information in the custom event names directly. Instead, move these details into custom properties of the custom event with the `trackEvent` API. For example, instead of `appInsights.trackEvent("Edit button clicked on http://www.contoso.com/index")`, we suggest something like `appInsights.trackEvent("Edit button clicked", { "Source URL": "http://www.contoso.com/index" })`.
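-
-A JavaScript sketch of both patterns; `productId` is a hypothetical variable, and the exact signatures may vary by SDK version:
-
-```javascript
-// Keep the name general; move the URL and other specifics out of it
-appInsights.trackPageView({
-  name: "Product details",
-  uri: window.location.href,
-  properties: { "Product ID": productId }
-});
-
-// Same idea for custom events
-appInsights.trackEvent(
-  { name: "Edit button clicked" },
-  { "Source URL": window.location.href }
-);
-```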
-
-## Next steps
-
-* [User behavior analytics tools overview](usage-overview.md)
-
-## Get help
-* [Stack Overflow](https://stackoverflow.com/questions/tagged/ms-application-insights)
azure-monitor Container Insights Logging V2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-logging-v2.md
Title: Configure ContainerLogv2 schema (preview) for Container Insights
-description: Switch your ContainerLog table to the ContainerLogv2 schema
+ Title: Configure the ContainerLogV2 schema (preview) for Container Insights
+description: Switch your ContainerLog table to the ContainerLogV2 schema.
Last updated 05/11/2022
-# Enable ContainerLogV2 schema (preview)
-Azure Monitor Container Insights is now in Public Preview of new schema for container logs called ContainerLogV2. As part of this schema, there are new fields to make common queries to view AKS (Azure Kubernetes Service) and Azure Arc enabled Kubernetes data. In addition, this schema is compatible as a part of [Basic Logs](../logs/basic-logs-configure.md), which offer a low cost alternative to standard analytics logs.
+# Enable the ContainerLogV2 schema (preview)
+Azure Monitor Container insights is now in public preview of a new schema for container logs, called ContainerLogV2. As part of this schema, there are new fields to make common queries to view Azure Kubernetes Service (AKS) and Azure Arc-enabled Kubernetes data. In addition, this schema is compatible with [Basic Logs](../logs/basic-logs-configure.md), which offers a low-cost alternative to standard analytics logs.
-> [!NOTE]
-> The ContainerLogv2 schema is currently a preview feature, Container Insights does not yet support the "View in Analytics" option, however the data is still available when queried directly from the [Log Analytics](./container-insights-log-query.md) interface.
+The ContainerLogV2 schema is a preview feature. Container insights does not yet support the **View in Analytics** option, but the data is available when it's queried directly from the [Log Analytics](./container-insights-log-query.md) interface.
->[!NOTE]
->The new fields are:
->* ContainerName
->* PodName
->* PodNamespace
+The new fields are:
+* `ContainerName`
+* `PodName`
+* `PodNamespace`
## ContainerLogV2 schema

Azure Monitor Container Insights is now in Public Preview of new schema for cont

```kusto
 LogSource: string,
 TimeGenerated: datetime
```
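
A sample query against the new schema; the `LogMessage` column and the namespace value are illustrative assumptions, so check the column names against your own workspace:

```kusto
ContainerLogV2
| where PodNamespace == "kube-system"
| project TimeGenerated, PodName, ContainerName, LogMessage
```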
-## Enable ContainerLogV2 schema
-1. Customers can enable ContainerLogV2 schema at cluster level.
-2. To enable ContainerLogV2 schema, configure the cluster's configmap, Learn more about [configmap](https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/) in Kubernetes documentation & [Azure Monitor configmap](./container-insights-agent-config.md#configmap-file-settings-overview).
-3. Follow the instructions accordingly when configuring an existing ConfigMap or using a new one.
+## Enable the ContainerLogV2 schema
+Customers can enable the ContainerLogV2 schema at the cluster level. To enable the ContainerLogV2 schema, configure the cluster's ConfigMap. Learn more about ConfigMap in [Kubernetes documentation](https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/) and in [Azure Monitor documentation](./container-insights-agent-config.md#configmap-file-settings-overview).
+Follow the instructions to configure an existing ConfigMap or to use a new one.
-### Configuring an existing ConfigMap
-If your ConfigMap doesn't yet have the "[log_collection_settings.schema]" field, you'll need to append the following section in your existing ConfigMap yaml file:
+### Configure an existing ConfigMap
+If your ConfigMap doesn't yet have the `log_collection_settings.schema` field, you'll need to append the following section in your existing ConfigMap .yaml file:
```yaml
[log_collection_settings.schema]
- # In the absence of this configmap, default value for containerlog_schema_version is "v1"
+ # In the absence of this ConfigMap, the default value for containerlog_schema_version is "v1"
   # Supported values for this setting are "v1","v2"
   # See documentation at https://aka.ms/ContainerLogv2 for benefits of v2 schema over v1 schema before opting for "v2" schema
   containerlog_schema_version = "v2"
```
-### Configuring a new ConfigMap
-1. Download the new ConfigMap from [here](https://aka.ms/container-azm-ms-agentconfig). For the newly downloaded configmapdefault, the value for containerlog_schema_version is "v1"
-1. Update the "containerlog_schema_version = "v2""
+### Configure a new ConfigMap
+1. [Download the new ConfigMap](https://aka.ms/container-azm-ms-agentconfig). For the newly downloaded ConfigMap, the default value for `containerlog_schema_version` is `"v1"`.
+1. Update `containerlog_schema_version` to `"v2"`:
-```yaml
-[log_collection_settings.schema]
- # In the absence of this configmap, default value for containerlog_schema_version is "v1"
- # Supported values for this setting are "v1","v2"
- # See documentation at https://aka.ms/ContainerLogv2 for benefits of v2 schema over v1 schema before opting for "v2" schema
- containerlog_schema_version = "v2"
-```
+ ```yaml
+ [log_collection_settings.schema]
+ # In the absence of this ConfigMap, the default value for containerlog_schema_version is "v1"
+ # Supported values for this setting are "v1","v2"
+ # See documentation at https://aka.ms/ContainerLogv2 for benefits of v2 schema over v1 schema before opting for "v2" schema
+ containerlog_schema_version = "v2"
+ ```
-1. Once you have finished configuring the configmap, run the following kubectl command: kubectl apply -f `<configname>`
+3. After you finish configuring the ConfigMap, run the following kubectl command: `kubectl apply -f <configname>`.
->[!TIP]
->Example: kubectl apply -f container-azm-ms-agentconfig.yaml.
+ Example: `kubectl apply -f container-azm-ms-agentconfig.yaml`
>[!NOTE]
->* The configuration change can take a few minutes to complete before taking effect, all omsagent pods in the cluster will restart.
->* The restart is a rolling restart for all omsagent pods, it will not restart all of them at the same time.
+>* The configuration change can take a few minutes to complete before it takes effect. All OMS agent pods in the cluster will restart.
+>* The restart is a rolling restart for all OMS agent pods. It won't restart all of them at the same time.
## Next steps
-* Configure [Basic Logs](../logs/basic-logs-configure.md) for ContainerLogv2
+* Configure [Basic Logs](../logs/basic-logs-configure.md) for ContainerLogv2.
azure-monitor Activity Log https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/activity-log.md
Title: Azure Activity log
-description: View the Azure Activity log and send it to Azure Monitor Logs, Azure Event Hubs, and Azure Storage.
+ Title: Azure activity log
+description: View the Azure Monitor activity log and send it to Azure Monitor Logs, Azure Event Hubs, and Azure Storage.
Last updated 09/09/2021
-# Azure Activity log
-The Activity log is a [platform log](./platform-logs-overview.md) in Azure that provides insight into subscription-level events. Activity log includes such information as when a resource is modified or when a virtual machine is started. You can view the Activity log in the Azure portal or retrieve entries with PowerShell and CLI. This article provides details on viewing the Activity log and sending it to different destinations.
+# Azure Monitor activity log
-For more functionality, you should create a diagnostic setting to send the Activity log to one or more of these locations for the following reasons:
-- to [Azure Monitor Logs](../logs/data-platform-logs.md) for more complex querying and alerting, and longer retention (up to two years)
-- to Azure Event Hubs to forward outside of Azure
-- to Azure Storage for cheaper, long-term archiving
+The Azure Monitor activity log is a [platform log](./platform-logs-overview.md) in Azure that provides insight into subscription-level events. The activity log includes information like when a resource is modified or a virtual machine is started. You can view the activity log in the Azure portal or retrieve entries with PowerShell and the Azure CLI. This article provides information on how to view the activity log and send it to different destinations.
-See [Create diagnostic settings to send platform logs and metrics to different destinations](./diagnostic-settings.md) for details on creating a diagnostic setting.
+For more functionality, create a diagnostic setting to send the activity log to one or more of these locations for the following reasons:
+
+- Send to [Azure Monitor Logs](../logs/data-platform-logs.md) for more complex querying and alerting and for longer retention, up to two years.
+- Send to Azure Event Hubs to forward outside of Azure.
+- Send to Azure Storage for cheaper, long-term archiving.
+
+For details on how to create a diagnostic setting, see [Create diagnostic settings to send platform logs and metrics to different destinations](./diagnostic-settings.md).
> [!NOTE]
-> Entries in the Activity Log are system generated and cannot be changed or deleted.
+> Entries in the activity log are system generated and can't be changed or deleted.
+
+## Retention period
+
+Activity log events are retained in Azure for *90 days* and then deleted. There's no charge for entries during this time regardless of volume. For more functionality, such as longer retention, create a diagnostic setting and route the entries to another location based on your needs. See the criteria in the preceding section.
-## Retention Period
+## View the activity log
-Activity log events are retained in Azure for **90 days** and then deleted. There's no charge for entries during this time regardless of volume. For more functionality such as longer retention, you should create a diagnostic setting and route the entires to another location based on your needs. See the criteria in the earlier section of this article.
+You can access the activity log from most menus in the Azure portal. The menu that you open it from determines its initial filter. If you open it from the **Monitor** menu, the only filter is on the subscription. If you open it from a resource's menu, the filter is set to that resource. You can always change the filter to view all other entries. Select **Add Filter** to add more properties to the filter.
-## View the Activity log
-You can access the Activity log from most menus in the Azure portal. The menu that you open it from determines its initial filter. If you open it from the **Monitor** menu, then the only filter will be on the subscription. If you open it from a resource's menu, then the filter is set to that resource. You can always change the filter though to view all other entries. Select **Add Filter** to add more properties to the filter.
+![Screenshot that shows the activity log.](./media/activity-log/view-activity-log.png)
-![View Activity Log](./media/activity-log/view-activity-log.png)
+For a description of activity log categories, see [Azure activity log event schema](activity-log-schema.md#categories).
-For a description of Activity log categories see [Azure Activity Log event schema](activity-log-schema.md#categories).
+## Download the activity log
-## Download the Activity log
Select **Download as CSV** to download the events in the current view.
-![Download Activity log](media/activity-log/download-activity-log.png)
+![Screenshot that shows downloading the activity log.](media/activity-log/download-activity-log.png)
### View change history
-For some events, you can view the Change history, which shows what changes happened during that event time. Select an event from the Activity Log you want to look deeper into. Select the **Change history (Preview)** tab to view any associated changes with that event.
+For some events, you can view the change history, which shows what changes happened during that event time. Select an event from the activity log you want to look at more deeply. Select the **Change history (Preview)** tab to view any associated changes with that event.
-![Change history list for an event](media/activity-log/change-history-event.png)
+![Screenshot that shows the Change history list for an event.](media/activity-log/change-history-event.png)
-If there are any associated changes with the event, you'll see a list of changes that you can select. This opens up the **Change history (Preview)** page. On this page, you see the changes to the resource. In the following example, you can see not only that the VM changed sizes, but what the previous VM size was before the change and what it was changed to. To learn more about change history, see [Get resource changes](../../governance/resource-graph/how-to/get-resource-changes.md).
+If any changes are associated with the event, you'll see a list of changes that you can select. Selecting a change opens the **Change history (Preview)** page. This page displays the changes to the resource. In the following example, you can see that the VM changed sizes. The page displays the VM size before the change and after the change. To learn more about change history, see [Get resource changes](../../governance/resource-graph/how-to/get-resource-changes.md).
-![Change history page showing differences](media/activity-log/change-history-event-details.png)
+![Screenshot that shows the Change history page showing differences.](media/activity-log/change-history-event-details.png)
+### Other methods to retrieve activity log events
-### Other methods to retrieve Activity log events
-You can also access Activity log events using the following methods:
-
-- Use the [Get-AzLog](/powershell/module/az.monitor/get-azlog) cmdlet to retrieve the Activity Log from PowerShell. See [Azure Monitor PowerShell samples](../powershell-samples.md#retrieve-activity-log).
-- Use [az monitor activity-log](/cli/azure/monitor/activity-log) to retrieve the Activity Log from CLI. See [Azure Monitor CLI samples](../cli-samples.md#view-activity-log).
-- Use the [Azure Monitor REST API](/rest/api/monitor/) to retrieve the Activity Log from a REST client.
+You can also access activity log events by using the following methods:
+- Use the [Get-AzLog](/powershell/module/az.monitor/get-azlog) cmdlet to retrieve the activity log from PowerShell. See [Azure Monitor PowerShell samples](../powershell-samples.md#retrieve-activity-log).
+- Use [az monitor activity-log](/cli/azure/monitor/activity-log) to retrieve the activity log from the CLI. See [Azure Monitor CLI samples](../cli-samples.md#view-activity-log).
+- Use the [Azure Monitor REST API](/rest/api/monitor/) to retrieve the activity log from a REST client.
## Send to Log Analytics workspace
- Send the Activity log to a Log Analytics workspace to enable the features of [Azure Monitor Logs](../logs/data-platform-logs.md) which includes the following:
-- Correlate Activity log data with other monitoring data collected by Azure Monitor.
+ Send the activity log to a Log Analytics workspace to enable the [Azure Monitor Logs](../logs/data-platform-logs.md) feature, where you:
+
+- Correlate activity log data with other monitoring data collected by Azure Monitor.
- Consolidate log entries from multiple Azure subscriptions and tenants into one location for analysis together.
-- Use log queries to perform complex analysis and gain deep insights on Activity Log entries.
-- Use log alerts with Activity entries allowing for more complex alerting logic.
-- Store Activity log entries for longer than the Activity Log retention period.
-- No data ingestion charges for Activity log data stored in a Log Analytics workspace.
-- No data retention charges for the first 90 days for Activity log data stored in a Log Analytics workspace.
+- Use log queries to perform complex analysis and gain deep insights on activity log entries.
+- Use log alerts with Activity entries for more complex alerting logic.
+- Store activity log entries for longer than the activity log retention period.
+- Incur no data ingestion charges for activity log data stored in a Log Analytics workspace.
+- Incur no data retention charges for the first 90 days for activity log data stored in a Log Analytics workspace.
- Select **Export Activity Logs**.
+ Select **Export Activity Logs** to send the activity log to a Log Analytics workspace.
- ![Export activity logs](media/activity-log/diagnostic-settings-export.png)
+ ![Screenshot that shows exporting activity logs.](media/activity-log/diagnostic-settings-export.png)
-to send the Activity log to a Log Analytics workspace. You can send the Activity log from any single subscription to up to five workspaces.
+You can send the activity log from any single subscription to up to five workspaces.
-Activity log data in a Log Analytics workspace is stored in a table called *AzureActivity* that you can retrieve with a [log query](../logs/log-query-overview.md) in [Log Analytics](../logs/log-analytics-tutorial.md). The structure of this table varies depending on the [category of the log entry](activity-log-schema.md). For a description of the table properties, see the [Azure Monitor data reference](/azure/azure-monitor/reference/tables/azureactivity).
+Activity log data in a Log Analytics workspace is stored in a table called `AzureActivity` that you can retrieve with a [log query](../logs/log-query-overview.md) in [Log Analytics](../logs/log-analytics-tutorial.md). The structure of this table varies depending on the [category of the log entry](activity-log-schema.md). For a description of the table properties, see the [Azure Monitor data reference](/azure/azure-monitor/reference/tables/azureactivity).
-For example, to view a count of Activity log records for each category, use the following query:
+For example, to view a count of activity log records for each category, use the following query:
```kusto
AzureActivity
| summarize count() by CategoryValue
```

To retrieve entries for a specific category, such as Administrative, use:

```kusto
AzureActivity
| where CategoryValue == "Administrative"
```

## Send to Azure Event Hubs
-Send the Activity Log to Azure Event Hubs to send entries outside of Azure, for example to a third-party SIEM or other log analytics solutions. Activity log events from Event Hubs are consumed in JSON format with a `records` element containing the records in each payload. The schema depends on the category and is described in [Schema from Storage Account and Event Hubs](activity-log-schema.md).
-Following is sample output data from Event Hubs for an Activity log:
+Send the activity log to Azure Event Hubs to send entries outside of Azure, for example, to a third-party SIEM or other log analytics solutions. Activity log events from event hubs are consumed in JSON format with a `records` element that contains the records in each payload. The schema depends on the category and is described in [Azure activity log event schema](activity-log-schema.md).
+
+The following sample output data is from event hubs for an activity log:
``` JSON
{
Following is sample output data from Event Hubs for an Activity log:
}
```
-## Send to Azure storage
-Send the Activity Log to an Azure Storage Account if you want to retain your log data longer than 90 days for audit, static analysis, or backup. If you only must retain your events for 90 days or less you don't need to set up archival to a Storage Account, since Activity Log events are retained in the Azure platform for 90 days.
+## Send to Azure Storage
+
+Send the activity log to an Azure Storage account if you want to retain your log data longer than 90 days for audit, static analysis, or backup. If you're required to retain your events for 90 days or less, you don't need to set up archival to a storage account. Activity log events are retained in the Azure platform for 90 days.
-When you send the Activity log to Azure, a storage container is created in the Storage Account as soon as an event occurs. The blobs in the container use the following naming convention:
+When you send the activity log to Azure, a storage container is created in the storage account as soon as an event occurs. The blobs in the container use the following naming convention:
```
insights-activity-logs/resourceId=/SUBSCRIPTIONS/{subscription ID}/y={four-digit numeric year}/m={two-digit numeric month}/d={two-digit numeric day}/h={two-digit 24-hour clock hour}/m=00/PT1H.json
```
-For example, a particular blob might have a name similar to the following:
+For example, a particular blob might have a name similar to:
```
insights-logs-networksecuritygrouprulecounter/resourceId=/SUBSCRIPTIONS/00000000-0000-0000-0000-000000000000/y=2020/m=06/d=08/h=18/m=00/PT1H.json
```
-Each PT1H.json blob contains a JSON blob of events that occurred within the hour specified in the blob URL (for example, h=12). During the present hour, events are appended to the PT1H.json file as they occur. The minute value (m=00) is always 00, since resource log events are broken into individual blobs per hour.
+Each PT1H.json blob contains a JSON blob of events that occurred within the hour specified in the blob URL, for example, h=12. During the present hour, events are appended to the PT1H.json file as they occur. The minute value (m=00) is always 00 because resource log events are broken into individual blobs per hour.
-Each event is stored in the PT1H.json file with the following format that uses a common top-level schema but is otherwise unique for each category as described in [Activity log schema](activity-log-schema.md).
+Each event is stored in the PT1H.json file with the following format. This format uses a common top-level schema but is otherwise unique for each category, as described in [Activity log schema](activity-log-schema.md).
``` JSON
{
    "time": "2020-06-12T13:07:46.766Z",
    "resourceId": "/SUBSCRIPTIONS/00000000-0000-0000-0000-000000000000/RESOURCEGROUPS/MY-RESOURCE-GROUP/PROVIDERS/MICROSOFT.COMPUTE/VIRTUALMACHINES/MV-VM-01",
    "correlationId": "0f0cb6b4-804b-4129-b893-70aeeb63997e",
    "operationName": "Microsoft.Resourcehealth/healthevent/Updated/action",
    "level": "Information",
    "resultType": "Updated",
    "category": "ResourceHealth",
    "properties": {"eventCategory":"ResourceHealth","eventProperties":{"title":"This virtual machine is starting as requested by an authorized user or process. It will be online shortly.","details":"VirtualMachineStartInitiatedByControlPlane","currentHealthStatus":"Unknown","previousHealthStatus":"Unknown","type":"Downtime","cause":"UserInitiated"}}
}
```

## Legacy collection methods
-This section describes legacy methods for collecting the Activity log that were used prior to diagnostic settings. If you're using these methods, you should consider transitioning to diagnostic settings that provide better functionality and consistency with resource logs.
+
+This section describes legacy methods for collecting the activity log that were used prior to diagnostic settings. If you're using these methods, consider transitioning to diagnostic settings that provide better functionality and consistency with resource logs.
### Log profiles
-Log profiles are the legacy method for sending the Activity log to Azure storage or Event Hubs. Use the following procedure to continue working with a log profile or to disable it in preparation for migrating to a diagnostic setting.
-1. From the **Azure Monitor** menu in the Azure portal, select **Activity log**.
-3. Select **Export Activity Logs**.
+Log profiles are the legacy method for sending the activity log to storage or event hubs. Use the following procedure to continue working with a log profile or to disable it in preparation for migrating to a diagnostic setting.
- ![Export activity logs](media/activity-log/diagnostic-settings-export.png)
+1. From the **Azure Monitor** menu in the Azure portal, select **Activity log**.
+1. Select **Export Activity Logs**.
-4. Select the purple banner for the legacy experience.
+ ![Screenshot that shows exporting activity logs.](media/activity-log/diagnostic-settings-export.png)
- ![Legacy experience](media/activity-log/legacy-experience.png)
+1. Select the purple banner for the legacy experience.
+ ![Screenshot that shows the legacy experience.](media/activity-log/legacy-experience.png)
-### Configure log profile using PowerShell
+### Configure a log profile by using PowerShell
-If a log profile already exists, you first must remove the existing log profile and then create new one.
+If a log profile already exists, you first must remove the existing log profile and then create a new one.
-1. Use `Get-AzLogProfile` to identify if a log profile exists. If a log profile does exist, note the *name* property.
+1. Use `Get-AzLogProfile` to identify if a log profile exists. If a log profile exists, note the `Name` property.
-1. Use `Remove-AzLogProfile` to remove the log profile using the value from the *name* property.
+1. Use `Remove-AzLogProfile` to remove the log profile by using the value from the `Name` property.
```powershell
# For example, if the log profile name is 'default'
Remove-AzLogProfile -Name "default"
```
-3. Use `Add-AzLogProfile` to create a new log profile:
+1. Use `Add-AzLogProfile` to create a new log profile:
```powershell
Add-AzLogProfile -Name my_log_profile -StorageAccountId /subscriptions/s1/resourceGroups/myrg1/providers/Microsoft.Storage/storageAccounts/my_storage -serviceBusRuleId /subscriptions/s1/resourceGroups/Default-ServiceBus-EastUS/providers/Microsoft.ServiceBus/namespaces/mytestSB/authorizationrules/RootManageSharedAccessKey -Location global,westus,eastus -RetentionInDays 90 -Category Write,Delete,Action
```
| Property | Required | Description |
|:---|:---|:---|
| Name |Yes |Name of your log profile. |
- | StorageAccountId |No |Resource ID of the Storage Account where the Activity Log should be saved. |
- | serviceBusRuleId |No |Service Bus Rule ID for the Service Bus namespace you would like to have Event Hubs created in. This is a string with the format: `{service bus resource ID}/authorizationrules/{key name}`. |
- | Location |Yes |Comma-separated list of regions for which you would like to collect Activity Log events. |
- | RetentionInDays |Yes |Number of days for which events should be retained in the Storage Account, from 1 through 365. A value of zero stores the logs indefinitely. |
- | Category |No |Comma-separated list of event categories that should be collected. Possible values are _Write_, _Delete_, and _Action_. |
+ | StorageAccountId |No |Resource ID of the storage account where the activity log should be saved. |
+ | serviceBusRuleId |No |Service Bus Rule ID for the Service Bus namespace where you want to have event hubs created. This string has the format `{service bus resource ID}/authorizationrules/{key name}`. |
+ | Location |Yes |Comma-separated list of regions for which you want to collect activity log events. |
+ | RetentionInDays |Yes |Number of days for which events should be retained in the storage account, from 1 through 365. A value of zero stores the logs indefinitely. |
+ | Category |No |Comma-separated list of event categories to be collected. Possible values are Write, Delete, and Action. |
### Example script
-Following is a sample PowerShell script to create a log profile that writes the Activity Log to both a Storage Account and an Event Hub.
+
+The following sample PowerShell script is used to create a log profile that writes the activity log to both a storage account and an event hub.
```powershell
# Settings needed for the new log profile.
# (The values below are illustrative placeholders; substitute your own.)
$logProfileName = "default"
$locations = "global","westus","eastus"
$storageAccountId = "/subscriptions/<subscription ID>/resourceGroups/<resource group>/providers/Microsoft.Storage/storageAccounts/<storage account>"
$serviceBusRuleId = "/subscriptions/<subscription ID>/resourceGroups/<resource group>/providers/Microsoft.ServiceBus/namespaces/<namespace>/authorizationrules/RootManageSharedAccessKey"

Add-AzLogProfile -Name $logProfileName -Location $locations -StorageAccountId $storageAccountId -ServiceBusRuleId $serviceBusRuleId
```
-### Configure log profile using Azure CLI
+### Configure a log profile by using the Azure CLI
If a log profile already exists, you first must remove the existing log profile and then create a log profile.

1. Use `az monitor log-profiles list` to identify if a log profile exists.
-2. Use `az monitor log-profiles delete --name "<log profile name>` to remove the log profile using the value from the *name* property.
-3. Use `az monitor log-profiles create` to create a log profile:
+1. Use `az monitor log-profiles delete --name "<log profile name>"` to remove the log profile by using the value from the `name` property.
+1. Use `az monitor log-profiles create` to create a log profile:
```azurecli-interactive
az monitor log-profiles create --name "default" --location null --locations "global" "eastus" "westus" --categories "Delete" "Write" "Action" --enabled false --days 0 --service-bus-rule-id "/subscriptions/<YOUR SUBSCRIPTION ID>/resourceGroups/<RESOURCE GROUP NAME>/providers/Microsoft.EventHub/namespaces/<Event Hub NAME SPACE>/authorizationrules/RootManageSharedAccessKey"
```
| Property | Required | Description |
|:---|:---|:---|
| name |Yes |Name of your log profile. |
- | storage-account-id |Yes |Resource ID of the Storage Account to which Activity Logs should be saved. |
- | locations |Yes |Space-separated list of regions for which you would like to collect Activity Log events. You can view a list of all regions for your subscription using `az account list-locations --query [].name`. |
- | days |Yes |Number of days for which events should be retained, from 1 through 365. A value of zero will store the logs indefinitely (forever). If zero, then the enabled parameter should be set to false. |
- |enabled | Yes |True or False. Used to enable or disable the retention policy. If True, then the days parameter must be a value greater than 0.
+ | storage-account-id |Yes |Resource ID of the storage account to which activity logs should be saved. |
+ | locations |Yes |Space-separated list of regions for which you want to collect activity log events. View a list of all regions for your subscription by using `az account list-locations --query [].name`. |
+ | days |Yes |Number of days for which events should be retained, from 1 through 365. A value of zero stores the logs indefinitely (forever). If zero, then the enabled parameter should be set to False. |
+ |enabled | Yes |True or False. Used to enable or disable the retention policy. If True, then the `days` parameter must be a value greater than zero. |
| categories |Yes |Space-separated list of event categories that should be collected. Possible values are Write, Delete, and Action. |

### Log Analytics workspace
-The legacy method for sending the Activity log into a Log Analytics workspace is connecting the sign in the workspace configuration.
-1. From the **Log Analytics workspaces** menu in the Azure portal, select the workspace to collect the Activity Log.
-1. In the **Workspace Data Sources** section of the workspace's menu, select **Azure Activity log**.
-1. Select the subscription that you want to connect.
+The legacy method for sending the activity log into a Log Analytics workspace is to connect the log in the workspace configuration.
- ![Screenshot shows Log Analytics workspace with an Azure Activity log selected.](media/activity-log/workspaces.png)
+1. From the **Log Analytics workspaces** menu in the Azure portal, select the workspace to collect the activity log.
+1. In the **Workspace Data Sources** section of the workspace's menu, select **Azure Activity log**.
+1. Select the subscription that you want to connect to.
-2. Select **Connect** to connect the Activity sign in the subscription to the selected workspace. If the subscription is already connected to another workspace, select **Disconnect** first to disconnect it.
+ ![Screenshot that shows Log Analytics workspace with Azure Activity log selected.](media/activity-log/workspaces.png)
- ![Connect Workspaces](media/activity-log/connect-workspace.png)
+1. Select **Connect** to connect the activity log in the subscription to the selected workspace. If the subscription is already connected to another workspace, select **Disconnect** first to disconnect it.
+ ![Screenshot that shows connecting workspaces.](media/activity-log/connect-workspace.png)
-To disable the setting, perform the same procedure and select **Disconnect** to remove the subscription from the workspace.
+To disable the setting, follow the same procedure and select **Disconnect** to remove the subscription from the workspace.
### Data structure changes
-The Export activity logs experience, sends the same data as the legacy method used to send the Activity log with some changes to the structure of the *AzureActivity* table.
-The columns in the following table have been deprecated in the updated schema. They still exist in *AzureActivity* but they have no data. The replacements for these columns aren't new, but they contain the same data as the deprecated column. They are in a different format, so you might need to modify log queries that use them.
+The Export activity logs experience sends the same data as the legacy method used to send the activity log with some changes to the structure of the `AzureActivity` table.
-|Activity Log JSON | Log Analytics column name<br/>*(older deprecated)* | New Log Analytics column name | Notes |
+The columns in the following table have been deprecated in the updated schema. They still exist in `AzureActivity`, but they have no data. The replacements for these columns aren't new, but they contain the same data as the deprecated column. They're in a different format, so you might need to modify log queries that use them.
+
+|Activity log JSON | Log Analytics column name<br/>*(older deprecated)* | New Log Analytics column name | Notes |
|:---|:---|:---|:---|
|category | Category | CategoryValue ||
-|status<br/><br/>*values are (success, start, accept, failure)* |ActivityStatus <br/><br/>*values same as JSON* |ActivityStatusValue<br/><br/>*values change to (succeeded, started, accepted, failed)* |The valid values change as shown|
+|status<br/><br/>Values are success, start, accept, failure |ActivityStatus <br/><br/>Values same as JSON |ActivityStatusValue<br/><br/>Values change to succeeded, started, accepted, failed |The valid values change as shown.|
|subStatus |ActivitySubstatus |ActivitySubstatusValue||
-|operationName | OperationName | OperationNameValue |REST API localizes operation name value. Log Analytics UI always shows English. |
+|operationName | OperationName | OperationNameValue |REST API localizes the operation name value. Log Analytics UI always shows English. |
|resourceProviderName | ResourceProvider | ResourceProviderValue ||

> [!Important]
-> In some cases, the values in these columns may be in all uppercase. If you have a query that includes these columns, you should use the [=~ operator](/azure/kusto/query/datatypes-string-operators) to do a case insensitive comparison.
-The following columns have been added to *AzureActivity* in the updated schema:
+> In some cases, the values in these columns might be all uppercase. If you have a query that includes these columns, use the [=~ operator](/azure/kusto/query/datatypes-string-operators) to do a case-insensitive comparison.
+
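For example, a query that filtered on the deprecated `ActivityStatus` column can be rewritten against `ActivityStatusValue` with a case-insensitive match. The following is a minimal sketch; the time window and aggregation are illustrative:

```kusto
// Count succeeded operations by name over the last seven days.
// The =~ operator makes the status comparison case-insensitive.
AzureActivity
| where TimeGenerated > ago(7d)
| where ActivityStatusValue =~ "succeeded"
| summarize count() by OperationNameValue
```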
+The following columns have been added to `AzureActivity` in the updated schema:
- Authorization_d
- Claims_d
## Activity log insights
-Activity log insights let you view information about changes to resources and resource groups in a subscription. The dashboards also present data about which users or services performed activities in the subscription and the activities' status. This article explains how to view Activity log insights in the Azure portal.
+Activity log insights let you view information about changes to resources and resource groups in a subscription. The dashboards also present data about which users or services performed activities in the subscription and the activities' status. This article explains how to view activity log insights in the Azure portal.
-Before using Activity log insights, you'll have to [enable sending logs to your Log Analytics workspace](./diagnostic-settings.md).
+Before you use activity log insights, you must [enable sending logs to your Log Analytics workspace](./diagnostic-settings.md).
-### How does Activity log insights work?
+### How do activity log insights work?
-Activity logs you send to a [Log Analytics workspace](../logs/log-analytics-workspace-overview.md) are stored in a table called AzureActivity.
+Activity logs you send to a [Log Analytics workspace](../logs/log-analytics-workspace-overview.md) are stored in a table called `AzureActivity`.
-Activity log insights are a curated [Log Analytics workbook](../visualize/workbooks-overview.md) with dashboards that visualize the data in the AzureActivity table. For example, which administrators deleted, updated or created resources, and whether the activities failed or succeeded.
+Activity log insights are a curated [Log Analytics workbook](../visualize/workbooks-overview.md) with dashboards that visualize the data in the `AzureActivity` table. For example, data might include which administrators deleted, updated, or created resources and whether the activities failed or succeeded.
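To get a sense of the data these dashboards summarize, you can run a similar aggregation yourself in Log Analytics. The following query is a rough sketch, not the workbook's actual query:

```kusto
// Administrative operations per caller, split by outcome.
AzureActivity
| where CategoryValue == "Administrative"
| summarize Operations = count() by Caller, ActivityStatusValue
| order by Operations desc
```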
-### View Activity log insights - Resource group / Subscription level
+### View activity log insights: Resource group or subscription level
-To view Activity log insights on a resource group or a subscription level:
+To view activity log insights on a resource group or a subscription level:
1. In the Azure portal, select **Monitor** > **Workbooks**.
-1. Select **Activity Logs Insights** in the **Insights** section.
+1. In the **Insights** section, select **Activity Logs Insights**.
- :::image type="content" source="media/activity-log/open-activity-log-insights-workbook.png" lightbox= "media/activity-log/open-activity-log-insights-workbook.png" alt-text="A screenshot showing how to locate and open the Activity logs insights workbook on a scale level.":::
+ :::image type="content" source="media/activity-log/open-activity-log-insights-workbook.png" lightbox= "media/activity-log/open-activity-log-insights-workbook.png" alt-text="Screenshot that shows how to locate and open the Activity Logs Insights workbook on a scale level.":::
1. At the top of the **Activity Logs Insights** page, select:

   1. One or more subscriptions from the **Subscriptions** dropdown.
   1. Resources and resource groups from the **CurrentResource** dropdown.
   1. A time range for which to view data from the **TimeRange** dropdown.
-### View Activity log insights on any Azure resource
+
+### View activity log insights on any Azure resource
>[!Note]
-> * Currently Applications Insights resources are not supported for this workbook.
+> Currently, Application Insights resources aren't supported for this workbook.
-To view Activity log insights on a resource level:
+To view activity log insights on a resource level:
-1. In the Azure portal, go to your resource, select **Workbooks**.
-1. Select **Activity Logs Insights** in the **Activity Logs Insights** section.
+1. In the Azure portal, go to your resource and select **Workbooks**.
+1. In the **Activity Logs Insights** section, select **Activity Logs Insights**.
- :::image type="content" source="media/activity-log/activity-log-resource-level.png" lightbox= "media/activity-log/activity-log-resource-level.png" alt-text="A screenshot showing how to locate and open the Activity logs insights workbook on a resource level.":::
+ :::image type="content" source="media/activity-log/activity-log-resource-level.png" lightbox= "media/activity-log/activity-log-resource-level.png" alt-text="Screenshot that shows how to locate and open the Activity Logs Insights workbook on a resource level.":::
-1. At the top of the **Activity Logs Insights** page, select:
-
- 1. A time range for which to view data from the **TimeRange** dropdown.
- * **Azure Activity Log Entries** shows the count of Activity log records in each activity log category.
+1. At the top of the **Activity Logs Insights** page, select a time range for which to view data from the **TimeRange** dropdown:
+
+ * **Azure Activity Log Entries** shows the count of activity log records in each activity log category.
- :::image type="content" source="media/activity-log/activity-logs-insights-category-value.png" lightbox= "media/activity-log/activity-logs-insights-category-value.png" alt-text="Screenshot of Azure Activity Logs by Category Value":::
+ :::image type="content" source="media/activity-log/activity-logs-insights-category-value.png" lightbox= "media/activity-log/activity-logs-insights-category-value.png" alt-text="Screenshot that shows Azure activity logs by category value.":::
- * **Activity Logs by Status** shows the count of Activity log records in each status.
+ * **Activity Logs by Status** shows the count of activity log records in each status.
- :::image type="content" source="media/activity-log/activity-logs-insights-status.png" lightbox= "media/activity-log/activity-logs-insights-status.png" alt-text="Screenshot of Azure Activity Logs by Status":::
+ :::image type="content" source="media/activity-log/activity-logs-insights-status.png" lightbox= "media/activity-log/activity-logs-insights-status.png" alt-text="Screenshot that shows Azure activity logs by status.":::
- * At the subscription and resource group level, **Activity Logs by Resource** and **Activity Logs by Resource Provider** show the count of Activity log records for each resource and resource provider.
-
- :::image type="content" source="media/activity-log/activity-logs-insights-resource.png" lightbox= "media/activity-log/activity-logs-insights-resource.png" alt-text="Screenshot of Azure Activity Logs by Resource":::
+ * At the subscription and resource group level, **Activity Logs by Resource** and **Activity Logs by Resource Provider** show the count of activity log records for each resource and resource provider.
+ :::image type="content" source="media/activity-log/activity-logs-insights-resource.png" lightbox= "media/activity-log/activity-logs-insights-resource.png" alt-text="Screenshot that shows Azure activity logs by resource.":::
## Next steps

* [Read an overview of platform logs](./platform-logs-overview.md)
-* [Review Activity log event schema](activity-log-schema.md)
-* [Create diagnostic setting to send Activity logs to other destinations](./diagnostic-settings.md)
+* [Review activity log event schema](activity-log-schema.md)
+* [Create a diagnostic setting to send activity logs to other destinations](./diagnostic-settings.md)
azure-monitor Data Platform Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/data-platform-metrics.md
# Azure Monitor Metrics overview
-Azure Monitor Metrics is a feature of Azure Monitor that collects numeric data from [monitored resources](../monitor-reference.md) into a time series database. Metrics are numerical values that are collected at regular intervals and describe some aspect of a system at a particular time.
-Metrics in Azure Monitor are lightweight and capable of supporting near real-time scenarios, so they're useful for alerting and fast detection of issues. You can analyze them interactively by using Metrics Explorer, be proactively notified with an alert when a value crosses a threshold, or visualize them in a workbook or dashboard.
+Azure Monitor Metrics is a feature of Azure Monitor that collects numeric data from [monitored resources](../monitor-reference.md) into a time-series database. Metrics are numerical values that are collected at regular intervals and describe some aspect of a system at a particular time.
+
+Metrics in Azure Monitor are lightweight and capable of supporting near-real-time scenarios. For these reasons, they're useful for alerting and fast detection of issues. You can:
+
+- Analyze them interactively by using Metrics Explorer.
+- Be proactively notified with an alert when a value crosses a threshold.
+- Visualize them in a workbook or dashboard.
> [!NOTE]
-> Azure Monitor Metrics is one half of the data platform that supports Azure Monitor. The other is [Azure Monitor Logs](../logs/data-platform-logs.md), which collects and organizes log and performance data and allows that data to be analyzed with a rich query language.
+> Azure Monitor Metrics is one half of the data platform that supports Azure Monitor. The other half is [Azure Monitor Logs](../logs/data-platform-logs.md), which collects and organizes log and performance data. You can analyze that data by using a rich query language.
>
-> The Metrics feature can only store numeric data in a particular structure, whereas the Logs feature can store a variety of datatypes (each with its own structure). You can also perform complex analysis on log data by using log queries, which you can't use for analysis of metric data.
+> The Azure Monitor Metrics feature can only store numeric data in a particular structure. The Azure Monitor Logs feature can store a variety of datatypes, each with its own structure. You can also perform complex analysis on log data by using log queries, which you can't use for analysis of metric data.
## What can you do with Azure Monitor Metrics?
-The following table lists the ways that you can use the Metrics feature in Azure Monitor.
-| | Description |
+The following table lists the ways that you can use the Azure Monitor Metrics feature.
+
+| Uses | Description |
|:---|:---|
-| **Analyze** | Use [Metrics Explorer](metrics-charts.md) to analyze collected metrics on a chart and compare metrics from various resources. |
-| **Alert** | Configure a [metric alert rule](../alerts/alerts-metric.md) that sends a notification or takes [automated action](../alerts/action-groups.md) when the metric value crosses a threshold. |
-| **Visualize** | Pin a chart from Metrics Explorer to an [Azure dashboard](../app/tutorial-app-dashboards.md).<br>Create a [workbook](../visualize/workbooks-overview.md) to combine with multiple sets of data in an interactive report. Export the results of a query to [Grafana](../visualize/grafana-plugin.md) to use its dashboarding and combine with other data sources. |
-| **Automate** | Use [Autoscale](../autoscale/autoscale-overview.md) to increase or decrease resources based on a metric value crossing a threshold. |
-| **Retrieve** | Access metric values from a:<ul><li>Command line via the [Azure CLI](/cli/azure/monitor/metrics) or [Azure PowerShell cmdlets](/powershell/module/az.monitor).</li><li>Custom app via the [REST API](./rest-api-walkthrough.md) or client library for [.NET](/dotnet/api/overview/azure/Monitor.Query-readme), [Java](/java/api/overview/azure/monitor-query-readme), [JavaScript](/javascript/api/overview/azure/monitor-query-readme), or [Python](/python/api/overview/azure/monitor-query-readme).</li></ul> |
-| **Export** | [Route metrics to logs](./resource-logs.md#send-to-azure-storage) to analyze data in Azure Monitor Metrics together with data in Azure Monitor Logs and to store metric values for longer than 93 days.<br>Stream metrics to an [event hub](./stream-monitoring-data-event-hubs.md) to route them to external systems. |
-| **Archive** | [Archive](./platform-logs-overview.md) the performance or health history of your resource for compliance, auditing, or offline reporting purposes. |
+| Analyze | Use [Metrics Explorer](metrics-charts.md) to analyze collected metrics on a chart and compare metrics from various resources. |
+| Alert | Configure a [metric alert rule](../alerts/alerts-metric.md) that sends a notification or takes [automated action](../alerts/action-groups.md) when the metric value crosses a threshold. |
+| Visualize | Pin a chart from Metrics Explorer to an [Azure dashboard](../app/tutorial-app-dashboards.md).<br>Create a [workbook](../visualize/workbooks-overview.md) to combine with multiple sets of data in an interactive report. <br>Export the results of a query to [Grafana](../visualize/grafana-plugin.md) to use its dashboards and combine with other data sources. |
+| Automate | Use [Autoscale](../autoscale/autoscale-overview.md) to increase or decrease resources based on a metric value crossing a threshold. |
+| Retrieve | Access metric values from a:<ul><li>Command line via the [Azure CLI](/cli/azure/monitor/metrics) or [Azure PowerShell cmdlets](/powershell/module/az.monitor).</li><li>Custom app via the [REST API](./rest-api-walkthrough.md) or client library for [.NET](/dotnet/api/overview/azure/Monitor.Query-readme), [Java](/java/api/overview/azure/monitor-query-readme), [JavaScript](/javascript/api/overview/azure/monitor-query-readme), or [Python](/python/api/overview/azure/monitor-query-readme).</li></ul> |
+| Export | [Route metrics to logs](./resource-logs.md#send-to-azure-storage) to analyze data in Azure Monitor Metrics together with data in Azure Monitor Logs and to store metric values for longer than 93 days.<br>Stream metrics to an [event hub](./stream-monitoring-data-event-hubs.md) to route them to external systems. |
+| Archive | [Archive](./platform-logs-overview.md) the performance or health history of your resource for compliance, auditing, or offline reporting purposes. |
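As an illustration of the Export row: after platform metrics are routed to a Log Analytics workspace, they land in the `AzureMetrics` table and can be queried next to log data. This is a minimal sketch; the metric name assumes a virtual machine resource:

```kusto
// Average CPU per resource in five-minute bins over the last day.
AzureMetrics
| where TimeGenerated > ago(1d)
| where MetricName == "Percentage CPU"
| summarize AvgCpu = avg(Average) by bin(TimeGenerated, 5m), Resource
```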
![Diagram that shows sources and uses of metrics.](media/data-platform-metrics/metrics-overview.png)

## Data collection
-Azure Monitor collects metrics from the following sources. After these metrics are collected in the Azure Monitor metric database, they can be evaluated together regardless of their source.
-- **Azure resources**. Platform metrics are created by Azure resources and give you visibility into their health and performance. Each type of resource creates a [distinct set of metrics](./metrics-supported.md) without any configuration required. Platform metrics are collected from Azure resources at one-minute frequency unless specified otherwise in the metric's definition.
-- **Applications**. Application Insights creates metrics for your monitored applications to help you detect performance issues and track trends in how your application is being used. Values include _Server response time_ and _Browser exceptions_.
+Azure Monitor collects metrics from the following sources. After these metrics are collected in the Azure Monitor metric database, they can be evaluated together regardless of their source:
-- **Virtual machine agents**. Metrics are collected from the guest operating system of a virtual machine. You can enable guest OS metrics for Windows virtual machines by using the [Windows diagnostic extension](../agents/diagnostics-extension-overview.md) and for Linux virtual machines by using the [InfluxData Telegraf agent](https://www.influxdata.com/time-series-platform/telegraf/).
-- **Custom metrics**. You can define metrics in addition to the standard metrics that are automatically available. You can [define custom metrics in your application](../app/api-custom-events-metrics.md) that's monitored by Application Insights or create custom metrics for an Azure service by using the [custom metrics API](./metrics-store-custom-rest-api.md).
+- **Azure resources**: Platform metrics are created by Azure resources and give you visibility into their health and performance. Each type of resource creates a [distinct set of metrics](./metrics-supported.md) without any configuration required. Platform metrics are collected from Azure resources at one-minute frequency unless specified otherwise in the metric's definition.
+- **Applications**: Application Insights creates metrics for your monitored applications to help you detect performance issues and track trends in how your application is being used. Values include _Server response time_ and _Browser exceptions_.
+- **Virtual machine agents**: Metrics are collected from the guest operating system of a virtual machine. You can enable guest OS metrics for Windows virtual machines by using the [Windows diagnostic extension](../agents/diagnostics-extension-overview.md) and for Linux virtual machines by using the [InfluxData Telegraf agent](https://www.influxdata.com/time-series-platform/telegraf/).
+- **Custom metrics**: You can define metrics in addition to the standard metrics that are automatically available. You can [define custom metrics in your application](../app/api-custom-events-metrics.md) that's monitored by Application Insights. You can also create custom metrics for an Azure service by using the [custom metrics API](./metrics-store-custom-rest-api.md).
For a complete list of data sources that can send data to Azure Monitor Metrics, see [What is monitored by Azure Monitor?](../monitor-reference.md).

## Metrics Explorer

Use [Metrics Explorer](metrics-charts.md) to interactively analyze the data in your metric database and chart the values of multiple metrics over time. You can pin the charts to a dashboard to view them with other visualizations. You can also retrieve metrics by using the [Azure monitoring REST API](./rest-api-walkthrough.md).
-![Screenshot of an example graph in Metrics Explorer that shows server requests, server response time, and failed requests.](media/data-platform-metrics/metrics-explorer.png)
+![Screenshot that shows an example graph in Metrics Explorer that displays server requests, server response time, and failed requests.](media/data-platform-metrics/metrics-explorer.png)
For more information, see [Getting started with Azure Monitor Metrics Explorer](./metrics-getting-started.md).

## Data structure

Data that Azure Monitor Metrics collects is stored in a time-series database that's optimized for analyzing time-stamped data. Each set of metric values is a time series with the following properties:
-* The time that the value was collected.
+* The time when the value was collected.
* The resource that the value is associated with.
* A namespace that acts like a category for the metric.
* A metric name.
* The value itself.
-* [Multiple dimensions](#multi-dimensional-metrics) when they're present. Note that custom metrics are limited to 10 dimensions.
+* [Multiple dimensions](#multi-dimensional-metrics) when they're present. Custom metrics are limited to 10 dimensions.
## Multi-dimensional metrics
-One of the challenges to metric data is that it often has limited information to provide context for collected values. Azure Monitor addresses this challenge with multi-dimensional metrics.
-Dimensions of a metric are name/value pairs that carry additional data to describe the metric value. For example, a metric called _Available disk space_ might have a dimension called _Drive_ with values _C:_ and _D:_. That dimension would allow viewing available disk space across all drives or for each drive individually.
+One of the challenges to metric data is that it often has limited information to provide context for collected values. Azure Monitor addresses this challenge with multi-dimensional metrics.
+
+Dimensions of a metric are name/value pairs that carry more data to describe the metric value. For example, a metric called _Available disk space_ might have a dimension called _Drive_ with values _C:_ and _D:_. That dimension would allow viewing available disk space across all drives or for each drive individually.
The following example illustrates two datasets for a hypothetical metric called _Network throughput_. The first dataset has no dimensions. The second dataset shows the values with two dimensions, _IP_ and _Direction_.

### Network throughput
-| Timestamp | Metric Value |
+| Timestamp | Metric value |
| --- |:--- |
| 8/9/2017 8:14 | 1,331.8 Kbps |
| 8/9/2017 8:15 | 1,141.4 Kbps |
### Network throughput and two dimensions ("IP" and "Direction")
-| Timestamp | Dimension "IP" | Dimension "Direction" | Metric Value|
+| Timestamp | Dimension "IP" | Dimension "Direction" | Metric value|
| --- |:--- |:--- |:--- |
| 8/9/2017 8:14 | IP="192.168.5.2" | Direction="Send" | 646.5 Kbps |
| 8/9/2017 8:14 | IP="192.168.5.2" | Direction="Receive" | 420.1 Kbps |
| 8/9/2017 8:15 | IP="10.24.2.15" | Direction="Send" | 155.0 Kbps |
| 8/9/2017 8:15 | IP="10.24.2.15" | Direction="Receive" | 100.1 Kbps |
-This metric can answer questions such as "What was the network throughput for each IP address?" and "How much data was sent versus received?" Multi-dimensional metrics carry additional analytical and diagnostic value compared to nondimensional metrics.
+This metric can answer questions such as "What was the network throughput for each IP address?" and "How much data was sent versus received?" Multi-dimensional metrics carry more analytical and diagnostic value compared to nondimensional metrics.
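If this dataset were loaded into a Log Analytics table, those questions become simple aggregations. The following sketch assumes a hypothetical table named `NetworkThroughput` with columns matching the example above; it isn't a built-in Azure Monitor table:

```kusto
// NetworkThroughput is a hypothetical table used for illustration only.
NetworkThroughput
| summarize AvgKbps = avg(MetricValueKbps) by IP, Direction
```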
+
+### View multi-dimensional performance counter metrics in Metrics Explorer
-### Viewing multi-dimensional performance counter metrics in Metrics Explorer
It's not possible to send performance counter metrics that contain an asterisk (\*) to Azure Monitor via the Classic Guest Metrics API. This API can't display metrics that contain an asterisk because it's a multi-dimensional metric, which classic metrics don't support.

To configure and view multi-dimensional guest OS performance counter metrics by using the Azure Diagnostic extension:

1. Go to the **Diagnostic settings** page for your virtual machine.
-2. Select the **Performance counters** tab.
-3. Select **Custom** to configure the performance counters that you want to collect.
+1. Select the **Performance counters** tab.
+1. Select **Custom** to configure the performance counters that you want to collect.
- ![Screenshot of the performance counters section of the diagnostic settings page.](media/data-platform-metrics/azure-monitor-perf-counter.png)
+ ![Screenshot that shows the performance counters section of the Diagnostic settings page.](media/data-platform-metrics/azure-monitor-perf-counter.png)
-4. Select **Sinks**. Then select **Enabled** to send your data to Azure Monitor.
+1. Select **Sinks**. Then select **Enabled** to send your data to Azure Monitor.
- ![Screenshot of the sinks section of the diagnostic settings page.](media/data-platform-metrics/azure-monitor-sink.png)
+ ![Screenshot that shows the Sinks section of the Diagnostic settings page.](media/data-platform-metrics/azure-monitor-sink.png)
-5. To view your metric in Azure Monitor, select **Virtual Machine Guest** in the **Metric Namespace** dropdown list.
+1. To view your metric in Azure Monitor, select **Virtual Machine Guest** in the **Metric Namespace** dropdown.
- ![Screenshot of metric namespace.](media/data-platform-metrics/vm-guest-namespace.png)
+ ![Screenshot that shows the Metric Namespace dropdown.](media/data-platform-metrics/vm-guest-namespace.png)
-6. Select **Apply splitting** and fill in the details to split the metric by instance. You can then see the metric broken down by each of the possible values represented by the asterisk in the configuration. In this example, the asterisk represents the logical disk volumes plus the total.
+1. Select **Apply splitting** and fill in the details to split the metric by instance. You can then see the metric broken down by each of the possible values represented by the asterisk in the configuration. In this example, the asterisk represents the logical disk volumes plus the total.
- ![Screenshot of splitting a metric by instance.](media/data-platform-metrics/split-by-instance.png)
+ ![Screenshot that shows splitting a metric by instance.](media/data-platform-metrics/split-by-instance.png)
## Retention of metrics

For most resources in Azure, platform metrics are stored for 93 days. There are some exceptions:

-- **Classic guest OS metrics**: These are performance counters collected by the [Windows diagnostic extension](../agents/diagnostics-extension-overview.md) or the [Linux diagnostic extension](../../virtual-machines/extensions/diagnostics-linux.md) and routed to an Azure storage account. Retention for these metrics is guaranteed to be at least 14 days, though no expiration date is written to the storage account.
+- **Classic guest OS metrics**: These performance counters are collected by the [Windows diagnostic extension](../agents/diagnostics-extension-overview.md) or the [Linux diagnostic extension](../../virtual-machines/extensions/diagnostics-linux.md) and routed to an Azure Storage account. Retention for these metrics is guaranteed to be at least 14 days, although no expiration date is written to the storage account.
- For performance reasons, the portal limits how much data it displays based on volume. Therefore, the actual number of days that the portal retrieves can be longer than 14 days if the volume of data being written is not large.
+ For performance reasons, the portal limits how much data it displays based on volume. So, the actual number of days that the portal retrieves can be longer than 14 days if the volume of data being written isn't large.
-- **Guest OS metrics sent to Azure Monitor Metrics**: These are performance counters collected by the [Windows diagnostic extension](../agents/diagnostics-extension-overview.md) and sent to the [Azure Monitor data sink](../agents/diagnostics-extension-overview.md#data-destinations), or the [InfluxData Telegraf agent](https://www.influxdata.com/time-series-platform/telegraf/) on Linux machines, or the newer [Azure Monitor agent](../agents/azure-monitor-agent-overview.md) via data-collection rules. Retention for these metrics is 93 days.
+- **Guest OS metrics sent to Azure Monitor Metrics**: These performance counters are collected by the [Windows diagnostic extension](../agents/diagnostics-extension-overview.md) and sent to the [Azure Monitor data sink](../agents/diagnostics-extension-overview.md#data-destinations), or the [InfluxData Telegraf agent](https://www.influxdata.com/time-series-platform/telegraf/) on Linux machines, or the newer [Azure Monitor agent](../agents/azure-monitor-agent-overview.md) via data-collection rules. Retention for these metrics is 93 days.
-- **Guest OS metrics collected by the Log Analytics agent**: These are performance counters collected by the Log Analytics agent and sent to a Log Analytics workspace. Retention for these metrics is 31 days and can be extended up to 2 years.
+- **Guest OS metrics collected by the Log Analytics agent**: These performance counters are collected by the Log Analytics agent and sent to a Log Analytics workspace. Retention for these metrics is 31 days and can be extended up to 2 years.
-- **Application Insights log-based metrics**. Behind the scenes, [log-based metrics](../app/pre-aggregated-metrics-log-metrics.md) translate into log queries. Their retention is variable and matches the retention of events in underlying logs (31 days to 2 years). For Application Insights resources, logs are stored for 90 days.
+- **Application Insights log-based metrics**: Behind the scenes, [log-based metrics](../app/pre-aggregated-metrics-log-metrics.md) translate into log queries. Their retention is variable and matches the retention of events in underlying logs, which is 31 days to 2 years. For Application Insights resources, logs are stored for 90 days.
> [!NOTE]
> You can [send platform metrics for Azure Monitor resources to a Log Analytics workspace](./resource-logs.md#send-to-azure-storage) for long-term trending.
-> [!NOTE]
-> As mentioned earlier, for most resources in Azure, platform metrics are stored for 93 days. However, you can only query (in the **Metrics** tile) for a maximum of 30 days worth of data on any single chart. This limitation doesn't apply to log-based metrics.
->
-> If you see a blank chart or your chart displays only part of metric data, verify that the difference between start and end dates in the time picker doesn't exceed the 30-day interval. After you've selected a 30-day interval, you can [pan](./metrics-charts.md#pan) the chart to view the full retention window.
+As mentioned earlier, for most resources in Azure, platform metrics are stored for 93 days. However, you can only query (in the **Metrics** tile) for a maximum of 30 days' worth of data on any single chart. This limitation doesn't apply to log-based metrics.
+
+If you see a blank chart or your chart displays only part of metric data, verify that the difference between start and end dates in the time picker doesn't exceed the 30-day interval. After you've selected a 30-day interval, you can [pan](./metrics-charts.md#pan) the chart to view the full retention window.
## Next steps
azure-monitor Monitor Azure Resource https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/monitor-azure-resource.md
Title: Monitor Azure resources with Azure Monitor | Microsoft Docs
-description: Describes how to collect and analyze monitoring data from resources in Azure using Azure Monitor.
+description: This article describes how to collect and analyze monitoring data from resources in Azure by using Azure Monitor.
Last updated 09/15/2021
# Tutorial: Monitor Azure resources with Azure Monitor
-When you have critical applications and business processes relying on Azure resources, you want to monitor those resources for their availability, performance, and operation. This monitoring is provided by Azure Monitor, which is a full stack monitoring service in Azure that provides a complete set of features to monitor your Azure resources in addition to resources in other clouds and on-premises.
-In this tutorial, you learn:
+When you have critical applications and business processes that rely on Azure resources, you want to monitor those resources for their availability, performance, and operation. Azure Monitor is a full-stack monitoring service that provides a complete set of features to monitor your Azure resources. You can also use Azure Monitor to monitor resources in other clouds and on-premises.
+
+In this tutorial, you learn about:
> [!div class="checklist"]
-> * What Azure Monitor is and how it's integrated into the portal for other Azure services
-> * The types of data collected by Azure Monitor for Azure resources
-> * Azure Monitor tools used to collect and analyze data
+> * Azure Monitor and how it's integrated into the portal for other Azure services.
+> * The types of data collected by Azure Monitor for Azure resources.
+> * Azure Monitor tools that are used to collect and analyze data.
> [!NOTE]
> This tutorial describes Azure Monitor concepts and walks you through different menu items. To jump right into using Azure Monitor features, start with [Tutorial: Analyze metrics for an Azure resource](../essentials/tutorial-metrics.md).

## Monitoring data
+This section discusses collecting and monitoring data.
+ ### Azure Monitor data collection
-As soon as you create an Azure resource, Azure Monitor is enabled and starts collecting metrics and activity logs. With some configuration, you can gather additional monitoring data and enable additional features. The Azure Monitor data platform is made up of Metrics and Logs. Each collects different kinds of data and enables different Azure Monitor features.
-- [Azure Monitor Metrics](../essentials/data-platform-metrics.md) stores numeric data from monitored resources into a time series database. The metric database is automatically created for each Azure subscription. Use [metrics explorer](../essentials/tutorial-metrics.md) to analyze data from Azure Monitor Metrics.-- [Azure Monitor Logs](../logs/data-platform-logs.md) collects logs and performance data where they can be retrieved and analyzed in a different ways using log queries. You must create a Log Analytics workspace to collect log data. Use [Log Analytics](../logs/log-analytics-tutorial.md) to analyze data from Azure Monitor Logs.
+As soon as you create an Azure resource, Azure Monitor is enabled and starts collecting metrics and activity logs. With some configuration, you can gather more monitoring data and enable other features. The Azure Monitor data platform is made up of Metrics and Logs. Each feature collects different kinds of data and enables different Azure Monitor features.
+
+- [Azure Monitor Metrics](../essentials/data-platform-metrics.md) stores numeric data from monitored resources into a time-series database. The metric database is automatically created for each Azure subscription. Use [Metrics Explorer](../essentials/tutorial-metrics.md) to analyze data from Azure Monitor Metrics.
+- [Azure Monitor Logs](../logs/data-platform-logs.md) collects logs and performance data where they can be retrieved and analyzed in different ways by using log queries. You must create a Log Analytics workspace to collect log data. Use [Log Analytics](../logs/log-analytics-tutorial.md) to analyze data from Azure Monitor Logs.
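For example, a basic log query in Log Analytics against the activity log data collected in a workspace might look like the following minimal sketch:

```kusto
// Ten most recent activity log entries collected in the workspace.
AzureActivity
| where TimeGenerated > ago(24h)
| top 10 by TimeGenerated desc
```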
### <a id="monitoring-data-from-azure-resources"></a> Monitor data from Azure resources

While resources from different Azure services have different monitoring requirements, they generate monitoring data in the same formats so that you can use the same Azure Monitor tools to analyze all Azure resources. Diagnostic settings define where resource logs and metrics for a particular resource should be sent. Possible destinations are:

-- [Activity log](./platform-logs-overview.md) - Subscription level events that track operations for each Azure resource, for example creating a new resource or starting a virtual machine. Activity log events are automatically generated and collected for viewing in the Azure portal. You can create a diagnostic setting to send the Activity log to Azure Monitor Logs.
-- [Platform metrics](../essentials/data-platform-metrics.md) - Numerical values that are automatically collected at regular intervals and describe some aspect of a resource at a particular time. Platform metrics are automatically generated and collected in Azure Monitor Metrics.
-- [Resource logs](./platform-logs-overview.md) - Provide insight into operations that were performed by an Azure resource, for example getting a secret from a Key Vault or making a request to a database. Resource logs are generated automatically, but you must create a diagnostic setting to send them to Azure Monitor Logs.
-- [Virtual machine guest metrics and logs]() - Performance and log data from the guest operating system of Azure virtual machines. You must install an agent on the virtual machine to collect this data and send it to Azure Monitor Metrics and Azure Monitor Logs.
+- [Activity log](./platform-logs-overview.md): Subscription-level events that track operations for each Azure resource, for example, creating a new resource or starting a virtual machine. Activity log events are automatically generated and collected for viewing in the Azure portal. You can create a diagnostic setting to send the activity log to Azure Monitor Logs.
+- [Platform metrics](../essentials/data-platform-metrics.md): Numerical values that are automatically collected at regular intervals and describe some aspect of a resource at a particular time. Platform metrics are automatically generated and collected in Azure Monitor Metrics.
+- [Resource logs](./platform-logs-overview.md): Provide insight into operations that were performed by an Azure resource. Operation examples might be getting a secret from a key vault or making a request to a database. Resource logs are generated automatically, but you must create a diagnostic setting to send them to Azure Monitor Logs.
+- [Virtual machine guest metrics and logs](): Performance and log data from the guest operating system of Azure virtual machines. You must install an agent on the virtual machine to collect this data and send it to Azure Monitor Metrics and Azure Monitor Logs.
## Menu options
-While you can access Azure Monitor features from the **Monitor** menu in the Azure portal, Azure Monitor features can be accessed directly from the menu for different Azure services. While different Azure services may have slightly different experiences, they share a common set of monitoring options in the Azure portal. This includes **Overview** and **Activity log** and multiple options in the **Monitoring** section of the menu.
+You can access Azure Monitor features from the **Monitor** menu in the Azure portal. You can also access Azure Monitor features directly from the menu for different Azure services. Different Azure services might have slightly different experiences, but they share a common set of monitoring options in the Azure portal. These menu items include **Overview** and **Activity log** and multiple options in the **Monitoring** section of the menu.
## Overview page
-The **Overview** page includes details about the resource and often its current state. For example, a virtual machine will show its current running state. Many Azure services will have a **Monitoring** tab that includes charts for a set of key metrics. This is a quick way to view the operation of the resource, and you can click on any of the charts to open them in metrics explorer for more detailed analysis.
-See [Tutorial: Analyze metrics for an Azure resource](../essentials/tutorial-metrics.md) for a tutorial on using metrics explorer.
+The **Overview** page includes details about the resource and often its current state. For example, a virtual machine shows its current running state. Many Azure services have a **Monitoring** tab that includes charts for a set of key metrics. Charts are a quick way to view the operation of the resource. You can select any of the charts to open them in Metrics Explorer for more detailed analysis.
-![Overview page](media/monitor-azure-resource/overview-page.png)
-### Activity log
-The **Activity log** menu item lets you view entries in the [activity log](../essentials/activity-log.md) for the current resource.
+For a tutorial on using Metrics Explorer, see [Tutorial: Analyze metrics for an Azure resource](../essentials/tutorial-metrics.md).
+![Screenshot that shows the Overview page.](media/monitor-azure-resource/overview-page.png)
+
+### Activity log
+
+The **Activity log** menu item lets you view entries in the [activity log](../essentials/activity-log.md) for the current resource.
## Alerts
-The **Alerts** page will show you any recent alerts that have been fired for the resource. Alerts proactively notify you when important conditions are found in your monitoring data and can use data from either Metrics or Logs.
-See [Tutorial: Create a metric alert for an Azure resource](../alerts/tutorial-metric-alert.md) or [Tutorial: Create a log query alert for an Azure resource](../alerts/tutorial-log-alert.md) for tutorials on create alert rules and viewing alerts.
+The **Alerts** page shows you any recent alerts that were fired for the resource. Alerts proactively notify you when important conditions are found in your monitoring data and can use data from either Metrics or Logs.
+For tutorials on how to create alert rules and view alerts, see [Tutorial: Create a metric alert for an Azure resource](../alerts/tutorial-metric-alert.md) or [Tutorial: Create a log query alert for an Azure resource](../alerts/tutorial-log-alert.md).
## Metrics
-The **Metrics** menu item opens [metrics explorer](./metrics-getting-started.md) which allows you to work with individual metrics or combine multiple to identify correlations and trends. This is the same metrics explorer that's opened when you click on one of the charts in the **Overview** page.
-See [Tutorial: Analyze metrics for an Azure resource](../essentials/tutorial-metrics.md) for a tutorial on using metrics explorer.
+The **Metrics** menu item opens [Metrics Explorer](./metrics-getting-started.md). You can use it to work with individual metrics or combine multiple metrics to identify correlations and trends. This is the same Metrics Explorer that opens when you select one of the charts on the **Overview** page.
+For a tutorial on how to use Metrics Explorer, see [Tutorial: Analyze metrics for an Azure resource](../essentials/tutorial-metrics.md).
## Diagnostic settings
-The **Diagnostic settings** page lets you create a [diagnostic setting](../essentials/diagnostic-settings.md) to collect the resource logs for your resource. You can send them to multiple locations, but the most common is to send to a Log Analytics workspace so you can analyze them with Log Analytics.
-
-See [Tutorial: Collect and analyze resource logs from an Azure resource](../essentials/tutorial-resource-logs.md) for a tutorial on creating a diagnostic setting.
+The **Diagnostic settings** page lets you create a [diagnostic setting](../essentials/diagnostic-settings.md) to collect the resource logs for your resource. You can send them to multiple locations, but the most common use is to send them to a Log Analytics workspace so you can analyze them with Log Analytics.
+For a tutorial on how to create a diagnostic setting, see [Tutorial: Collect and analyze resource logs from an Azure resource](../essentials/tutorial-resource-logs.md).
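Once logs start flowing, you can explore them with a simple query. The following sketch assumes the resource writes to the shared `AzureDiagnostics` table; resources that use resource-specific tables are queried by table name instead:
```
// Count resource log entries by category over the last day.
// Assumes logs are routed to the shared AzureDiagnostics table.
AzureDiagnostics
| where TimeGenerated > ago(1d)
| summarize Entries = count() by Category
| order by Entries desc
```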
-## Insights
-The **Insights** menu item opens the insight for the resource if the Azure service has one. [Insights](../monitor-reference.md) provide a customized monitoring experience built on the Azure Monitor data platform and standard features.
+## Insights
+The **Insights** menu item opens the insight for the resource if the Azure service has one. [Insights](../monitor-reference.md) provide a customized monitoring experience built on the Azure Monitor data platform and standard features.
-See [Insights and Core solutions](../monitor-reference.md#insights-and-curated-visualizations) for a list of insights that are available and links to their documentation.
+For a list of insights that are available and links to their documentation, see [Insights and core solutions](../monitor-reference.md#insights-and-curated-visualizations).
## Next steps
-Now that you have a basic understanding of Azure Monitor, get start analyzing some metrics for an Azure resource.
+
+Now that you have a basic understanding of Azure Monitor, get started analyzing some metrics for an Azure resource.
> [!div class="nextstepaction"]
> [Analyze metrics for an Azure resource](../essentials/tutorial-metrics.md)
azure-monitor Platform Logs Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/platform-logs-overview.md
Title: Overview of Azure platform logs | Microsoft Docs
-description: Overview of logs in Azure Monitor which provide rich, frequent data about the operation of an Azure resource.
+description: Overview of logs in Azure Monitor, which provide rich, frequent data about the operation of an Azure resource.
Last updated 12/19/2019
# Overview of Azure platform logs
-Platform logs provide detailed diagnostic and auditing information for Azure resources and the Azure platform they depend on. They are automatically generated although you need to configure certain platform logs to be forwarded to one or more destinations to be retained. This article provides an overview of platform logs including what information they provide and how you can configure them for collection and analysis.
+
+Platform logs provide detailed diagnostic and auditing information for Azure resources and the Azure platform they depend on. Although they're automatically generated, you need to configure certain platform logs to be forwarded to one or more destinations to be retained. This article provides an overview of platform logs, including the information they provide and how you can configure them for collection and analysis.
## Types of platform logs
+
The following table lists the specific platform logs that are available at different layers of Azure.

| Log | Layer | Description |
|:|:|:|
-| [Resource logs](./resource-logs.md) | Azure Resources | Provide insight into operations that were performed within an Azure resource (the *data plane*), for example getting a secret from a Key Vault or making a request to a database. The content of resource logs varies by the Azure service and resource type.<br><br>*Resource logs were previously referred to as diagnostic logs.* |
-| [Activity log](../essentials/activity-log.md) | Azure Subscription | Provides insight into the operations on each Azure resource in the subscription from the outside (*the management plane*) in addition to updates on Service Health events. Use the Activity Log, to determine the _what_, _who_, and _when_ for any write operations (PUT, POST, DELETE) taken on the resources in your subscription. There is a single Activity log for each Azure subscription. |
-| [Azure Active Directory logs](../../active-directory/reports-monitoring/overview-reports.md) | Azure Tenant | Contains the history of sign-in activity and audit trail of changes made in the Azure Active Directory for a particular tenant. |
+| [Resource logs](./resource-logs.md) | Azure Resources | Provide insight into operations that were performed within an Azure resource (the *data plane*). Examples might be getting a secret from a key vault or making a request to a database. The content of resource logs varies by the Azure service and resource type.<br><br>*Resource logs were previously referred to as diagnostic logs.* |
+| [Activity log](../essentials/activity-log.md) | Azure Subscription | Provides insight into the operations on each Azure resource in the subscription from the outside (the *management plane*) in addition to updates on Service Health events. Use the Activity log to determine the _what_, _who_, and _when_ for any write operations (PUT, POST, DELETE) taken on the resources in your subscription. There's a single activity log for each Azure subscription. |
+| [Azure Active Directory (Azure AD) logs](../../active-directory/reports-monitoring/overview-reports.md) | Azure Tenant | Contain the history of sign-in activity and audit trail of changes made in Azure AD for a particular tenant. |
> [!NOTE]
-> The Azure Activity Log is primarily for activities that occur in Azure Resource Manager. It does not track resources using the Classic/RDFE model. Some Classic resource types have a proxy resource provider in Azure Resource Manager (for example, Microsoft.ClassicCompute). If you interact with a Classic resource type through Azure Resource Manager using these proxy resource providers, the operations appear in the Activity Log. If you interact with a Classic resource type outside of the Azure Resource Manager proxies, your actions are only recorded in the Operation Log. The Operation Log can be browsed in a separate section of the portal.
-
-![Platform logs overview](media/platform-logs-overview/logs-overview.png)
+> The Azure activity log is primarily for activities that occur in Azure Resource Manager. It doesn't track resources by using the classic/RDFE model. Some classic resource types have a proxy resource provider in Resource Manager (for example, Microsoft.ClassicCompute). If you interact with a classic resource type through Resource Manager by using these proxy resource providers, the operations appear in the activity log. If you interact with a classic resource type outside of the Resource Manager proxies, your actions are only recorded in the Operation log. The Operation log can be browsed in a separate section of the portal.
+![Diagram that shows a platform logs overview.](media/platform-logs-overview/logs-overview.png)
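If you route the activity log to a Log Analytics workspace, a query along the following lines surfaces the _what_, _who_, and _when_ for recent write operations. This is a sketch that assumes the standard `AzureActivity` table is populated:
```
// Recent write operations from the activity log: what, who, and when.
// Assumes a diagnostic setting routes the activity log to this workspace.
AzureActivity
| where TimeGenerated > ago(1d)
| where OperationNameValue has_any ("write", "delete", "action")
| project TimeGenerated, OperationNameValue, Caller, ResourceGroup
```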
+## View platform logs
+There are different options for viewing and analyzing the different Azure platform logs:
-## Viewing platform logs
-There are different options for viewing and analyzing the different Azure platform logs.
--- View the Activity log in the Azure portal and access events from PowerShell and CLI. See [View the Activity log](../essentials/activity-log.md#view-the-activity-log) for details.
-- View Azure Active Directory Security and Activity reports in the Azure portal. See [What are Azure Active Directory reports?](../../active-directory/reports-monitoring/overview-reports.md) for details.
-- Resource logs are automatically generated by supported Azure resources, but they aren't available to be viewed unless you create a [diagnostic setting](#diagnostic-settings).
+- View the activity log in the Azure portal and access events from PowerShell and the Azure CLI. See [View the activity log](../essentials/activity-log.md#view-the-activity-log) for details.
+- View Azure AD security and activity reports in the Azure portal. See [What are Azure AD reports?](../../active-directory/reports-monitoring/overview-reports.md) for details.
+- Resource logs are automatically generated by supported Azure resources. They aren't available to be viewed unless you create a [diagnostic setting](#diagnostic-settings).
## Diagnostic settings
-Create a [diagnostic setting](../essentials/diagnostic-settings.md) to send platform logs to one of the following destinations for analysis or other purposes. Resource logs must have a diagnostic setting be used since they have no other way of being viewed.
+
+Create a [diagnostic setting](../essentials/diagnostic-settings.md) to send platform logs to one of the following destinations for analysis or other purposes. Resource logs must have a diagnostic setting to be used because they have no other way of being viewed.
| Destination | Description |
|:|:|
-| Log Analytics workspace | Analyze the logs of all your Azure resources together and take advantage of all the features available to [Azure Monitor Logs](../logs/data-platform-logs.md) including [log queries](../logs/log-query-overview.md) and [log alerts](../alerts/alerts-log.md). Pin the results of a log query to an Azure dashboard or include it in a workbook as part of an interactive report. |
-| Event hub | Send platform log data outside of Azure, for example to a third-party SIEM or custom telemetry platform. |
-| Azure storage | Archive the logs for audit or backup. |
-| [Azure Monitor partner integrations](../../partner-solutions/overview.md)| Specialized integrations between Azure Monitor and other non-Microsoft monitoring platforms. Useful when you are already using one of the partners. |
+| Log Analytics workspace | Analyze the logs of all your Azure resources together and take advantage of all the features available to [Azure Monitor Logs](../logs/data-platform-logs.md) including [log queries](../logs/log-query-overview.md) and [log alerts](../alerts/alerts-log.md). Pin the results of a log query to an Azure dashboard or include it in a workbook as part of an interactive report. |
+| Event hub | Send platform log data outside of Azure, for example, to a third-party SIEM or custom telemetry platform. |
+| Azure Storage | Archive the logs for audit or backup. |
+| [Azure Monitor partner integrations](../../partner-solutions/overview.md)| Specialized integrations between Azure Monitor and other non-Microsoft monitoring platforms. Useful when you're already using one of the partners. |
-- For details on creating a diagnostic setting for activity log or resource logs, see [Create diagnostic settings to send platform logs and metrics to different destinations](../essentials/diagnostic-settings.md).
-- For details on creating a diagnostic setting for Azure Active Directory logs, see the following articles.
+- For details on how to create a diagnostic setting for activity logs or resource logs, see [Create diagnostic settings to send platform logs and metrics to different destinations](../essentials/diagnostic-settings.md).
+- For details on how to create a diagnostic setting for Azure AD logs, see the following articles:
- [Integrate Azure AD logs with Azure Monitor logs](../../active-directory/reports-monitoring/howto-integrate-activity-logs-with-log-analytics.md)
- - [Tutorial: Stream Azure Active Directory logs to an Azure event hub](../../active-directory/reports-monitoring/tutorial-azure-monitor-stream-logs-to-event-hub.md)
- - [Tutorial: Archive Azure AD logs to an Azure storage account](../../active-directory/reports-monitoring/quickstart-azure-monitor-route-logs-to-storage-account.md)
+ - [Tutorial: Stream Azure AD logs to an Azure event hub](../../active-directory/reports-monitoring/tutorial-azure-monitor-stream-logs-to-event-hub.md)
+ - [Tutorial: Archive Azure AD logs to an Azure Storage account](../../active-directory/reports-monitoring/quickstart-azure-monitor-route-logs-to-storage-account.md)
## Pricing model
-Processing data to stream logs is charged for [certain services](resource-logs-categories.md#costs) when sent to destinations other than a Log Analytics workspace. There's no direct charge when this data is sent to a Log Analytics workspace, but there is a Log Analytics charge for ingesting the data into a workspace.
+Processing data to stream logs is charged for [certain services](resource-logs-categories.md#costs) when sent to destinations other than a Log Analytics workspace. There's no direct charge when this data is sent to a Log Analytics workspace. However, there is a Log Analytics charge for ingesting the data into a workspace.
-The charge is based on the number of bytes in the exported JSON formatted log data, measured in GB (10^9 bytes).
+The charge is based on the number of bytes in the exported JSON-formatted log data, measured in GB (10^9 bytes).
-Pricing is available on the [Azure Monitor pricing page](https://azure.microsoft.com/pricing/details/monitor/).
+Pricing is available on the [Azure Monitor pricing page](https://azure.microsoft.com/pricing/details/monitor/).
## Next steps
-* [Read more details about the Activity log](../essentials/activity-log.md)
+* [Read more details about activity logs](../essentials/activity-log.md)
* [Read more details about resource logs](./resource-logs.md)
azure-monitor Resource Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/resource-logs.md
# Azure resource logs
-Azure resource logs are [platform logs](../essentials/platform-logs-overview.md) that provide insight into operations that were performed within an Azure resource. The content of resource logs varies by the Azure service and resource type. Resource logs are not collected by default. This article describes the [diagnostic setting](diagnostic-settings.md) required for each Azure resource to send its resource logs to different destinations.
+
+Azure resource logs are [platform logs](../essentials/platform-logs-overview.md) that provide insight into operations that were performed within an Azure resource. The content of resource logs varies by the Azure service and resource type. Resource logs aren't collected by default. This article describes the [diagnostic setting](diagnostic-settings.md) required for each Azure resource to send its resource logs to different destinations.
## Send to Log Analytics workspace
- Send resource logs to a Log Analytics workspace to enable the features of [Azure Monitor Logs](../logs/data-platform-logs.md) which includes the following:
+
+ Send resource logs to a Log Analytics workspace to enable the features of [Azure Monitor Logs](../logs/data-platform-logs.md), where you can:
- Correlate resource log data with other monitoring data collected by Azure Monitor, as in the sketch after this list.
- Consolidate log entries from multiple Azure resources, subscriptions, and tenants into one location for analysis together.
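A minimal sketch of such a correlation, assuming diagnostic settings route both resource logs and platform metrics to the same workspace so that the standard `AzureDiagnostics` and `AzureMetrics` tables are populated:
```
// Correlate resource log volume with average metric values in 5-minute bins.
// Assumes both AzureDiagnostics and AzureMetrics receive data from
// diagnostic settings on the resources of interest.
AzureDiagnostics
| where TimeGenerated > ago(1h)
| summarize LogEntries = count() by bin(TimeGenerated, 5m)
| join kind=inner (
    AzureMetrics
    | where TimeGenerated > ago(1h)
    | summarize AvgMetric = avg(Average) by bin(TimeGenerated, 5m)
) on TimeGenerated
```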
Azure resource logs are [platform logs](../essentials/platform-logs-overview.md)
[Create a diagnostic setting](../essentials/diagnostic-settings.md) to send resource logs to a Log Analytics workspace. This data is stored in tables as described in [Structure of Azure Monitor Logs](../logs/data-platform-logs.md). The tables used by resource logs depend on what type of collection the resource is using:

-- Azure diagnostics - All data written is to the [AzureDiagnostics](/azure/azure-monitor/reference/tables/azurediagnostics) table.
-- Resource-specific - Data is written to individual table for each category of the resource.
+- **Azure diagnostics**: All data is written to the [AzureDiagnostics](/azure/azure-monitor/reference/tables/azurediagnostics) table.
+- **Resource-specific**: Data is written to individual tables for each category of the resource.
### Resource-specific
-In this mode, individual tables in the selected workspace are created for each category selected in the diagnostic setting. This method is recommended since it makes it much easier to work with the data in log queries, provides better discoverability of schemas and their structure, improves performance across both ingestion latency and query times, and the ability to grant Azure RBAC rights on a specific table. All Azure services will eventually migrate to the Resource-Specific mode.
-The example above would result in three tables being created:
-
-- Table *Service1AuditLogs* as follows:
+In this mode, individual tables in the selected workspace are created for each category selected in the diagnostic setting. We recommend this method because it:
+
+- Makes it easier to work with the data in log queries.
+- Provides better discoverability of schemas and their structure.
+- Improves performance across ingestion latency and query times.
+- Provides the ability to grant Azure role-based access control rights on a specific table.
+
+All Azure services will eventually migrate to the resource-specific mode.
- | Resource Provider | Category | A | B | C |
+The preceding example creates three tables:
+
+- Table `Service1AuditLogs`
+
+ | Resource provider | Category | A | B | C |
| -- | -- | -- | -- | -- |
| Service1 | AuditLogs | x1 | y1 | z1 |
| Service1 | AuditLogs | x5 | y5 | z5 |
| ... |

-- Table *Service1ErrorLogs* as follows:
+- Table `Service1ErrorLogs`
- | Resource Provider | Category | D | E | F |
+ | Resource provider | Category | D | E | F |
| -- | -- | -- | -- | -- |
| Service1 | ErrorLogs | q1 | w1 | e1 |
| Service1 | ErrorLogs | q2 | w2 | e2 |
| ... |

-- Table *Service2AuditLogs* as follows:
+- Table `Service2AuditLogs`
- | Resource Provider | Category | G | H | I |
+ | Resource provider | Category | G | H | I |
| -- | -- | -- | -- | -- |
| Service2 | AuditLogs | j1 | k1 | l1 |
| Service2 | AuditLogs | j3 | k3 | l3 |
| ... |
-### Azure diagnostics mode
-In this mode, all data from any diagnostic setting will be collected in the [AzureDiagnostics](/azure/azure-monitor/reference/tables/azurediagnostics) table. This is the legacy method used today by most Azure services. Since multiple resource types send data to the same table, its schema is the superset of the schemas of all the different data types being collected. See [AzureDiagnostics reference](/azure/azure-monitor/reference/tables/azurediagnostics) for details on the structure of this table and how it works with this potentially large number of columns.
+### Azure diagnostics mode
-Consider the following example where diagnostic settings are being collected in the same workspace for the following data types:
+In this mode, all data from any diagnostic setting is collected in the [AzureDiagnostics](/azure/azure-monitor/reference/tables/azurediagnostics) table. This legacy method is used today by most Azure services. Because multiple resource types send data to the same table, its schema is the superset of the schemas of all the different data types being collected. For details on the structure of this table and how it works with this potentially large number of columns, see [AzureDiagnostics reference](/azure/azure-monitor/reference/tables/azurediagnostics).
-- Audit logs of service 1 (having a schema consisting of columns A, B, and C)
-- Error logs of service 1 (having a schema consisting of columns D, E, and F)
-- Audit logs of service 2 (having a schema consisting of columns G, H, and I)
+Consider an example where diagnostic settings are collected in the same workspace for the following data types:
-The AzureDiagnostics table will look as follows:
+- Audit logs of service 1 have a schema that consists of columns A, B, and C.
+- Error logs of service 1 have a schema that consists of columns D, E, and F.
+- Audit logs of service 2 have a schema that consists of columns G, H, and I.
+
+The `AzureDiagnostics` table looks like this example:
| ResourceProvider | Category | A | B | C | D | E | F | G | H | I |
| -- | -- | -- | -- | -- | -- | -- | -- | -- | -- | -- |
The AzureDiagnostics table will look as follows:
| ... |

### Select the collection mode
-Most Azure resources will write data to the workspace in either **Azure Diagnostic** or **Resource-Specific mode** without giving you a choice. See the [documentation for each service](./resource-logs-schema.md) for details on which mode it uses. All Azure services will eventually use Resource-Specific mode. As part of this transition, some resources will allow you to select a mode in the diagnostic setting. Specify resource-specific mode for any new diagnostic settings since this makes the data easier to manage and may help you to avoid complex migrations at a later date.
-
- ![Diagnostic Settings mode selector](media/resource-logs/diagnostic-settings-mode-selector.png)
-> [!NOTE]
-> For an example setting the collection mode using a resource manager template, see [Resource Manager template samples for diagnostic settings in Azure Monitor](./resource-manager-diagnostic-settings.md#diagnostic-setting-for-recovery-services-vault).
+Most Azure resources write data to the workspace in either **Azure diagnostics** or **resource-specific** mode without giving you a choice. For more information, see [Common and service-specific schemas for Azure resource logs](./resource-logs-schema.md).
+All Azure services eventually use the resource-specific mode. As part of this transition, some resources allow you to select a mode in the diagnostic setting. Specify resource-specific mode for any new diagnostic settings because this mode makes the data easier to manage. It also might help you avoid complex migrations later.
+
+ ![Screenshot that shows the Diagnostic settings mode selector.](media/resource-logs/diagnostic-settings-mode-selector.png)
-You can modify an existing diagnostic setting to resource-specific mode. In this case, data that was already collected will remain in the _AzureDiagnostics_ table until it's removed according to your retention setting for the workspace. New data will be collected in the dedicated table. Use the [union](/azure/kusto/query/unionoperator) operator to query data across both tables.
+> [!NOTE]
+> For an example that sets the collection mode by using an Azure Resource Manager template, see [Resource Manager template samples for diagnostic settings in Azure Monitor](./resource-manager-diagnostic-settings.md#diagnostic-setting-for-recovery-services-vault).
-Continue to watch [Azure Updates](https://azure.microsoft.com/updates/) blog for announcements about Azure services supporting Resource-Specific mode.
+You can modify an existing diagnostic setting to resource-specific mode. In this case, data that was already collected remains in the `AzureDiagnostics` table until it's removed according to your retention setting for the workspace. New data is collected in the dedicated table. Use the [union](/azure/kusto/query/unionoperator) operator to query data across both tables.
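A minimal sketch of such a cross-table query, using the hypothetical `Service1AuditLogs` table from the earlier example:
```
// Query audit data across both the legacy shared table and the
// dedicated table after switching modes. Service1AuditLogs is the
// hypothetical resource-specific table from the example above.
union withsource=SourceTable
    (AzureDiagnostics | where Category == "AuditLogs"),
    Service1AuditLogs
| where TimeGenerated > ago(7d)
| summarize count() by SourceTable
```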
+Continue to watch the [Azure Updates](https://azure.microsoft.com/updates/) blog for announcements about Azure services that support resource-specific mode.
## Send to Azure Event Hubs
-Send resource logs to an event hub to send them outside of Azure, for example to a third-party SIEM or other log analytics solutions. Resource logs from event hubs are consumed in JSON format with a `records` element containing the records in each payload. The schema depends on the resource type as described in [Common and service-specific schema for Azure Resource Logs](resource-logs-schema.md).
-Following is sample output data from Event Hubs for a resource log:
+Send resource logs to an event hub to send them outside of Azure. For example, resource logs might be sent to a third-party SIEM or other log analytics solutions. Resource logs from event hubs are consumed in JSON format with a `records` element that contains the records in each payload. The schema depends on the resource type as described in [Common and service-specific schema for Azure resource logs](resource-logs-schema.md).
+
+The following sample output data is from Azure Event Hubs for a resource log:
```json
{
Following is sample output data from Event Hubs for a resource log:
```
## Send to Azure Storage
-Send resource logs to Azure storage to retain it for archiving. Once you have created the diagnostic setting, a storage container is created in the storage account as soon as an event occurs in one of the enabled log categories.
+
+Send resource logs to Azure Storage to retain them for archiving. After you've created the diagnostic setting, a storage container is created in the storage account as soon as an event occurs in one of the enabled log categories.
> [!NOTE]
> An alternate strategy for archiving is to send the resource log to a Log Analytics workspace with an [archive policy](../logs/data-retention-archive.md).
The blobs within the container use the following naming convention:
insights-logs-{log category name}/resourceId=/SUBSCRIPTIONS/{subscription ID}/RESOURCEGROUPS/{resource group name}/PROVIDERS/{resource provider name}/{resource type}/{resource name}/y={four-digit numeric year}/m={two-digit numeric month}/d={two-digit numeric day}/h={two-digit 24-hour clock hour}/m=00/PT1H.json
```
-For example, the blob for a network security group might have a name similar to the following:
+The blob for a network security group might have a name similar to this example:
```
insights-logs-networksecuritygrouprulecounter/resourceId=/SUBSCRIPTIONS/00000000-0000-0000-0000-000000000000/RESOURCEGROUPS/TESTRESOURCEGROUP/PROVIDERS/MICROSOFT.NETWORK/NETWORKSECURITYGROUP/TESTNSG/y=2016/m=08/d=22/h=18/m=00/PT1H.json
```
-Each PT1H.json blob contains a JSON blob of events that occurred within the hour specified in the blob URL (for example, h=12). During the present hour, events are appended to the PT1H.json file as they occur. The minute value (m=00) is always 00, since resource log events are broken into individual blobs per hour.
+Each PT1H.json blob contains a JSON blob of events that occurred within the hour specified in the blob URL, for example, h=12. During the present hour, events are appended to the PT1H.json file as they occur. The minute value (m=00) is always 00 because resource log events are broken into individual blobs per hour.
-Within the PT1H.json file, each event is stored with the following format. This will use a common top-level schema but be unique for each Azure service as described in [Resource logs schema](./resource-logs-schema.md).
+Within the PT1H.json file, each event is stored in the following format. It uses a common top-level schema but is unique for each Azure service, as described in [Resource logs schema](./resource-logs-schema.md).
> [!NOTE]
-> Logs are written to the blob relevant to time that the log was generated, not time that it was received. This means at the turn of the hour, both the previous hour and current hour blobs could be receiving new writes.
-
+> Logs are written to the blob relevant to the time that the log was generated, not the time that it was received. So, at the turn of the hour, both the previous hour and current hour blobs could be receiving new writes.
``` JSON
{"time": "2016-07-01T00:00:37.2040000Z","systemId": "46cdbb41-cb9c-4f3d-a5b4-1d458d827ff1","category": "NetworkSecurityGroupRuleCounter","resourceId": "/SUBSCRIPTIONS/s1id1234-5679-0123-4567-890123456789/RESOURCEGROUPS/TESTRESOURCEGROUP/PROVIDERS/MICROSOFT.NETWORK/NETWORKSECURITYGROUPS/TESTNSG","operationName": "NetworkSecurityGroupCounters","properties": {"vnetResourceGuid": "{12345678-9012-3456-7890-123456789012}","subnetPrefix": "10.3.0.0/24","macAddress": "000123456789","ruleName": "/subscriptions/ s1id1234-5679-0123-4567-890123456789/resourceGroups/testresourcegroup/providers/Microsoft.Network/networkSecurityGroups/testnsg/securityRules/default-allow-rdp","direction": "In","type": "allow","matchedConnections": 1988}}
```
## Azure Monitor partner integrations
-Resource logs can also be sent partner solutions that are fully integrated into Azure. See [Azure Monitor partner integrations](../../partner-solutions/overview.md) for a list of these solutions and details on configuring them.
+
+Resource logs can also be sent to partner solutions that are fully integrated into Azure. For a list of these solutions and details on how to configure them, see [Azure Monitor partner integrations](../../partner-solutions/overview.md).
## Next steps
azure-monitor Basic Logs Query https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/basic-logs-query.md
Queries with Basic Logs must use a workspace for the scope. You can't run querie
You can run two concurrent queries per user.

### Purge
-You canΓÇÖt [purge personal data](personal-data-mgmt.md#how-to-export-and-delete-private-data) from Basic Logs tables.
+You canΓÇÖt [purge personal data](personal-data-mgmt.md#exporting-and-deleting-personal-data) from Basic Logs tables.
## Run a query on a Basic Logs table
azure-monitor Data Retention Archive https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/data-retention-archive.md
If you set the data retention policy to 30 days, you can purge older data immedi
Note that workspaces with a 30-day retention policy might actually keep data for 31 days if you don't set the `immediatePurgeDataOn30Days` parameter.
-You can also purge data from a workspace using the [purge feature](personal-data-mgmt.md#how-to-export-and-delete-private-data), which removes personal data. You canΓÇÖt purge data from archived logs.
+You can also purge data from a workspace using the [purge feature](personal-data-mgmt.md#exporting-and-deleting-personal-data), which removes personal data. You canΓÇÖt purge data from archived logs.
The Log Analytics [Purge API](/rest/api/loganalytics/workspacepurge/purge) doesn't affect retention billing. **To lower retention costs, decrease the retention period for the workspace or for specific tables.**
azure-monitor Personal Data Mgmt https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/personal-data-mgmt.md
Title: Guidance for personal data stored in Azure Log Analytics| Microsoft Docs
-description: This article describes how to manage personal data stored in Azure Log Analytics and the methods to identify and remove it.
+ Title: Managing personal data in Azure Monitor Log Analytics and Application Insights
+description: This article describes how to manage personal data stored in Azure Monitor Log Analytics and the methods to identify and remove it.
-- Previously updated : 05/18/2018+++ Last updated : 06/28/2022
+# Customer intent: As an Azure Monitor admin user, I want to understand how to manage personal data in logs Azure Monitor collects.
-# Guidance for personal data stored in Log Analytics and Application Insights
+# Managing personal data in Log Analytics and Application Insights
-Log Analytics is a data store where personal data is likely to be found. Application Insights stores its data in a Log Analytics partition. This article will discuss where in Log Analytics and Application Insights such data is typically found, as well as the capabilities available to you to handle such data.
+Log Analytics is a data store where personal data is likely to be found. Application Insights stores its data in a Log Analytics partition. This article explains where Log Analytics and Application Insights store personal data and how to manage this data.
-> [!NOTE]
-> For the purposes of this article _log data_ refers to data sent to a Log Analytics workspace, while _application data_ refers to data collected by Application Insights. If you are using a workspace-based Application Insights resource, the information on log data will apply but if you are using the classic Application Insights resource then the application data applies.
+In this article, _log data_ refers to data sent to a Log Analytics workspace, while _application data_ refers to data collected by Application Insights. If you're using a workspace-based Application Insights resource, the information on log data applies. If you're using a classic Application Insights resource, the application data applies.
[!INCLUDE [gdpr-dsr-and-stp-note](../../../includes/gdpr-dsr-and-stp-note.md)]

## Strategy for personal data handling
-While it will be up to you and your company to ultimately determine the strategy with which you will handle your private data (if at all), the following are some possible approaches. They are listed in order of preference from a technical point of view from most to least preferable:
+While it's up to you and your company to define a strategy for handling personal data, here are a few approaches, listed from most to least preferable from a technical point of view:
+
+* Stop collecting personal data, or obfuscate, anonymize, or adjust collected data to exclude it from being considered "personal". This is _by far_ the preferred approach, which saves you the need to create a costly and impactful data handling strategy.
+* Normalize the data to reduce negative effects on the data platform and performance, as in the sketch after this list. For example, instead of logging an explicit User ID, create a lookup to correlate the username and their details to an internal ID that can then be logged elsewhere. That way, if a user asks you to delete their personal information, you can delete only the row in the lookup table that corresponds to the user.
+* If you need to collect personal data, build a process using the purge API path and the existing query API to meet any obligations to export and delete any personal data associated with a user.
-* Where possible, stop collection of, obfuscate, anonymize, or otherwise adjust the data being collected to exclude it from being considered "private". This is _by far_ the preferred approach, saving you the need to create a very costly and impactful data handling strategy.
-* Where not possible, attempt to normalize the data to reduce the impact on the data platform and performance. For example, instead of logging an explicit User ID, create a lookup data that will correlate the username and their details to an internal ID that can then be logged elsewhere. That way, should one of your users ask you to delete their personal information, it is possible that only deleting the row in the lookup table corresponding to the user will be sufficient.
-* Finally, if private data must be collected, build a process around the purge API path and the existing query API path to meet any obligations you may have around exporting and deleting any private data associated with a user.
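A minimal sketch of the lookup approach described in the list above, using hypothetical custom tables: `AppAudit_CL` holds events keyed by an internal ID, and `UserLookup_CL` maps that ID to user details:
```
// Resolve internal IDs to user details only when needed.
// AppAudit_CL and UserLookup_CL are hypothetical custom tables;
// deleting a user's row from UserLookup_CL severs the link between
// the internal ID and the person.
AppAudit_CL
| lookup kind=leftouter (UserLookup_CL | project InternalId_g, UserName_s) on $left.UserId_g == $right.InternalId_g
| project TimeGenerated, UserId_g, UserName_s, Action_s
```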
+## Where to look for personal data in Log Analytics
-## Where to look for private data in Log Analytics?
+Log Analytics prescribes a schema to your data, but allows you to override every field with custom values. You can also ingest custom schemas. As such, it's impossible to say exactly where personal data will be found in your specific workspace. The following locations, however, are good starting points in your inventory.
-Log Analytics is a flexible store, which while prescribing a schema to your data, allows you to override every field with custom values. Additionally, any custom schema can be ingested. As such, it is impossible to say exactly where Private data will be found in your specific workspace. The following locations, however, are good starting points in your inventory:
+> [!NOTE]
+> Some of the queries below use `search *` to query all tables in a workspace. We highly recommend that you avoid `search *` whenever possible because it creates a highly inefficient query. Instead, query a specific table.
### Log data
-* *IP addresses*: Log Analytics collects a variety of IP information across many different tables. For example, the following query shows all tables where IPv4 addresses have been collected over the last 24 hours:
+* **IP addresses**: Log Analytics collects various IP information in multiple tables. For example, the following query shows all tables that collected IPv4 addresses in the last 24 hours:
```
search *
| where * matches regex @'\b((25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)(\.|$)){4}\b' //RegEx originally provided on https://stackoverflow.com/questions/5284147/validating-ipv4-addresses-with-regexp
| summarize count() by $table
```
-* *User IDs*: User IDs are found in a large variety of solutions and tables. You can look for a particular username across your entire dataset using the search command:
+
+* **User IDs**: You'll find usernames and user IDs in various solutions and tables. You can look for a particular username or user ID across your entire dataset using the search command:
```
- search "[username goes here]"
+ search "<username or user ID>"
```
- Remember to look not only for human-readable user names but also GUIDs that can directly be traced back to a particular user!
-* *Device IDs*: Like user IDs, device IDs are sometimes considered "private". Use the same approach as listed above for user IDs to identify tables where this might be a concern.
-* *Custom data*: Log Analytics allows the collection in a variety of methods: custom logs and custom fields, the [HTTP Data Collector API](../logs/data-collector-api.md) , and custom data collected as part of system event logs. All of these are susceptible to containing private data, and should be examined to verify whether any such data exists.
-* *Solution-captured data*: Because the solution mechanism is an open-ended one, we recommend reviewing all tables generated by solutions to ensure compliance.
+
+ Remember to look not only for human-readable usernames but also for GUIDs that can be traced back to a particular user.
+* **Device IDs**: Like user IDs, device IDs are sometimes considered personal data. Use the approach listed above for user IDs to identify tables that hold personal data.
+* **Custom data**: Log Analytics lets you collect custom data through custom logs, custom fields, the [HTTP Data Collector API](../logs/data-collector-api.md), and as part of system event logs. Check all custom data for personal data.
+* **Solution-captured data**: Because the solution mechanism is open-ended, we recommend reviewing all tables generated by solutions to ensure compliance.
### Application data
-* *IP addresses*: While Application Insights will by default obfuscate all IP address fields to "0.0.0.0", it is a fairly common pattern to override this value with the actual user IP to maintain session information. The Analytics query below can be used to find any table that contains values in the IP address column other than "0.0.0.0" over the last 24 hours:
+* **IP addresses**: While Application Insights obfuscates all IP address fields to `0.0.0.0` by default, it's fairly common to override this value with the actual user IP to maintain session information. Use the query below to find any table that contains values in the *IP address* column other than `0.0.0.0` in the last 24 hours:
```
search client_IP != "0.0.0.0"
| where timestamp > ago(1d)
| summarize numNonObfuscatedIPs_24h = count() by $table
```
-* *User IDs*: By default, Application Insights will use randomly generated IDs for user and session tracking. However, it is common to see these fields overridden to store an ID more relevant to the application. For example: usernames, AAD GUIDs, etc. These IDs are often considered to be in-scope as personal data, and therefore, should be handled appropriately. Our recommendation is always to attempt to obfuscate or anonymize these IDs. Fields where these values are commonly found include session_Id, user_Id, user_AuthenticatedId, user_AccountId, as well as customDimensions.
-* *Custom data*: Application Insights allows you to append a set of custom dimensions to any data type. These dimensions can be *any* data. Use the following query to identify any custom dimensions collected over the last 24 hours:
+
+* **User IDs**: By default, Application Insights uses randomly generated IDs for user and session tracking in fields such as *session_Id*, *user_Id*, *user_AuthenticatedId*, *user_AccountId*, and *customDimensions*. However, it's common to override these fields with an ID that's more relevant to the application, such as usernames or Azure Active Directory GUIDs. These IDs are often considered to be personal data. We recommend obfuscating or anonymizing these IDs.
+* **Custom data**: Application Insights allows you to append a set of custom dimensions to any data type. Use the following query to identify custom dimensions collected in the last 24 hours:
```
search *
| where isnotempty(customDimensions)
| where timestamp > ago(1d)
| project $table, timestamp, name, customDimensions
```
-* *In-memory and in-transit data*: Application Insights will track exceptions, requests, dependency calls, and traces. Private data can often be collected at the code and HTTP call level. Review the exceptions, requests, dependencies, and traces tables to identify any such data. Use [telemetry initializers](../app/api-filtering-sampling.md) where possible to obfuscate this data.
-* *Snapshot Debugger captures*: The [Snapshot Debugger](../app/snapshot-debugger.md) feature in Application Insights allows you to collect debug snapshots whenever an exception is caught on the production instance of your application. Snapshots will expose the full stack trace leading to the exceptions as well as the values for local variables at every step in the stack. Unfortunately, this feature does not allow for selective deletion of snap points, or programmatic access to data within the snapshot. Therefore, if the default snapshot retention rate does not satisfy your compliance requirements, the recommendation is to turn off the feature.
-
-## How to export and delete private data
+
+* **In-memory and in-transit data**: Application Insights tracks exceptions, requests, dependency calls, and traces. You'll often find personal data at the code and HTTP call level. Review exceptions, requests, dependencies, and traces tables to identify any such data. Use [telemetry initializers](../app/api-filtering-sampling.md) where possible to obfuscate this data.
+* **Snapshot Debugger captures**: The [Snapshot Debugger](../app/snapshot-debugger.md) feature in Application Insights lets you collect debug snapshots when Application Insights detects an exception on the production instance of your application. Snapshots expose the full stack trace leading to the exceptions and the values for local variables at every step in the stack. Unfortunately, this feature doesn't allow selective deletion of snap points or programmatic access to data within the snapshot. Therefore, if the default snapshot retention rate doesn't satisfy your compliance requirements, we recommend you turn off the feature.
-As mentioned in the [strategy for personal data handling](#strategy-for-personal-data-handling) section earlier, it is __strongly__ recommended to if it all possible, to restructure your data collection policy to disable the collection of private data, obfuscating or anonymizing it, or otherwise modifying it to remove it from being considered "private". Handling the data will foremost result in costs to you and your team to define and automate a strategy, build an interface for your customers to interact with their data through, and ongoing maintenance costs. Further, it is computationally costly for Log Analytics and Application Insights, and a large volume of concurrent query or purge API calls have the potential to negatively impact all other interaction with Log Analytics functionality. That said, there are indeed some valid scenarios where private data must be collected. For these cases, data should be handled as described in this section.
+## Exporting and deleting personal data
+We __strongly__ recommend you restructure your data collection policy to stop collecting personal data, obfuscate or anonymize personal data, or otherwise modify such data until it's no longer considered personal. In handling personal data, you'll incur costs in defining and automating a strategy, building an interface through which your customers interact with their data, and ongoing maintenance. It's also computationally costly for Log Analytics and Application Insights, and a large volume of concurrent Query or Purge API calls can negatively affect all other interactions with Log Analytics functionality. However, if you have to collect personal data, follow the guidelines in this section.
+> [!IMPORTANT]
+> While most purge operations complete well within the SLA, **the formal SLA for the completion of purge operations is set at 30 days** due to their heavy impact on the data platform. This SLA meets GDPR requirements. It's an automated process, so there's no way to expedite the operation.
### View and export
-For both view and export data requests, the [Log Analytics query API](https://dev.loganalytics.io/) or the [Application Insights query API](https://dev.applicationinsights.io/quickstart) should be used. Logic to convert the shape of the data to an appropriate one to deliver to your users will be up to you to implement. [Azure Functions](https://azure.microsoft.com/services/functions/) makes a great place to host such logic.
+Use the [Log Analytics query API](/rest/api/loganalytics/dataaccess/query) or the [Application Insights query API](/rest/api/application-insights/query) for view and export data requests.
-> [!IMPORTANT]
-> While the vast majority of purge operations may complete much quicker than the SLA, **the formal SLA for the completion of purge operations is set at 30 days** due to their heavy impact on the data platform used. This SLA meets GDPR requirements. It's an automated process so there is no way to request that an operation be handled faster.
+You need to implement the logic for converting the data to an appropriate format for delivery to your users. [Azure Functions](https://azure.microsoft.com/services/functions/) is a great place to host such logic.
### Delete

> [!WARNING]
> Deletes in Log Analytics are destructive and non-reversible! Please use extreme caution in their execution.
-We have made available as part of a privacy handling a *purge* API path. This path should be used sparingly due to the risk associated with doing so, the potential performance impact, and the potential to skew all-up aggregations, measurements, and other aspects of your Log Analytics data. See the [Strategy for personal data handling](#strategy-for-personal-data-handling) section for alternative approaches to handle private data.
-
-> [!NOTE]
-> Once the purge operation has been performed, the data cannot be accessed while the [purge operation status](/rest/api/loganalytics/workspacepurge/getpurgestatus) is *pending*.
-
-Purge is a highly privileged operation that no app or user in Azure (including even the resource owner) will have permissions to execute without explicitly being granted a role in Azure Resource Manager. This role is _Data Purger_ and should be cautiously delegated due to the potential for data loss.
-
-> [!IMPORTANT]
-> In order to manage system resources, purge requests are throttled at 50 requests per hour. You should batch the execution of purge requests by sending a single command whose predicate includes all user identities that require purging. Use the [in operator](/azure/kusto/query/inoperator) to specify multiple identities. You should run the query before executing the purge request to verify that the results are expected.
+Azure Monitor's Purge API lets you delete personal data. Use the purge operation sparingly because it carries risk, can affect performance, and can skew all-up aggregations, measurements, and other aspects of your Log Analytics data. See the [Strategy for personal data handling](#strategy-for-personal-data-handling) section for alternative approaches to handling personal data.
+Purge is a highly privileged operation. Applications and Azure users, including the resource owner, can't execute a purge operation without explicitly being granted the _Data Purger_ role in Azure Resource Manager. Grant this role cautiously due to the potential for data loss.
+To manage system resources, we limit purge requests to 50 requests an hour. Batch the execution of purge requests by sending a single command whose predicate includes all user identities that require purging. Use the [in operator](/azure/kusto/query/inoperator) to specify multiple identities. Run the query before executing the purge request to verify the expected results.
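As a hedged sketch of that verification step, the following query uses the `in` operator with hypothetical identities and a hypothetical custom table; run it before submitting the corresponding purge request to confirm what would be removed:
```
// Verify which records a batched purge predicate would match.
// AppAudit_CL and the user IDs are hypothetical placeholders.
AppAudit_CL
| where UserId_s in ("user-id-1", "user-id-2", "user-id-3")
| summarize MatchingRecords = count() by UserId_s
```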
-Once the Azure Resource Manager role has been assigned, two new API paths are available:
+> [!NOTE]
+> After initiating a purge request, you cannot access the related data while the [purge operation status](/rest/api/loganalytics/workspacepurge/getpurgestatus) is *pending*.
#### Log data
-* [POST purge](/rest/api/loganalytics/workspacepurge/purge) - takes an object specifying parameters of data to delete and returns a reference GUID
-* GET purge status - the POST purge call will return an 'x-ms-status-location' header that will include a URL that you can call to determine the status of your purge API. For example:
+* The [Workspace Purge POST API](/rest/api/loganalytics/workspacepurge/purge) takes an object specifying parameters of data to delete and returns a reference GUID.
+* The [Get Purge Status GET API](/rest/api/loganalytics/workspace-purge/get-purge-status) returns an 'x-ms-status-location' header that includes a URL you can call to determine the status of your purge operation. For example:
```
x-ms-status-location: https://management.azure.com/subscriptions/[SubscriptionId]/resourceGroups/[ResourceGroupName]/providers/Microsoft.OperationalInsights/workspaces/[WorkspaceName]/operations/purge-[PurgeOperationId]?api-version=2015-03-20
```
-> [!IMPORTANT]
-> While we expect the vast majority of purge operations to complete much quicker than our SLA, due to their heavy impact on the data platform used by Log Analytics, **the formal SLA for the completion of purge operations is set at 30 days**.
-
#### Application data
-* [POST purge](/rest/api/application-insights/components/purge) - takes an object specifying parameters of data to delete and returns a reference GUID
-* GET purge status - the POST purge call will return an 'x-ms-status-location' header that will include a URL that you can call to determine the status of your purge API. For example:
+* The [Components - Purge POST API](/rest/api/application-insights/components/purge) takes an object specifying parameters of data to delete and returns a reference GUID.
+* The [Components - Get Purge Status GET API](/rest/api/application-insights/components/get-purge-status) returns an 'x-ms-status-location' header that includes a URL you can call to determine the status of your purge operation. For example:
```
x-ms-status-location: https://management.azure.com/subscriptions/[SubscriptionId]/resourceGroups/[ResourceGroupName]/providers/microsoft.insights/components/[ComponentName]/operations/purge-[PurgeOperationId]?api-version=2015-05-01
```
-> [!IMPORTANT]
-> While the vast majority of purge operations may complete much quicker than the SLA, due to their heavy impact on the data platform used by Application Insights, **the formal SLA for the completion of purge operations is set at 30 days**.
## Next steps

-- To learn more about how Log Analytics data is collected, processed, and secured, see [Log Analytics data security](../logs/data-security.md).
-- To learn more about how Application Insights data is collected, processed, and secured, see [Application Insights data security](../app/data-retention-privacy.md).
+- Learn more about [how Log Analytics collects, processes, and secures data](../logs/data-security.md).
+- Learn more about [how Application Insights collects, processes, and secures data](../app/data-retention-privacy.md).
azure-netapp-files Azure Netapp Files Network Topologies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azure-netapp-files-network-topologies.md
na Previously updated : 06/17/2022 Last updated : 06/28/2022 # Guidelines for Azure NetApp Files network planning
Azure NetApp Files standard network features are supported for the following reg
* Australia Central 2 * Australia East * Australia Southeast
+* Canada Central
* East US * East US 2 * France Central
azure-netapp-files Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/whats-new.md
na Previously updated : 06/15/2022 Last updated : 06/28/2022
Azure NetApp Files is updated regularly. This article provides a summary about t
[Azure NetApp Files datastores for Azure VMware Solution](https://azure.microsoft.com/blog/power-your-file-storageintensive-workloads-with-azure-vmware-solution) is now in public preview. This new integration between Azure VMware Solution and Azure NetApp Files will enable you to [create datastores via the Azure VMware Solution resource provider with Azure NetApp Files NFS volumes](../azure-vmware/attach-azure-netapp-files-to-azure-vmware-solution-hosts.md) and mount the datastores on your private cloud clusters of choice. Along with the integration of Azure disk pools for Azure VMware Solution, this will provide more choice to scale storage needs independently of compute resources. For your storage-intensive workloads running on Azure VMware Solution, the integration with Azure NetApp Files helps to easily scale storage capacity beyond the limits of the local instance storage for AVS provided by vSAN and lower your overall total cost of ownership for storage-intensive workloads.
- Regional Coverage: Australia East, Australia Southeast, Brazil South, Canada Central, Canada East, Central US, East US, France Central, Germany West Central, Japan West, North Central US, North Europe, South Central US, Southeast Asia, Switzerland West, UK South, UK West, West US. Regional coverage will expand as the preview progresses.
+ Regional Coverage: Australia East, Australia Southeast, Brazil South, Canada Central, Canada East, Central US, East US, France Central, Germany West Central, Japan West, North Central US, North Europe, South Central US, Southeast Asia, Switzerland West, UK South, UK West, West Europe, West US. Regional coverage will expand as the preview progresses.
* [Azure Policy built-in definitions for Azure NetApp Files](azure-policy-definitions.md#built-in-policy-definitions)
azure-percept Azure Percept Devkit Software Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-percept/azure-percept-devkit-software-release-notes.md
This page provides information of changes and fixes for each Azure Percept DK OS
To download the update images, refer to [Azure Percept DK software releases for USB cable update](./software-releases-usb-cable-updates.md) or [Azure Percept DK software releases for OTA update](./software-releases-over-the-air-updates.md).
+## June (2206) Release
+
+- Operating System
+ - Latest security updates on OpenSSL, cifs-utils, zlib, cpio, Nginx, and Lua packages.
+
## May (2205) Release

- Operating System
azure-percept Software Releases Over The Air Updates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-percept/software-releases-over-the-air-updates.md
Microsoft would service each dev kit release with OTA packages. However, as ther
|Release|Applicable Version(s)|Download Links|Note|
|||||
-|March Service Release (2203)|2021.106.111.115,<br>2021.107.129.116,<br>2021.109.129.108, <br>2021.111.124.109, <br>2022.101.112.106, <br>2022.102.109.102|[2022.103.110.103 OTA update package](<https://download.microsoft.com/download/2/3/4/234bdbf8-8f08-4d7a-8b33-7d5afc921bf1/2022.103.110.103 OTA update package.zip>)|Make sure you are using the **old version** of the Device Update for IoT Hub. To do that, navigate to **Device management > Updates** in your IoT Hub, select the **switch to the older version** link in the banner. For more information, please refer to [Update Azure Percept DK over-the-air](./how-to-update-over-the-air.md).|
+|June Service Release (2206)|2021.106.111.115,<br>2021.107.129.116,<br>2021.109.129.108, <br>2021.111.124.109, <br>2022.101.112.106, <br>2022.102.109.102, <br>2022.103.110.103|[2022.106.120.102 OTA update package](<https://download.microsoft.com/download/b/7/1/b71877b8-4882-4447-b3f3-8359ee8341e2/2022.106.120.102 OTA update package.zip>)|Make sure you are using the **old version** of the Device Update for IoT Hub. To do that, navigate to **Device management > Updates** in your IoT Hub, select the **switch to the older version** link in the banner. For more information, please refer to [Update Azure Percept DK over-the-air](./how-to-update-over-the-air.md).|
**Hard-stop releases:**
azure-percept Software Releases Usb Cable Updates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-percept/software-releases-usb-cable-updates.md
This page provides information and download links for all the dev kit OS/firmwar
## Latest releases

- **Latest service release**
-May Service Release (2205): [Azure-Percept-DK-1.0.20220511.1756-public_preview_1.0.zip](https://download.microsoft.com/download/c/7/7/c7738a05-819c-48d9-8f30-e4bf64e19f11/Azure-Percept-DK-1.0.20220511.1756-public_preview_1.0.zip)
+June Service Release (2206): [Azure-Percept-DK-1.0.20220620.1126-public_preview_1.0.zip](https://download.microsoft.com/download/4/7/a/47af6fc2-d9a0-4e66-822b-ad36700fefff/Azure-Percept-DK-1.0.20220620.1126-public_preview_1.0.zip)
- **Latest major update or known stable version**

  Feature Update (2104): [Azure-Percept-DK-1.0.20210409.2055.zip](https://download.microsoft.com/download/6/4/d/64d53e60-f702-432d-a446-007920a4612c/Azure-Percept-DK-1.0.20210409.2055.zip)
Feature Update (2104): [Azure-Percept-DK-1.0.20210409.2055.zip](https://download
|Release|Download Links|Note|
|||::|
+|June Service Release (2206)|[Azure-Percept-DK-1.0.20220620.1126-public_preview_1.0.zip](https://download.microsoft.com/download/4/7/a/47af6fc2-d9a0-4e66-822b-ad36700fefff/Azure-Percept-DK-1.0.20220620.1126-public_preview_1.0.zip)||
|May Service Release (2205)|[Azure-Percept-DK-1.0.20220511.1756-public_preview_1.0.zip](https://download.microsoft.com/download/c/7/7/c7738a05-819c-48d9-8f30-e4bf64e19f11/Azure-Percept-DK-1.0.20220511.1756-public_preview_1.0.zip)||
|March Service Release (2203)|[Azure-Percept-DK-1.0.20220310.1223-public_preview_1.0.zip](https://download.microsoft.com/download/c/6/f/c6f6b152-699e-4f60-85b7-17b3ea57c189/Azure-Percept-DK-1.0.20220310.1223-public_preview_1.0.zip)||
|February Service Release (2202)|[Azure-Percept-DK-1.0.20220209.1156-public_preview_1.0.zip](https://download.microsoft.com/download/f/8/6/f86ce7b3-8d76-494e-82d9-dcfb71fc2580/Azure-Percept-DK-1.0.20220209.1156-public_preview_1.0.zip)||
azure-resource-manager Deploy Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/deploy-powershell.md
You need a Bicep file to deploy. The file must be local.
You need Azure PowerShell and to be connected to Azure:

- **Install Azure PowerShell cmdlets on your local computer.** To deploy Bicep files, you need [Azure PowerShell](/powershell/azure/install-az-ps) version **5.6.0 or later**. For more information, see [Get started with Azure PowerShell](/powershell/azure/get-started-azureps).
+- **Install Bicep CLI.** Azure PowerShell doesn't automatically install the Bicep CLI. Instead, you must [manually install the Bicep CLI](install.md#install-manually).
- **Connect to Azure by using [Connect-AzAccount](/powershell/module/az.accounts/connect-azaccount)**. If you have multiple Azure subscriptions, you might also need to run [Set-AzContext](/powershell/module/Az.Accounts/Set-AzContext). For more information, see [Use multiple Azure subscriptions](/powershell/azure/manage-subscriptions-azureps).

If you don't have PowerShell installed, you can use Azure Cloud Shell. For more information, see [Deploy Bicep files from Azure Cloud Shell](./deploy-cloud-shell.md).
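
With those prerequisites in place, a minimal end-to-end deployment might look like the following sketch; the subscription ID, resource group name, and Bicep file path are placeholders to replace with your own values:

```powershell
# Sign in and select the subscription to deploy into (placeholders in angle brackets).
Connect-AzAccount
Set-AzContext -Subscription "<subscription-id>"

# Deploy a local Bicep file to an existing resource group.
New-AzResourceGroupDeployment `
  -ResourceGroupName "exampleRG" `
  -TemplateFile "./main.bicep"
```

Behind the scenes, Azure PowerShell calls the Bicep CLI you installed to build the file before submitting the deployment.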
azure-resource-manager Resource Name Rules https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/resource-name-rules.md
In the following tables, the term alphanumeric refers to:
> | Entity | Scope | Length | Valid Characters |
> | | | | |
> | deployments | resource group | 1-64 | Alphanumerics, underscores, parentheses, hyphens, and periods. |
-> | resourcegroups | subscription | 1-90 | Letters or digits as defined by the [Char.IsLetterOrDigit](/dotnet/api/system.char.isletterordigit) function.<br><br>Valid characters are members of the following categories in [UnicodeCategory](/dotnet/api/system.globalization.unicodecategory):<br>**UppercaseLetter**,<br>**LowercaseLetter**,<br>**TitlecaseLetter**,<br>**ModifierLetter**,<br>**OtherLetter**,<br>**DecimalDigitNumber**.<br><br>Can't end with period. |
+> | resourcegroups | subscription | 1-90 | Underscores, hyphens, periods, and letters or digits as defined by the [Char.IsLetterOrDigit](/dotnet/api/system.char.isletterordigit) function.<br><br>Valid characters are members of the following categories in [UnicodeCategory](/dotnet/api/system.globalization.unicodecategory):<br>**UppercaseLetter**,<br>**LowercaseLetter**,<br>**TitlecaseLetter**,<br>**ModifierLetter**,<br>**OtherLetter**,<br>**DecimalDigitNumber**.<br><br>Can't end with period. |
> | tagNames | resource | 1-512 | Can't use:<br>`<>%&\?/` or control characters |
> | tagNames / tagValues | tag name | 1-256 | All characters. |
> | templateSpecs | resource group | 1-90 | Alphanumerics, underscores, parentheses, hyphens, and periods. |
azure-resource-manager Tag Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/tag-support.md
To get the same data as a file of comma-separated values, download [tag-support.
> | workspaces / sqlDatabases | Yes | Yes |
> | workspaces / sqlPools | Yes | Yes |
+<a id="synapsenote"></a>
+
+> [!NOTE]
+> The Master database doesn't support tags, but other databases, including Azure Synapse Analytics databases, support tags. Azure Synapse Analytics databases must be in Active (not Paused) state.
+
## Microsoft.TestBase

> [!div class="mx-tableFixed"]
azure-signalr Concept Connection String https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-signalr/concept-connection-string.md
The following table lists all the valid names for key/value pairs in the connect
| key | Description | Required | Default value | Example value |
| -- | -- | -- | -- | -- |
-| Endpoint | The URI of your ASRS instance. | Y | N/A | https://foo.service.signalr.net |
+| Endpoint | The URI of your ASRS instance. | Y | N/A | `https://foo.service.signalr.net` |
| Port | The port that your ASRS instance is listening on. | N | 80/443, depends on the endpoint URI scheme | 8080 |
| Version | The version of the given connection string. | N | 1.0 | 1.0 |
-| ClientEndpoint | The URI of your reverse proxy, like App Gateway or API Management | N | null | https://foo.bar |
+| ClientEndpoint | The URI of your reverse proxy, like App Gateway or API Management | N | null | `https://foo.bar` |
| AuthType | The auth type. By default, AccessKey is used to authorize requests. **Case insensitive** | N | null | azure, azure.msi, azure.app |

### Use AccessKey
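
For illustration, a connection string that relies on the default AccessKey auth type looks like the following; the resource name and key are placeholders:

```
Endpoint=https://<your-resource>.service.signalr.net;AccessKey=<access-key>;Version=1.0;
```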
azure-web-pubsub Quickstart Serverless https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/quickstart-serverless.md
If you're not going to continue to use this app, delete all resources created by
In this quickstart, you learned how to run a serverless chat application. Now, you could start to build your own application.

> [!div class="nextstepaction"]
-> [Azure Web PubSub bindings for Azure Functions](https://azure.github.io/azure-webpubsub/references/functions-bindings)
+> [Azure Web PubSub bindings for Azure Functions](/azure/azure-web-pubsub/reference-functions-bindings)
> [!div class="nextstepaction"]
-> [Quick start: Create a simple chatroom with Azure Web PubSub](https://azure.github.io/azure-webpubsub/getting-started/create-a-chat-app/js-handle-events)
+> [Quick start: Create a simple chatroom with Azure Web PubSub](/azure/azure-web-pubsub/tutorial-build-chat)
> [!div class="nextstepaction"]
> [Explore more Azure Web PubSub samples](https://github.com/Azure/azure-webpubsub/tree/main/samples)
azure-web-pubsub Tutorial Serverless Notification https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/tutorial-serverless-notification.md
If you're not going to continue to use this app, delete all resources created by
In this quickstart, you learned how to run a serverless chat application. Now, you could start to build your own application.

> [!div class="nextstepaction"]
-> [Tutorial: Create a simple chatroom with Azure Web PubSub](https://azure.github.io/azure-webpubsub/getting-started/create-a-chat-app/js-handle-events)
+> [Tutorial: Create a simple chatroom with Azure Web PubSub](/azure/azure-web-pubsub/tutorial-build-chat)
> [!div class="nextstepaction"]
-> [Azure Web PubSub bindings for Azure Functions](https://azure.github.io/azure-webpubsub/references/functions-bindings)
+> [Azure Web PubSub bindings for Azure Functions](/azure/azure-web-pubsub/reference-functions-bindings)
> [!div class="nextstepaction"]
> [Explore more Azure Web PubSub samples](https://github.com/Azure/azure-webpubsub/tree/main/samples)
cdn Cdn App Dev Node https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cdn/cdn-app-dev-node.md
You will then be presented a series of questions to initialize your project. Fo
![NPM init output](./media/cdn-app-dev-node/cdn-npm-init.png)
-Our project is now initialized with a *packages.json* file. Our project is going to use some Azure libraries contained in NPM packages. We'll use the library for Azure Active Directory authentication in Node.js (@azure/ms-rest-nodeauth) and the Azure CDN Client Library for JavaScript (@azure/arm-cdn). Let's add those to the project as dependencies.
+Our project is now initialized with a *package.json* file. Our project is going to use some Azure libraries contained in NPM packages. We'll use the library for Azure Active Directory authentication in Node.js (@azure/identity) and the Azure CDN Client Library for JavaScript (@azure/arm-cdn). Let's add those to the project as dependencies.
```console
-npm install --save @azure/ms-rest-nodeauth
+npm install --save @azure/identity
npm install --save @azure/arm-cdn
```
After the packages are done installing, the *package.json* file should look simi
"author": "Cam Soper", "license": "MIT", "dependencies": {
- "@azure/arm-cdn": "^5.2.0",
- "@azure/ms-rest-nodeauth": "^3.0.0"
+ "@azure/arm-cdn": "^7.0.1",
+ "@azure/identity": "^2.0.4"
  }
}
```
With *app.js* open in our editor, let's get the basic structure of our program w
1. Add the "requires" for our NPM packages at the top with the following:

   ``` javascript
- var msRestAzure = require('@azure/ms-rest-nodeauth');
+ const { DefaultAzureCredential } = require("@azure/identity");
   const { CdnManagementClient } = require('@azure/arm-cdn');
   ```

2. We need to define some constants our methods will use. Add the following. Be sure to replace the placeholders, including the **&lt;angle brackets&gt;**, with your own values as needed.
With *app.js* open in our editor, let's get the basic structure of our program w
3. Next, we'll instantiate the CDN management client and give it our credentials.

   ``` javascript
- var credentials = new msRestAzure.ApplicationTokenCredentials(clientId, tenantId, clientSecret);
+ var credentials = new DefaultAzureCredential();
   var cdnClient = new CdnManagementClient(credentials, subscriptionId);
   ```
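
   As a side note, `DefaultAzureCredential` tries several credential sources, including environment variables. A minimal sketch for running the app with a service principal (all values are placeholders):

   ```console
   # Placeholder values; DefaultAzureCredential picks these up at startup.
   export AZURE_TENANT_ID="<tenant-id>"
   export AZURE_CLIENT_ID="<client-id>"
   export AZURE_CLIENT_SECRET="<client-secret>"
   ```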
function cdnCreate() {
} // create profile <profile name>
-function cdnCreateProfile() {
+async function cdnCreateProfile() {
    requireParms(3);
    console.log("Creating profile...");
    var standardCreateParameters = {
function cdnCreateProfile() {
} };
- cdnClient.profiles.create( resourceGroupName, parms[2], standardCreateParameters, callback);
+ await cdnClient.profiles.beginCreateAndWait( resourceGroupName, parms[2], standardCreateParameters, callback);
} // create endpoint <profile name> <endpoint name> <origin hostname>
-function cdnCreateEndpoint() {
+async function cdnCreateEndpoint() {
    requireParms(5);
    console.log("Creating endpoint...");
    var endpointProperties = {
function cdnCreateEndpoint() {
}] };
- cdnClient.endpoints.create(resourceGroupName, parms[2], parms[3], endpointProperties, callback);
+ await cdnClient.endpoints.beginCreateAndWait(resourceGroupName, parms[2], parms[3], endpointProperties, callback);
} ```
Assuming the endpoint has been created, one common task that we might want to pe
```javascript
// purge <profile name> <endpoint name> <path>
-function cdnPurge() {
+async function cdnPurge() {
    requireParms(4);
    console.log("Purging endpoint...");
    var purgeContentPaths = [ parms[3] ];
- cdnClient.endpoints.purgeContent(resourceGroupName, parms[2], parms[3], purgeContentPaths, callback);
+ await cdnClient.endpoints.beginPurgeContentAndWait(resourceGroupName, parms[2], parms[3], purgeContentPaths, callback);
}
```
function cdnPurge() {
The last function we will include deletes endpoints and profiles.

```javascript
-function cdnDelete() {
+async function cdnDelete() {
    requireParms(2);
    switch(parms[1].toLowerCase()) {
function cdnDelete() {
case "profile": requireParms(3); console.log("Deleting profile...");
- cdnClient.profiles.deleteMethod(resourceGroupName, parms[2], callback);
+ await cdnClient.profiles.beginDeleteAndWait(resourceGroupName, parms[2], callback);
            break;
        // delete endpoint <profile name> <endpoint name>
        case "endpoint":
            requireParms(4);
            console.log("Deleting endpoint...");
- cdnClient.endpoints.deleteMethod(resourceGroupName, parms[2], parms[3], callback);
+ await cdnClient.endpoints.beginDeleteAndWait(resourceGroupName, parms[2], parms[3], callback);
            break;
        default:
cloud-services-extended-support Enable Key Vault Virtual Machine https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services-extended-support/enable-key-vault-virtual-machine.md
Title: Apply the Key Vault VM Extension in Azure Cloud Services (extended support)
-description: Enable KeyVault VM Extension for Cloud Services (extended support)
+ Title: Apply the Key Vault VM extension in Azure Cloud Services (extended support)
+description: Learn about the Key Vault VM extension for Windows and how to enable it in Azure Cloud Services.
# Apply the Key Vault VM extension to Azure Cloud Services (extended support)
-## What is the Key Vault VM Extension?
-The Key Vault VM extension provides automatic refresh of certificates stored in an Azure Key Vault. Specifically, the extension monitors a list of observed certificates stored in key vaults, and upon detecting a change, retrieves, and installs the corresponding certificates. For more details, see [Key Vault VM extension for Windows](../virtual-machines/extensions/key-vault-windows.md).
+This article provides basic information about the Azure Key Vault VM extension for Windows and shows you how to enable it in Azure Cloud Services.
-## What's new in the Key Vault VM Extension?
-The Key Vault VM extension is now supported on the Azure Cloud Services (extended support) platform to enable the management of certificates end to end. The extension can now pull certificates from a configured Key Vault at a pre-defined polling interval and install them for use by the service.
+## What is the Key Vault VM extension?
+The Key Vault VM extension provides automatic refresh of certificates stored in an Azure key vault. Specifically, the extension monitors a list of observed certificates stored in key vaults. When the extension detects a change, it retrieves and installs the corresponding certificates. For more information, see [Key Vault VM extension for Windows](../virtual-machines/extensions/key-vault-windows.md).
-## How can I leverage the Key Vault VM extension?
-The following tutorial will show you how to install the Key Vault VM extension on PaaSV1 services by first creating a bootstrap certificate in your vault to get a token from AAD that will help in the authentication of the extension with the vault. Once the authentication process is set up and the extension is installed all latest certificates will be pulled down automatically at regular polling intervals.
+## What's new in the Key Vault VM extension?
+The Key Vault VM extension is now supported on the Azure Cloud Services (extended support) platform to enable the management of certificates end to end. The extension can now pull certificates from a configured key vault at a predefined polling interval and install them for the service to use.
+
+## How can I use the Key Vault VM extension?
+The following procedure will show you how to install the Key Vault VM extension on Azure Cloud Services by first creating a bootstrap certificate in your vault to get a token from Azure Active Directory (Azure AD). That token will help in the authentication of the extension with the vault. After the authentication process is set up and the extension is installed, all the latest certificates will be pulled down automatically at regular polling intervals.
> [!NOTE]
-> The Key Vault VM extension downloads all the certificates in the windows certificate store or to the location provided by "certificateStoreLocation" property in the VM extension settings. Currently, the KV VM extension grants access to the private key of the certificate only to the local system admin account.
+> The Key Vault VM extension downloads all the certificates to the Windows certificate store or to the location provided by the `certificateStoreLocation` property in the VM extension settings. Currently, the Key Vault VM extension grants access to the private key of the certificate only to the local system admin account.
-## Prerequisites
-To use the Azure Key Vault VM extension, you need to have an Azure Active Directory tenant. For more information on setting up a new Active Directory tenant, see [Setup your AAD tenant](../active-directory/develop/quickstart-create-new-tenant.md)
+### Prerequisites
+To use the Azure Key Vault VM extension, you need to have an Azure AD tenant. For more information, see [Quickstart: Set up a tenant](../active-directory/develop/quickstart-create-new-tenant.md).
-## Enable the Azure Key Vault VM extension
+### Enable the Azure Key Vault VM extension
-1. [Generate a certificate](../key-vault/certificates/create-certificate-signing-request.md) in your vault and download the .cer for that certificate.
+1. [Generate a certificate](../key-vault/certificates/create-certificate-signing-request.md) in your vault and download the .cer file for that certificate.
-2. In the [Azure portal](https://portal.azure.com) navigate to **App Registrations**.
+2. In the [Azure portal](https://portal.azure.com), go to **App registrations**.
- :::image type="content" source="media/app-registration-0.jpg" alt-text="Shows selecting app registration in the portal.":::
+ :::image type="content" source="media/app-registration-0.jpg" alt-text="Screenshot of resources available in the Azure portal, including app registrations.":::
-3. In the App Registrations page select **New Registration** on the top left corner of the page
+3. On the **App registrations** page, select **New registration**.
- :::image type="content" source="media/app-registration-1.png" alt-text="Shows the app registration sin the Azure portal.":::
+ :::image type="content" source="media/app-registration-1.png" alt-text="Screenshot that shows the page for app registrations in the Azure portal.":::
-4. On the next page you can fill the form and complete the app creation.
+4. On the next page, fill out the form and complete the app creation.
-5. Upload the .cer of the certificate to the Azure Active Directory app portal.
+5. Upload the .cer file of the certificate to the Azure AD app portal.
- - Optionally you can also leverage the [Key Vault Event Grid notification feature](https://azure.microsoft.com/updates/azure-key-vault-event-grid-integration-is-now-available/) to upload the certificate.
+ Optionally, you can use the [Azure Event Grid notification feature for Key Vault](https://azure.microsoft.com/updates/azure-key-vault-event-grid-integration-is-now-available/) to upload the certificate.
-6. Grant the Azure Active Directory app secret list/get permissions in Key Vault:
- - If you are using RBAC preview, search for the name of the AAD app you created and assign it to the Key Vault Secrets User (preview) role.
- - If you are using vault access policies, then assign **Secret-Get** permissions to the AAD app you created. For more information, see [Assign access policies](../key-vault/general/assign-access-policy-portal.md)
+6. Grant the Azure Active Directory app secret permissions in Key Vault:
+
+ - If you're using a role-based access control (RBAC) preview, search for the name of the Azure AD app that you created and assign it to the Key Vault Secrets User (preview) role.
+ - If you're using vault access policies, assign **Secret-Get** permissions to the Azure AD app that you created. For more information, see [Assign access policies](../key-vault/general/assign-access-policy-portal.md).
-7. Install first
-step and the Key Vault VM extension using the ARM template snippet for `cloudService` resource as shown below:
+7. Install the Key Vault VM extension by using the Azure Resource Manager template snippet for the `cloudService` resource:
```json
{
step and the Key Vault VM extension using the ARM template snippet for `cloudSer
  }
}
```
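
   As an illustrative sketch only, not the full template, the extension's settings typically carry a `secretsManagementSettings` object along these lines; the vault URL and certificate name are placeholder assumptions:

   ```json
   {
     "secretsManagementSettings": {
       "pollingIntervalInS": "3600",
       "certificateStoreName": "MY",
       "certificateStoreLocation": "LocalMachine",
       "observedCertificates": [
         "https://<your-vault>.vault.azure.net/secrets/<your-certificate>"
       ]
     }
   }
   ```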
- You might need to specify the certificate store for boot strap certificate in ServiceDefinition.csdef like below:
+ You might need to specify the certificate store for the bootstrap certificate in *ServiceDefinition.csdef*:
```xml
<Certificates>
step and the Key Vault VM extension using the ARM template snippet for `cloudSer
```

## Next steps
-Further improve your deployment by [enabling monitoring in Cloud Services (extended support)](enable-alerts.md)
+Further improve your deployment by [enabling monitoring in Azure Cloud Services (extended support)](enable-alerts.md).
cloud-services Cloud Services Dotnet Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/cloud-services-dotnet-get-started.md
For a video introduction to Azure Storage best practices and patterns, see Micro
For more information, see the following resources:
-* [Azure Cloud Services Part 1: Introduction](https://justazure.com/microsoft-azure-cloud-services-part-1-introduction/)
* [How to manage Cloud Services](cloud-services-how-to-manage-portal.md)
* [Azure Storage](../storage/index.yml)
* [How to choose a cloud service provider](https://azure.microsoft.com/overview/choosing-a-cloud-service-provider/)
cognitive-services Call Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/conversational-language-understanding/how-to/call-api.md
Previously updated : 06/03/2022 Last updated : 06/28/2022
You can also use the client libraries provided by the Azure SDK to send requests
|Language |Package version |
|||
- |.NET | [1.0.0-beta.3 ](https://www.nuget.org/packages/Azure.AI.Language.Conversations/1.0.0-beta.3) |
- |Python | [1.1.0b1](https://pypi.org/project/azure-ai-language-conversations/) |
+ |.NET | [1.0.0](https://www.nuget.org/packages/Azure.AI.Language.Conversations/1.0.0) |
+ |Python | [1.0.0](https://pypi.org/project/azure-ai-language-conversations/1.0.0) |
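
   For instance, the GA packages can be installed as follows; pick the command for your language, with the versions taken from the table above:

   ```console
   # .NET
   dotnet add package Azure.AI.Language.Conversations --version 1.0.0
   # Python
   pip install azure-ai-language-conversations==1.0.0
   ```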
4. After you've installed the client library, use the following samples on GitHub to start calling the API.
- * [C#](https://github.com/Azure/azure-sdk-for-net/tree/main/sdk/cognitivelanguage/Azure.AI.Language.Conversations/samples)
- * [Python](https://github.com/Azure/azure-sdk-for-python/tree/main/sdk/cognitivelanguage/azure-ai-language-conversations/samples)
+ * [C#](https://github.com/Azure/azure-sdk-for-net/tree/Azure.AI.Language.Conversations_1.0.0/sdk/cognitivelanguage/Azure.AI.Language.Conversations)
+ * [Python](https://github.com/Azure/azure-sdk-for-python/tree/azure-ai-language-conversations_1.0.0/sdk/cognitivelanguage/azure-ai-language-conversations)
5. See the following reference documentation for more information:
- * [C#](/dotnet/api/azure.ai.language.conversations?view=azure-dotnet-preview&preserve-view=true)
- * [Python](/python/api/azure-ai-language-conversations/azure.ai.language.conversations?view=azure-python-preview&preserve-view=true)
+ * [C#](/dotnet/api/azure.ai.language.conversations)
+ * [Python](/python/api/azure-ai-language-conversations/azure.ai.language.conversations.aio)
cognitive-services Call Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/orchestration-workflow/how-to/call-api.md
Previously updated : 05/20/2022 Last updated : 06/28/2022 ms.devlang: csharp, python
You can also use the client libraries provided by the Azure SDK to send requests
:::image type="content" source="../../custom-text-classification/media/get-endpoint-azure.png" alt-text="Screenshot showing how to get the Azure endpoint." lightbox="../../custom-text-classification/media/get-endpoint-azure.png":::

3. Download and install the client library package for your language of choice:

   |Language |Package version |
   |||
- |.NET | [1.0.0-beta.3 ](https://www.nuget.org/packages/Azure.AI.Language.Conversations/1.0.0-beta.3) |
- |Python | [1.1.0b1](https://pypi.org/project/azure-ai-language-conversations/) |
+ |.NET | [1.0.0](https://www.nuget.org/packages/Azure.AI.Language.Conversations/1.0.0) |
+ |Python | [1.0.0](https://pypi.org/project/azure-ai-language-conversations/1.0.0) |
4. After you've installed the client library, use the following samples on GitHub to start calling the API.
- * [C#](https://github.com/Azure/azure-sdk-for-net/tree/main/sdk/cognitivelanguage/Azure.AI.Language.Conversations/samples)
- * [Python](https://github.com/Azure/azure-sdk-for-python/tree/main/sdk/cognitivelanguage/azure-ai-language-conversations/samples)
+ * [C#](https://github.com/Azure/azure-sdk-for-net/tree/Azure.AI.Language.Conversations_1.0.0/sdk/cognitivelanguage/Azure.AI.Language.Conversations)
+ * [Python](https://github.com/Azure/azure-sdk-for-python/tree/azure-ai-language-conversations_1.0.0/sdk/cognitivelanguage/azure-ai-language-conversations)
5. See the following reference documentation for more information:
- * [C#](/dotnet/api/azure.ai.language.conversations?view=azure-dotnet-preview&preserve-view=true)
- * [Python](/python/api/azure-ai-language-conversations/azure.ai.language.conversations?view=azure-python-preview&preserve-view=true)
+ * [C#](/dotnet/api/azure.ai.language.conversations)
+ * [Python](/python/api/azure-ai-language-conversations/azure.ai.language.conversations.aio)
cognitive-services Authoring https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/question-answering/how-to/authoring.md
Last updated 11/23/2021
The question answering Authoring API is used to automate common tasks like adding new question answer pairs, as well as creating, publishing, and maintaining projects/knowledge bases.

> [!NOTE]
-> Currently authoring functionality is only available via the REST API. This article provides examples of using the REST API with cURL. For full documentation of all parameters and functionality available consult the [REST API reference content](/rest/api/cognitiveservices/questionanswering/question-answering-projects).
+> Authoring functionality is available via the REST API and [Authoring SDK (preview)](https://docs.microsoft.com/dotnet/api/overview/azure/ai.language.questionanswering-readme-pre). This article provides examples of using the REST API with cURL. For full documentation of all parameters and functionality available, consult the [REST API reference content](/rest/api/cognitiveservices/questionanswering/question-answering-projects).
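
For example, a cURL request that lists the existing projects in a resource might look like the following sketch; the resource name, key, and API version are placeholders to adapt to your environment:

```console
# List authoring projects (angle-bracket values are placeholders).
curl -X GET \
  "https://<your-resource>.cognitiveservices.azure.com/language/query-knowledgebases/projects?api-version=2021-10-01" \
  -H "Ocp-Apim-Subscription-Key: <your-resource-key>"
```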
## Prerequisites
cognitive-services Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/whats-new.md
Previously updated : 06/22/2022 Last updated : 06/28/2022
Azure Cognitive Service for Language is updated on an ongoing basis. To stay up-to-date with recent developments, this article provides you with information about new releases and features.

## June 2022
-* Python client library for [conversation summarization](summarization/quickstart.md?tabs=conversation-summarization&pivots=programming-language-python).
+* v1.0 client libraries for [conversational language understanding](./conversational-language-understanding/how-to/call-api.md?tabs=azure-sdk#send-a-conversational-language-understanding-request) and [orchestration workflow](./orchestration-workflow/how-to/call-api.md?tabs=azure-sdk#send-an-orchestration-workflow-request) are Generally Available for the following languages:
+ * [C#](https://github.com/Azure/azure-sdk-for-net/tree/Azure.AI.Language.Conversations_1.0.0/sdk/cognitivelanguage/Azure.AI.Language.Conversations)
+ * [Python](https://github.com/Azure/azure-sdk-for-python/tree/azure-ai-language-conversations_1.0.0/sdk/cognitivelanguage/azure-ai-language-conversations)
+* v1.1.0b1 client library for [conversation summarization](summarization/quickstart.md?tabs=conversation-summarization&pivots=programming-language-python) is available as a preview for:
+ * [Python](https://github.com/Azure/azure-sdk-for-python/blob/azure-ai-language-conversations_1.1.0b1/sdk/cognitivelanguage/azure-ai-language-conversations/samples/README.md)
+ ## May 2022
container-apps Microservices Dapr Azure Resource Manager https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/microservices-dapr-azure-resource-manager.md
Previously updated : 01/31/2022 Last updated : 06/23/2022 zone_pivot_groups: container-apps
You learn how to:
> * Deploy two dapr-enabled container apps: one that produces orders and one that consumes orders and stores them
> * Verify the interaction between the two microservices.
-With Azure Container Apps, you get a fully managed version of the Dapr APIs when building microservices. When you use Dapr in Azure Container Apps, you can enable sidecars to run next to your microservices that provide a rich set of capabilities. Available Dapr APIs include [Service to Service calls](https://docs.dapr.io/developing-applications/building-blocks/service-invocation/), [Pub/Sub](https://docs.dapr.io/developing-applications/building-blocks/pubsub/), [Event Bindings](https://docs.dapr.io/developing-applications/building-blocks/bindings/), [State Stores](https://docs.dapr.io/developing-applications/building-blocks/state-management/), and [Actors](https://docs.dapr.io/developing-applications/building-blocks/actors/).
+With Azure Container Apps, you get a [fully managed version of the Dapr APIs](./dapr-overview.md) when building microservices. When you use Dapr in Azure Container Apps, you can enable sidecars to run next to your microservices that provide a rich set of capabilities. Available Dapr APIs include [Service to Service calls](https://docs.dapr.io/developing-applications/building-blocks/service-invocation/), [Pub/Sub](https://docs.dapr.io/developing-applications/building-blocks/pubsub/), [Event Bindings](https://docs.dapr.io/developing-applications/building-blocks/bindings/), [State Stores](https://docs.dapr.io/developing-applications/building-blocks/state-management/), and [Actors](https://docs.dapr.io/developing-applications/building-blocks/actors/).
In this tutorial, you deploy the same applications from the Dapr [Hello World](https://github.com/dapr/quickstarts/tree/master/tutorials/hello-world) quickstart.
container-apps Microservices Dapr https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/microservices-dapr.md
Previously updated : 03/22/2022 Last updated : 06/23/2022 ms.devlang: azurecli
You learn how to:
> * Deploy two apps that produce and consume messages and persist them in the state store
> * Verify the interaction between the two microservices.
-With Azure Container Apps, you get a fully managed version of the Dapr APIs when building microservices. When you use Dapr in Azure Container Apps, you can enable sidecars to run next to your microservices that provide a rich set of capabilities. Available Dapr APIs include [Service to Service calls](https://docs.dapr.io/developing-applications/building-blocks/service-invocation/), [Pub/Sub](https://docs.dapr.io/developing-applications/building-blocks/pubsub/), [Event Bindings](https://docs.dapr.io/developing-applications/building-blocks/bindings/), [State Stores](https://docs.dapr.io/developing-applications/building-blocks/state-management/), and [Actors](https://docs.dapr.io/developing-applications/building-blocks/actors/).
+With Azure Container Apps, you get a [fully managed version of the Dapr APIs](./dapr-overview.md) when building microservices. When you use Dapr in Azure Container Apps, you can enable sidecars to run next to your microservices that provide a rich set of capabilities. Available Dapr APIs include [Service to Service calls](https://docs.dapr.io/developing-applications/building-blocks/service-invocation/), [Pub/Sub](https://docs.dapr.io/developing-applications/building-blocks/pubsub/), [Event Bindings](https://docs.dapr.io/developing-applications/building-blocks/bindings/), [State Stores](https://docs.dapr.io/developing-applications/building-blocks/state-management/), and [Actors](https://docs.dapr.io/developing-applications/building-blocks/actors/).
In this tutorial, you deploy the same applications from the Dapr [Hello World](https://github.com/dapr/quickstarts/tree/master/tutorials/hello-kubernetes) quickstart.
container-apps Microservices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/microservices.md
Previously updated : 11/02/2021 Last updated : 06/23/2022
- Independent [scaling](scale-app.md), [versioning](application-lifecycle-management.md), and [upgrades](application-lifecycle-management.md)
- [Service discovery](connect-apps.md)
-- Native [Dapr integration](microservices-dapr.md)
+- Native [Dapr integration](./dapr-overview.md)
:::image type="content" source="media/microservices/azure-container-services-microservices.png" alt-text="Container apps are deployed as microservices.":::
container-apps Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/overview.md
Previously updated : 11/02/2021 Last updated : 06/23/2022
With Azure Container Apps, you can:
- [**Use internal ingress and service discovery**](connect-apps.md) for secure internal-only endpoints with built-in DNS-based service discovery.
-- [**Build microservices with Dapr**](microservices.md) and access its rich set of APIs.
+- [**Build microservices with Dapr**](microservices.md) and [access its rich set of APIs](./dapr-overview.md).
- [**Run containers from any registry**](containers.md), public or private, including Docker Hub and Azure Container Registry (ACR).
container-registry Container Registry Tasks Authentication Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-tasks-authentication-managed-identity.md
Last updated 01/14/2020-+ # Use an Azure-managed identity in ACR Tasks
az acr task credential add \
You can get the client ID of the identity by running the [az identity show][az-identity-show] command. The client ID is a GUID of the form `xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx`.
+The `--use-identity` parameter isn't optional if the registry has public network access disabled and relies only on certain trusted services to run ACR tasks. See the [example of ACR Tasks](allow-access-trusted-services.md#example-acr-tasks) as a trusted service.
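
For illustration, a credential entry that relies on the task's system-assigned identity might be added like this sketch; the task and registry names are placeholders:

```azurecli
# Placeholder names; [system] selects the task's system-assigned managed identity.
az acr task credential add \
  --name mytask \
  --registry myregistry \
  --login-server myregistry.azurecr.io \
  --use-identity [system]
```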
+
### 5. Run the task

After configuring a task with a managed identity, run the task. For example, to test one of the tasks created in this article, manually trigger it using the [az acr task run][az-acr-task-run] command. If you configured additional, automated task triggers, the task runs automatically when triggered.
container-registry Container Registry Troubleshoot Login https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-troubleshoot-login.md
Run the [az acr check-health](/cli/azure/acr#az-acr-check-health) command to get
See [Check the health of an Azure container registry](container-registry-check-health.md) for command examples. If errors are reported, review the [error reference](container-registry-health-error-reference.md) and the following sections for recommended solutions.
-If you're experiencing problems using the registry with Azure Kubernetes Service, run the [az aks check-acr](/cli/azure/aks#az-aks-check-acr) command to validate that the registry is accessible from the AKS cluster.
+Follow the instructions in the [AKS support doc](https://docs.microsoft.com/troubleshoot/azure/azure-kubernetes/cannot-pull-image-from-acr-to-aks-cluster) if you can't pull images from ACR to the AKS cluster.
> [!NOTE]
> Some authentication or authorization errors can also occur if there are firewall or network configurations that prevent registry access. See [Troubleshoot network issues with registry](container-registry-troubleshoot-access.md).
cosmos-db Analytical Store Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/analytical-store-introduction.md
To learn more, see [how to configure analytical TTL on a container](configure-sy
Data tiering refers to the separation of data between storage infrastructures optimized for different scenarios, thereby improving the overall performance and cost-effectiveness of the end-to-end data stack. With analytical store, Azure Cosmos DB now supports automatic tiering of data from the transactional store to analytical store with different data layouts. Because analytical store is optimized for storage cost compared to the transactional store, you can retain much longer horizons of operational data for historical analysis.
-After the analytical store is enabled, based on the data retention needs of the transactional workloads, you can configure the transactional store Time-to-Live (TTTL) property to have records automatically deleted from the transactional store after a certain time period. Similarly, the analytical store Time-to-Live (ATTL) allows you to manage the lifecycle of data retained in the analytical store independent from the transactional store. By enabling analytical store and configuring TTL properties, you can seamlessly tier and define the data retention period for the two stores.
+After the analytical store is enabled, based on the data retention needs of the transactional workloads, you can configure the `transactional TTL` property to have records automatically deleted from the transactional store after a certain time period. Similarly, the `analytical TTL` property allows you to manage the lifecycle of data retained in the analytical store, independent from the transactional store. By enabling analytical store and configuring transactional and analytical `TTL` properties, you can seamlessly tier and define the data retention period for the two stores.
+
+> [!NOTE]
+> When `analytical TTL` is greater than `transactional TTL`, your container will have data that only exists in analytical store. This data is read-only, and currently we don't support document-level `TTL` in analytical store. If your container data may need an update or a delete at some point in the future, don't set `analytical TTL` greater than `transactional TTL`. This capability is recommended for data that won't need updates or deletes in the future.
+
+> [!NOTE]
+> If your scenario doesn't demand physical deletes, you can adopt a logical delete/update approach. Insert into the transactional store another version of the same document that only exists in analytical store but needs a logical delete/update, for example with a flag indicating that it's a delete or an update of an expired document. Both versions of the same document will co-exist in analytical store, and your application should only consider the last one.
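
As a sketch, both TTLs can be set together from the Azure CLI; the account, database, and container names are placeholders, `2592000` seconds is 30 days, and `-1` means never expire:

```azurecli
# Placeholder names; keeps documents for 30 days in the transactional store
# while retaining them indefinitely in the analytical store.
az cosmosdb sql container update \
  --account-name <my-account> \
  --resource-group <my-resource-group> \
  --database-name <my-database> \
  --name <my-container> \
  --ttl 2592000 \
  --analytical-storage-ttl -1
```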
++
+## Resilience
+
+Analytical store relies on Azure Storage and offers the following protection against physical failure:
+
+ * Single region Azure Cosmos DB database accounts allocate analytical store in Locally Redundant Storage (LRS) Azure Storage accounts.
+ * If any geo-region replication is configured for the Azure Cosmos DB database account, analytical store is allocated in Zone-Redundant Storage (ZRS) Azure storage accounts.
## Backup
-Currently analytical store doesn't support backup and restore, and your backup policy can't be planned relying on that. For more information, check the limitations section of [this](synapse-link.md#limitations) document. While continuous backup mode isn't supported in database accounts with Synapse Link enabled, periodic backup mode is.
+Although analytical store has built-in protection against physical failures, backup can be necessary for accidental deletes or updates in transactional store. In those cases, you can restore a container and use the restored container to backfill the data in the original container, or fully rebuild analytical store if necessary.
-With periodic backup mode and existing containers, you can:
+> [!NOTE]
+> Currently analytical store isn't backed up and can't be restored, so your backup policy can't be planned relying on it.
+
+Synapse Link, and consequently analytical store, has different compatibility levels with Azure Cosmos DB backup modes:
+
+* Periodic backup mode is fully compatible with Synapse Link, and these two features can be used in the same database account without any restriction.
+* Continuous backup mode isn't fully supported yet:
+ * Currently continuous backup mode can't be used in database accounts with Synapse Link enabled.
+ * Currently database accounts with continuous backup mode enabled can enable Synapse Link through a support case.
+ * Currently, new database accounts can be created with continuous backup mode and Synapse Link enabled, using Azure CLI or PowerShell. Those two features must be turned on at the same time, in the exact same command that creates the database account, as sketched in the example after this list.
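
As a sketch of that last point, the following Azure CLI command creates an account with both features turned on in the same command; the account and resource group names are placeholders:

```azurecli
# Placeholder names; enables continuous backup and Synapse Link
# (analytical storage) together at account creation time.
az cosmosdb create \
  --name <my-account> \
  --resource-group <my-resource-group> \
  --backup-policy-type Continuous \
  --enable-analytical-storage true
```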
+
+### Backup policies
- ### Fully rebuild analytical store when TTTL >= ATTL
+There are two possible backup policies. To understand how to use them, two details about Cosmos DB backups are very important:
+
+ * The original container is restored without analytical store in both backup modes.
+ * Cosmos DB doesn't support overwriting containers from a restore.
+
+Now let's see how to use backups and restores from the analytical store perspective.
+
+ #### Restoring a container with TTTL >= ATTL
- The original container is restored without analytical store. But you can enable it and it will be rebuild with all data that existing in the container.
+ When `transactional TTL` is equal to or greater than `analytical TTL`, all data in analytical store still exists in transactional store. In case of a restore, you have two possible situations:
+ * To use the restored container as a replacement for the original container. To rebuild analytical store, just enable Synapse Link at account level and container level.
+ * To use the restored container as a data source to backfill or update the data in the original container. In this case, analytical store will automatically reflect the data operations.
- ### Partially rebuild analytical store when TTTL < ATTL
+ #### Restoring a container with TTTL < ATTL
-The data that was only in analytical store isn't restored, but it will be kept available for queries as long as you keep the original container. Analytical store is only deleted when you delete the container. Your analytical queries in Azure Synapse Analytics can read data from both original and restored container's analytical stores. Example:
+When `transactional TTL` is smaller than `analytical TTL`, some data only exists in analytical store and won't be in the restored container. Again, you have two possible situations:
+ * To use the restored container as a replacement for the original container. In this case, when you enable Synapse Link at container level, only the data that was in transactional store will be included in the new analytical store. But please note that the analytical store of the original container remains available for queries as long as the original container exists. You may want to change your application to query both.
+ * To use the restored container as a data source to backfill or update the data in the original container:
+ * Analytical store will automatically reflect the data operations for the data that is in transactional store.
+ * If you re-insert data that was previously removed from transactional store due to `transactional TTL`, this data will be duplicated in analytical store.
+
+Example:
* Container `OnlineOrders` has TTTL set to one month and ATTL set for one year.
* When you restore it to `OnlineOrdersNew` and turn on analytical store to rebuild it, there will be only one month of data in both transactional and analytical store.
cosmos-db Continuous Backup Restore Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/continuous-backup-restore-introduction.md
description: Azure Cosmos DB's point-in-time restore feature helps to recover da
Previously updated : 04/06/2022 Last updated : 06/28/2022 # Continuous backup with point-in-time restore in Azure Cosmos DB+ [!INCLUDE[appliesto-all-apis-except-cassandra](includes/appliesto-all-apis-except-cassandra.md)]
-Azure Cosmos DB's point-in-time restore feature helps in multiple scenarios such as the following:
+Azure Cosmos DB's point-in-time restore feature helps in multiple scenarios including:
-* To recover from an accidental write or delete operation within a container.
-* To restore a deleted account, database, or a container.
-* To restore into any region (where backups existed) at the restore point in time.
+* Recovering from an accidental write or delete operation within a container.
+* Restoring a deleted account, database, or a container.
+* Restoring into any region (where backups existed) at the restore point in time.
>
-> [!VIDEO https://aka.ms/docs.continuous-backup-restore]
+> [!VIDEO <https://aka.ms/docs.continuous-backup-restore>]
+
+Azure Cosmos DB performs data backup in the background without consuming any extra provisioned throughput (RUs) or affecting the performance and availability of your database. Continuous backups are taken in every region where the account exists. For example, an account can have a write region in West US and read regions in East US and East US 2. These replica regions can then be backed up to a remote Azure Storage account in each respective region. By default, each region stores the backup in Locally Redundant storage accounts. If the region has [Availability zones](/azure/architecture/reliability/architect) enabled then the backup is stored in Zone-Redundant storage accounts.
-Azure Cosmos DB performs data backup in the background without consuming any extra provisioned throughput (RUs) or affecting the performance and availability of your database. Continuous backups are taken in every region where the account exists. The following image shows how a container with write region in West US, read regions in East and East US 2 is backed up to a remote Azure Blob Storage account in the respective regions. By default, each region stores the backup in Locally Redundant storage accounts. If the region has [Availability zones](/azure/architecture/reliability/architect) enabled then the backup is stored in Zone-Redundant storage accounts.
+Diagram illustrating how a container with a write region in West US and read regions in East and East US 2 is backed up. The container is backed up to a remote Azure Blob Storage account in each respective write and read region.
+The time window available for restore (also known as retention period) is the lower value of the following two options: 30 days and 7 days.
-The available time window for restore (also known as retention period) is the lower value of the following two: *30 days back in past from now* or *up to the resource creation time*. The point in time for restore can be any timestamp within the retention period. In strong consistency mode, backup taken in the write region is more up to date when compared to the read regions. Read regions can lag behind due to network or other transient issues. While doing restore, you can [get latest restorable timestamp](get-latest-restore-timestamp.md) for a given resource in that region to ensure that the resource has taken backups up to the given timestamp and can restore in that region.
+The selected option depends on the chosen tier of continuous backup. The point in time for restore can be any timestamp within the retention period no further back than the point when the resource was created. In strong consistency mode, backups taken in the write region are more up to date when compared to the read regions. Read regions can lag behind due to network or other transient issues. While doing restore, you can [get the latest restorable timestamp](get-latest-restore-timestamp.md) for a given resource in a specific region. Getting the latest timestamp ensures that the resource has taken backups up to the given timestamp, and can restore in that region.
-Currently, you can restore the Azure Cosmos DB account for SQL API or MongoDB contents point in time to another account via the [Azure portal](restore-account-continuous-backup.md#restore-account-portal), the [Azure CLI](restore-account-continuous-backup.md#restore-account-cli) (az CLI), [Azure PowerShell](restore-account-continuous-backup.md#restore-account-powershell), or [Azure Resource Manager templates](restore-account-continuous-backup.md#restore-arm-template). Table API or Gremlin APIs are in preview and supported through [Azure CLI](restore-account-continuous-backup.md#restore-account-cli) (az CLI) and [Azure PowerShell](restore-account-continuous-backup.md#restore-account-powershell).
+Currently, you can restore the contents of an Azure Cosmos DB account (SQL API or API for MongoDB) at a specific point in time to another account. You can perform this restore operation via the [Azure portal](restore-account-continuous-backup.md#restore-account-portal), the [Azure CLI](restore-account-continuous-backup.md#restore-account-cli), [Azure PowerShell](restore-account-continuous-backup.md#restore-account-powershell), or [Azure Resource Manager templates](restore-account-continuous-backup.md#restore-arm-template). Table API and Gremlin API are in preview and supported through the [Azure CLI](restore-account-continuous-backup.md#restore-account-cli) and [Azure PowerShell](restore-account-continuous-backup.md#restore-account-powershell).
## Backup storage redundancy
By default, Azure Cosmos DB stores continuous mode backup data in locally redund
## What is restored?
-In a steady state, all mutations performed on the source account (which includes databases, containers, and items) are backed up asynchronously within 100 seconds. If the backup media (that is Azure storage) is down or unavailable, the mutations are persisted locally until the media is available back and then they are flushed out to prevent any loss in fidelity of operations that can be restored.
+In a steady state, all mutations performed on the source account (which includes databases, containers, and items) are backed up asynchronously within 100 seconds. If the Azure Storage backup media is down or unavailable, the mutations are persisted locally until the media is available. Then the mutations are flushed out to prevent any loss in fidelity of operations that can be restored.
You can choose to restore any combination of provisioned throughput containers, shared throughput database, or the entire account. The restore action restores all data and its index properties into a new account. The restore process ensures that all the data restored in an account, database, or a container is guaranteed to be consistent up to the restore time specified. The duration of restore will depend on the amount of data that needs to be restored.

> [!NOTE]
> With the continuous backup mode, the backups are taken in every region where your Azure Cosmos DB account is available. Backups taken for each region account are Locally redundant by default and Zone redundant if your account has [availability zone](/azure/architecture/reliability/architect) feature enabled for that region. The restore action always restores data into a new account.
-## What is not restored?
+## What isn't restored?
The following configurations aren't restored after the point-in-time recovery:
You can add these configurations to the restored account after the restore is co
## Restorable timestamp for live accounts
-To restore Azure Cosmos DB live accounts that are not deleted, it is a best practice to always identify the [latest restorable timestamp](get-latest-restore-timestamp.md) for the container. You can then use this timestamp to restore the account to its latest version.
+To restore Azure Cosmos DB live accounts that aren't deleted, it's a best practice to always identify the [latest restorable timestamp](get-latest-restore-timestamp.md) for the container. You can then use this timestamp to restore the account to its latest version.
## Restore scenarios The following are some of the key scenarios that are addressed by the point-in-time-restore feature. Scenarios [1] through [3] demonstrate how to trigger a restore if the restore timestamp is known beforehand.
-However, there could be scenarios where you don't know the exact time of accidental deletion or corruption. Scenarios [4] and [5] demonstrate how to _discover_ the restore timestamp using the new event feed APIs on the restorable database or containers.
+However, there could be scenarios where you don't know the exact time of accidental deletion or corruption. Scenarios [4] and [5] demonstrate how to *discover* the restore timestamp using the new event feed APIs on the restorable database or containers.
:::image type="content" source="./media/continuous-backup-restore-introduction/restorable-account-scenario.png" alt-text="Life-cycle events with timestamps for a restorable account." lightbox="./media/continuous-backup-restore-introduction/restorable-account-scenario.png" border="false":::
Azure Cosmos DB allows you to isolate and restrict the restore permissions for c
## <a id="continuous-backup-pricing"></a>Pricing
-Azure Cosmos DB accounts that have continuous backup enabled will incur an additional monthly charge to *store the backup* and to *restore your data*. The restore cost is added every time the restore operation is initiated. If you configure an account with continuous backup but don't restore the data, only backup storage cost is included in your bill.
+Azure Cosmos DB accounts that have continuous 30-day backup enabled will incur an extra monthly charge to *store the backup*. Both the 30-day and 7-day tiers of continuous backup incur charges to *restore your data*. The restore cost is added every time the restore operation is initiated. If you configure an account with continuous backup but don't restore the data, only backup storage cost is included in your bill.
-The following example is based on the price for an Azure Cosmos account deployed in West US. The pricing and calculation can vary depending on the region you are using, see the [Azure Cosmos DB pricing page](https://azure.microsoft.com/pricing/details/cosmos-db/) for latest pricing information.
+The following example is based on the price for an Azure Cosmos account deployed in West US. The pricing and calculation can vary depending on the region you're using, see the [Azure Cosmos DB pricing page](https://azure.microsoft.com/pricing/details/cosmos-db/) for latest pricing information.
-* All accounts enabled with continuous backup policy incur an additional monthly charge for backup storage that is calculated as follows:
+* All accounts enabled with continuous backup policy with 30-day incur a monthly charge for backup storage that is calculated as follows:
- $0.20/GB * Data size in GB in account * Number of regions
+ $0.20/GB \* Data size in GB in account \* Number of regions
-* Every restore API invocation incurs a one time charge. The charge is a function of the amount of data restore and it is calculated as follows:
+* Every restore API invocation incurs a one time charge. The charge is a function of the amount of data restore and it's calculated as follows:
- $0.15/GB * Data size in GB.
+ $0.15/GB \* Data size in GB.
For example, if you have 1 TB of data in two regions then:
-* Backup storage cost is calculated as (1000 * 0.20 * 2) = $400 per month
+* Backup storage cost is calculated as (1000 \* 0.20 \* 2) = $400 per month
-* Restore cost is calculated as (1000 * 0.15) = $150 per restore
+* Restore cost is calculated as (1000 \* 0.15) = $150 per restore
> [!TIP]
-> For more information about measuring the current data usage of your Azure Cosmos DB account, see [Explore Azure Monitor Cosmos DB insights](../azure-monitor/insights/cosmosdb-insights-overview.md#view-utilization-and-performance-metrics-for-azure-cosmos-db).
+> For more information about measuring the current data usage of your Azure Cosmos DB account, see [Explore Azure Monitor Cosmos DB insights](../azure-monitor/insights/cosmosdb-insights-overview.md#view-utilization-and-performance-metrics-for-azure-cosmos-db). The continuous 7-day tier does not incur charges for backup of the data.
+
+## Continuous 30-day tier vs Continuous 7-day tier
+
+* The retention period is 30 days for one tier and 7 days for the other.
+* The 30-day retention tier is charged for backup storage; the 7-day retention tier isn't.
+* Restore is always charged in either tier.
## Customer-managed keys

See [How do customer-managed keys affect continuous backups?](./how-to-setup-cmk.md#how-do-customer-managed-keys-affect-continuous-backups) to learn:

-- How to configure your Azure Cosmos DB account when using customer-managed keys in conjunction with continuous backups.
-- How do customer-managed keys affect restores?
+* How to configure your Azure Cosmos DB account when using customer-managed keys with continuous backups.
+* How do customer-managed keys affect restores?
## Current limitations Currently the point in time restore functionality has the following limitations:
-* Azure Cosmos DB APIs for SQL and MongoDB are supported for continuous backup. Cassandra API is not supported at present
+* Azure Cosmos DB APIs for SQL and MongoDB are supported for continuous backup. Cassandra API isn't currently supported.
* Table API and Gremlin API are in preview and supported via PowerShell and Azure CLI.
-* Multi-regions write accounts are not supported.
+* Multi-region write accounts aren't supported.
-* Azure Synapse Link and periodic backup mode can coexist in the same database account. However, analytical store data isn't included in backups and restores. When Synapse Link is enabled, Azure Cosmos DB will continue to automatically take backups of your data in the transactional store at a scheduled backup interval.
+* Azure Synapse Link and periodic backup mode can coexist in the same database account. However, analytical store data isn't included in backups and restores. When Azure Synapse Link is enabled, Azure Cosmos DB will continue to automatically take backups of your data in the transactional store at a scheduled backup interval.
-* Azure Synapse Link and continuous backup mode can't coexist in the same database account. Currently database accounts with Synapse Link enabled can't use continuous backup mode and vice-versa.
+* Azure Synapse Link and continuous backup mode can't coexist in the same database account. Currently database accounts with Azure Synapse Link enabled can't use continuous backup mode and vice-versa.
-* The restored account is created in the same region where your source account exists. You can't restore an account into a region where the source account did not exist.
+* The restored account is created in the same region where your source account exists. You can't restore an account into a region where the source account didn't exist.
-* The restore window is only 30 days and it cannot be changed.
+* The restore window is only 30 days for the continuous 30-day tier and can't be changed. Similarly, it's only 7 days for the continuous 7-day tier, and that also can't be changed.
-* The backups are not automatically geo-disaster resistant. You have to explicitly add another region to have resiliency for the account and the backup.
+* The backups aren't automatically geo-disaster resistant. You have to explicitly add another region to have resiliency for the account and the backup.
-* While a restore is in progress, don't modify or delete the Identity and Access Management (IAM) policies that grant the permissions for the account or change any VNET, firewall configuration.
+* While a restore is in progress, don't modify or delete the Identity and Access Management (IAM) policies that grant the permissions for the account, and don't change any virtual network or firewall configuration.
-* Azure Cosmos DB API for SQL or MongoDB accounts that create unique index after the container is created are not supported for continuous backup. Only containers that create unique index as a part of the initial container creation are supported. For MongoDB accounts, you create unique index using [extension commands](mongodb/custom-commands.md).
+* Azure Cosmos DB API for SQL or MongoDB accounts that create a unique index after the container is created aren't supported for continuous backup. Only containers that create a unique index as part of the initial container creation are supported. For MongoDB accounts, you create a unique index by using [extension commands](mongodb/custom-commands.md).
-* The point-in-time restore functionality always restores to a new Azure Cosmos account. Restoring to an existing account is currently not supported. If you are interested in providing feedback about in-place restore, contact the Azure Cosmos DB team via your account representative.
+* The point-in-time restore functionality always restores to a new Azure Cosmos account. Restoring to an existing account is currently not supported. If you're interested in providing feedback about in-place restore, contact the Azure Cosmos DB team via your account representative.
-* After restoring, it is possible that for certain collections the consistent index may be rebuilding. You can check the status of the rebuild operation via the [IndexTransformationProgress](how-to-manage-indexing-policy.md) property.
+* After restoring, it's possible that for certain collections the consistent index may be rebuilding. You can check the status of the rebuild operation via the [IndexTransformationProgress](how-to-manage-indexing-policy.md) property.
-* The restore process restores all the properties of a container including its TTL configuration. As a result, it is possible that the data restored is deleted immediately if you configured that way. In order to prevent this situation, the restore timestamp must be before the TTL properties were added into the container.
+* The restore process restores all the properties of a container including its TTL configuration. As a result, it's possible that the data restored is deleted immediately if you configured that way. In order to prevent this situation, the restore timestamp must be before the TTL properties were added into the container.
-* Unique indexes in API for MongoDB can't be added or updated when you create a continuous backup mode account or migrate an account from periodic to continuous mode.
+* Unique indexes in API for MongoDB can't be added or updated when you create a continuous backup mode account. They also can't be modified when you migrate an account from periodic to continuous mode.
-* Continuous mode restore may not restore throughput setting valid as of restore point.
+* Continuous mode restore may not restore the throughput setting that was in effect at the restore point.
## Next steps
-* Provision continuous backup using [Azure portal](provision-account-continuous-backup.md#provision-portal), [PowerShell](provision-account-continuous-backup.md#provision-powershell), [CLI](provision-account-continuous-backup.md#provision-cli), or [Azure Resource Manager](provision-account-continuous-backup.md#provision-arm-template).
+* Enable continuous backup using [Azure portal](provision-account-continuous-backup.md#provision-portal), [PowerShell](provision-account-continuous-backup.md#provision-powershell), [CLI](provision-account-continuous-backup.md#provision-cli), or [Azure Resource Manager](provision-account-continuous-backup.md#provision-arm-template).
* [Get the latest restorable timestamp](get-latest-restore-timestamp.md) for SQL and MongoDB accounts. * Restore continuous backup account using [Azure portal](restore-account-continuous-backup.md#restore-account-portal), [PowerShell](restore-account-continuous-backup.md#restore-account-powershell), [CLI](restore-account-continuous-backup.md#restore-account-cli), or [Azure Resource Manager](restore-account-continuous-backup.md#restore-arm-template). * [Migrate to an account from periodic backup to continuous backup](migrate-continuous-backup.md). * [Manage permissions](continuous-backup-restore-permissions.md) required to restore data with continuous backup mode.
-* [Resource model of continuous backup mode](continuous-backup-restore-resource-model.md)
+* [Resource model of continuous backup mode](continuous-backup-restore-resource-model.md)
cosmos-db Continuous Backup Restore Resource Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/continuous-backup-restore-resource-model.md
Previously updated : 03/02/2022 Last updated : 06/28/2022 - # Resource model for the Azure Cosmos DB point-in-time restore feature+ [!INCLUDE[appliesto-all-apis-except-cassandra](includes/appliesto-all-apis-except-cassandra.md)] This article explains the resource model for the Azure Cosmos DB point-in-time restore feature. It explains the parameters that support the continuous backup and resources that can be restored. This feature is supported in Azure Cosmos DB API for SQL and the Cosmos DB API for MongoDB. Currently, this feature is in preview for Azure Cosmos DB Gremlin API and Table API accounts.
The database account's resource model is updated with a few extra properties to
### BackupPolicy
-A new property in the account level backup policy named `Type` under `backuppolicy` parameter enables continuous backup and point-in-time restore functionalities. This mode is called **continuous backup**. You can set this mode when creating the account or while [migrating an account from periodic to continuous mode](migrate-continuous-backup.md). After continuous mode is enabled, all the containers and databases created within this account will have continuous backup and point-in-time restore functionalities enabled by default.
+A new property in the account level backup policy named ``Type`` under the ``backuppolicy`` parameter enables continuous backup and point-in-time restore. This mode is referred to as **continuous backup**. You can set this mode when creating the account or while [migrating an account from periodic to continuous mode](migrate-continuous-backup.md). After continuous mode is enabled, all the containers and databases created within this account will have point-in-time restore and continuous backup enabled by default. The continuous backup tier can be set to ``Continuous7Days`` or ``Continuous30Days``. By default, if no tier is provided, ``Continuous30Days`` is applied on the account.
> [!NOTE]
-> Currently the point-in-time restore feature is available for Azure Cosmos DB API for MongoDB and SQL accounts. After you create an account with continuous mode you can't switch it to a periodic mode.
+> Currently the point-in-time restore feature is available for Azure Cosmos DB API for MongoDB and SQL API accounts. It's also available for Table API and Gremlin API in preview. After you create an account with continuous mode, you can't switch it to periodic mode. The ``Continuous7Days`` tier is in preview.
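+
+As an illustration, the following Azure CLI sketch provisions an account with the backup policy type and tier set explicitly. The account name, resource group, and region are placeholders, and the ``--continuous-tier`` parameter assumes the ``cosmosdb-preview`` extension is installed:
+
+```azurecli-interactive
+# Create an account in continuous backup mode with the 7-day tier (preview)
+az cosmosdb create \
+    --name "my-continuous-account" \
+    --resource-group "my-rg" \
+    --locations regionName="West US" failoverPriority=0 \
+    --backup-policy-type "Continuous" \
+    --continuous-tier "Continuous7Days"
+```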
### CreateMode
This property indicates how the account was created. The possible values are *De
The `RestoreParameters` resource contains the restore operation details, including the account ID, the time to restore, and the resources that need to be restored.
-|Property Name |Description |
-|||
-|restoreMode | The restore mode should be *PointInTime* |
-|restoreSource | The instanceId of the source account from which the restore will be initiated. |
-|restoreTimestampInUtc | Point in time in UTC to restore the account. |
-|databasesToRestore | List of `DatabaseRestoreResource` objects to specify which databases and containers should be restored. Each resource represents a single database and all the collections under that database, see the [restorable SQL resources](#restorable-sql-resources) section for more details. If this value is empty, then the entire account is restored. |
-|gremlinDatabasesToRestore | List of `GremlinDatabaseRestoreResource` objects to specify which databases and graphs should be restored. Each resource represents a single database and all the graphs under that database. See the [restorable Gremlin resources](#restorable-graph-resources) section for more details. If this value is empty, then the entire account is restored. |
-|tablesToRestore | List of `TableRestoreResource` objects to specify which tables should be restored. Each resource represents a table under that database, see the [restorable Table resources](#restorable-table-resources) section for more details. If this value is empty, then the entire account is restored. |
+| Property Name | Description |
+| | |
+| ``restoreMode`` | The restore mode should be ``PointInTime``. |
+| ``restoreSource`` | The instanceId of the source account from which the restore will be initiated. |
+| ``restoreTimestampInUtc`` | Point in time in UTC to restore the account. |
+| ``databasesToRestore`` | List of `DatabaseRestoreResource` objects to specify which databases and containers should be restored. Each resource represents a single database and all the collections under that database. For more information, see [restorable SQL resources](#restorable-sql-resources). If this value is empty, then the entire account is restored. |
+| ``gremlinDatabasesToRestore`` | List of `GremlinDatabaseRestoreResource` objects to specify which databases and graphs should be restored. Each resource represents a single database and all the graphs under that database. For more information, see [restorable Gremlin resources](#restorable-graph-resources). If this value is empty, then the entire account is restored. |
+| ``tablesToRestore`` | List of `TableRestoreResource` objects to specify which tables should be restored. Each resource represents a table under that database. For more information, see [restorable Table resources](#restorable-table-resources). If this value is empty, then the entire account is restored. |
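+
+For illustration, the following Azure CLI sketch shows how these parameters map onto a restore command. All names, the region, and the timestamp are placeholders; ``--databases-to-restore`` corresponds to the ``databasesToRestore`` property and can be omitted to restore the entire account:
+
+```azurecli-interactive
+# Restore a source account into a new target account at a point in time
+az cosmosdb restore \
+    --resource-group "my-rg" \
+    --account-name "my-source-account" \
+    --target-database-account-name "my-restored-account" \
+    --restore-timestamp "2022-06-28T12:00:00+0000" \
+    --location "West US" \
+    --databases-to-restore name="db1" collections="container1" "container2"
+```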
### Sample resource
The following JSON is a sample database account resource with continuous backup
}, "backupPolicy": { "type": "Continuous"
+ ....
} } } ``` - ## Restorable resources
-A set of new resources and APIs is available to help you discover critical information about resources, which can be restored, locations where they can be restored from, and the timestamps when key operations were performed on these resources.
+A set of new resources and APIs is available to help you discover critical information about resources, which includes:
+
+* Which resources can be restored
+* Locations where the resources can be restored from
+* Timestamps when key operations were performed on these resources.
> [!NOTE] > All the APIs used to enumerate these resources require the following permissions:
+>
> * `Microsoft.DocumentDB/locations/restorableDatabaseAccounts/*/read` > * `Microsoft.DocumentDB/locations/restorableDatabaseAccounts/read`
+>
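+
+As an example, a custom role that grants only these permissions might be defined as in the following Azure CLI sketch. The role name, description, and subscription scope are placeholders:
+
+```azurecli-interactive
+# Create an example custom role that can enumerate restorable database accounts
+az role definition create --role-definition '{
+    "Name": "Restorable Accounts Reader (example)",
+    "Description": "Can read restorable database account metadata.",
+    "Actions": [
+        "Microsoft.DocumentDB/locations/restorableDatabaseAccounts/*/read",
+        "Microsoft.DocumentDB/locations/restorableDatabaseAccounts/read"
+    ],
+    "AssignableScopes": ["/subscriptions/<subscription-id>"]
+}'
+```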
### Restorable database account This resource contains a database account instance that can be restored. The database account can either be a deleted or a live account. It contains information that allows you to find the source database account that you want to restore.
-|Property Name |Description |
-|||
-| ID | The unique identifier of the resource. |
-| accountName | The global database account name. |
-| creationTime | The time in UTC when the account was created or migrated. |
-| deletionTime | The time in UTC when the account was deleted. This value is empty if the account is live. |
-| apiType | The API type of the Azure Cosmos DB account. |
-| restorableLocations | The list of locations where the account existed. |
-| restorableLocations: locationName | The region name of the regional account. |
-| restorableLocations: regionalDatabaseAccountInstanceId | The GUID of the regional account. |
-| restorableLocations: creationTime | The time in UTC when the regional account was created r migrated.|
-| restorableLocations: deletionTime | The time in UTC when the regional account was deleted. This value is empty if the regional account is live.|
+| Property Name | Description |
+| | |
+| ``ID`` | The unique identifier of the resource. |
+| ``accountName`` | The global database account name. |
+| ``creationTime`` | The time in UTC when the account was created or migrated. |
+| ``deletionTime`` | The time in UTC when the account was deleted. This value is empty if the account is live. |
+| ``apiType`` | The API type of the Azure Cosmos DB account. |
+| ``restorableLocations`` | The list of locations where the account existed. |
+| ``restorableLocations: locationName`` | The region name of the regional account. |
+| ``restorableLocations: regionalDatabaseAccountInstanceId`` | The GUID of the regional account. |
+| ``restorableLocations: creationTime`` | The time in UTC when the regional account was created or migrated. |
+| ``restorableLocations: deletionTime`` | The time in UTC when the regional account was deleted. This value is empty if the regional account is live.|
+| ``OldestRestorableTimeStamp`` | The earliest time in UTC to which a restore can be performed. For the 30-day tier, this time can be up to 30 days in the past; for the 7-day tier, up to 7 days in the past. |
To get a list of all restorable accounts, see [Restorable Database Accounts - list](/rest/api/cosmos-db-resource-provider/2021-04-01-preview/restorable-database-accounts/list) or [Restorable Database Accounts- list by location](/rest/api/cosmos-db-resource-provider/2021-04-01-preview/restorable-database-accounts/list-by-location) articles.
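+
+The same list is also available from the Azure CLI, as in the following sketch; the location is a placeholder:
+
+```azurecli-interactive
+# List all restorable database accounts in a region
+az cosmosdb restorable-database-account list --location "West US"
+```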
To get a list of all restorable accounts, see [Restorable Database Accounts - li
Each resource contains information of a mutation event such as creation and deletion that occurred on the SQL Database. This information can help in scenarios where the database was accidentally deleted and if you need to find out when that event happened.
-|Property Name |Description |
-|||
-| eventTimestamp | The time in UTC when the database is created or deleted. |
-| ownerId | The name of the SQL database. |
-| ownerResourceId | The resource ID of the SQL database|
-| operationType | The operation type of this database event. Here are the possible values:<br/><ul><li>Create: database creation event</li><li>Delete: database deletion event</li><li>Replace: database modification event</li><li>SystemOperation: database modification event triggered by the system. This event isn't initiated by the user</li></ul> |
-| database |The properties of the SQL database at the time of the event|
+| Property Name | Description |
+| | |
+| ``eventTimestamp`` | The time in UTC when the database is created or deleted. |
+| ``ownerId`` | The name of the SQL database. |
+| ``ownerResourceId`` | The resource ID of the SQL database. |
+| ``operationType`` | The operation type of this database event. |
+| ``database`` | The properties of the SQL database at the time of the event. |
+
+> [!NOTE]
+> Possible values for ``operationType`` include:
+>
+> * ``Create``: database creation event
+> * ``Delete``: database deletion event
+> * ``Replace``: database modification event
+> * ``SystemOperation``: database modification event triggered by the system. This event isn't initiated by the user
+>
To get a list of all database mutations, see [Restorable Sql Databases - List](/rest/api/cosmos-db-resource-provider/2021-04-01-preview/restorable-sql-databases/list) article.
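+
+A corresponding Azure CLI sketch, with placeholder values for the location and the instance ID of the restorable account:
+
+```azurecli-interactive
+# List mutation events for SQL databases in a restorable account
+az cosmosdb sql restorable-database list \
+    --location "West US" \
+    --instance-id "<instance-id-of-restorable-account>"
+```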
To get a list of all database mutations, see [Restorable Sql Databases - List](/
Each resource contains information of a mutation event such as creation and deletion that occurred on the SQL container. This information can help in scenarios where the container was modified or deleted, and if you need to find out when that event happened.
-|Property Name |Description |
-|||
-| eventTimestamp | The time in UTC when this container event happened.|
-| ownerId| The name of the SQL container.|
-| ownerResourceId | The resource ID of the SQL container.|
-| operationType | The operation type of this container event. Here are the possible values: <br/><ul><li>Create: container creation event</li><li>Delete: container deletion event</li><li>Replace: container modification event</li><li>SystemOperation: container modification event triggered by the system. This event isn't initiated by the user</li></ul> |
-| container | The properties of the SQL container at the time of the event.|
+| Property Name | Description |
+| | |
+| ``eventTimestamp`` | The time in UTC when this container event happened. |
+| ``ownerId`` | The name of the SQL container. |
+| ``ownerResourceId`` | The resource ID of the SQL container.|
+| ``operationType`` | The operation type of this container event. |
+| ``container`` | The properties of the SQL container at the time of the event. |
+
+> [!NOTE]
+> Possible values for ``operationType`` include:
+>
+> * ``Create``: container creation event
+> * ``Delete``: container deletion event
+> * ``Replace``: container modification event
+> * ``SystemOperation``: container modification event triggered by the system. This event isn't initiated by the user
+>
To get a list of all container mutations under the same database, see [Restorable Sql Containers - List](/rest/api/cosmos-db-resource-provider/2021-04-01-preview/restorable-sql-containers/list) article.
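+
+A corresponding Azure CLI sketch, with placeholder values; ``--database-rid`` identifies the parent database:
+
+```azurecli-interactive
+# List mutation events for SQL containers under a specific database
+az cosmosdb sql restorable-container list \
+    --location "West US" \
+    --instance-id "<instance-id-of-restorable-account>" \
+    --database-rid "<database-resource-id>"
+```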
Each resource represents a single database and all the containers under that dat
|Property Name |Description | |||
-| databaseName | The name of the SQL database.
-| collectionNames | The list of SQL containers under this database.|
+| ``databaseName`` | The name of the SQL database.
+| ``collectionNames`` | The list of SQL containers under this database.|
To get a list of the SQL database and container combinations that exist on the account at the given timestamp and location, see the [Restorable Sql Resources - List](/rest/api/cosmos-db-resource-provider/2021-04-01-preview/restorable-sql-resources/list) article.
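+
+A corresponding Azure CLI sketch, with placeholder values for the instance ID, locations, and timestamp:
+
+```azurecli-interactive
+# List the SQL databases and containers that existed at a given time and location
+az cosmosdb sql restorable-resource list \
+    --location "West US" \
+    --instance-id "<instance-id-of-restorable-account>" \
+    --restore-location "West US" \
+    --restore-timestamp "2022-06-28T12:00:00+0000"
+```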
To get a list of SQL database and container combo that exist on the account at t
Each resource contains information of a mutation event such as creation and deletion that occurred on the MongoDB Database. This information can help in the scenario where the database was accidentally deleted and user needs to find out when that event happened.
-|Property Name |Description |
-|||
-|eventTimestamp| The time in UTC when this database event happened.|
-| ownerId| The name of the MongoDB database. |
-| ownerResourceId | The resource ID of the MongoDB database. |
-| operationType | The operation type of this database event. Here are the possible values:<br/><ul><li> Create: database creation event</li><li> Delete: database deletion event</li><li> Replace: database modification event</li><li> SystemOperation: database modification event triggered by the system. This event isn't initiated by the user </li></ul> |
+| Property Name | Description |
+| | |
+| ``eventTimestamp`` | The time in UTC when this database event happened. |
+| ``ownerId`` | The name of the MongoDB database. |
+| ``ownerResourceId`` | The resource ID of the MongoDB database. |
+| ``operationType`` | The operation type of this database event. |
+
+> [!NOTE]
+> Possible values for ``operationType`` include:
+>
+> * ``Create``: database creation event
+> * ``Delete``: database deletion event
+> * ``Replace``: database modification event
+> * ``SystemOperation``: database modification event triggered by the system. This event isn't initiated by the user
+>
To get a list of all database mutations, see the [Restorable Mongodb Databases - List](/rest/api/cosmos-db-resource-provider/2021-04-01-preview/restorable-mongodb-databases/list) article.
To get a list of all database mutation, see [Restorable Mongodb Databases - List
Each resource contains information of a mutation event such as creation and deletion that occurred on the MongoDB Collection. This information can help in scenarios where the collection was modified or deleted, and user needs to find out when that event happened.
-|Property Name |Description |
-|||
-| eventTimestamp |The time in UTC when this collection event happened. |
-| ownerId| The name of the MongoDB collection. |
-| ownerResourceId | The resource ID of the MongoDB collection. |
-| operationType |The operation type of this collection event. Here are the possible values:<br/><ul><li>Create: collection creation event</li><li>Delete: collection deletion event</li><li>Replace: collection modification event</li><li>SystemOperation: collection modification event triggered by the system. This event isn't initiated by the user</li></ul> |
+| Property Name | Description |
+| | |
+| ``eventTimestamp`` | The time in UTC when this collection event happened. |
+| ``ownerId`` | The name of the MongoDB collection. |
+| ``ownerResourceId`` | The resource ID of the MongoDB collection. |
+| ``operationType`` | The operation type of this collection event. |
+
+> [!NOTE]
+> Possible values for ``operationType`` include:
+>
+> * ``Create``: collection creation event
+> * ``Delete``: collection deletion event
+> * ``Replace``: collection modification event
+> * ``SystemOperation``: collection modification event triggered by the system. This event isn't initiated by the user
+>
-To get a list of all container mutations under the same database see [Restorable Mongodb Collections - List](/rest/api/cosmos-db-resource-provider/2021-04-01-preview/restorable-mongodb-collections/list) article.
+To get a list of all container mutations under the same database, see [restorable MongoDB resources - list](/rest/api/cosmos-db-resource-provider/2021-04-01-preview/restorable-mongodb-collections/list).
### Restorable MongoDB resources Each resource represents a single database and all the collections under that database.
-|Property Name |Description |
-|||
-| databaseName |The name of the MongoDB database. |
-| collectionNames | The list of MongoDB collections under this database. |
+| Property Name | Description |
+| | |
+| ``databaseName`` |The name of the MongoDB database. |
+| ``collectionNames`` | The list of MongoDB collections under this database. |
-To get a list of all MongoDB database and collection combinations that exist on the account at the given timestamp and location, see [Restorable Mongodb Resources - List](/rest/api/cosmos-db-resource-provider/2021-04-01-preview/restorable-mongodb-resources/list) article.
+To get a list of all MongoDB database and collection combinations that exist on the account at the given timestamp and location, see [restorable MongoDB resources - list](/rest/api/cosmos-db-resource-provider/2021-04-01-preview/restorable-mongodb-resources/list).
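+
+A corresponding Azure CLI sketch, with placeholder values:
+
+```azurecli-interactive
+# List the MongoDB databases and collections that existed at a given time and location
+az cosmosdb mongodb restorable-resource list \
+    --location "West US" \
+    --instance-id "<instance-id-of-restorable-account>" \
+    --restore-location "West US" \
+    --restore-timestamp "2022-06-28T12:00:00+0000"
+```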
### Restorable Graph resources
-Each resource represents a single database and all the graphs under that database.
+Each resource represents a single database and all the graphs under that database.
-|Property Name |Description |
-|||
-| gremlinDatabaseName | The name of the Graph database. |
-| graphNames | The list of Graphs under this database. |
+| Property Name | Description |
+| | |
+| ``gremlinDatabaseName`` | The name of the Graph database. |
+| ``graphNames`` | The list of Graphs under this database. |
To get a list of all Gremlin database and graph combinations that exist on the account at the given timestamp and location, see [Restorable Graph Resources - List](/rest/api/cosmos-db-resource-provider/2021-11-15-preview/restorable-gremlin-resources/list) article.
-### Restorable Graph database
+### Restorable Graph database
-Each resource contains information about a mutation event, such as a creation and deletion, that occurred on the Graph database. This information can help in the scenario where the database was accidentally deleted and user needs to find out when that event happened.
+Each resource contains information about a mutation event, such as a creation or deletion, that occurred on the Graph database. This information can help in the scenario where the database was accidentally deleted and a user needs to find out when that event happened.
-|Property Name |Description |
-|||
-|eventTimestamp| The time in UTC when this database event happened.|
-| ownerId| The name of the Graph database. |
-| ownerResourceId | The resource ID of the Graph database. |
-| operationType | The operation type of this database event. Here are the possible values:<br/><ul><li> Create: database creation event</li><li> Delete: database deletion event</li><li> Replace: database modification event</li><li> SystemOperation: database modification event triggered by the system. This event isn't initiated by the user. </li></ul> |
+| Property Name | Description |
+| | |
+| ``eventTimestamp`` | The time in UTC when this database event happened. |
+| ``ownerId`` | The name of the Graph database. |
+| ``ownerResourceId`` | The resource ID of the Graph database. |
+| ``operationType`` | The operation type of this database event. |
-To get an event feed of all mutations on the Gremlin database for the account, see theΓÇ»[Restorable Graph Databases - List]( /rest/api/cosmos-db-resource-provider/2021-11-15-preview/restorable-gremlin-databases/list) article.
+> [!NOTE]
+> Possible values for ``operationType`` include:
+>
+> * ``Create``: database creation event
+> * ``Delete``: database deletion event
+> * ``Replace``: database modification event
+> * ``SystemOperation``: database modification event triggered by the system. This event isn't initiated by the user.
+>
-### Restorable Graphs
+To get an event feed of all mutations on the Gremlin database, see [restorable graph databases - list](/rest/api/cosmos-db-resource-provider/2021-11-15-preview/restorable-gremlin-databases/list).
-Each resource contains information of a mutation event such as creation and deletion that occurred on the Graph. This information can help in scenarios where the graph was modified or deleted, and if you need to find out when that event happened.
+### Restorable Graphs
-|Property Name |Description |
-|||
-| eventTimestamp |The time in UTC when this collection event happened. |
-| ownerId| The name of the Graph collection. |
-| ownerResourceId | The resource ID of the Graph collection. |
-| operationType |The operation type of this collection event. Here are the possible values:<br/><ul><li>Create: Graph creation event</li><li>Delete: Graph deletion event</li><li>Replace: Graph modification event</li><li>SystemOperation: collection modification event triggered by the system. This event isn't initiated by the user.</li></ul> |
+Each resource contains information of a mutation event such as creation and deletion that occurred on the Graph. This information can help in scenarios where the graph was modified or deleted, and if you need to find out when that event happened.
+
+| Property Name | Description |
+| | |
+| ``eventTimestamp`` | The time in UTC when this collection event happened. |
+| ``ownerId`` | The name of the Graph collection. |
+| ``ownerResourceId`` | The resource ID of the Graph collection. |
+| ``operationType`` | The operation type of this collection event. |
+
+> [!NOTE]
+> Possible values for ``operationType`` include:
+>
+> * ``Create``: Graph creation event
+> * ``Delete``: Graph deletion event
+> * ``Replace``: Graph modification event
+> * ``SystemOperation``: collection modification event triggered by the system. This event isn't initiated by the user.
+>
To get a list of all container mutations under the same database, see graph [Restorable Graphs - List](/rest/api/cosmos-db-resource-provider/2021-11-15-preview/restorable-gremlin-graphs/list) article.
-### Restorable Table resources
+### Restorable Table resources
Lists all the restorable Azure Cosmos DB Tables available for a specific database account at a given time and location. Note the Table API doesn't specify an explicit database.
-|Property Name |Description |
-|||
-| TableNames | The list of Table containers under this account. |
+| Property Name | Description |
+| | |
+| ``TableNames`` | The list of Table containers under this account. |
-To get a list of tables that exist on the account at the given timestamp and location, see [Restorable Table Resources - List](/rest/api/cosmos-db-resource-provider/2021-11-15-preview/restorable-table-resources/list) article.
+To get a list of tables that exist on the account at the given timestamp and location, see [Restorable Table Resources - List](/rest/api/cosmos-db-resource-provider/2021-11-15-preview/restorable-table-resources/list) article.
### Restorable Table
-Each resource contains information of a mutation event such as creation and deletion that occurred on the Table. This information can help in scenarios where the table was modified or deleted, and if you need to find out when that event happened.
+Each resource contains information of a mutation event such as creation and deletion that occurred on the Table. This information can help in scenarios where the table was modified or deleted, and if you need to find out when that event happened.
-|Property Name |Description |
-|||
-|eventTimestamp| The time in UTC when this database event happened.|
-| ownerId| The name of the Table database. |
-| ownerResourceId | The resource ID of the Table resource. |
-| operationType | The operation type of this Table event. Here are the possible values:<br/><ul><li> Create: Table creation event</li><li> Delete: Table deletion event</li><li> Replace: Table modification event</li><li> SystemOperation: database modification event triggered by the system. This event isn't initiated by the user </li></ul> |
-
-To get a list of all table mutations under the same database, see [Restorable Table - List](/rest/api/cosmos-db-resource-provider/2021-11-15-preview/restorable-tables/list) article.
+| Property Name | Description |
+| | |
+| ``eventTimestamp`` | The time in UTC when this database event happened. |
+| ``ownerId`` | The name of the Table database. |
+| ``ownerResourceId`` | The resource ID of the Table resource. |
+| ``operationType`` | The operation type of this Table event. |
+
+> [!NOTE]
+> Possible values for ``operationType`` include:
+>
+> * ``Create``: Table creation event
+> * ``Delete``: Table deletion event
+> * ``Replace``: Table modification event
+> * ``SystemOperation``: database modification event triggered by the system. This event isn't initiated by the user
+>
+
+To get a list of all table mutations under the same database, see [Restorable Table - List](/rest/api/cosmos-db-resource-provider/2021-11-15-preview/restorable-tables/list) article.
## Next steps
cosmos-db Migrate Continuous Backup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/migrate-continuous-backup.md
Previously updated : 04/08/2022 Last updated : 06/28/2022 # Migrate an Azure Cosmos DB account from periodic to continuous backup mode+ [!INCLUDE[appliesto-all-apis-except-cassandra](includes/appliesto-all-apis-except-cassandra.md)] Azure Cosmos DB accounts with periodic mode backup policy can be migrated to continuous mode using [Azure portal](#portal), [CLI](#cli), [PowerShell](#powershell), or [Resource Manager templates](#ARM-template). Migration from periodic to continuous mode is a one-way migration and it's not reversible. After migrating from periodic to continuous mode, you can apply the benefits of continuous mode.
Azure Cosmos DB accounts with periodic mode backup policy can be migrated to con
The following are the key reasons to migrate into continuous mode: * The ability to do self-service restore using Azure portal, CLI, or PowerShell.
-* The ability to restore at time granularity of the second within the last 30-day window.
+* The ability to restore at time granularity of a second within the last 30-day or 7-day window.
* The ability to make sure that the backup is consistent across shards or partition key ranges within a period. * The ability to restore container, database, or the full account when it's deleted or modified. * The ability to choose the events on the container, database, or account and decide when to initiate the restore.
+> [!IMPORTANT]
+> Support for 7-day continuous backup in both provisioning and migration scenarios is still in preview. Use PowerShell or the Azure CLI to migrate or provision an account with continuous backup configured at the 7-day tier.
+ > [!NOTE] > The migration capability is one-way only and it's an irreversible action. This means that once you migrate from periodic mode to continuous mode, you can't switch back to periodic mode. >
To perform the migration, you need `Microsoft.DocumentDB/databaseAccounts/write`
## Pricing after migration
-After you migrate your account to continuous backup mode, the cost with this mode is different when compared to the periodic backup mode. The continuous mode backup cost can vary from periodic mode. To learn more, see [continuous backup mode pricing](continuous-backup-restore-introduction.md#continuous-backup-pricing).
+After you migrate your account to continuous backup mode, the costs change when compared to the periodic backup mode. The tier choice of 30 days versus seven days will also have an influence on the cost of the backup. To learn more, see [continuous backup mode pricing](continuous-backup-restore-introduction.md#continuous-backup-pricing).
## <a id="portal"></a> Migrate using portal
Use the following steps to migrate your account from periodic backup to continuo
## <a id="powershell"></a>Migrate using PowerShell
-Install the [latest version of Azure PowerShell](/powershell/azure/install-az-ps?view=azps-6.2.1&preserve-view=true) or version higher than 6.2.0. Next, run the following steps:
+1. Install the [latest version of Azure PowerShell](/powershell/azure/install-az-ps?view=azps-6.2.1&preserve-view=true) or any version higher than 6.2.0.
+1. To use the ``Continuous7Days`` tier for provisioning or migration, you'll need the preview version of the ``Az.CosmosDB`` module. Use ``Install-Module -Name Az.CosmosDB -AllowPrerelease``.
+1. Next, run the following steps:
-1. Connect to your Azure account:
+ 1. Connect to your Azure account:
- ```azurepowershell-interactive
- Connect-AzAccount
- ```
+ ```azurepowershell-interactive
+ Connect-AzAccount
+ ```
-1. Migrate your account from periodic to continuous backup mode:
+ 1. Migrate your account from periodic to continuous backup mode with the ``Continuous30Days`` or ``Continuous7Days`` tier. If a tier value isn't provided, it defaults to ``Continuous30Days``:
- ```azurepowershell-interactive
- Update-AzCosmosDBAccount `
- -ResourceGroupName "myrg" `
- -Name "myAccount" `
- -BackupPolicyType Continuous
- ```
+ ```azurepowershell-interactive
+ Update-AzCosmosDBAccount `
+ -ResourceGroupName "myrg" `
+ -Name "myAccount" `
+ -BackupPolicyType "Continuous"
+ ```
+
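+ The following variant sets the ``Continuous7Days`` tier explicitly:
+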
+ ```azurepowershell-interactive
+ Update-AzCosmosDBAccount `
+ -ResourceGroupName "myrg" `
+ -Name "myAccount" `
+ -BackupPolicyType "Continuous" `
+ -ContinuousTier "Continuous7Days"
+ ```
## <a id="cli"></a>Migrate using CLI 1. Install the latest version of Azure CLI:
- * If you donΓÇÖt have CLI, [install](/cli/azure/) the latest version of Azure CLI or version higher than 2.26.0.
- * If you already have Azure CLI installed, use `az upgrade` command to upgrade to the latest version.
- * Alternatively, user can also use Cloud Shell from Azure portal.
+ * If you don't have the Azure CLI already installed, see [install Azure CLI](/cli/azure/). Install the latest version of Azure CLI or any version higher than 2.26.0.
+ * If you already have Azure CLI installed, use the ``az upgrade`` command to upgrade to the latest version. Alternatively, you can also use the Azure Cloud Shell from the Azure portal.
+ * To use the ``Continuous7Days`` tier for provisioning or migration, you'll need the ``cosmosdb-preview`` extension. Use ``az extension update --name cosmosdb-preview`` to manage the extension.
1. Sign in to your Azure account and run the following command to migrate your account to continuous mode: ```azurecli-interactive az login
+ ```
+
+1. Migrate the account to the ``Continuous30Days`` or ``Continuous7Days`` tier. If a tier value isn't provided, it defaults to ``Continuous30Days``:
+ ```azurecli-interactive
   az cosmosdb update -n <myaccount> -g <myresourcegroup> --backup-policy-type continuous
   ```
-1. After the migration completes successfully, the output shows the backupPolicy object has the type property set to Continuous.
+ ```azurecli-interactive
+ az cosmosdb update -g "my-rg" -n "my-continuous-backup-account" --backup-policy-type "Continuous" --continuous-tier "Continuous7Days"
+ ```
+
+1. After the migration completes successfully, the output shows the ``backupPolicy`` object, which includes the ``type`` property with a value of ``Continuous``.
```console { "apiProperties": null, "backupPolicy": {
- "type": "Continuous"
- },
- "capabilities": [],
- "connectorOffer": null,
- "consistencyPolicy": {
- "defaultConsistencyLevel": "Session",
- "maxIntervalInSeconds": 5,
- "maxStalenessPrefix": 100
+ "continuousModeProperties": {
+ "tier": "Continuous7Days"
+ },
+ "migrationState": null,
+ "type": "Continuous"
},
- …
+ …
} ``` ### Check the migration status
-Run the following command and check the **status**, **targetType** properties of the **backupPolicy** object. The status shows in-progress after the migration starts:
+Run the following command and check the **status** and **targetType** properties of the **backupPolicy** object. The status shows *in-progress* after the migration starts:
```azurecli-interactive az cosmosdb show -n "myAccount" -g "myrg"
az cosmosdb show -n "myAccount" -g "myrg"
:::image type="content" source="./media/migrate-continuous-backup/migration-status-started-powershell.png" alt-text="Check the migration status using PowerShell command":::
-When the migration is complete, backup type changes to **Continuous**. Run the same command again to check the status:
+When the migration is complete, the backup type changes to **Continuous** and shows the chosen tier. If a tier wasn't provided, the tier is set to ``Continuous30Days``. Run the same command again to check the status:
```azurecli-interactive az cosmosdb show -n "myAccount" -g "myrg"
az cosmosdb show -n "myAccount" -g "myrg"
:::image type="content" source="./media/migrate-continuous-backup/migration-status-complete-powershell.png" alt-text="Backup type changes to continuous after the migration is complete":::
-## <a id="ARM-template"></a> Migrate using Resource Manager template
+## <a id="ARM-template"></a> Migrate from periodic mode to Continuous mode using Resource Manager template
To migrate to continuous backup mode using ARM template, find the backupPolicy section of your template and update the `type` property. For example, if your existing template has backup policy like the following JSON object:
To migrate to continuous backup mode using ARM template, find the backupPolicy s
"backupIntervalInMinutes": 240, "backupRetentionIntervalInHours": 8 }
-},
+}
``` Replace it with the following JSON object: ```json
-"backupPolicy": {
- "type": "Continuous"
-},
+"backupPolicy":ΓÇ»{
+ΓÇ» "type":ΓÇ»"Continuous",
+   "continuousModeProperties": {
+    "tier": "Continuous7Days"
+    }
+}
``` Next deploy the template by using Azure PowerShell or CLI. The following example shows how to deploy the template with a CLI command:
Next deploy the template by using Azure PowerShell or CLI. The following example
az deployment group create -g <ResourceGroup> --template-file <ProvisionTemplateFilePath> ```
+## Change Continuous Mode tiers
+
+You can switch between the ``Continuous30Days`` and ``Continuous7Days`` tiers by using Azure PowerShell, the Azure CLI, or the Azure portal.
+
+The following Azure CLI command illustrates switching an existing account to ``Continuous7Days``:
+
+```azurecli-interactive
+az cosmosdb update \
+    --resource-group "my-rg" \
+    --name "my-continuous-backup-account" \
+    --backup-policy-type "Continuous" \
+    --continuous-tier "Continuous7Days"
+```
+
+The following Azure PowerShell command illustrates switching an existing account to ``Continuous7Days``:
+
+```azurepowershell-interactive
+Update-AzCosmosDBAccount `
+ -ResourceGroupName "myrg" `
+ -Name "myAccount" `
+ -BackupPolicyType Continuous `
+ -ContinuousTier Continuous7Days
+```
+
+You can also change the tier by using an ARM template, similar to the Azure CLI and Azure PowerShell examples.
+
+> [!NOTE]
+> When you change from the 30-day to the 7-day tier, the ability to restore more than 7 days back in history immediately becomes unavailable. When you change from the 7-day to the 30-day tier, you still can't restore more than 7 days back right away; the restore window grows as backup history accumulates. The earliest time to restore can be extracted from the account metadata, which is available via Azure PowerShell or the Azure CLI. The price impact of switching between the 7-day and 30-day tiers is also immediately visible.
+ ## What to expect during and after migration? When migrating from periodic mode to continuous mode, you can't run any control plane operations that perform account-level updates or deletes. For example, operations such as adding or removing regions, account failover, and updating the backup policy can't be run while the migration is in progress. The time for migration depends on the size of data and the number of regions in your account. A restore action on migrated accounts only succeeds from the time when the migration successfully completes.
You can restore your account after the migration completes. If the migration com
## Frequently asked questions
-#### Does the migration only happen at the account level?
+### Does the migration only happen at the account level?
+ Yes.
-#### Which accounts can be targeted for backup migration?
+### Which accounts can be targeted for backup migration?
+ Currently, SQL API and API for MongoDB accounts with single write region that have shared, provisioned, or autoscale provisioned throughput support migration. Table API and Gremlin API are in preview. Accounts enabled with analytical storage and multiple-write regions aren't supported for migration.
-#### Does the migration take time? What is the typical time?
-Migration takes time and it depends on the size of data and the number of regions in your account. You can get the migration status using Azure CLI or PowerShell commands. For large accounts with tens of terabytes of data, the migration can take up to few days to complete.
+### Does the migration take time? What is the typical time?
+
+Migration takes a varying amount of time that largely depends on the size of data and the number of regions in your account. You can get the migration status using Azure CLI or PowerShell commands. For large accounts with tens of terabytes of data, the migration can take up to a few days to complete.
-#### Does the migration cause any availability impact/downtime?
-No, the migration operation takes place in the background, so the client requests aren't impacted. However, we need to perform some backend operations during the migration, and it might take extra time if the account is under heavy load.
+### Does the migration cause any availability impact/downtime?
-#### What happens if the migration fails? Will I still get the periodic backups or get the continuous backups?
-Once the migration process is started, the account will start to become a continuous mode. If the migration fails, you must initiate migration again until it succeeds.
+No, the migration operation takes place in the background. So, client requests aren't impacted. However, we need to perform some backend operations during the migration, and it may take extra time if the account is under heavy load.
-#### How do I perform a restore to a timestamp before/during/after the migration?
-Assume that you started migration at t1 and finished at t5, you canΓÇÖt use a restore timestamp between t1 and t5.
+### What happens if the migration fails? Will I still get the periodic backups or get the continuous backups?
-To restore to a time after t5 because your account is now in continuous mode, you can perform the restore using Azure portal, CLI, or PowerShell like you normally do with continuous account. This self-service restore request can only be done after the migration is complete.
+Once the migration process is started, the account will be enabled in continuous mode. If the migration fails, you must initiate migration again until it succeeds.
-To restore to a time before t1, you can open a support ticket like you normally do with the periodic backup account. After the migration, you have up to 30 days to perform the periodic restore. During these 30 days, you can restore based on the backup retention/interval of your account before the migration. For example, if the backup config was to retain 24 copies at 1 hour interval, then you can restore to anytime between [t1 ΓÇô 24 hours] and [t1].
+### How do I perform a restore to a timestamp before/during/after the migration?
-#### Which account level control plane operations are blocked during migration?
-Operations such as add/remove region, failover, changing backup policy, throughput changes resulting in data movement are blocked during migration.
+Assume that you started the migration at ``t1`` and finished at ``t5``. You can't use a restore timestamp between ``t1`` and ``t5``.
+
+Also assume that your account is now in continuous mode. To restore to a time after ``t5``, perform the restore using the Azure portal, CLI, or PowerShell as you normally would with a continuous account. This self-service restore request can only be made after the migration is complete.
+
+To restore to a time before ``t1``, you can open a support ticket like you normally would with a periodic backup account. After the migration, you have up to 30 days to perform the periodic restore. During these 30 days, you can restore based on the backup retention/interval of your account before the migration. For example, if the backup was configured to retain 24 copies at 1-hour intervals, then you can restore to anytime between ``(t1 - 24 hours)`` and ``t1``.
+
+### Which account level control plane operations are blocked during migration?
+
+Operations such as add/remove region, failover, changing backup policy, and any throughput changes resulting in data movement are blocked during migration.
+
+### If the migration fails for some underlying issue, would it still block the control plane operation until it's retried and completed successfully?
-#### If the migration fails for some underlying issue, would it still block the control plane operation until it's retried and completed successfully?
Failed migration won't block any control plane operations. If migration fails, it's recommended to retry until it succeeds before performing any other control plane operations.
-#### Is it possible to cancel the migration?
-It isn't possible to cancel the migration because it isn't a reversible operation.
+### Is it possible to cancel the migration?
-#### Is there a tool that can help estimate migration time based on the data usage and number of regions?
-There isn't a tool to estimate time. But our scale runs indicate single region with 1 TB of data takes roughly one and half hour.
+It isn't possible to cancel the migration because migration isn't a reversible operation.
-For multi-region accounts, calculate the total data size as `Number_of_regions * Data_in_single_region`.
+### Is there a tool that can help estimate migration time based on the data usage and number of regions?
-#### Since the continuous backup mode is now GA, would you still recommend restoring a copy of your account and try migration on the copy before deciding to migrate the production account?
-ItΓÇÖs recommended to test the continuous backup mode feature to see it works as expected before migrating production accounts. Because migration is a one-way operation and itΓÇÖs not reversible.
+There isn't a tool to estimate time. Our testing and scale runs indicate that a single-region account with 1 TB of data takes roughly 90 minutes.
+
+For multi-region accounts, calculate the total data size as ``Number_of_regions * Data_in_single_region``.
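+
+For example, a two-region account with 1 TB in each region has 2 TB of total data. Assuming the migration time scales roughly linearly with the single-region figure above (an assumption, not a guarantee), such an account could take on the order of three hours.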
+
+### Since the continuous backup mode is now GA, do you still recommend restoring a copy of your account? Would you recommend trying migration on the copy before deciding to migrate the production account?
+
+It's recommended to test the continuous backup mode feature to verify that it works as expected before migrating production accounts. Migration is a one-way operation and it's not reversible.
## Next steps
To learn more about continuous backup mode, see the following articles:
* Restore an account using [Azure portal](restore-account-continuous-backup.md#restore-account-portal), [PowerShell](restore-account-continuous-backup.md#restore-account-powershell), [CLI](restore-account-continuous-backup.md#restore-account-cli), or [Azure Resource Manager](restore-account-continuous-backup.md#restore-arm-template). Trying to do capacity planning for a migration to Azure Cosmos DB?
- * If all you know is the number of vCores and servers in your existing database cluster, read about [estimating request units using vCores or vCPUs](convert-vcore-to-request-unit.md)
- * If you know typical request rates for your current database workload, read about [estimating request units using Azure Cosmos DB capacity planner](estimate-ru-with-capacity-planner.md)
+
+* If all you know is the number of vCores and servers in your existing database cluster, read about [estimating request units using vCores or vCPUs](convert-vcore-to-request-unit.md)
+* If you know typical request rates for your current database workload, read about [estimating request units using Azure Cosmos DB capacity planner](estimate-ru-with-capacity-planner.md)
cosmos-db Online Backup And Restore https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/online-backup-and-restore.md
description: This article describes how automatic backup, on-demand data restore
Previously updated : 11/15/2021 Last updated : 06/28/2022 - # Online backup and on-demand data restore in Azure Cosmos DB+ [!INCLUDE[appliesto-all-apis](includes/appliesto-all-apis.md)] Azure Cosmos DB automatically takes backups of your data at regular intervals. The automatic backups are taken without affecting the performance or availability of the database operations. All the backups are stored separately in a storage service. The automatic backups are helpful in scenarios when you accidentally delete or update your Azure Cosmos account, database, or container and later require the data recovery. Azure Cosmos DB backups are encrypted with Microsoft managed service keys. These backups are transferred over a secure non-public network. Which means, backup data remains encrypted while transferred over the wire and at rest. Backups of an account in a given region are uploaded to storage accounts in the same region.
Azure Cosmos DB automatically takes backups of your data at regular intervals. T
There are two backup modes:
-* **Continuous backup mode** ΓÇô This mode allows you to do restore to any point of time within the last 30 days. You can choose this mode while creating the Azure Cosmos DB account. To learn more, see the [Introduction to Continuous backup mode](continuous-backup-restore-introduction.md), provision continuous backup using [Azure portal](provision-account-continuous-backup.md#provision-portal), [PowerShell](provision-account-continuous-backup.md#provision-powershell), [CLI](provision-account-continuous-backup.md#provision-cli), or [Azure Resource Manager](provision-account-continuous-backup.md#provision-arm-template) articles. You can also [migrate the accounts from periodic to continuous mode](migrate-continuous-backup.md).
-* **Periodic backup mode** - This mode is the default backup mode for all existing accounts. In this mode, backup is taken at a periodic interval and the data is restored by creating a request with the support team. In this mode, you configure a backup interval and retention for your account. The maximum retention period extends to a month. The minimum backup interval can be one hour. To learn more, see the [Periodic backup mode](configure-periodic-backup-restore.md) article.
+* **Continuous backup mode** - This mode has two tiers. One tier includes 7-day retention and the second includes 30-day retention. Continuous backup allows you to restore to any point of time within either 7 or 30 days. You can choose the appropriate tier while creating an Azure Cosmos DB account. For more information about the tiers, see [introduction to continuous backup mode](continuous-backup-restore-introduction.md). To enable continuous backup, see the appropriate articles using [Azure portal](provision-account-continuous-backup.md#provision-portal), [PowerShell](provision-account-continuous-backup.md#provision-powershell), [CLI](provision-account-continuous-backup.md#provision-cli), or [Azure Resource Manager](provision-account-continuous-backup.md#provision-arm-template). You can also [migrate the accounts from periodic to continuous mode](migrate-continuous-backup.md).
+
+ > [!NOTE]
+ > The 7-day retention tier is currently in preview.
+
+* **Periodic backup mode** - This mode is the default backup mode for all existing accounts. In this mode, backup is taken at a periodic interval and the data is restored by creating a request with the support team. In this mode, you configure a backup interval and retention for your account. The maximum retention period extends to a month. The minimum backup interval can be one hour. To learn more, see [periodic backup mode](configure-periodic-backup-restore.md).
> [!NOTE] > If you configure a new account with continuous backup, you can do self-service restore via Azure portal, PowerShell, or CLI. If your account is configured in continuous mode, you can't switch it back to periodic mode.
-For Azure Synapse Link enabled accounts, analytical store data isn't included in the backups and restores. When Synapse Link is enabled, Azure Cosmos DB will continue to automatically take backups of your data in the transactional store at a scheduled backup interval. Automatic backup and restore of your data in the analytical store is not supported at this time.
+For Azure Synapse Link enabled accounts, analytical store data isn't included in the backups and restores. When Azure Synapse Link is enabled, Azure Cosmos DB will continue to automatically take backups of your data in the transactional store at a scheduled backup interval. Within an analytical store, automatic backup and restore of your data isn't supported at this time.
## Frequently asked questions
No. You can't restore into an account with lower RU/s or fewer partitions.
### Is periodic backup mode supported for Azure Synapse Link enabled accounts?
-Yes. However, analytical store data isn't included in backups and restores. When Synapse Link is enabled on a database account, Azure Cosmos DB will continue to automatically take backups of your data in the transactional store at scheduled backup interval, as always.
+Yes. However, analytical store data isn't included in backups and restores. When Azure Synapse Link is enabled on a database account, Azure Cosmos DB will automatically back up your data in the transactional store at the scheduled backup interval.
### Is periodic backup mode supported for analytical store enabled containers?
-Yes, but only for the regular transactional data. Backup and restore of your data in the analytical store is not supported at this time.
+Yes, but only for the regular transactional data. Within an analytical store, backup and restore of your data isn't supported at this time.
## Next steps
Next you can learn about how to configure and manage periodic and continuous bac
* [Configure and manage periodic backup](configure-periodic-backup-restore.md) policy.
* What is [continuous backup](continuous-backup-restore-introduction.md) mode?
-* Provision continuous backup using [Azure portal](provision-account-continuous-backup.md#provision-portal), [PowerShell](provision-account-continuous-backup.md#provision-powershell), [CLI](provision-account-continuous-backup.md#provision-cli), or [Azure Resource Manager](provision-account-continuous-backup.md#provision-arm-template).
+* Enable continuous backup using [Azure portal](provision-account-continuous-backup.md#provision-portal), [PowerShell](provision-account-continuous-backup.md#provision-powershell), [CLI](provision-account-continuous-backup.md#provision-cli), or [Azure Resource Manager](provision-account-continuous-backup.md#provision-arm-template).
* Restore continuous backup account using [Azure portal](restore-account-continuous-backup.md#restore-account-portal), [PowerShell](restore-account-continuous-backup.md#restore-account-powershell), [CLI](restore-account-continuous-backup.md#restore-account-cli), or [Azure Resource Manager](restore-account-continuous-backup.md#restore-arm-template).
* [Migrate to an account from periodic backup to continuous backup](migrate-continuous-backup.md).
* [Manage permissions](continuous-backup-restore-permissions.md) required to restore data with continuous backup mode.
cosmos-db Provision Account Continuous Backup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/provision-account-continuous-backup.md
description: Learn how to provision an account with continuous backup and point
Previously updated : 04/18/2022 Last updated : 06/28/2022
ms.devlang: azurecli
-# Provision an Azure Cosmos DB account with continuous backup and point in time restore
+# Provision an Azure Cosmos DB account with continuous backup and point in time restore
+ [!INCLUDE[appliesto-sql-mongodb-api](includes/appliesto-sql-mongodb-api.md)]
-Azure Cosmos DB's point-in-time restore feature helps you to recover from an accidental change within a container, to restore a deleted account, database, or a container or to restore into any region (where backups existed). The continuous backup mode allows you to do restore to any point of time within the last 30 days.
+Azure Cosmos DB's point-in-time restore feature helps you to recover from an accidental change within a container, restore a deleted resource, or restore into any region where backups existed. The continuous backup mode allows you to restore to any point in time within the last 7 or 30 days. How far back you can go depends on the continuous mode tier of the account.
This article explains how to provision an account with continuous backup and point in time restore using [Azure portal](#provision-portal), [PowerShell](#provision-powershell), [CLI](#provision-cli) and [Resource Manager templates](#provision-arm-template).
+> [!IMPORTANT]
+> Support for 7-day continuous backup in both provisioning and migration scenarios is still in preview. Use PowerShell and Azure CLI to migrate or provision an account with continuous backup configured at the 7-day tier.
> [!NOTE]
> You can provision a continuous backup mode account only if the following conditions are true:
>
> * If the account is of type Table API or Gremlin API.
> * If the account has a single write region.

## <a id="provision-portal"></a>Provision using Azure portal

When creating a new Azure Cosmos DB account, in the **Backup policy** tab, choose **continuous** mode to enable the point in time restore functionality for the new account. With point-in-time restore, data is restored to a new account; currently you can't restore to an existing account.
Table API and Gremlin API are in preview and can be provisioned with PowerShell
## <a id="provision-powershell"></a>Provision using Azure PowerShell
-Before provisioning the account, install the [latest version of Azure PowerShell](/powershell/azure/install-az-ps?view=azps-6.2.1&preserve-view=true) or version higher than 6.2.0. Next connect to your Azure account and select the required subscription with the following commands:
+For PowerShell and CLI commands, the tier value is optional. If it isn't provided, the account backup is retained for 30 days. The tiers are represented by the values ``Continuous7Days`` or ``Continuous30Days``.
+
+1. Install the latest version of Azure PowerShell
+
+ * Before provisioning the account, install any version of Azure PowerShell higher than 6.2.0. For more information about the latest version of Azure PowerShell, see [latest version of Azure PowerShell](/powershell/azure/install-az-ps?view=azps-6.2.1&preserve-view=true).
+ * For provisioning the ``Continuous7Days`` tier, you'll need to install the preview version of the module by running ``Install-Module -Name Az.CosmosDB -AllowPrerelease``.
+ * Next, connect to your Azure account and select the required subscription with the following commands:
-1. Sign into Azure using the following command:
+ 1. Sign into Azure using the following command:
- ```azurepowershell
- Connect-AzAccount
- ```
+ ```azurepowershell
+ Connect-AzAccount
+ ```
-1. Select a specific subscription with the following command:
+ 1. Select a specific subscription with the following command:
- ```azurepowershell
- Select-AzSubscription -Subscription <SubscriptionName>
- ```
+ ```azurepowershell
+ Select-AzSubscription -Subscription <SubscriptionName>
+ ```
-#### <a id="provision-powershell-sql-api"></a>SQL API account
+### <a id="provision-powershell-sql-api"></a>SQL API account
To provision an account with continuous backup, add the argument `-BackupPolicyType Continuous` along with the regular provisioning command.
-The following cmdlet is an example of a single region write account *Pitracct* with continuous backup policy created in *West US* region under *MyRG* resource group:
+The following cmdlet assumes a single region write account, *Pitracct*, in the *West US* region in the *MyRG* resource group. The account has the continuous backup policy enabled, configured at the ``Continuous7Days`` tier:
```azurepowershell
New-AzCosmosDBAccount `
    -ResourceGroupName "MyRG" `
    -Location "West US" `
    -BackupPolicyType Continuous `
    -ContinuousTier Continuous7Days `
    -Name "pitracct" `
    -ApiKind "Sql"
```
-#### <a id="provision-powershell-mongodb-api"></a>API for MongoDB
+### <a id="provision-powershell-mongodb-api"></a>API for MongoDB
-The following cmdlet is an example of continuous backup account *Pitracct* created in *West US* region under *MyRG* resource group:
+The following cmdlet is an example of a continuous backup account configured with the ``Continuous30Days`` tier:
```azurepowershell
New-AzCosmosDBAccount `
    -ResourceGroupName "MyRG" `
    -Location "West US" `
    -BackupPolicyType Continuous `
    -ContinuousTier Continuous30Days `
    -Name "Pitracct" `
    -ApiKind "MongoDB" `
    -ServerVersion "3.6"
```
-#### <a id="provision-powershell-table-api"></a>Table API account
+### <a id="provision-powershell-table-api"></a>Table API account
To provision an account with continuous backup, add an argument `-BackupPolicyType Continuous` along with the regular provisioning command.
-The following cmdlet is an example of a single region write account *Pitracct* with continuous backup policy created in *West US* region under *MyRG* resource group:
+The following cmdlet is an example of a continuous backup policy with the ``Continuous7Days`` tier:
```azurepowershell
New-AzCosmosDBAccount `
    -ResourceGroupName "MyRG" `
    -Location "West US" `
    -BackupPolicyType Continuous `
    -ContinuousTier Continuous7Days `
    -Name "pitracct" `
    -ApiKind "Table"
```
-#### <a id="provision-powershell-graph-api"></a>Gremlin API account
+### <a id="provision-powershell-graph-api"></a>Gremlin API account
To provision an account with continuous backup, add an argument `-BackupPolicyType Continuous` along with the regular provisioning command.
-The following cmdlet is an example of a single region write account *Pitracct* with continuous backup policy created in *West US* region under *MyRG* resource group:
+The following cmdlet is an example of an account with continuous backup policy configured with the ``Continuous30Days`` tier:
```azurepowershell
New-AzCosmosDBAccount `
    -ResourceGroupName "MyRG" `
    -Location "West US" `
    -BackupPolicyType Continuous `
    -ContinuousTier Continuous30Days `
    -Name "pitracct" `
    -ApiKind "Gremlin"
```

## <a id="provision-cli"></a>Provision using Azure CLI
+For PowerShell and CLI commands, the tier value is optional. If it isn't provided, the account backup is retained for 30 days. The tiers are represented by ``Continuous7Days`` or ``Continuous30Days``.
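To illustrate the default, the following minimal sketch omits `--continuous-tier`; under the behavior described above, the account backup would be retained for 30 days. It uses only flags that appear in the examples later in this section, and the account and group names are placeholders:

```azurecli
# Hedged sketch: no --continuous-tier flag is passed, so the
# account is expected to default to 30-day retention.
az cosmosdb create \
    --name pitracct \
    --resource-group MyRG \
    --backup-policy-type Continuous \
    --default-consistency-level Session \
    --locations regionName="West US"
```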
+Before provisioning the account, install Azure CLI with the following steps:

1. Install the latest version of Azure CLI
- * Install the latest version of [Azure CLI](/cli/azure/install-azure-cli) or version higher than 2.26.0
- * If you have already installed CLI, run `az upgrade` command to update to the latest version. This command will only work with CLI version higher than 2.11. If you have an earlier version, use the above link to install the latest version.
+ * Install a version of the Azure CLI higher than 2.26.0. For more information about the latest version of the Azure CLI, see [Azure CLI](/cli/azure/install-azure-cli).
+ * If you have already installed the CLI, run the ``az upgrade`` command to update to the latest version. This command only works with CLI versions higher than 2.11. If you have an earlier version, use the above link to install the latest version.
+ * For provisioning the ``Continuous7Days`` tier, you'll need to install the preview version of the extension by running ``az extension update --name cosmosdb-preview``.
1. Sign in and select your subscription
- * Sign into your Azure account with `az login` command.
- * Select the required subscription using `az account set -s <subscriptionguid>` command.
+ * Sign into your Azure account with ``az login`` command.
+ * Select the required subscription using ``az account set -s <subscriptionguid>`` command.
### <a id="provision-cli-sql-api"></a>SQL API account
-To provision a SQL API account with continuous backup, an extra argument `--backup-policy-type Continuous` should be passed along with the regular provisioning command. The following command is an example of a single region write account named *Pitracct* with continuous backup policy created in the *West US* region under *MyRG* resource group:
+To provision a SQL API account with continuous backup, an extra argument `--backup-policy-type Continuous` should be passed along with the regular provisioning command. The following command is an example of a single region write account named *Pitracct* with continuous backup policy and the ``Continuous7Days`` tier:
```azurecli-interactive
az cosmosdb create \
    --name Pitracct \
    --resource-group MyRG \
    --backup-policy-type Continuous \
    --continuous-tier "Continuous7Days" \
    --default-consistency-level Session \
    --locations regionName="West US"
```
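To confirm the resulting policy, you can query the account afterward. This is a minimal sketch; `az cosmosdb show` is a standard command, but the exact shape of the `backupPolicy` property in the output is an assumption:

```azurecli
# Hedged sketch: inspect the backup policy of the account created above.
# The JMESPath query assumes the policy is surfaced as 'backupPolicy'.
az cosmosdb show \
    --name Pitracct \
    --resource-group MyRG \
    --query backupPolicy
```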
### <a id="provision-cli-mongo-api"></a>API for MongoDB
-The following command shows an example of a single region write account named *Pitracct* with continuous backup policy created in the *West US* region under *MyRG* resource group:
+The following command shows an example of a single region write account named *Pitracct* with continuous backup policy and the ``Continuous30Days`` tier:
```azurecli-interactive
az cosmosdb create \
    --name Pitracct \
    --kind MongoDB \
    --resource-group MyRG \
    --server-version "3.6" \
    --backup-policy-type Continuous \
    --continuous-tier "Continuous30Days" \
    --default-consistency-level Session \
    --locations regionName="West US"
```

### <a id="provision-cli-table-api"></a>Table API account
-The following command shows an example of a single region write account named *Pitracct* with continuous backup policy created in the *West US* region under *MyRG* resource group:
-```azurecli-interactive
+The following command shows an example of a single region write account named *Pitracct* with continuous backup policy and the ``Continuous30Days`` tier:
+```azurecli-interactive
az cosmosdb create \
    --name Pitracct \
    --kind GlobalDocumentDB \
    --resource-group MyRG \
    --capabilities EnableTable \
    --backup-policy-type Continuous \
    --continuous-tier "Continuous30Days" \
    --default-consistency-level Session \
    --locations regionName="West US"
```

### <a id="provision-cli-graph-api"></a>Gremlin API account
-The following command shows an example of a single region write account named *Pitracct* with continuous backup policy created the *West US* region under *MyRG* resource group:
-```azurecli-interactive
+The following command shows an example of a single region write account named *Pitracct* with continuous backup policy and the ``Continuous7Days`` tier created in the *West US* region under the *MyRG* resource group:
+```azurecli-interactive
az cosmosdb create \
    --name Pitracct \
    --kind GlobalDocumentDB \
    --resource-group MyRG \
    --capabilities EnableGremlin \
    --backup-policy-type Continuous \
    --continuous-tier "Continuous7Days" \
    --default-consistency-level Session \
    --locations regionName="West US"
```

## <a id="provision-arm-template"></a>Provision using Resource Manager template
-You can use Azure Resource Manager templates to deploy an Azure Cosmos DB account with continuous mode. When defining the template to provision an account, include the `backupPolicy` parameter as shown in the following example:
+You can use Azure Resource Manager templates to deploy an Azure Cosmos DB account with continuous mode. When defining the template to provision an account, include the `backupPolicy` parameter and its tier as shown in the following example. The tier can be ``Continuous7Days`` or ``Continuous30Days``:
```json
{
"locationName": "West US" } ],
- "backupPolicy": {
- "type": "Continuous"
- },
+ "backupPolicy":{
+ "type":"Continuous",
+ "continuousModeProperties":{
+ "tier":"Continuous7Days"
+ }
+ }
"databaseAccountOfferType": "Standard"
- }
- }
- ]
-}
+ }
+ ]
+ }
```

Next, deploy the template by using Azure PowerShell or CLI. The following example shows how to deploy the template with a CLI command:
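As a minimal sketch of such a deployment, assuming the template above is saved as `template.json` (a placeholder file name) and the target resource group already exists:

```azurecli
# Hedged sketch: deploy the ARM template to an existing resource group.
# 'template.json' and 'MyRG' are placeholder names.
az deployment group create \
    --resource-group MyRG \
    --template-file template.json
```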
cosmos-db Sql Api Sdk Java Spring V3 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/sql-api-sdk-java-spring-v3.md
Release history is maintained in the azure-sdk-for-java repo, for detailed list
## Recommended version
-It's strongly recommended to use version 3.10.0 and above.
+It's strongly recommended to use version 3.22.0 and above.
## Additional notes
cosmos-db Sql Api Sdk Java V4 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/sql-api-sdk-java-v4.md
Release history is maintained in the azure-sdk-for-java repo, for detailed list
## Recommended version
-It's strongly recommended to use version 4.18.0 and above.
+It's strongly recommended to use version 4.31.0 and above.
## FAQ
[!INCLUDE [cosmos-db-sdk-faq](../includes/cosmos-db-sdk-faq.md)]
cost-management-billing Analyze Cost Data Azure Cost Management Power Bi Template App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/costs/analyze-cost-data-azure-cost-management-power-bi-template-app.md
Here's how values in the overview tiles are calculated.
- The value shown in the **New purchase amount** tile is calculated as the sum of `newPurchases`.
- The value shown in the **Total charges** tile is calculated as the sum of (`adjustments` + `ServiceOverage` + `chargesBilledseparately` + `azureMarketplaceServiceCharges`).
-The EA portal doesn't the Total charges column. The Power BI template app includes Adjustments, Service Overage, Charges billed separately, and Azure marketplace service charges as Total charges.
+The EA portal doesn't show the Total charges column. The Power BI template app includes Adjustments, Service Overage, Charges billed separately, and Azure marketplace service charges as Total charges.
The Prepayment Usage shown in the EA portal isn't available in the Template app as part of the total charges.
For more information about configuring data, refresh, sharing reports, and addit
- [Subscribe yourself and others to reports and dashboards in the Power BI service](/power-bi/service-report-subscribe)
- [Download a report from the Power BI service to Power BI Desktop](/power-bi/service-export-to-pbix)
- [Save a report in Power BI service and Power BI Desktop](/power-bi/service-report-save)
- [Create a report in the Power BI service by importing a dataset](/power-bi/service-report-create-new)
cost-management-billing Link Partner Id https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/link-partner-id.md
Title: Link an Azure account to a partner ID
+ Title: Link a partner ID to your account that's used to manage customers
description: Track engagements with Azure customers by linking a partner ID to the user account that you use to manage the customer's resources. Previously updated : 11/04/2021 Last updated : 06/28/2022
-# Link a partner ID to your Azure accounts
+# Link a partner ID to your account that's used to manage customers
Microsoft partners provide services that help customers achieve business and mission objectives using Microsoft products. When acting on behalf of the customer managing, configuring, and supporting Azure services, the partner users will need access to the customer's environment. Using Partner Admin Link (PAL), partners can associate their partner network ID with the credentials used for service delivery.
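The association itself can be scripted. The following is a minimal sketch using the Azure CLI `managementpartner` commands, assuming they're available in your CLI build; the partner ID value is a placeholder:

```azurecli
# Hedged sketch: link the signed-in credential to a partner ID.
# '123456' is a placeholder Microsoft partner network ID.
az managementpartner create --partner-id 123456

# Later, verify which partner ID is linked to the credential.
az managementpartner show
```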
cost-management-billing Subscription States https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/subscription-states.md
tags: billing
Previously updated : 09/15/2021 Last updated : 06/28/2022
This article describes the various states that an Azure subscription may have. Y
| **Disabled** | Your Azure subscription is disabled and can no longer be used to create or manage Azure resources. While in this state, your virtual machines are de-allocated, temporary IP addresses are freed, storage is read-only and other services are disabled. A subscription can get disabled because of the following reasons: Your credit may have expired. You may have reached your spending limit. You have a past due bill. Your credit card limit was exceeded. Or, it was explicitly disabled or canceled. Depending on the subscription type, a subscription may remain disabled between 1 - 90 days, after which it's permanently deleted. For more information, see [Reactivate a disabled Azure subscription](subscription-disabled.md).<br><br>Operations to create or update resources (PUT, PATCH) are disabled. Operations that take an action (POST) are also disabled. You can retrieve or delete resources (GET, DELETE). Your resources are still available. |
| **Expired** | Your Azure subscription is expired because it was canceled. You can reactivate an expired subscription. For more information, see [Reactivate a disabled Azure subscription](subscription-disabled.md).<br><br>Operations to create or update resources (PUT, PATCH) are disabled. Operations that take an action (POST) are also disabled. You can retrieve or delete resources (GET, DELETE).|
| **Past Due** | Your Azure subscription has an outstanding payment pending. Your subscription is still active but failure to pay the dues may result in the subscription being disabled. For more information, see [Resolve past due balance for your Azure subscription](resolve-past-due-balance.md).<br><br>All operations are available. |
-| **Warned** | Your Azure subscription is in a warned state and will be disabled shortly if the warning reason isn't addressed. A subscription may be in warned state if its past due, canceled by user, or if the subscription has expired.<br><br>You can retrieve or delete resources (GET/DELETE), but you can't create any resources (PUT/PATCH/POST) |
+| **Warned** | Your Azure subscription is in a warned state and will be disabled shortly if the warning reason isn't addressed. A subscription may be in a warned state if it's past due, canceled by the user, or if the subscription has expired.<br><br>You can retrieve or delete resources (GET/DELETE), but you can't create any resources (PUT/PATCH/POST). <p> Resources in this state go offline but can be recovered when the subscription resumes an active/enabled state. A subscription in this state isn't charged. |
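Because the operations available to you depend on the state, scripts can check it up front. A minimal sketch with the standard Azure CLI; the JMESPath projection is an assumption about the output field names:

```azurecli
# Hedged sketch: list subscriptions with their current state
# (for example, Enabled, Disabled, Warned, PastDue).
az account list --query "[].{name:name, state:state}" --output table
```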
data-factory Connector Salesforce Service Cloud https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-salesforce-service-cloud.md
Previously updated : 09/09/2021 Last updated : 06/23/2022
# Copy data from and to Salesforce Service Cloud using Azure Data Factory or Synapse Analytics
To copy data from Salesforce Service Cloud, the following properties are support
]
```
+> [!Note]
+> Salesforce Service Cloud source doesn't support proxy settings in the self-hosted integration runtime, but sink does.
### Salesforce Service Cloud as a sink type
To copy data to Salesforce Service Cloud, the following properties are supported in the copy activity **sink** section.
data-factory Connector Salesforce https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-salesforce.md
Previously updated : 06/10/2022 Last updated : 06/23/2022
# Copy data from and to Salesforce using Azure Data Factory or Azure Synapse Analytics
To copy data from Salesforce, set the source type in the copy activity to **Sale
>[!NOTE]
>For backward compatibility: When you copy data from Salesforce, if you use the previous "RelationalSource" type copy, the source keeps working while you see a suggestion to switch to the new "SalesforceSource" type.
+> [!Note]
+> Salesforce source doesn't support proxy settings in the self-hosted integration runtime, but sink does.
### Salesforce as a sink type
To copy data to Salesforce, set the sink type in the copy activity to **SalesforceSink**. The following properties are supported in the copy activity **sink** section.
When you copy data from Salesforce, the following mappings are used from Salesfo
| Text (Encrypted) |String |
| URL |String |
+> [!Note]
+> Salesforce Number type maps to the Decimal type in Azure Data Factory and Azure Synapse pipelines as a service interim data type. The Decimal type honors the defined precision and scale. For data whose decimal places exceed the defined scale, the value is rounded off in preview data and copy. To avoid this precision loss in Azure Data Factory and Azure Synapse pipelines, consider increasing the decimal places to a reasonably large value in the **Custom Field Definition Edit** page of Salesforce.
## Lookup activity properties
To learn details about the properties, check [Lookup activity](control-flow-lookup-activity.md).
data-factory How To Manage Studio Preview Exp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/how-to-manage-studio-preview-exp.md
Previously updated : 06/23/2022 Last updated : 06/28/2022
# Manage Azure Data Factory studio preview experience
There are two ways to enable preview experiences.
1. In the banner seen at the top of the screen, you can click **Open settings to learn more and opt in**.
- :::image type="content" source="media/how-to-manage-studio-preview-exp/data-factory-preview-exp-1.png" alt-text="Screenshot of Azure Data Factory home page with an Opt in option in a banner at the top of the screen.":::
+ :::image type="content" source="media/how-to-manage-studio-preview-exp/data-factory-preview-exp-1.png" alt-text="Screenshot of Azure Data Factory home page with an Opt-in option in a banner at the top of the screen.":::
2. Alternatively, you can click the **Settings** button.
Similarly, you can disable preview features with the same steps. Click **Open settings to