Updates from: 06/29/2022 01:09:07
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-b2c Configure Authentication In Azure Static App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/configure-authentication-in-azure-static-app.md
+
+ Title: Configure authentication in an Azure Static Web App by using Azure Active Directory B2C
+description: This article discusses how to use Azure Active Directory B2C to sign in and sign up users in an Azure Static Web App.
++++++ Last updated : 06/28/2022+++++
+# Configure authentication in an Azure Static Web App by using Azure AD B2C
+
+This article explains how to add Azure Active Directory B2C (Azure AD B2C) authentication functionality to an Azure Static Web App. For more information, check out the [Custom authentication in Azure Static Web Apps](../static-web-apps/authentication-custom.md) article.
+
+## Overview
+
+OpenID Connect (OIDC) is an authentication protocol that's built on OAuth 2.0. Use OIDC to securely sign users in to an Azure Static Web App. The sign-in flow involves the following steps:
+
+1. Users go to the Azure Static Web App and select **Sign-in**.
+1. The Azure Static Web App initiates an authentication request and redirects users to Azure AD B2C.
+1. Users [sign up or sign in](add-sign-up-and-sign-in-policy.md), or [reset the password](add-password-reset-policy.md). Alternatively, they can sign in with a [social account](add-identity-provider.md).
+1. After users sign in successfully, Azure AD B2C returns an ID token to the Azure Static Web App.
+1. Azure Static Web App validates the ID token, reads the claims, and returns a secure page to users.
+
+When the access token expires or the app session is invalidated, Azure Static Web App initiates a new authentication request and redirects users to Azure AD B2C. If the Azure AD B2C [SSO session](session-behavior.md) is active, Azure AD B2C issues an access token without prompting users to sign in again. If the Azure AD B2C session expires or becomes invalid, users are prompted to sign in again.
+
+## Prerequisites
+
+- If you haven't created an app yet, follow the guidance on how to create an [Azure Static Web App](../static-web-apps/overview.md).
+- Familiarize yourself with the Azure Static Web App [staticwebapp.config.json](../static-web-apps/configuration.md) configuration file.
+- Familiarize yourself with the Azure Static Web App [App Settings](../static-web-apps/application-settings.md).
+
+## Step 1: Configure your user flow
++
+## Step 2: Register a web application
+
+To enable your application to sign in with Azure AD B2C, register your app in the Azure AD B2C directory. The app that you register establishes a trust relationship between the app and Azure AD B2C.
+
+During app registration, you specify a *redirect URI*. The redirect URI is the endpoint to which users are redirected by Azure AD B2C after they authenticate with Azure AD B2C. The app registration process generates an *application ID*, also known as the *client ID*, that uniquely identifies your app. After your app is registered, Azure AD B2C uses both the application ID and the redirect URI to create authentication requests. You also create a *client secret*, which your app uses to securely acquire the tokens.
+
+### Step 2.1: Register the app
+
+To register your application, follow these steps:
+
+1. Sign in to the [Azure portal](https://portal.azure.com).
+1. Make sure you're using the directory that contains your Azure AD B2C tenant. Select the **Directories + subscriptions** icon in the portal toolbar.
+1. On the **Portal settings | Directories + subscriptions** page, find your Azure AD B2C directory in the **Directory name** list, and then select **Switch**.
+1. In the Azure portal, search for and select **Azure AD B2C**.
+1. Select **App registrations**, and then select **New registration**.
+1. Under **Name**, enter a name for the application (for example, *My Azure Static web app*).
+1. Under **Supported account types**, select **Accounts in any identity provider or organizational directory (for authenticating users with user flows)**.
+1. Under **Redirect URI**, select **Web** and then, in the URL box, enter `https://<YOUR_SITE>/.auth/login/aadb2c/callback`. Replace `<YOUR_SITE>` with your Azure Static Web App name. For example: `https://witty-island-11111111.azurestaticapps.net/.auth/login/aadb2c/callback`. If you configured an [Azure Static Web App custom domain](../static-web-apps/custom-domain.md), use the custom domain in the redirect URI. For example, `https://www.example.com/.auth/login/aadb2c/callback`.
+1. Under **Permissions**, select the **Grant admin consent to openid and offline access permissions** checkbox.
+1. Select **Register**.
+1. Select **Overview**.
+1. Record the **Application (client) ID** for later use, when you configure the web application.
+
+ ![Screenshot of the web app Overview page for recording your web application I D.](./media/configure-authentication-in-azure-static-app/get-azure-ad-b2c-app-id.png)
+
+### Step 2.2: Create a client secret
+
+1. In the **Azure AD B2C - App registrations** page, select the application you created, for example *My Azure Static web app*.
+1. In the left menu, under **Manage**, select **Certificates & secrets**.
+1. Select **New client secret**.
+1. Enter a description for the client secret in the **Description** box. For example, *clientsecret1*.
+1. Under **Expires**, select a duration for which the secret is valid, and then select **Add**.
+1. Record the secret's **Value** for use in your client application code. This secret value is never displayed again after you leave this page. You use this value as the application secret in your application's code.
+
+## Step 3: Configure the Azure Static Web App
+
+Once the application is registered with Azure AD B2C, configure the following application settings in the Azure Static Web App's [application settings](../static-web-apps/application-settings.md). You can configure application settings via the Azure portal or with the Azure CLI, as shown in the sketch later in this section. For more information, check out the [Configure application settings for Azure Static Web Apps](../static-web-apps/application-settings.md#configure-application-settings) article.
+
+Add the following keys to the app settings:
+
+| Setting Name | Value |
+| --- | --- |
+| `AADB2C_PROVIDER_CLIENT_ID` | The Web App (client) ID from [step 2.1](#step-21-register-the-app). |
+| `AADB2C_PROVIDER_CLIENT_SECRET` | The Web App (client) secret from [step 2.2](#step-22-create-a-client-secret). |
+
+> [!IMPORTANT]
+> Application secrets are sensitive security credentials. Don't share this secret with anyone, distribute it within a client application, or check it into source control.
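+
+If you use the Azure CLI, a minimal sketch such as the following configures both settings. The values in angle brackets are placeholders for illustration, not values from this article:
+
+```bash
+# Sketch only: replace <APP_NAME>, <CLIENT_ID>, and <CLIENT_SECRET> with your own values.
+az staticwebapp appsettings set --name <APP_NAME> \
+    --setting-names "AADB2C_PROVIDER_CLIENT_ID=<CLIENT_ID>" \
+                    "AADB2C_PROVIDER_CLIENT_SECRET=<CLIENT_SECRET>"
+```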
+
+### 3.1 Add an OpenID Connect identity provider
+
+Once you've added the app ID and secret, use the following steps to add Azure AD B2C as an OpenID Connect identity provider:
+
+1. Add an `auth` section to the [configuration file](../static-web-apps/configuration.md) with a configuration block for the OIDC providers and your provider definition.
+
+ ```json
+ {
+ "auth": {
+ "identityProviders": {
+ "customOpenIdConnectProviders": {
+ "aadb2c": {
+ "registration": {
+ "clientIdSettingName": "AADB2C_PROVIDER_CLIENT_ID",
+ "clientCredential": {
+ "clientSecretSettingName": "AADB2C_PROVIDER_CLIENT_SECRET"
+ },
+ "openIdConnectConfiguration": {
+ "wellKnownOpenIdConfiguration": "https://<TENANT_NAME>.b2clogin.com/<TENANT_NAME>.onmicrosoft.com/<POLICY_NAME>/v2.0/.well-known/openid-configuration"
+ }
+ },
+ "login": {
+ "nameClaimType": "http://schemas.xmlsoap.org/ws/2005/05/identity/claims/name",
+ "scopes": [],
+ "loginParameterNames": []
+ }
+ }
+ }
+ }
+ }
+ }
+ ```
+
+1. Replace `<TENANT_NAME>` with the first part of your Azure AD B2C [tenant name](tenant-management.md#get-your-tenant-name) (for example, `https://contoso.b2clogin.com/contoso.onmicrosoft.com`).
+1. Replace `<POLICY_NAME>` with the name of the user flow or custom policy you created in [step 1](#step-1-configure-your-user-flow), as shown in the example after this list.
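+
+For example, with a tenant named *contoso* and a user flow named *B2C_1_signupsignin1* (illustrative values only), the completed metadata URL looks like this:
+
+```http
+https://contoso.b2clogin.com/contoso.onmicrosoft.com/B2C_1_signupsignin1/v2.0/.well-known/openid-configuration
+```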
+
+## Step 4: Check the Azure Static Web App
+
+1. Navigate to `/.auth/login/aadb2c`. The `/.auth/login` route points to the Azure Static Web App login endpoint, and `aadb2c` refers to your [OpenID Connect identity provider](#31-add-an-openid-connect-identity-provider). The following URL demonstrates an Azure Static Web App login endpoint: `https://witty-island-11111111.azurestaticapps.net/.auth/login/aadb2c`.
+1. Complete the sign-up or sign-in process.
+1. In your browser debugger, [run the following JavaScript in the Console](/microsoft-edge/devtools-guide-chromium/console/console-javascript.md). The JavaScript code presents information about the signed-in user.
+
+ ```javascript
+ async function getUserInfo() {
+ const response = await fetch('/.auth/me');
+ const payload = await response.json();
+ const { clientPrincipal } = payload;
+ return clientPrincipal;
+ }
+
+ await getUserInfo();
+ ```
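+
+    The returned `clientPrincipal` object resembles the following JSON. The values shown here are illustrative:
+
+    ```json
+    {
+        "identityProvider": "aadb2c",
+        "userId": "d75b260a64504067bfc5b2905e3b8730",
+        "userDetails": "user@contoso.com",
+        "userRoles": [ "anonymous", "authenticated" ]
+    }
+    ```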
++
+> [!TIP]
+> If you can't run the JavaScript code in your browser, navigate to `https://<app-name>.azurestaticapps.net/.auth/me`. Replace `<app-name>` with your Azure Static Web App name.
+
+## Next steps
+
+* After successful authentication, you can show the user's display name on the navigation bar. To view the claims that the Azure AD B2C token returns to your app, check out [Accessing user information in Azure Static Web Apps](../static-web-apps/user-information.md).
+* Learn how to [customize and enhance the Azure AD B2C authentication experience for your web app](enable-authentication-azure-static-app-options.md).
active-directory-b2c Configure Authentication In Azure Web App File Based https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/configure-authentication-in-azure-web-app-file-based.md
+
+ Title: Configure authentication in an Azure Web App configuration file by using Azure Active Directory B2C
+description: This article discusses how to use Azure Active Directory B2C to sign in and sign up users in an Azure Web App by using a configuration file.
++++++ Last updated : 06/28/2022+++++
+# Configure authentication in an Azure Web App configuration file by using Azure AD B2C
+
+This article explains how to add Azure Active Directory B2C (Azure AD B2C) authentication functionality to an Azure Web App. For more information, check out the [File-based configuration in Azure App Service authentication](/app-service/configure-authentication-file-based.md) article.
+
+## Overview
+
+OpenID Connect (OIDC) is an authentication protocol that's built on OAuth 2.0. Use OIDC to securely sign users in to an Azure Web App. The sign-in flow involves the following steps:
+
+1. Users go to the Azure Web App and select **Sign-in**.
+1. The Azure Web App initiates an authentication request and redirects users to Azure AD B2C.
+1. Users [sign up or sign in](add-sign-up-and-sign-in-policy.md), or [reset the password](add-password-reset-policy.md). Alternatively, they can sign in with a [social account](add-identity-provider.md).
+1. After users sign in successfully, Azure AD B2C returns an ID token to the Azure Web App.
+1. Azure Web App validates the ID token, reads the claims, and returns a secure page to users.
+
+When the ID token expires or the app session is invalidated, Azure Web App initiates a new authentication request and redirects users to Azure AD B2C. If the Azure AD B2C [SSO session](session-behavior.md) is active, Azure AD B2C issues an access token without prompting users to sign in again. If the Azure AD B2C session expires or becomes invalid, users are prompted to sign in again.
+
+## Prerequisites
+
+- If you haven't created an app yet, follow the guidance on how to create an [Azure Web App](../app-service/quickstart-dotnetcore.md).
+
+## Step 1: Configure your user flow
++
+## Step 2: Register a web application
+
+To enable your application to sign in with Azure AD B2C, register your app in the Azure AD B2C directory. The app that you register establishes a trust relationship between the app and Azure AD B2C.
+
+During app registration, you'll specify the *redirect URI*. The redirect URI is the endpoint to which users are redirected by Azure AD B2C after they authenticate with Azure AD B2C. The app registration process generates an *application ID*, also known as the *client ID*, that uniquely identifies your app. After your app is registered, Azure AD B2C uses both the application ID and the redirect URI to create authentication requests. You also create a client secret, which your app uses to securely acquire the tokens.
+
+### Step 2.1: Register the app
+
+To register your application, follow these steps:
+
+1. Sign in to the [Azure portal](https://portal.azure.com).
+1. Make sure you're using the directory that contains your Azure AD B2C tenant. Select the **Directories + subscriptions** icon in the portal toolbar.
+1. On the **Portal settings | Directories + subscriptions** page, find your Azure AD B2C directory in the **Directory name** list, and then select **Switch**.
+1. In the Azure portal, search for and select **Azure AD B2C**.
+1. Select **App registrations**, and then select **New registration**.
+1. Under **Name**, enter a name for the application (for example, *My Azure web app*).
+1. Under **Supported account types**, select **Accounts in any identity provider or organizational directory (for authenticating users with user flows)**.
+1. Under **Redirect URI**, select **Web** and then, in the URL box, enter `https://<YOUR_SITE>/.auth/login/aadb2c/callback`. Replace `<YOUR_SITE>` with your Azure Web App name. For example: `https://contoso.azurewebsites.net/.auth/login/aadb2c/callback`. If you configured an [Azure Web App custom domain](../app-service/app-service-web-tutorial-custom-domain.md), use the custom domain in the redirect URI. For example, `https://www.contoso.com/.auth/login/aadb2c/callback`.
+1. Under **Permissions**, select the **Grant admin consent to openid and offline access permissions** checkbox.
+1. Select **Register**.
+1. Select **Overview**.
+1. Record the **Application (client) ID** for later use, when you configure the web application.
+
+ ![Screenshot of the web app Overview page for recording your web application I D.](./media/configure-authentication-in-azure-web-app/get-azure-ad-b2c-app-id.png)
+
+### Step 2.2: Create a client secret
+
+1. In the **Azure AD B2C - App registrations** page, select the application you created, for example *My Azure web app*.
+1. In the left menu, under **Manage**, select **Certificates & secrets**.
+1. Select **New client secret**.
+1. Enter a description for the client secret in the **Description** box. For example, *clientsecret1*.
+1. Under **Expires**, select a duration for which the secret is valid, and then select **Add**.
+1. Record the secret's **Value** for use in your client application code. This secret value is never displayed again after you leave this page. You use this value as the application secret in your application's code.
+
+## Step 3: Configure the Azure Web App
+
+Once the application is registered with Azure AD B2C, configure the following application settings in the Azure Web App's [application settings](../app-service/configure-common.md#configure-app-settings). You can configure application settings via the Azure portal or with the Azure CLI, as shown in the sketch later in this section. For more information, check out the [File-based configuration in Azure App Service authentication](../app-service/configure-authentication-file-based.md) article.
+
+Add the following keys to the app settings:
+
+| Setting Name | Value |
+| --- | --- |
+| `AADB2C_PROVIDER_CLIENT_ID` | The Web App (client) ID from [step 2.1](#step-21-register-the-app). |
+| `AADB2C_PROVIDER_CLIENT_SECRET` | The Web App (client) secret from [step 2.2](#step-22-create-a-client-secret). |
+
+> [!IMPORTANT]
+> Application secrets are sensitive security credentials. Don't share this secret with anyone, distribute it within a client application, or check it into source control.
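+
+If you use the Azure CLI, a minimal sketch such as the following configures both settings. The values in angle brackets are placeholders for illustration, not values from this article:
+
+```bash
+# Sketch only: replace <RESOURCE_GROUP>, <APP_NAME>, <CLIENT_ID>, and <CLIENT_SECRET> with your own values.
+az webapp config appsettings set --resource-group <RESOURCE_GROUP> --name <APP_NAME> \
+    --settings "AADB2C_PROVIDER_CLIENT_ID=<CLIENT_ID>" "AADB2C_PROVIDER_CLIENT_SECRET=<CLIENT_SECRET>"
+```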
+
+### 3.1 Add an OpenID Connect identity provider
+
+Once you've added the app ID and secret, use the following steps to add Azure AD B2C as an OpenID Connect identity provider:
+
+1. Add an `auth` section to the [configuration file](../app-service/configure-authentication-file-based.md#configuration-file-reference) with a configuration block for the OIDC providers and your provider definition.
+
+ ```json
+ {
+ "auth": {
+ "identityProviders": {
+ "customOpenIdConnectProviders": {
+ "aadb2c": {
+ "registration": {
+ "clientIdSettingName": "AADB2C_PROVIDER_CLIENT_ID",
+ "clientCredential": {
+ "clientSecretSettingName": "AADB2C_PROVIDER_CLIENT_SECRET"
+ },
+ "openIdConnectConfiguration": {
+ "wellKnownOpenIdConfiguration": "https://<TENANT_NAME>.b2clogin.com/<TENANT_NAME>.onmicrosoft.com/<POLICY_NAME>/v2.0/.well-known/openid-configuration"
+ }
+ },
+ "login": {
+ "nameClaimType": "http://schemas.xmlsoap.org/ws/2005/05/identity/claims/name",
+ "scopes": [],
+ "loginParameterNames": []
+ }
+ }
+ }
+ }
+ }
+ }
+ ```
+
+1. Replace `<TENANT_NAME>` with the first part of your Azure AD B2C [tenant name](tenant-management.md#get-your-tenant-name) (for example, `https://contoso.b2clogin.com/contoso.onmicrosoft.com`).
+1. Replace `<POLICY_NAME>` with the name of the user flow or custom policy you created in [step 1](#step-1-configure-your-user-flow).
+
+## Step 4: Check the Azure Web App
+
+1. Navigate to your Azure Web App.
+1. Complete the sign-up or sign-in process.
+1. In your browser, navigate to `https://<app-name>.azurewebsites.net/.auth/me`. Replace `<app-name>` with your Azure Web App name.
+
+## Retrieve tokens in app code
+
+From your server code, the provider-specific tokens are injected into the request header, so you can easily access them. The following table shows possible token header names:
++
+|Header name |Description |
+| --- | --- |
+|X-MS-CLIENT-PRINCIPAL-NAME| The user's display name. |
+|X-MS-CLIENT-PRINCIPAL-ID| The ID token sub claim. |
+|X-MS-CLIENT-PRINCIPAL-IDP| The identity provider name, `aadb2c`.|
+|X-MS-TOKEN-AADB2C-ID-TOKEN| The ID token issued by Azure AD B2C. |
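+
+For example, a minimal Flask handler can read these headers from the incoming request. This is a sketch only; the route and app structure are illustrative and aren't part of this article's sample:
+
+```python
+from flask import Flask, request
+
+app = Flask(__name__)
+
+@app.route("/profile")
+def profile():
+    # App Service authentication injects these headers. They're absent for
+    # anonymous requests, so supply safe defaults.
+    name = request.headers.get("X-MS-CLIENT-PRINCIPAL-NAME", "anonymous")
+    id_token = request.headers.get("X-MS-TOKEN-AADB2C-ID-TOKEN")
+    return {"name": name, "signed_in": id_token is not None}
+```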
+
+## Next steps
+
+* After successful authentication, you can show the user's display name on the navigation bar. To view the claims that the Azure AD B2C token returns to your app, check out [Work with user identities in Azure App Service authentication](/app-service/configure-authentication-user-identities).
+* Learn how to [work with OAuth tokens in Azure App Service authentication](/app-service/configure-authentication-oauth-tokens).
+
active-directory-b2c Configure Authentication In Azure Web App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/configure-authentication-in-azure-web-app.md
+
+ Title: Configure authentication in an Azure Web App by using Azure Active Directory B2C
+description: This article discusses how to use Azure Active Directory B2C to sign in and sign up users in an Azure Web App.
++++++ Last updated : 06/28/2022+++++
+# Configure authentication in an Azure Web App by using Azure AD B2C
+
+This article explains how to add Azure Active Directory B2C (Azure AD B2C) authentication functionality to an Azure Web App. For more information, check out the [Configure your App Service or Azure Functions app to sign in using an OpenID Connect provider](/app-service/configure-authentication-provider-openid-connect.md) article.
+
+## Overview
+
+OpenID Connect (OIDC) is an authentication protocol that's built on OAuth 2.0. Use OIDC to securely sign users in to an Azure Web App. The sign-in flow involves the following steps:
+
+1. Users go to the Azure Web App and select **Sign-in**.
+1. The Azure Web App initiates an authentication request and redirects users to Azure AD B2C.
+1. Users [sign up or sign in](add-sign-up-and-sign-in-policy.md), or [reset the password](add-password-reset-policy.md). Alternatively, they can sign in with a [social account](add-identity-provider.md).
+1. After users sign in successfully, Azure AD B2C returns an ID token to the Azure Web App.
+1. Azure Web App validates the ID token, reads the claims, and returns a secure page to users.
+
+When the ID token expires or the app session is invalidated, Azure Web App initiates a new authentication request and redirects users to Azure AD B2C. If the Azure AD B2C [SSO session](session-behavior.md) is active, Azure AD B2C issues an access token without prompting users to sign in again. If the Azure AD B2C session expires or becomes invalid, users are prompted to sign in again.
+
+## Prerequisites
+
+- If you haven't created an app yet, follow the guidance on how to create an [Azure Web App](../app-service/quickstart-dotnetcore.md).
+
+## Step 1: Configure your user flow
++
+## Step 2: Register a web application
+
+To enable your application to sign in with Azure AD B2C, register your app in the Azure AD B2C directory. Registering your app establishes a trust relationship between the app and Azure AD B2C.
+
+During app registration, you'll specify the *redirect URI*. The redirect URI is the endpoint to which users are redirected by Azure AD B2C after they authenticate with Azure AD B2C. The app registration process generates an *application ID*, also known as the *client ID*, that uniquely identifies your app. After your app is registered, Azure AD B2C uses both the application ID and the redirect URI to create authentication requests. You also create a client secret, which your app uses to securely acquire the tokens.
+
+### Step 2.1: Register the app
+
+To register your application, follow these steps:
+
+1. Sign in to the [Azure portal](https://portal.azure.com).
+1. Make sure you're using the directory that contains your Azure AD B2C tenant. Select the **Directories + subscriptions** icon in the portal toolbar.
+1. On the **Portal settings | Directories + subscriptions** page, find your Azure AD B2C directory in the **Directory name** list, and then select **Switch**.
+1. In the Azure portal, search for and select **Azure AD B2C**.
+1. Select **App registrations**, and then select **New registration**.
+1. Under **Name**, enter a name for the application (for example, *My Azure web app*).
+1. Under **Supported account types**, select **Accounts in any identity provider or organizational directory (for authenticating users with user flows)**.
+1. Under **Redirect URI**, select **Web** and then, in the URL box, enter `https://<YOUR_SITE>/.auth/login/aadb2c/callback`. Replace `<YOUR_SITE>` with your Azure Web App name. For example: `https://contoso.azurewebsites.net/.auth/login/aadb2c/callback`. If you configured an [Azure Web App custom domain](../app-service/app-service-web-tutorial-custom-domain.md), use the custom domain in the redirect URI. For example, `https://www.contoso.com/.auth/login/aadb2c/callback`.
+1. Under **Permissions**, select the **Grant admin consent to openid and offline access permissions** checkbox.
+1. Select **Register**.
+1. Select **Overview**.
+1. Record the **Application (client) ID** for later use, when you configure the web application.
+
+ ![Screenshot of the web app Overview page for recording your web application I D.](./media/configure-authentication-in-azure-web-app/get-azure-ad-b2c-app-id.png)
+
+### Step 2.2: Create a client secret
+
+1. In the **Azure AD B2C - App registrations** page, select the application you created, for example *My Azure web app*.
+1. In the left menu, under **Manage**, select **Certificates & secrets**.
+1. Select **New client secret**.
+1. Enter a description for the client secret in the **Description** box. For example, *clientsecret1*.
+1. Under **Expires**, select a duration for which the secret is valid, and then select **Add**.
+1. Record the secret's **Value** for use in your client application code. This secret value is never displayed again after you leave this page. You use this value as the application secret in your application's code.
+
+## Step 3: Configure the Azure Web App
+
+1. Sign in to the [Azure portal](https://portal.azure.com).
+1. Make sure you're using the directory that contains your Azure AD tenant (not the Azure AD B2C tenant). Select the **Directories + subscriptions** icon in the portal toolbar.
+1. On the **Portal settings | Directories + subscriptions** page, find the Azure AD directory in the **Directory name** list, and then select **Switch**.
+1. Navigate to your Azure web app.
+1. Select **Authentication** in the menu on the left. Select **Add identity provider**.
+1. Select **OpenID Connect** in the identity provider dropdown.
+1. For **OpenID provider name**, enter `aadb2c`.
+1. For **Metadata entry**, select **Document URL**. Then, for **Document URL**, provide the following URL:
+
+ ```http
+ https://<TENANT_NAME>.b2clogin.com/<TENANT_NAME>.onmicrosoft.com/<POLICY_NAME>/v2.0/.well-known/openid-configuration
+ ```
+
+    1. Replace `<TENANT_NAME>` with the first part of your Azure AD B2C [tenant name](tenant-management.md#get-your-tenant-name) (for example, `https://contoso.b2clogin.com/contoso.onmicrosoft.com`). If you have a [custom domain](custom-domain.md) configured, you can use that custom domain instead. You can also replace your B2C tenant name, contoso.onmicrosoft.com, in the authentication request URL with your tenant ID GUID. For example, you can change `https://fabrikamb2c.b2clogin.com/contoso.onmicrosoft.com/` to `https://account.contosobank.co.uk/<tenant ID GUID>/`.
+
+    1. Replace `<POLICY_NAME>` with the name of the user flow or custom policy you created in [step 1](#step-1-configure-your-user-flow).
+
+1. For **Client ID**, provide the Web App (client) ID from [step 2.1](#step-21-register-the-app).
+1. For **Client Secret**, provide the Web App (client) secret from [step 2.2](#step-22-create-a-client-secret).
+
+ > [!TIP]
+ > Your client secret will be stored as an app setting to ensure secrets are stored in a secure fashion. You can update that setting later to use [Key Vault references](/app-service/app-service-key-vault-references.md) if you wish to manage the secret in Azure Key Vault.
+
+1. Keep the remaining settings at their default values.
+1. Select **Add** to finish setting up the identity provider.
+
+## Step 4: Check the Azure Web App
+
+1. In your browser, navigate to your Azure Web App at `https://<app-name>.azurewebsites.net`. Replace `<app-name>` with your Azure Web App name.
+1. Complete the sign-up or sign-in process.
+1. In your browser, navigate to `https://<app-name>.azurewebsites.net/.auth/me` to see information about the signed-in user, as shown in the example response after these steps. Replace `<app-name>` with your Azure Web App name.
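+
+The response from the `/.auth/me` endpoint resembles the following JSON. The shape and values shown here are illustrative and truncated:
+
+```json
+[
+    {
+        "id_token": "eyJ0eXAiOiJKV1QiLCJhbGciOiJSUzI1NiIsImtpZCI6...",
+        "provider_name": "aadb2c",
+        "user_claims": [
+            { "typ": "name", "val": "Emily Braun" }
+        ],
+        "user_id": "emily@contoso.com"
+    }
+]
+```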
+
+## Retrieve tokens in app code
+
+From your server code, the provider-specific tokens are injected into the request header, so you can easily access them. The following table shows possible token header names:
++
+|Header name |Description |
+| --- | --- |
+|X-MS-CLIENT-PRINCIPAL-NAME| The user's display name. |
+|X-MS-CLIENT-PRINCIPAL-ID| The ID token sub claim. |
+|X-MS-CLIENT-PRINCIPAL-IDP| The identity provider name, `aadb2c`.|
+|X-MS-TOKEN-AADB2C-ID-TOKEN| The ID token issued by Azure AD B2C. |
+
+## Next steps
+
+* After successful authentication, you can show the user's display name on the navigation bar. To view the claims that the Azure AD B2C token returns to your app, check out [Work with user identities in Azure App Service authentication](/app-service/configure-authentication-user-identities).
+* Learn how to [work with OAuth tokens in Azure App Service authentication](/app-service/configure-authentication-oauth-tokens).
+
active-directory-b2c Configure Authentication Sample Python Web App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/configure-authentication-sample-python-web-app.md
Previously updated : 06/08/2022 Last updated : 06/28/2022
The sign-in flow involves the following steps:
A computer that's running:

* [Visual Studio Code](https://code.visualstudio.com/) or another code editor
-* [Python](https://nodejs.org/en/download/) 2.7+ or 3+
+* [Python](https://www.python.org/downloads/) 3.9 or above
## Step 1: Configure your user flow
During app registration, you'll specify the *Redirect URI*. The redirect URI is
### Step 2.1: Register the app
-To create the web app registration, do the following:
+To create the web app registration, follow these steps:
1. Sign in to the [Azure portal](https://portal.azure.com).
1. Make sure you're using the directory that contains your Azure AD B2C tenant. Select the **Directories + subscriptions** icon in the portal toolbar.
To create the web app registration, do the following:
1. Select **Overview**.
1. Record the **Application (client) ID** for later use, when you configure the web application.
- ![Screenshot of the web app Overview page for recording your web app ID.](./media/configure-authentication-sample-python-web-app/get-azure-ad-b2c-app-id.png)
+ ![Screenshot of the web app Overview page for recording your web app I D.](./media/configure-authentication-sample-python-web-app/get-azure-ad-b2c-app-id.png)
### Step 2.2: Create a web app client secret
Extract the sample file to a folder where the total length of the path is 260 or
## Step 4: Configure the sample web app
-In the project's root directory, do the following:
+In the project's root directory, follow these steps:
1. Rename the *app_config.py* file to *app_config.py.OLD*.
1. Rename the *app_config_b2c.py* file to *app_config.py*.
CLIENT_SECRET = "xxxxxxxxxxxxxxxxxxxxxxxx" # Placeholder - for use ONLY during t
```

1. Install the required packages from PyPi and run the web app on your local machine by running the following commands:
- ```console
- pip install -r requirements.txt
- flask run --host localhost --port 5000
+ # [Linux](#tab/linux)
+
+ ```bash
+ python -m pip install -r requirements.txt
+ python -m flask run --host localhost --port 5000
+ ```
+
+ # [macOS](#tab/macos)
+
+ ```bash
+ python -m pip install -r requirements.txt
+ python -m flask run --host localhost --port 5000
+ ```
+
+ # [Windows](#tab/windows)
+
+ ```bash
+ py -m pip install -r requirements.txt
+ py -m flask run --host localhost --port 5000
```
+
+
The console window displays the port number of the locally running application:
CLIENT_SECRET = "xxxxxxxxxxxxxxxxxxxxxxxx" # Placeholder - for use ONLY during t
1. Select **Sign In**.
- ![Screenshot showing the sign-in with Azure AD B2C.](./media/configure-authentication-sample-python-web-app/web-app-sign-in.png)
+ ![Screenshot showing the sign-in flow.](./media/configure-authentication-sample-python-web-app/web-app-sign-in.png)
1. Complete the sign-up or sign-in process.
To enable your app to sign in with Azure AD B2C and call a web API, you must reg
The app registrations and the application architecture are described in the following diagrams:
-![Diagram describing a web app with web API, registrations, and tokens.](./media/configure-authentication-sample-python-web-app/web-app-with-api-architecture.png)
+![Diagram describing a web app with web A P I, registrations, and tokens.](./media/configure-authentication-sample-python-web-app/web-app-with-api-architecture.png)
[!INCLUDE [active-directory-b2c-app-integration-call-api](../../includes/active-directory-b2c-app-integration-call-api.md)]
SCOPE = ["https://contoso.onmicrosoft.com/api/demo.read", "https://contoso.onmic
1. Stop the app, and then rerun it.
1. Select **Call Microsoft Graph API**.
- ![Screenshot showing how to call a web API.](./media/configure-authentication-sample-python-web-app/call-web-api.png)
+ ![Screenshot showing how to call a web A P I.](./media/configure-authentication-sample-python-web-app/call-web-api.png)
## Step 7: Deploy your application
active-directory-b2c Enable Authentication Azure Static App Options https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/enable-authentication-azure-static-app-options.md
+
+ Title: Enable Azure Static Web App authentication options using Azure Active Directory B2C
+description: This article discusses several ways to enable Azure Static Web App authentication options.
++++++ Last updated : 06/28/2022+++++
+# Enable authentication options in an Azure Static Web App by using Azure AD B2C
+
+This article describes how to enable, customize, and enhance the Azure Active Directory B2C (Azure AD B2C) authentication experience for your Azure Static Web Apps.
+
+Before you start, it's important to familiarize yourself with the [Configure authentication in an Azure Static Web App by using Azure AD B2C](configure-authentication-in-azure-static-app.md) article.
++
+To use a custom domain and your tenant ID in the authentication URL, follow the guidance in [Enable custom domains](custom-domain.md). Open the [configuration file](../static-web-apps/configuration.md). This file contains information about your Azure AD B2C identity provider.
+
+In the configuration file, follow these steps:
+
+1. Under the `customOpenIdConnectProviders` element, locate the `wellKnownOpenIdConfiguration` element.
+1. Update the URL of your Azure AD B2C well-known configuration endpoint with your custom domain and [tenant ID](tenant-management.md#get-your-tenant-id). For more information, see [Use tenant ID](custom-domain.md#optional-use-tenant-id).
+
+The following JSON shows the app settings before the change:
+
+```JSON
+"openIdConnectConfiguration": {
+    "wellKnownOpenIdConfiguration": "https://contoso.b2clogin.com/contoso.onmicrosoft.com/<POLICY_NAME>/v2.0/.well-known/openid-configuration"
+}
+```
+
+The following JSON shows the app settings after the change:
+
+```JSON
+"openIdConnectConfiguration": {
+    "wellKnownOpenIdConfiguration": "https://login.contoso.com/00000000-0000-0000-0000-000000000000/<POLICY_NAME>/v2.0/.well-known/openid-configuration"
+}
+```
+++
+1. Check the domain name of your external identity provider. For more information, see [Redirect sign-in to a social provider](direct-signin.md#redirect-sign-in-to-a-social-provider).
+1. Open the [configuration file](../static-web-apps/configuration.md).
+1. Under the `login` element, locate the `loginParameterNames`.
+1. Add the `domain_hint` parameter with its corresponding value, such as `facebook.com`.
+
+The following code snippet demonstrates how to pass the domain hint parameter. It uses `facebook.com` as the attribute value.
+
+```json
+"login": {
+ "nameClaimType": "http://schemas.xmlsoap.org/ws/2005/05/identity/claims/name",
+ "scopes": [],
+ "loginParameterNames": ["domain_hint=facebook.com"]
+}
+```
+++
+1. [Configure language customization](language-customization.md).
+1. Open the [configuration file](../static-web-apps/configuration.md).
+1. Under the `login` element, locate the `loginParameterNames`.
+1. Add the `ui_locales` parameter with its corresponding value, such as `es-es`.
+
+The following code snippet demonstrates how to pass the `ui_locales` parameter. It uses `es-es` as the attribute value.
+
+```json
+"login": {
+ "nameClaimType": "http://schemas.xmlsoap.org/ws/2005/05/identity/claims/name",
+ "scopes": [],
+ "loginParameterNames": ["ui_locales=es-es"]
+}
+```
++
+1. Configure the [ContentDefinitionParameters](customize-ui-with-html.md#configure-dynamic-custom-page-content-uri) element.
+1. Open the [configuration file](../static-web-apps/configuration.md).
+1. Under the `login` element, locate the `loginParameterNames`.
+1. Add the custom parameter, such as `campaignId`.
+
+The following code snippet demonstrates how to pass the `campaignId` custom query string parameter. It uses `germany-promotion` as the attribute value.
+
+```json
+"login": {
+ "nameClaimType": "http://schemas.xmlsoap.org/ws/2005/05/identity/claims/name",
+ "scopes": [],
+ "loginParameterNames": ["campaignId=germany-promotion"]
+}
+```
+
+## Next steps
+
+- Check out the [Azure Static Web App configuration overview](../static-web-apps/configuration-overview.md) article.
active-directory-b2c Enable Authentication Python Web App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/enable-authentication-python-web-app.md
+
+ Title: Enable authentication in your own Python web application using Azure Active Directory B2C
+description: This article explains how to enable authentication in your own Python web application using Azure AD B2C
+++++++ Last updated : 06/28/2022++++
+# Enable authentication in your own Python web application using Azure Active Directory B2C
+
+In this article, you'll learn how to add Azure Active Directory B2C (Azure AD B2C) authentication to your own Python web application. You'll enable users to sign in, sign out, update their profile, and reset their password by using Azure AD B2C user flows. This article uses the [Microsoft Authentication Library (MSAL) for Python](https://github.com/AzureAD/microsoft-authentication-library-for-python/tree/main) to simplify adding authentication to your Python web application.
+
+The aim of this article is to substitute the sample application you used in [Configure authentication in a sample Python web application by using Azure AD B2C](configure-authentication-sample-python-web-app.md) with your own Python application.
+
+This article uses [Python 3.9+](https://www.python.org/) and [Flask 2.1](https://flask.palletsprojects.com/en/2.1.x/) to create a basic web app. The application's views use [Jinja2 templates](https://flask.palletsprojects.com/en/2.1.x/templating/).
+
+## Prerequisites
+
+- Complete the steps in [Configure authentication in a sample Python web application by using Azure AD B2C](configure-authentication-sample-python-web-app.md). You'll create Azure AD B2C user flows and register a web application in Azure portal.
+- Install [Python](https://www.python.org/downloads/) 3.9 or above
+- [Visual Studio Code](https://code.visualstudio.com/) or another code editor
+- Install the [Python extension](https://marketplace.visualstudio.com/items?itemName=ms-python.python) for Visual Studio Code
+
+## Step 1: Create the Python project
+
+1. On your file system, create a project folder for this tutorial, such as `my-python-web-app`.
+1. In your terminal, change directory into your Python app folder, such as `cd my-python-web-app`.
+1. Run the following commands to create and activate a virtual environment named `.venv` based on your current interpreter.
+
+ # [Linux](#tab/linux)
+
+ ```bash
+ sudo apt-get install python3-venv # If needed
+ python3 -m venv .venv
+ source .venv/bin/activate
+ ```
+
+ # [macOS](#tab/macos)
+
+ ```bash
+ python3 -m venv .venv
+ source .venv/bin/activate
+ ```
+
+ # [Windows](#tab/windows)
+
+ ```bash
+ py -3 -m venv .venv
+ .venv\scripts\activate
+ ```
+
+
+1. Update pip in the virtual environment by running the following command in the terminal:
+
+ ```bash
+ python -m pip install --upgrade pip
+ ```
+
+1. To enable the Flask debug features, set the `FLASK_ENV` environment variable to `development`. For more information about debugging Flask apps, check out the [Flask documentation](https://flask.palletsprojects.com/en/2.1.x/config/#environment-and-debug-features).
+
+ # [Linux](#tab/linux)
+
+ ```bash
+ export FLASK_ENV=development
+ ```
+
+ # [macOS](#tab/macos)
+
+ ```bash
+ export FLASK_ENV=development
+ ```
+
+ # [Windows](#tab/windows)
+
+ ```bash
+ set FLASK_ENV=development
+ ```
+
+
+1. Open the project folder in VS Code by running the `code .` command, or by opening VS Code and selecting **File** > **Open Folder**.
++
+## Step 2: Install app dependencies
+
+Under your web app root folder, create the `requirements.txt` file. The requirements file [lists the packages](https://pip.pypa.io/en/stable/user_guide/) to be installed using `pip install`. Add the following content to the `requirements.txt` file:
++
+```
+Flask>=2
+werkzeug>=2
+
+flask-session>=0.3.2,<0.5
+requests>=2,<3
+msal>=1.7,<2
+```
+
+In your terminal, install the dependencies by running the following commands:
+
+# [Linux](#tab/linux)
+
+```bash
+python -m pip install -r requirements.txt
+```
+
+# [macOS](#tab/macos)
+
+```bash
+python -m pip install -r requirements.txt
+```
+
+# [Windows](#tab/windows)
+
+```bash
+py -m pip install -r requirements.txt
+```
+++
+## Step 3: Build app UI components
+
+Flask is a lightweight Python framework for web applications that provides the basics for URL routing and page rendering. It leverages Jinja2 as its template engine to render the content of your app. For more information, check out the [template designer documentation](https://jinja.palletsprojects.com/en/3.1.x/templates/). In this section, you add the required templates that provide the basic functionality of your web app.
+
+### Step 3.1: Create a base template
+
+A base page template in Flask contains all the shared parts of a set of pages, including references to CSS files, script files, and so forth. Base templates also define one or more block tags that other templates that extend the base are expected to override. A block tag is delineated by `{% block <name> %}` and `{% endblock %}` in both the base template and the extended template.
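+
+For example, a minimal child template overrides the base template's blocks like this (a sketch only; the full templates for this app are created in the next step):
+
+```html
+{% extends "base.html" %}
+{% block title %}Home{% endblock %}
+{% block content %}
+<p>This markup replaces the base template's empty content block.</p>
+{% endblock %}
+```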
++
+In the root folder of your web app, create the `templates` folder. In the templates folder, create a file named `base.html`, and then add the contents below:
+
+```html
+<!DOCTYPE html>
+<html lang="en">
+
+<head>
+ <meta charset="UTF-8">
+ {% block metadata %}{% endblock %}
+
+ <title>{% block title %}{% endblock %}</title>
+ <!-- Bootstrap CSS file reference -->
+ <link href="https://cdn.jsdelivr.net/npm/bootstrap@5.2.0-beta1/dist/css/bootstrap.min.css" rel="stylesheet"
+ integrity="sha384-0evHe/X+R7YkIZDRvuzKMRqM+OrBnVFBL6DOitfPri4tjfHxaWutUpFmBp4vmVor" crossorigin="anonymous">
+</head>
+
+<body>
+ <nav class="navbar navbar-expand-lg navbar-dark bg-dark">
+ <div class="container-fluid">
+ <a class="navbar-brand" href="{{ url_for('index')}}">Python Flask demo</a>
+ <button class="navbar-toggler" type="button" data-bs-toggle="collapse"
+ data-bs-target="#navbarSupportedContent" aria-controls="navbarSupportedContent" aria-expanded="false"
+ aria-label="Toggle navigation">
+ <span class="navbar-toggler-icon"></span>
+ </button>
+ <div class="collapse navbar-collapse" id="navbarSupportedContent">
+ <ul class="navbar-nav me-auto mb-2 mb-lg-0">
+ <li class="nav-item">
+ <a class="nav-link active" aria-current="page" href="{{ url_for('index')}}">Home</a>
+ </li>
+ <li class="nav-item">
+ <a class="nav-link" href="{{ url_for('graphcall')}}">Graph API</a>
+ </li>
+ </ul>
+ </div>
+ </div>
+ </nav>
+
+ <div class="container body-content">
+ <br />
+ {% block content %}
+ {% endblock %}
+
+ <hr />
+ <footer>
+ <p>Powered by MSAL Python {{ version }}</p>
+ </footer>
+ </div>
+</body>
+
+</html>
+```
+
+### Step 3.2: Create the web app templates
+
+Add the following templates under the templates folder. These templates extend the `base.html` template:
+
+- **index.html**: the home page of the web app. The template uses the following logic: if the user isn't signed in, it renders the sign-in button. If the user is signed in, it renders the ID token claims, a link to edit the profile, and a link to call a Graph API.
+
+ ```html
+ {% extends "base.html" %}
+ {% block title %}Home{% endblock %}
+ {% block content %}
+
+ <h1>Microsoft Identity Python Web App</h1>
+
+ {% if user %}
+ <h2>Claims:</h2>
+ <pre>{{ user |tojson(indent=4) }}</pre>
+
+
+ {% if config.get("ENDPOINT") %}
+ <li><a href='/graphcall'>Call Microsoft Graph API</a></li>
+ {% endif %}
+
+ {% if config.get("B2C_PROFILE_AUTHORITY") %}
+ <li><a href='{{_build_auth_code_flow(authority=config["B2C_PROFILE_AUTHORITY"])["auth_uri"]}}'>Edit Profile</a></li>
+ {% endif %}
+
+ <li><a href="/logout">Logout</a></li>
+
+ {% else %}
+ <li><a href='{{ auth_url }}'>Sign In</a></li>
+ {% endif %}
+
+ {% endblock %}
+ ```
+
+- **graph.html**: Demonstrates how to call a REST API.
+
+ ```html
+ {% extends "base.html" %}
+ {% block title %}Graph API{% endblock %}
+ {% block content %}
+ <a href="javascript:window.history.go(-1)">Back</a>
+ <!-- Displayed on top of a potentially large JSON response, so it will remain visible -->
+ <h1>Graph API Call Result</h1>
+ <pre>{{ result |tojson(indent=4) }}</pre> <!-- Just a generic json viewer -->
+ {% endblock %}
+ ```
+
+- **auth_error.html**: Handles authentication errors.
+
+ ```html
+ {% extends "base.html" %}
+ {% block title%}Error{% endblock%}
+
+ {% block metadata %}
+ {% if config.get("B2C_RESET_PASSWORD_AUTHORITY") and "AADB2C90118" in result.get("error_description") %}
+ <!-- See also https://docs.microsoft.com/en-us/azure/active-directory-b2c/active-directory-b2c-reference-policies#linking-user-flows -->
+ <meta http-equiv="refresh"
+ content='0;{{_build_auth_code_flow(authority=config["B2C_RESET_PASSWORD_AUTHORITY"])["auth_uri"]}}'>
+ {% endif %}
+ {% endblock %}
+
+ {% block content %}
+ <h2>Login Failure</h2>
+ <dl>
+ <dt>{{ result.get("error") }}</dt>
+ <dd>{{ result.get("error_description") }}</dd>
+ </dl>
+
+ <a href="{{ url_for('index') }}">Homepage</a>
+ {% endblock %}
+ ```
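+
+- **login.html**: the app code you add in step 5 renders a template named `login.html` from its `/login` route, which the article doesn't otherwise define. The following is a minimal sketch that follows the same pattern as the templates above:
+
+    ```html
+    {% extends "base.html" %}
+    {% block title %}Login{% endblock %}
+    {% block content %}
+    <li><a href='{{ auth_url }}'>Sign In</a></li>
+    {% endblock %}
+    ```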
+
+## Step 4: Configure your web app
+
+In the root folder of your web app, create a file named `app_config.py`. This file contains information about your Azure AD B2C identity provider. The web app uses this information to establish a trust relationship with Azure AD B2C, sign users in and out, acquire tokens, and validate them. Add the following contents into the file:
+
+```python
+import os
+
+b2c_tenant = "fabrikamb2c"
+signupsignin_user_flow = "B2C_1_signupsignin1"
+editprofile_user_flow = "B2C_1_profileediting1"
+
+resetpassword_user_flow = "B2C_1_passwordreset1" # Note: Legacy setting.
+
+authority_template = "https://{tenant}.b2clogin.com/{tenant}.onmicrosoft.com/{user_flow}"
+
+CLIENT_ID = "Enter_the_Application_Id_here" # Application (client) ID of app registration
+
+CLIENT_SECRET = "Enter_the_Client_Secret_Here" # Application secret.
+
+AUTHORITY = authority_template.format(
+ tenant=b2c_tenant, user_flow=signupsignin_user_flow)
+B2C_PROFILE_AUTHORITY = authority_template.format(
+ tenant=b2c_tenant, user_flow=editprofile_user_flow)
+
+B2C_RESET_PASSWORD_AUTHORITY = authority_template.format(
+ tenant=b2c_tenant, user_flow=resetpassword_user_flow)
+
+REDIRECT_PATH = "/getAToken"
+
+# This is the API resource endpoint
+ENDPOINT = '' # Application ID URI of app registration in Azure portal
+
+# These are the scopes you've exposed in the web API app registration in the Azure portal
+SCOPE = [] # Example with two exposed scopes: ["demo.read", "demo.write"]
+
+SESSION_TYPE = "filesystem" # Specifies the token cache should be stored in server-side session
+```
+
+Update the code above with your Azure AD B2C environment settings as explained in the [Configure the sample web app](configure-authentication-sample-python-web-app.md#step-4-configure-the-sample-web-app) section of the [Configure authentication in a sample Python web app](configure-authentication-sample-python-web-app.md) article.
+
+## Step 5: Add the web app code
+
+In this section, you add the Flask view functions, and the MSAL library authentication methods. Under the root folder of your project, add a file named `app.py` with the following code:
+
+```python
+import uuid
+import requests
+from flask import Flask, render_template, session, request, redirect, url_for
+from flask_session import Session # https://pythonhosted.org/Flask-Session
+import msal
+import app_config
++
+app = Flask(__name__)
+app.config.from_object(app_config)
+Session(app)
+
+# This section is needed for url_for("foo", _external=True) to automatically
+# generate http scheme when this sample is running on localhost,
+# and to generate https scheme when it is deployed behind reversed proxy.
+# See also https://flask.palletsprojects.com/en/1.0.x/deploying/wsgi-standalone/#proxy-setups
+from werkzeug.middleware.proxy_fix import ProxyFix
+app.wsgi_app = ProxyFix(app.wsgi_app, x_proto=1, x_host=1)
++
+@app.route("/anonymous")
+def anonymous():
+ return "anonymous page"
+
+@app.route("/")
+def index():
+ #if not session.get("user"):
+ # return redirect(url_for("login"))
+
+ if not session.get("user"):
+ session["flow"] = _build_auth_code_flow(scopes=app_config.SCOPE)
+        return render_template('index.html', auth_url=session["flow"]["auth_uri"], version=msal.__version__)
+    else:
+        return render_template('index.html', user=session["user"], version=msal.__version__)
+
+@app.route("/login")
+def login():
+ # Technically we could use empty list [] as scopes to do just sign in,
+ # here we choose to also collect end user consent upfront
+ session["flow"] = _build_auth_code_flow(scopes=app_config.SCOPE)
+ return render_template("login.html", auth_url=session["flow"]["auth_uri"], version=msal.__version__)
+
+@app.route(app_config.REDIRECT_PATH) # Its absolute URL must match your app's redirect_uri set in AAD
+def authorized():
+ try:
+ cache = _load_cache()
+ result = _build_msal_app(cache=cache).acquire_token_by_auth_code_flow(
+ session.get("flow", {}), request.args)
+ if "error" in result:
+ return render_template("auth_error.html", result=result)
+ session["user"] = result.get("id_token_claims")
+ _save_cache(cache)
+ except ValueError: # Usually caused by CSRF
+ pass # Simply ignore them
+ return redirect(url_for("index"))
+
+@app.route("/logout")
+def logout():
+ session.clear() # Wipe out user and its token cache from session
+ return redirect( # Also logout from your tenant's web session
+ app_config.AUTHORITY + "/oauth2/v2.0/logout" +
+ "?post_logout_redirect_uri=" + url_for("index", _external=True))
+
+@app.route("/graphcall")
+def graphcall():
+ token = _get_token_from_cache(app_config.SCOPE)
+ if not token:
+ return redirect(url_for("login"))
+ graph_data = requests.get( # Use token to call downstream service
+ app_config.ENDPOINT,
+ headers={'Authorization': 'Bearer ' + token['access_token']},
+ ).json()
+ return render_template('graph.html', result=graph_data)
++
+def _load_cache():
+ cache = msal.SerializableTokenCache()
+ if session.get("token_cache"):
+ cache.deserialize(session["token_cache"])
+ return cache
+
+def _save_cache(cache):
+ if cache.has_state_changed:
+ session["token_cache"] = cache.serialize()
+
+def _build_msal_app(cache=None, authority=None):
+ return msal.ConfidentialClientApplication(
+ app_config.CLIENT_ID, authority=authority or app_config.AUTHORITY,
+ client_credential=app_config.CLIENT_SECRET, token_cache=cache)
+
+def _build_auth_code_flow(authority=None, scopes=None):
+ return _build_msal_app(authority=authority).initiate_auth_code_flow(
+ scopes or [],
+ redirect_uri=url_for("authorized", _external=True))
+
+def _get_token_from_cache(scope=None):
+ cache = _load_cache() # This web app maintains one cache per session
+ cca = _build_msal_app(cache=cache)
+ accounts = cca.get_accounts()
+ if accounts: # So all account(s) belong to the current signed-in user
+ result = cca.acquire_token_silent(scope, account=accounts[0])
+ _save_cache(cache)
+ return result
+
+app.jinja_env.globals.update(_build_auth_code_flow=_build_auth_code_flow) # Used in template
+
+if __name__ == "__main__":
+ app.run()
+
+```
+
+## Step 6: Run your web app
+
+In the terminal, run the app by entering the following command, which runs the Flask development server. The development server looks for `app.py` by default. Then, open your browser and navigate to the web app URL: <http://localhost:5000>.
+
+# [Linux](#tab/linux)
+
+```bash
+python -m flask run --host localhost --port 5000
+```
+
+# [macOS](#tab/macos)
+
+```bash
+python -m flask run --host localhost --port 5000
+```
+
+# [Windows](#tab/windows)
+
+```bash
+py -m flask run --host localhost --port 5000
+```
+++
+## [Optional] Debug your app
+
+The debugging feature gives you the opportunity to pause a running program on a particular line of code. When you pause the program, you can examine variables, run code in the Debug Console panel, and otherwise take advantage of the features described on [Debugging](https://code.visualstudio.com/docs/python/debugging). To use the Visual Studio Code debugger, check out the [VS Code documentation](https://code.visualstudio.com/docs/python/tutorial-flask#_create-multiple-templates-that-extend-a-base-template).
+
+To change the host name or port number, use the `args` array in the `launch.json` file. The following example demonstrates how to set the host name to `localhost` and the port number to `5001`. If you change the host name or the port number, you must update the redirect URI of your application. For more information, check out the [Register a web application](configure-authentication-sample-python-web-app.md#step-2-register-a-web-application) step.
+
+```json
+{
+ // Use IntelliSense to learn about possible attributes.
+ // Hover to view descriptions of existing attributes.
+ // For more information, visit: https://go.microsoft.com/fwlink/?linkid=830387
+ "version": "0.2.0",
+ "configurations": [
+ {
+ "name": "Python: Flask",
+ "type": "python",
+ "request": "launch",
+ "module": "flask",
+ "env": {
+ "FLASK_APP": "app.py",
+ "FLASK_ENV": "development"
+ },
+ "args": [
+ "run",
+ "--host=localhost",
+ "--port=5001"
+ ],
+ "jinja": true,
+ "justMyCode": true
+ }
+ ]
+}
+```
+++
+## Next steps
+
+- Learn how to [customize and enhance the Azure AD B2C authentication experience for your web app](enable-authentication-python-web-app-options.md)
active-directory Concept Condition Filters For Devices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/concept-condition-filters-for-devices.md
There are multiple scenarios that organizations can now enable using filter for
- Policy 2: Select users and groups and include group that contains service accounts only, accessing all cloud apps, excluding a filter for devices using rule expression device.extensionAttribute2 not equals TeamsPhoneDevice and for Access controls, Block.

> [!NOTE]
-> Azure AD uses device authentication to evaluate device filter rules. For devices that are unregistered with Azure AD, all device properties are considered as null values.
+> Azure AD uses device authentication to evaluate device filter rules. For a device that is unregistered with Azure AD, all device properties are considered as null values and the device attributes cannot be determined since the device does not exist in the directory. The best way to target policies for unregistered devices is by using the negative operator since the configured filter rule would apply. If you were to use a positive operator, the filter rule would only apply when a device exists in the directory and the configured rule matches the attribute on the device.
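+>
+> For example, with a rule such as the following (the attribute and value are illustrative), an unregistered device satisfies the rule because its `extensionAttribute2` is null and therefore not equal to the specified value:
+>
+> `device.extensionAttribute2 -ne "TeamsPhoneDevice"`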
## Create a Conditional Access policy
active-directory Howto Configure Publisher Domain https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/howto-configure-publisher-domain.md
# Configure an application's publisher domain
-An application's publisher domain is displayed to users on the [application's consent prompt](application-consent-experience.md) to let users know where their information is being sent. Multi-tenant applications that are registered after May 21, 2019 that don't have a publisher domain show up as **unverified**. Multi-tenant applications are applications that support accounts outside of a single organizational directory; for example, support all Azure AD accounts, or support all Azure AD accounts and personal Microsoft accounts.
+An application's publisher domain informs users where their information is being sent and acts as a prerequisite for [publisher verification](publisher-verification-overview.md). Depending on when the app was registered and its verified publisher status, the publisher domain may be displayed directly to the user on the [application's consent prompt](application-consent-experience.md). [Multi-tenant applications](/azure/architecture/guide/multitenant/overview) that are registered after May 21, 2019, and that don't have a publisher domain show up as **unverified**. Multi-tenant applications are applications that support accounts outside of a single organizational directory; for example, support all Azure AD accounts, or support all Azure AD accounts and personal Microsoft accounts.
## New applications
The following table summarizes the default behavior of the publisher domain valu
| *.onmicrosoft.com | *.onmicrosoft.com |
| - *.onmicrosoft.com<br/>- domain1.com<br/>- domain2.com (primary) | domain2.com |
-If a multi-tenant application's publisher domain isn't set, or if it's set to a domain that ends in .onmicrosoft.com, the app's consent prompt will show **unverified** in place of the publisher domain.
-
+1. If your multi-tenant application was registered between **May 21, 2019 and November 30, 2020**:
+    - If the application's publisher domain isn't set, or if it's set to a domain that ends in .onmicrosoft.com, the app's consent prompt will show **unverified** in place of the publisher domain.
+    - If the application has a verified app domain, the consent prompt will show the verified domain.
+    - If the application is publisher verified, it will show a [blue "verified" badge](publisher-verification-overview.md) indicating the same.
+2. If your multi-tenant application was registered after **November 30, 2020**:
+    - If the application is not publisher verified, the app shows as "**unverified**" in the consent prompt (that is, no publisher domain-related info is shown).
+    - If the application is publisher verified, it will show a [blue "verified" badge](publisher-verification-overview.md) indicating the same.
## Grandfathered applications
-If your app was registered before May 21, 2019, your application's consent prompt will not show **unverified** if you have not set a publisher domain. We recommend that you set the publisher domain value so that users can see this information on your app's consent prompt.
+If your app was registered before May 21, 2019, your application's consent prompt will not show **unverified** even if you have not set a publisher domain. We recommend that you set the publisher domain value so that users can see this information on your app's consent prompt.
## Configure publisher domain using the Azure portal
Configuring the publisher domain has an impact on what users see on the app cons
The following table describes the behavior for applications created before May 21, 2019.
-![Consent prompt for apps created before May 21, 2019](./media/howto-configure-publisher-domain/old-app-behavior-table.png)
+![Table that shows consent prompt behavior for apps created before May 21, 2019.](./media/howto-configure-publisher-domain/old-app-behavior-table.png)
+
+The behavior for applications created between May 21, 2019 and November 30, 2020 will depend on the publisher domain and the type of application. The following table describes what is shown on the consent prompt with the different combinations of configurations.
+
+![Table that shows consent prompt behavior for apps created between May 21, 2019 and Nov 30, 2020.](./media/howto-configure-publisher-domain/new-app-behavior-table.png)
-The behavior for new applications created after May 21, 2019 will depend on the publisher domain and the type of application. The following table describes the changes you should expect to see with the different combinations of configurations.
+For multi-tenant applications created after November 30, 2020, only the publisher verification status is surfaced in the consent prompt. The following table describes what is shown on the consent prompt depending on whether an app is publisher verified or not. The consent prompt for single-tenant applications remains the same as described above.
-![Consent prompt for apps created after May 21, 2019](./media/howto-configure-publisher-domain/new-app-behavior-table.png)
+![Table that shows consent prompt behavior for apps created after Nov 30, 2020.](./media/howto-configure-publisher-domain/new-app-behavior-publisher-verification-table.png)
## Implications on redirect URIs
active-directory Publisher Verification Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/publisher-verification-overview.md
Publisher verification provides the following benefits:
## Requirements

There are a few pre-requisites for publisher verification, some of which will have already been completed by many Microsoft partners. They are:

-- An MPN ID for a valid [Microsoft Partner Network](https://partner.microsoft.com/membership) account that has completed the [verification](/partner-center/verification-responses) process. This MPN account must be the [Partner global account (PGA)](/partner-center/account-structure#the-top-level-is-the-partner-global-account-pga) for your organization.
+- An MPN ID for a valid [Microsoft Partner Network](https://partner.microsoft.com/membership) account that has completed the [verification](/partner-center/verification-responses) process. This MPN account must be the [Partner global account (PGA)](/partner-center/account-structure#the-top-level-is-the-partner-global-account-pga) for your organization. (**NOTE**: It can't be a Partner Location MPN ID; Location MPN IDs aren't currently supported.)
+- The application to be publisher verified must be registered using an Azure AD account. Applications registered using a Microsoft personal account aren't supported for publisher verification.
There are a few pre-requisites for publisher verification, some of which will ha
- In Azure AD this user must be a member of one of the following [roles](../roles/permissions-reference.md): Application Admin, Cloud Application Admin, or Global Admin.
- - In Partner Center this user must have of the following [roles](/partner-center/permissions-overview): MPN Admin, Accounts Admin, or a Global Admin (this is a shared role mastered in Azure AD).
+ - In Partner Center, this user must have one of the following [roles](/partner-center/permissions-overview): MPN Partner Admin, Account Admin, or Global Admin (a shared role mastered in Azure AD).
- The user performing verification must sign in using [multi-factor authentication](../authentication/howto-mfa-getstarted.md).
active-directory Scenario Web App Sign User App Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/scenario-web-app-sign-user-app-configuration.md
To add authentication with the Microsoft identity platform (formerly Azure AD v2
}).AddMicrosoftIdentityUI();
```
-3. In the `Configure` method in *Startup.cs*, enable authentication with a call to `app.UseAuthentication();`
+3. In the `Configure` method in *Startup.cs*, enable authentication with a call to `app.UseAuthentication();` and `app.MapControllers();`.
```c#
// This method gets called by the runtime. Use this method to configure the HTTP request pipeline.
// more code here
app.UseAuthentication();
app.UseAuthorization();
+
+ app.MapRazorPages();
+ app.MapControllers();
// more code here
}
```
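For orientation, here's a minimal sketch of how these calls fit together in the Startup-based hosting model (ASP.NET Core 3.1/5); it assumes Microsoft.Identity.Web is registered in `ConfigureServices`, and the middleware order shown is the standard ASP.NET Core pattern rather than anything prescribed by this article. In the .NET 6 minimal hosting model, `MapRazorPages` and `MapControllers` are instead called directly on the `WebApplication`:

```c#
// Minimal sketch; order matters: routing, then authentication,
// then authorization, then endpoint mapping.
public void Configure(IApplicationBuilder app, IWebHostEnvironment env)
{
    app.UseHttpsRedirection();
    app.UseStaticFiles();
    app.UseRouting();

    app.UseAuthentication();
    app.UseAuthorization();

    app.UseEndpoints(endpoints =>
    {
        endpoints.MapRazorPages();
        endpoints.MapControllers(); // needed for the Microsoft Identity UI account controllers
    });
}
```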
active-directory Troubleshoot Hybrid Join Windows Current https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/devices/troubleshoot-hybrid-join-windows-current.md
Possible reasons for failure:
| Error code | Reason | Resolution |
| --- | --- | --- |
-| **DSREG_AUTOJOIN_ADCONFIG_READ_FAILED** (0x801c001d/-2145648611) | Unable to read the service connection point (SCP) object and get the Azure AD tenant information. | Refer to the [Configure a service connection point](hybrid-azuread-join-federated-domains.md#configure-hybrid-azure-ad-join) section. |
+| **DSREG_AUTOJOIN_ADCONFIG_READ_FAILED** (0x801c001d/-2145648611) | Unable to read the service connection point (SCP) object and get the Azure AD tenant information. | Refer to the [Configure a service connection point](hybrid-azuread-join-manual.md#configure-a-service-connection-point) section. |
| **DSREG_AUTOJOIN_DISC_FAILED** (0x801c0021/-2145648607) | Generic discovery failure. Failed to get the discovery metadata from the data replication service (DRS). | To investigate further, find the sub-error in the next sections. |
| **DSREG_AUTOJOIN_DISC_WAIT_TIMEOUT** (0x801c001f/-2145648609) | Operation timed out while performing discovery. | Ensure that `https://enterpriseregistration.windows.net` is accessible in the system context. For more information, see the [Network connectivity requirements](hybrid-azuread-join-managed-domains.md#prerequisites) section. |
| **DSREG_AUTOJOIN_USERREALM_DISCOVERY_FAILED** (0x801c003d/-2145648579) | Generic realm discovery failure. Failed to determine domain type (managed/federated) from STS. | To investigate further, find the sub-error in the next sections. |
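To see which of these error codes applies on an affected device, run the built-in device registration diagnostic from an elevated command prompt and check the diagnostics section of its output:

```
dsregcmd /status
```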
active-directory Clean Up Unmanaged Azure Ad Accounts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/clean-up-unmanaged-azure-ad-accounts.md
+
+ Title: Clean up unmanaged Azure AD accounts - Azure Active Directory | Microsoft Docs
+description: Clean up unmanaged accounts using email OTP and PowerShell modules in Azure Active Directory
++++ Last updated : 06/28/2022++++++++
+# Clean up unmanaged Azure Active Directory accounts
+
+Azure Active Directory (Azure AD) supports self-service sign-up for email-verified users. Users can create Azure AD accounts if they can verify email ownership. To learn more, see [What is self-service sign-up for Azure Active Directory?](https://docs.microsoft.com/azure/active-directory/enterprise-users/directory-self-service-signup)
+
+However, if a user creates an account and the domain isn't verified in an Azure AD tenant, the user is created in an unmanaged, or viral, tenant. The user can create an account with an organization's domain, but the account isn't under the lifecycle management of the organization's IT, and access can persist after the user leaves the organization.
+
+## Remove unmanaged Azure AD accounts
+
+You can remove unmanaged Azure AD accounts from your Azure AD tenants
+and prevent these types of accounts from redeeming future invitations.
+
+1. Read how to enable [one-time passcodes](https://docs.microsoft.com/azure/active-directory/external-identities/one-time-passcode#enable-email-one-time-passcode) (OTP).
+
+2. Use the sample application in [Azure-samples/Remove-unmanaged-guests](https://github.com/Azure-Samples/Remove-Unmanaged-Guests) or the [AzureAD/MSIdentityTools](https://github.com/AzureAD/MSIdentityTools/wiki/) PowerShell module to identify viral users in an Azure AD tenant and reset their redemption status.
+
+Once the above steps are complete, when users with unmanaged Azure AD accounts try to access your tenant, they'll re-redeem their invitations. However, because email OTP is enabled, Azure AD will prevent users from redeeming with an existing unmanaged Azure AD account, and they'll redeem with another account type. Google Federation and SAML/WS-Fed aren't enabled by default, so by default these users will redeem with either an MSA or email OTP, with MSA taking precedence. For a full explanation of the B2B redemption precedence, refer to the [redemption precedence flow chart](https://docs.microsoft.com/azure/active-directory/external-identities/redemption-experience#invitation-redemption-flow).
+
+## Overtaken tenants and domains
+
+Some tenants created as unmanaged tenants can be taken over and converted to managed tenants. See [Take over an unmanaged directory as administrator in Azure AD](https://docs.microsoft.com/azure/active-directory/enterprise-users/domains-admin-takeover).
+
+In some cases, overtaken domains might not be updated (for example, a DNS TXT record might be missing) and therefore become flagged as unmanaged. Implications are:
+
+- For guest users who belong to formerly unmanaged tenants, redemption status is reset and one consent prompt appears. Redemption occurs with the same account as before.
+
+- After unmanaged user redemption status is reset, the tool might identify unmanaged users that are false positives.
+
+## Reset redemption using a sample application
+
+To identify unmanaged Azure AD accounts and reset their redemption status:
+
+1. Ensure email OTP is enabled.
+
+2. Use the sample application on
+ [Azure-Samples/Remove-Unmanaged-Guests](https://github.com/Azure-Samples/Remove-Unmanaged-Guests).
+
+## Reset redemption using MSIdentityTools PowerShell Module
+
+The MSIdentityTools PowerShell module is a collection of cmdlets and scripts for use with the Microsoft identity platform and Azure AD; they augment the capabilities of the [Microsoft Graph PowerShell SDK](https://github.com/microsoftgraph/msgraph-sdk-powershell).
+
+Run the following cmdlets to install and load the modules (a consolidated sketch follows these lists):
+
+- `Install-Module Microsoft.Graph -Scope CurrentUser`
+
+- `Install-Module MSIdentityTools`
+
+- `Import-Module MSIdentityTools, Microsoft.Graph`
+
+To identify unmanaged Azure AD accounts, run:
+
+- `Connect-MgGraph -Scopes User.Read.All`
+
+- `Get-MsIdUnmanagedExternalUser`
+
+To reset unmanaged Azure AD account redemption status, run:
+
+- `Connect-MgGraph -Scopes User.ReadWrite.All`
+
+- `Get-MsIdUnmanagedExternalUser | Reset-MsIdExternalUser`
+
+To delete unmanaged Azure AD accounts, run:
+
+- `Connect-MgGraph -Scopes User.ReadWrite.All`
+
+- `Get-MsIdUnmanagedExternalUser | Remove-MgUser`
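Putting these together, here's a minimal end-to-end sketch (the properties selected for review are standard Microsoft Graph user properties, shown for illustration):

```powershell
# One-time setup: install and load the modules.
Install-Module Microsoft.Graph -Scope CurrentUser
Install-Module MSIdentityTools -Scope CurrentUser
Import-Module MSIdentityTools, Microsoft.Graph

# Sign in with a scope that permits updating and deleting users.
Connect-MgGraph -Scopes User.ReadWrite.All

# Identify unmanaged (viral) external users and review them before acting.
$unmanaged = Get-MsIdUnmanagedExternalUser
$unmanaged | Format-Table DisplayName, Mail

# Reset redemption status so these users re-redeem their invitations...
$unmanaged | Reset-MsIdExternalUser

# ...or remove the accounts entirely.
# $unmanaged | Remove-MgUser
```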
+
+## Next steps
+
+Examples of using [Get-MsIdUnmanagedExternalUser](https://github.com/AzureAD/MSIdentityTools/wiki/Get-MsIdUnmanagedExternalUser)
active-directory 1 Secure Access Posture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/1-secure-access-posture.md
Title: Determine your security posture for external collaboration with Azure Active Directory description: Before you can execute an external access security plan, you must determine what you are trying to achieve. -+ Last updated 12/18/2020-+
active-directory 4 Secure Access Groups https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/4-secure-access-groups.md
Title: Secure external access with groups in Azure Active Directory and Microsoft 365 description: Azure Active Directory and Microsoft 365 Groups can be used to increase security when external users access your resources. -+ Last updated 12/18/2020-+
active-directory 6 Secure Access Entitlement Managment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/6-secure-access-entitlement-managment.md
Title: Manage external access with Azure Active Directory Entitlement Management description: How to use Azure Active Directory Entitlement Management as a part of your overall external access security plan. -+ Last updated 12/18/2020-+
active-directory 7 Secure Access Conditional Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/7-secure-access-conditional-access.md
Title: Manage external access with Azure Active Directory Conditional Access description: How to use Azure Active Directory Conditional Access policies to secure external access to resources. -+ Last updated 01/25/2022-+
active-directory 8 Secure Access Sensitivity Labels https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/8-secure-access-sensitivity-labels.md
Title: Control external access to resources in Azure Active Directory with sensitivity labels. description: Use sensitivity labels as a part of your overall security plan for external access. -+ Last updated 12/18/2020-+
active-directory 9 Secure Access Teams Sharepoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/9-secure-access-teams-sharepoint.md
Title: Secure external access to Microsoft Teams, SharePoint, and OneDrive with Azure Active Directory description: Secure access to Microsoft 365 services as a part of your overall external access security. -+ Last updated 12/18/2020-+
active-directory Auth Header Based https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/auth-header-based.md
Title: Header-based authentication with Azure Active Directory description: Architectural guidance on achieving header-based authentication with Azure Active Directory. -+
Last updated 10/10/2020-+
active-directory Auth Kcd https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/auth-kcd.md
Title: Kerberos constrained delegation with Azure Active Directory description: Architectural guidance on achieving Kerberos constrained delegation with Azure Active Directory. -+
Last updated 10/10/2020-+
active-directory Auth Ldap https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/auth-ldap.md
Title: LDAP authentication with Azure Active Directory description: Architectural guidance on achieving LDAP authentication with Azure Active Directory. -+
Last updated 10/10/2020-+
active-directory Auth Oauth2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/auth-oauth2.md
Title: OAUTH 2.0 authentication with Azure Active Directory description: Architectural guidance on achieving OAUTH 2.0 authentication with Azure Active Directory. -+
Last updated 10/10/2020-+
active-directory Auth Oidc https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/auth-oidc.md
Title: OpenID Connect authentication with Azure Active Directory description: Architectural guidance on achieving OpenID Connect authentication with Azure Active Directory. -+
Last updated 10/10/2020-+
active-directory Auth Password Based Sso https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/auth-password-based-sso.md
Title: Password-based authentication with Azure Active Directory description: Architectural guidance on achieving password-based authentication with Azure Active Directory. -+
Last updated 10/10/2020-+
active-directory Auth Radius https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/auth-radius.md
Title: RADIUS authentication with Azure Active Directory description: Architectural guidance on achieving RADIUS authentication with Azure Active Directory. -+
Last updated 10/10/2020-+
active-directory Auth Remote Desktop Gateway https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/auth-remote-desktop-gateway.md
Title: Remote Desktop Gateway Services with Azure Active Directory description: Architectural guidance on achieving Remote Desktop Gateway Services with Azure Active Directory. -+
Last updated 10/10/2020-+
active-directory Auth Saml https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/auth-saml.md
Title: SAML authentication with Azure Active Directory description: Architectural guidance on achieving SAML authentication with Azure Active Directory -+
Last updated 10/10/2020-+
active-directory Auth Ssh https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/auth-ssh.md
Title: SSH authentication with Azure Active Directory description: Architectural guidance on achieving SSH integration with Azure Active Directory -+
Last updated 06/22/2022-+
active-directory Auth Sync Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/auth-sync-overview.md
Title: Azure Active Directory authentication and synchronization protocol overview description: Architectural guidance on integrating Azure AD with legacy authentication protocols and sync patterns -+
Last updated 10/10/2020-+
active-directory Certificate Authorities https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/certificate-authorities.md
Title: Azure Active Directory certificate authorities description: Listing of trusted certificates used in Azure -+
Last updated 10/10/2020-+
active-directory Monitor Sign In Health For Resilience https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/monitor-sign-in-health-for-resilience.md
Title: Monitor application sign-in health for resilience in Azure Active Directory description: Create queries and notifications to monitor the sign-in health of your applications. -+ Last updated 03/17/2021-+
active-directory Multi Tenant Common Considerations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/multi-tenant-common-considerations.md
Title: Common considerations for multi-tenant user management in Azure Active Directory description: Learn about the common design considerations for user access across Azure Active Directory tenants with guest accounts -+ Last updated 10/19/2021-+
active-directory Multi Tenant Common Solutions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/multi-tenant-common-solutions.md
Title: Common solutions for multi-tenant user management in Azure Active Directory description: Learn about common solutions used to configure user access across Azure Active Directory tenants with guest accounts -+ Last updated 09/25/2021-+
active-directory Multi Tenant User Management Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/multi-tenant-user-management-introduction.md
Title: Configuring multi-tenant user management in Azure Active Directory description: Learn about the different patterns used to configure user access across Azure Active Directory tenants with guest accounts -+ Last updated 09/25/2021-+
active-directory Multi Tenant User Management Scenarios https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/multi-tenant-user-management-scenarios.md
Title: Common scenarios for using multi-tenant user management in Azure Active Directory description: Learn about common scenarios where guest accounts can be used to configure user access across Azure Active Directory tenants -+ Last updated 09/25/2021-+
active-directory Protect M365 From On Premises Attacks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/protect-m365-from-on-premises-attacks.md
Title: Protecting Microsoft 365 from on-premises attacks description: Learn how to configure your systems to help protect your Microsoft 365 cloud environment from on-premises compromise. -+ Last updated 04/29/2022-+ - it-pro
active-directory Recover From Deletions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/recover-from-deletions.md
Title: Recover from deletions in Azure Active Directory description: Learn how to recover from unintended deletions. -+ Last updated 04/20/2022-+
active-directory Recover From Misconfigurations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/recover-from-misconfigurations.md
Title: Recover from misconfigurations in Azure Active Directory description: Learn how to recover from misconfigurations. -+ Last updated 04/20/2022-+
active-directory Recoverability Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/recoverability-overview.md
Title: Recoverability best practices in Azure Active Directory description: Learn the best practices for increasing recoverability. -+ Last updated 04/20/2022-+
active-directory Resilience B2b Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/resilience-b2b-authentication.md
Title: Build resilience in external user authentication with Azure Active Directory description: A guide for IT admins and architects to building resilient authentication for external users -+ Last updated 11/30/2020-+
active-directory Resilience In Credentials https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/resilience-in-credentials.md
Title: Build resilience with credential management in Azure Active Directory
description: A guide for architects and IT administrators on building a resilient credential strategy. -+ Last updated 11/30/2020-+
active-directory Resilience In Hybrid https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/resilience-in-hybrid.md
Title: Build more resilient hybrid authentication in Azure Active Directory description: A guide for architects and IT administrators on building a resilient hybrid infrastructure. -+ Last updated 11/30/2020-+
active-directory Resilience In Infrastructure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/resilience-in-infrastructure.md
Title: Build resilience in your IAM infrastructure with Azure Active Directory description: A guide for architects and IT administrators on building resilience to disruption of their IAM infrastructure. -+ Last updated 11/30/2020-+
active-directory Resilience On Premises Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/resilience-on-premises-access.md
Title: Build resilience in application access with Application Proxy description: A guide for architects and IT administrators on using Application Proxy for resilient access to on-premises applications -+ Last updated 11/30/2020-+
active-directory Resilience Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/resilience-overview.md
Title: Resilience in identity and access management with Azure Active Directory description: Learn how to build resilience into identity and access management. Resilience helps endure disruption to system components and recover with minimal effort. -+
Last updated 04/29/2022-+ - it-pro
active-directory Resilience With Continuous Access Evaluation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/resilience-with-continuous-access-evaluation.md
Title: Build resilience by using Continuous Access Evaluation in Azure Active Directory description: A guide for architects and IT administrators on using CAE -+
Last updated 11/30/2020-+
active-directory Resilience With Device States https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/resilience-with-device-states.md
Title: Build resilience by using device states in Azure Active Directory description: A guide for architects and IT administrators to building resilience by using device states -+ Last updated 11/30/2020-+
active-directory Security Operations Applications https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/security-operations-applications.md
Title: Azure Active Directory security operations for applications description: Learn how to monitor and alert on applications to identify security threats. -+ Last updated 07/15/2021-+
active-directory Security Operations Devices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/security-operations-devices.md
Title: Azure Active Directory security operations for devices description: Learn to establish baselines, and monitor and report on devices to identity potential security risks with devices. -+ Last updated 07/15/2021-+
active-directory Security Operations Infrastructure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/security-operations-infrastructure.md
Title: Azure Active Directory security operations for infrastructure description: Learn how to monitor and alert on infrastructure components to identify security threats. -+ Last updated 07/15/2021-+
active-directory Security Operations Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/security-operations-introduction.md
Title: Azure Active Directory security operations guide description: Learn to monitor, identify, and alert on security issues with accounts, applications, devices, and infrastructure in Azure Active Directory. -+ Last updated 04/29/2022-+ - it-pro - seodec18
active-directory Security Operations Privileged Identity Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/security-operations-privileged-identity-management.md
Title: Azure Active Directory security operations for Privileged Identity Management description: Guidance to establish baselines and use Azure Active Directory Privileged Identity Management (PIM) to monitor and alert on potential issues with accounts that are governed by PIM. -+ Last updated 07/15/2021-+
active-directory Security Operations User Accounts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/security-operations-user-accounts.md
Title: Azure Active Directory security operations for user accounts description: Guidance to establish baselines and how to monitor and alert on potential security issues with user accounts. -+ Last updated 07/15/2021-+
active-directory Service Accounts Computer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/service-accounts-computer.md
Title: Secure computer accounts | Azure Active Directory description: A guide to helping secure on-premises computer accounts. -+ Last updated 2/15/2021-+
active-directory Service Accounts Govern On Premises https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/service-accounts-govern-on-premises.md
Title: Govern on-premises service accounts | Azure Active Directory description: Use this guide to create and run an account lifecycle process for service accounts. -+ Last updated 2/15/2021-+
active-directory Service Accounts Governing Azure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/service-accounts-governing-azure.md
Title: Governing Azure Active Directory service accounts description: Principles and procedures for managing the lifecycle of service accounts in Azure Active Directory. -+ Last updated 3/1/2021-+
active-directory Service Accounts Group Managed https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/service-accounts-group-managed.md
Title: Secure group managed service accounts | Azure Active Directory description: A guide to securing group managed service account (gMSA) computer accounts. -+ Last updated 2/15/2021-+
active-directory Service Accounts Introduction Azure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/service-accounts-introduction-azure.md
Title: Introduction to securing Azure Active Directory service accounts description: Explanation of the types of service accounts available in Azure Active Directory. -+ Last updated 04/21/2022-+
active-directory Service Accounts Managed Identities https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/service-accounts-managed-identities.md
Title: Securing managed identities in Azure Active Directory description: Explanation of how to find, assess, and increase the security of managed identities. -+ Last updated 3/1/2021-+
active-directory Service Accounts On Premises https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/service-accounts-on-premises.md
Title: Introduction to Active Directory service accounts description: An introduction to the types of service accounts in Active Directory, and how to secure them. -+ Last updated 04/21/2022-+
active-directory Service Accounts Principal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/service-accounts-principal.md
Title: Securing service principals in Azure Active Directory description: Find, assess, and secure service principals. -+ Last updated 2/15/2021-+
active-directory Service Accounts Standalone Managed https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/service-accounts-standalone-managed.md
Title: Secure standalone managed service accounts | Azure Active Directory description: A guide to securing standalone managed service accounts. -+ Last updated 2/15/2021-+
active-directory Service Accounts User On Premises https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/service-accounts-user-on-premises.md
Title: Secure user-based service accounts | Azure Active Directory description: A guide to securing user-based service accounts. -+ Last updated 2/15/2021-+
active-directory Sync Directory https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/sync-directory.md
Title: Directory synchronization with Azure Active Directory description: Architectural guidance on achieving directory synchronization with Azure Active Directory. -+
Last updated 10/10/2020-+
active-directory Sync Ldap https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/sync-ldap.md
Title: LDAP synchronization with Azure Active Directory description: Architectural guidance on achieving LDAP synchronization with Azure Active Directory. -+
Last updated 10/10/2020-+
active-directory Sync Scim https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/sync-scim.md
Title: SCIM synchronization with Azure Active Directory description: Architectural guidance on achieving SCIM synchronization with Azure Active Directory. -+
Last updated 10/10/2020-+
active-directory Migrate From Federation To Cloud Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/migrate-from-federation-to-cloud-authentication.md
Last updated 04/15/2022 --++
Although this deployment changes no other relying parties in your AD FS farm, yo
## Plan the project
-When technology projects fail, itΓÇÖs typically because of mismatched expectations on impact, outcomes, and responsibilities. To avoid these pitfalls, [ensure that youΓÇÖre engaging the right stakeholders](../fundamentals/active-directory-deployment-plans.md#include-the-right-stakeholders) and that stakeholder roles in the project are well understood.
+When technology projects fail, it's typically because of mismatched expectations on impact, outcomes, and responsibilities. To avoid these pitfalls, [ensure that you're engaging the right stakeholders](../fundamentals/active-directory-deployment-plans.md#include-the-right-stakeholders) and that stakeholder roles in the project are well understood.
### Plan communications
Proactively communicate with your users how their experience will change, when i
After the domain conversion, Azure AD might continue to send some legacy authentication requests from Exchange Online to your AD FS servers for up to four hours. The delay is because the Exchange Online cache for [legacy applications authentication](../fundamentals/concept-fundamentals-block-legacy-authentication.md) can take up to four hours to become aware of the cutover from federation to cloud authentication.
-During this four-hour window, you may prompt users for credentials repeatedly when reauthenticating to applications that use legacy authentication. Although the user can still successfully authenticate against AD FS, Azure AD no longer accepts the userΓÇÖs issued token because that federation trust is now removed.
+During this four-hour window, users may be prompted for credentials repeatedly when they reauthenticate to applications that use legacy authentication. Although a user can still successfully authenticate against AD FS, Azure AD no longer accepts the user's issued token because that federation trust is now removed.
-Existing Legacy clients (Exchange ActiveSync, Outlook 2010/2013) aren't affected because Exchange Online keeps a cache of their credentials for a set period of time. The cache is used to silently reauthenticate the user. The user doesnΓÇÖt have to return to AD FS. Credentials stored on the device for these clients are used to silently reauthenticate themselves after the cached is cleared. Users arenΓÇÖt expected to receive any password prompts as a result of the domain conversion process.
+Existing legacy clients (Exchange ActiveSync, Outlook 2010/2013) aren't affected because Exchange Online keeps a cache of their credentials for a set period of time. The cache is used to silently reauthenticate the user, so the user doesn't have to return to AD FS. Credentials stored on the device for these clients are used to silently reauthenticate themselves after the cache is cleared. Users aren't expected to receive any password prompts as a result of the domain conversion process.
Modern authentication clients (Office 2016 and Office 2013, iOS, and Android apps) use a valid refresh token to obtain new access tokens for continued access to resources instead of returning to AD FS. These clients are immune to any password prompts resulting from the domain conversion process. The clients will continue to function without extra configuration.
You can [customize the Azure AD sign-in page](../fundamentals/customize-branding
### Plan for conditional access policies
-Evaluate if youΓÇÖre currently using conditional access for authentication, or if you use access control policies in AD FS.
+Evaluate if you're currently using conditional access for authentication, or if you use access control policies in AD FS.
Consider replacing AD FS access control policies with the equivalent Azure AD [Conditional Access policies](../conditional-access/overview.md) and [Exchange Online Client Access Rules](/exchange/clients-and-mobile-in-exchange-online/client-access-rules/client-access-rules). You can use either Azure AD or on-premises groups for conditional access.
You have two options for enabling this change:
- **Option B:** Switch using Azure AD Connect and PowerShell
- *Available if you didnΓÇÖt initially configure your federated domains by using Azure AD Connect or if you're using third-party federation services*.
+ *Available if you didn't initially configure your federated domains by using Azure AD Connect or if you're using third-party federation services*.
To choose one of these options, you must know what your current settings are.
Sign in to the [Azure AD portal](https://aad.portal.azure.com/), select **Azure
![View AD FS configuration](media/deploy-cloud-user-authentication/federation-configuration.png)
- If AD FS isnΓÇÖt listed in the current settings, you must manually convert your domains from federated identity to managed identity by using PowerShell.
+ If AD FS isn't listed in the current settings, you must manually convert your domains from federated identity to managed identity by using PowerShell.
#### Option A
Sign in to the [Azure AD portal](https://aad.portal.azure.com/), select **Azure
Domain Administrator account credentials are required to enable seamless SSO. The process completes the following actions, which require these elevated permissions:

 - A computer account named AZUREADSSO (which represents Azure AD) is created in your on-premises Active Directory instance.
- - The computer accountΓÇÖs Kerberos decryption key is securely shared with Azure AD.
+ - The computer account's Kerberos decryption key is securely shared with Azure AD.
 - Two Kerberos service principal names (SPNs) are created to represent two URLs that are used during Azure AD sign-in.

The domain administrator credentials aren't stored in Azure AD Connect or Azure AD and are discarded when the process successfully finishes. They're used only to turn on this feature.
Sign in to the [Azure AD portal](https://aad.portal.azure.com/), select **Azure
##### Deploy more authentication agents for PTA

>[!NOTE]
-> PTA requires deploying lightweight agents on the Azure AD Connect server and on your on-premises computer thatΓÇÖs running Windows server. To reduce latency, install the agents as close as possible to your Active Directory domain controllers.
+> PTA requires deploying lightweight agents on the Azure AD Connect server and on your on-premises computer that's running Windows Server. To reduce latency, install the agents as close as possible to your Active Directory domain controllers.
For most customers, two or three authentication agents are sufficient to provide high availability and the required capacity. A tenant can have a maximum of 12 agents registered. The first agent is always installed on the Azure AD Connect server itself. To learn about agent limitations and agent deployment options, see [Azure AD pass-through authentication: Current limitations](how-to-connect-pta-current-limitations.md).
For most customers, two or three authentication agents are sufficient to provide
**Switch from federation to the new sign-in method by using Azure AD Connect and PowerShell**
-*Available if you didnΓÇÖt initially configure your federated domains by using Azure AD Connect or if you're using third-party federation services.*
+*Available if you didn't initially configure your federated domains by using Azure AD Connect or if you're using third-party federation services.*
On your Azure AD Connect server, follow steps 1-5 in [Option A](#option-a). You will notice that on the User sign-in page, the **Do not configure** option is pre-selected.
On your Azure AD Connect server, follow the steps 1- 5 in [Option A](#option-a).
![ Pass-through authentication settings](media/deploy-cloud-user-authentication/pass-through-authentication-settings.png)
- If the authentication agent isnΓÇÖt active, complete these [troubleshooting steps](tshoot-connect-pass-through-authentication.md) before you continue with the domain conversion process in the next step. You risk causing an authentication outage if you convert your domains before you validate that your PTA agents are successfully installed and that their status is **Active** in the Azure portal.
+ If the authentication agent isn't active, complete these [troubleshooting steps](tshoot-connect-pass-through-authentication.md) before you continue with the domain conversion process in the next step. You risk causing an authentication outage if you convert your domains before you validate that your PTA agents are successfully installed and that their status is **Active** in the Azure portal.
3. [Deploy more authentication agents](#deploy-more-authentication-agents-for-pta).
On your Azure AD Connect server, follow the steps 1- 5 in [Option A](#option-a).
**At this point, federated authentication is still active and operational for your domains**. To continue with the deployment, you must convert each domain from federated identity to managed identity.

>[!IMPORTANT]
-> You donΓÇÖt have to convert all domains at the same time. You might choose to start with a test domain on your production tenant or start with your domain that has the lowest number of users.
+> You don't have to convert all domains at the same time. You might choose to start with a test domain on your production tenant or start with your domain that has the lowest number of users.
**Complete the conversion by using the Azure AD PowerShell module:**
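As a minimal sketch (using the MSOnline module; `contoso.com` is a placeholder for your federated domain name):

```powershell
# "contoso.com" is a placeholder; substitute your own federated domain.
Install-Module MSOnline -Scope CurrentUser
Connect-MsolService

# Confirm the domain's current authentication type (Federated or Managed).
Get-MsolDomain -DomainName contoso.com

# Convert the domain from federated identity to managed identity.
Set-MsolDomainAuthentication -DomainName contoso.com -Authentication Managed
```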
Complete the following tasks to verify the sign-up method and to finish the conv
### Test the new sign-in method
-When your tenant used federated identity, users were redirected from the Azure AD sign-in page to your AD FS environment. Now that the tenant is configured to use the new sign-in method instead of federated authentication, users arenΓÇÖt redirected to AD FS.
+When your tenant used federated identity, users were redirected from the Azure AD sign-in page to your AD FS environment. Now that the tenant is configured to use the new sign-in method instead of federated authentication, users aren't redirected to AD FS.
**Instead, users sign in directly on the Azure AD sign-in page.**
If you used staged rollout, you should remember to turn off the staged rollout f
Historically, updates to the **UserPrincipalName** attribute, which uses the sync service from the on-premises environment, are blocked unless both of these conditions are true:

 - The user is in a managed (non-federated) identity domain.
- - The user hasnΓÇÖt been assigned a license.
+ - The user hasn't been assigned a license.
To learn how to verify or turn on this feature, see [Sync userPrincipalName updates](how-to-connect-syncservice-features.md).
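For reference, a minimal sketch of checking and enabling the feature with the MSOnline module (defer to the linked article for the authoritative steps):

```powershell
Connect-MsolService

# Check whether UPN updates for managed users are currently synchronized.
Get-MsolDirSyncFeatures -Feature SynchronizeUpnForManagedUsers

# Enable the feature.
Set-MsolDirSyncFeature -Feature SynchronizeUpnForManagedUsers -Enable $true
```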
Your support team should understand how to troubleshoot any authentication issue
Migration requires assessing how the application is configured on-premises, and then mapping that configuration to Azure AD.
-If you plan to keep using AD FS with on-premises & SaaS Applications using SAML / WS-FED or Oauth protocol, youΓÇÖll use both AD FS and Azure AD after you convert the domains for user authentication. In this case, you can protect your on-premises applications and resources with Secure Hybrid Access (SHA) through [Azure AD Application Proxy](../app-proxy/what-is-application-proxy.md) or one of [Azure AD partner integrations](../manage-apps/secure-hybrid-access.md). Using Application Proxy or one of our partners can provide secure remote access to your on-premises applications. Users benefit by easily connecting to their applications from any device after a [single sign-on](../manage-apps/add-application-portal-setup-sso.md).
+If you plan to keep using AD FS with on-premises and SaaS applications that use the SAML/WS-Fed or OAuth protocols, you'll use both AD FS and Azure AD after you convert the domains for user authentication. In this case, you can protect your on-premises applications and resources with Secure Hybrid Access (SHA) through [Azure AD Application Proxy](../app-proxy/what-is-application-proxy.md) or one of the [Azure AD partner integrations](../manage-apps/secure-hybrid-access.md). Using Application Proxy or one of our partners can provide secure remote access to your on-premises applications. Users benefit by easily connecting to their applications from any device after a [single sign-on](../manage-apps/add-application-portal-setup-sso.md).
You can move SaaS applications that are currently federated with AD FS to Azure AD. Reconfigure them to authenticate with Azure AD either via a built-in connector from the [Azure App gallery](https://azuremarketplace.microsoft.com/marketplace/apps/category/azure-active-directory-apps), or by [registering the application in Azure AD](../develop/quickstart-register-app.md).
For more information, see:
### Remove relying party trust
-If you have Azure AD Connect Health, you can [monitor usage](how-to-connect-health-adfs.md) from the Azure portal. In case the usage shows no new auth req and you validate that all users and clients are successfully authenticating via Azure AD, itΓÇÖs safe to remove the Microsoft 365 relying party trust.
+If you have Azure AD Connect Health, you can [monitor usage](how-to-connect-health-adfs.md) from the Azure portal. If usage shows no new authentication requests and you've validated that all users and clients are successfully authenticating via Azure AD, it's safe to remove the Microsoft 365 relying party trust.
-If you donΓÇÖt use AD FS for other purposes (that is, for other relying party trusts), you can decommission AD FS at this point.
+If you don't use AD FS for other purposes (that is, for other relying party trusts), you can decommission AD FS at this point.
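As a minimal sketch, run on your primary AD FS server ("Microsoft Office 365 Identity Platform" is the default display name for this trust; confirm the name in your farm before removing it):

```powershell
# List relying party trusts and confirm the Microsoft 365 trust name.
Get-AdfsRelyingPartyTrust | Select-Object Name

# Remove the Microsoft 365 relying party trust (default display name shown).
Remove-AdfsRelyingPartyTrust -TargetName "Microsoft Office 365 Identity Platform"
```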
## Next steps
active-directory How Manage User Assigned Managed Identities https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/managed-identities-azure-resources/how-manage-user-assigned-managed-identities.md
Title: Manage user-assigned managed identities - Azure AD
description: Create user-assigned managed identities. -+ editor:
active-directory How Managed Identities Work Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/managed-identities-azure-resources/how-managed-identities-work-vm.md
description: Description of managed identities for Azure resources work with Azu
documentationcenter: -+ editor: ms.assetid: 0232041d-b8f5-4bd2-8d11-27999ad69370
active-directory How To Managed Identity Regional Move https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/managed-identities-azure-resources/how-to-managed-identity-regional-move.md
Title: Move managed identities to another region - Azure AD
description: Steps involved in getting a managed identity recreated in another region -+
active-directory How To Use Vm Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/managed-identities-azure-resources/how-to-use-vm-sdk.md
description: Code samples for using Azure SDKs with an Azure VM that has managed
documentationcenter: -+ editor:
active-directory How To Use Vm Sign In https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/managed-identities-azure-resources/how-to-use-vm-sign-in.md
description: Step-by-step instructions and examples for using an Azure VM-manage
documentationcenter: -+ editor:
active-directory How To Use Vm Token https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/managed-identities-azure-resources/how-to-use-vm-token.md
description: Step-by-step instructions and examples for using managed identities
documentationcenter: -+ editor:
active-directory How To View Managed Identity Service Principal Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/managed-identities-azure-resources/how-to-view-managed-identity-service-principal-portal.md
description: Step-by-step instructions for viewing the service principal of a ma
documentationcenter: '' -+ editor: ''
active-directory How To View Managed Identity Service Principal Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/managed-identities-azure-resources/how-to-view-managed-identity-service-principal-powershell.md
description: Step-by-step instructions for viewing the service principal of a ma
documentationcenter: '' -+ editor: ''
active-directory Howto Assign Access Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/managed-identities-azure-resources/howto-assign-access-cli.md
description: Step-by-step instructions for assigning a managed identity on one r
documentationcenter: -+ editor:
active-directory Howto Assign Access Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/managed-identities-azure-resources/howto-assign-access-portal.md
description: Step-by-step instructions for assigning a managed identity on one r
documentationcenter: -+ editor:
active-directory Howto Assign Access Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/managed-identities-azure-resources/howto-assign-access-powershell.md
description: Step-by-step instructions for assigning a managed identity on one r
documentationcenter: -+ editor:
active-directory Known Issues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/managed-identities-azure-resources/known-issues.md
description: Known issues with managed identities for Azure resources.
documentationcenter: -+ editor: ms.assetid: 2097381a-a7ec-4e3b-b4ff-5d2fb17403b6
active-directory Managed Identities Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/managed-identities-azure-resources/managed-identities-faq.md
description: Frequently asked questions about managed identities
documentationcenter: -+ editor:
active-directory Managed Identities Status https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/managed-identities-azure-resources/managed-identities-status.md
Last updated 01/10/2022
-+
active-directory Managed Identity Best Practice Recommendations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/managed-identities-azure-resources/managed-identity-best-practice-recommendations.md
description: Recommendations on when to use user-assigned versus system-assigned
documentationcenter: -+ editor:
active-directory Msi Tutorial Linux Vm Access Arm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/managed-identities-azure-resources/msi-tutorial-linux-vm-access-arm.md
description: A tutorial that walks you through the process of using a user-assig
documentationcenter: '' -+ editor: daveba
active-directory Overview For Developers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/managed-identities-azure-resources/overview-for-developers.md
description: An overview how developers can use managed identities for Azure res
documentationcenter: -+ editor: ms.assetid: 0232041d-b8f5-4bd2-8d11-27999ad69370
active-directory Qs Configure Cli Windows Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/managed-identities-azure-resources/qs-configure-cli-windows-vm.md
Title: Configure managed identities on Azure VM using Azure CLI - Azure AD description: Step-by-step instructions for configuring system and user-assigned managed identities on an Azure VM using Azure CLI. -+
active-directory Qs Configure Cli Windows Vmss https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/managed-identities-azure-resources/qs-configure-cli-windows-vmss.md
description: Step-by-step instructions for configuring system and user-assigned
documentationcenter: -+ editor:
active-directory Qs Configure Portal Windows Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/managed-identities-azure-resources/qs-configure-portal-windows-vm.md
description: Step-by-step instructions for configuring managed identities for Az
documentationcenter: '' -+ editor: ''
active-directory Qs Configure Portal Windows Vmss https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/managed-identities-azure-resources/qs-configure-portal-windows-vmss.md
description: Step-by-step instructions for configuring managed identities for Az
documentationcenter: '' -+ editor: ''
active-directory Qs Configure Rest Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/managed-identities-azure-resources/qs-configure-rest-vm.md
description: Step-by-step instructions for configuring a system and user-assigne
documentationcenter: -+ editor:
active-directory Qs Configure Rest Vmss https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/managed-identities-azure-resources/qs-configure-rest-vmss.md
description: Step-by-step instructions for configuring a system and user-assigne
documentationcenter: -+ editor:
active-directory Qs Configure Sdk Windows Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/managed-identities-azure-resources/qs-configure-sdk-windows-vm.md
description: Step-by-step instructions for configuring and using managed identit
documentationcenter: '' -+ editor: ''
active-directory Qs Configure Template Windows Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/managed-identities-azure-resources/qs-configure-template-windows-vm.md
description: Step-by-step instructions for configuring managed identities for Az
documentationcenter: '' -+ editor: ''
active-directory Qs Configure Template Windows Vmss https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/managed-identities-azure-resources/qs-configure-template-windows-vmss.md
description: Step-by-step instructions for configuring managed identities for Az
documentationcenter: '' -+ editor: ''
active-directory Services Azure Active Directory Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/managed-identities-azure-resources/services-azure-active-directory-support.md
Last updated 02/01/2022
-+ # Azure services that support Azure AD authentication
active-directory Tutorial Linux Vm Access Arm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/managed-identities-azure-resources/tutorial-linux-vm-access-arm.md
description: A quickstart that walks you through the process of using a Linux VM
documentationcenter: '' -+ editor: bryanla
active-directory Tutorial Linux Vm Access Cosmos Db https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/managed-identities-azure-resources/tutorial-linux-vm-access-cosmos-db.md
description: A tutorial that walks you through the process of using a Linux VM s
documentationcenter: -+ editor:
active-directory Tutorial Linux Vm Access Datalake https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/managed-identities-azure-resources/tutorial-linux-vm-access-datalake.md
description: A tutorial that shows you how to use a Linux VM system-assigned man
documentationcenter: -+ editor:
active-directory Tutorial Linux Vm Access Nonaad https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/managed-identities-azure-resources/tutorial-linux-vm-access-nonaad.md
description: A tutorial that walks you through the process of using a Linux VM s
documentationcenter: '' -+ editor: daveba
active-directory Tutorial Linux Vm Access Storage Access Key https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/managed-identities-azure-resources/tutorial-linux-vm-access-storage-access-key.md
description: A tutorial that walks you through the process of using a Linux VM s
documentationcenter: '' -+ editor: daveba
active-directory Tutorial Linux Vm Access Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/managed-identities-azure-resources/tutorial-linux-vm-access-storage.md
description: A tutorial that walks you through the process of using a Linux VM s
documentationcenter: -+ editor:
active-directory Tutorial Vm Windows Access Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/managed-identities-azure-resources/tutorial-vm-windows-access-storage.md
description: A tutorial that walks you through the process of using a Windows VM
documentationcenter: '' -+ editor: daveba
active-directory Tutorial Windows Vm Access Arm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/managed-identities-azure-resources/tutorial-windows-vm-access-arm.md
description: A tutorial that walks you through the process of using a Windows VM
documentationcenter: '' -+ editor: daveba
active-directory Tutorial Windows Vm Access Cosmos Db https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/managed-identities-azure-resources/tutorial-windows-vm-access-cosmos-db.md
description: A tutorial that walks you through the process of using a system-ass
documentationcenter: '' -+ editor:
active-directory Tutorial Windows Vm Access Datalake https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/managed-identities-azure-resources/tutorial-windows-vm-access-datalake.md
description: A tutorial that shows you how to use a Windows VM system-assigned m
documentationcenter: -+ editor:
active-directory Tutorial Windows Vm Access Nonaad https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/managed-identities-azure-resources/tutorial-windows-vm-access-nonaad.md
description: A tutorial that walks you through the process of using a Windows VM
documentationcenter: '' -+ editor: daveba
active-directory Tutorial Windows Vm Access Sql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/managed-identities-azure-resources/tutorial-windows-vm-access-sql.md
description: A tutorial that walks you through the process of using a Windows VM
documentationcenter: '' -+
active-directory Tutorial Windows Vm Access Storage Sas https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/managed-identities-azure-resources/tutorial-windows-vm-access-storage-sas.md
description: A tutorial that shows you how to use a Windows VM system-assigned m
documentationcenter: '' -+ editor: daveba
active-directory Tutorial Windows Vm Ua Arm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/managed-identities-azure-resources/tutorial-windows-vm-ua-arm.md
description: A tutorial that walks you through the process of using a user-assig
documentationcenter: '' -+ editor:
active-directory Delegate By Task https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/roles/delegate-by-task.md
You can further restrict permissions by assigning roles at smaller scopes or by
> | Read all configuration | [Security Reader](permissions-reference.md#security-reader) | |
> | Read users flagged for risk | [Security Reader](permissions-reference.md#security-reader) | |
-## Temporary Access Pass (Preview)
+## Temporary Access Pass
> [!div class="mx-tableFixed"]
> | Task | Least privileged role | Additional roles |
You can further restrict permissions by assigning roles at smaller scopes or by
- [Assign Azure AD roles to users](manage-roles-portal.md)
- [Assign Azure AD roles at different scopes](assign-roles-different-scopes.md)
- [Create and assign a custom role in Azure Active Directory](custom-create.md)
-- [Azure AD built-in roles](permissions-reference.md)
+- [Azure AD built-in roles](permissions-reference.md)
active-directory Articulate360 Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/articulate360-tutorial.md
+
+ Title: 'Tutorial: Azure AD SSO integration with Articulate 360'
+description: Learn how to configure single sign-on between Azure Active Directory and Articulate 360.
++++++++ Last updated : 06/27/2022++++
+# Tutorial: Azure AD SSO integration with Articulate 360
+
+In this tutorial, you'll learn how to integrate Articulate 360 with Azure Active Directory (Azure AD). When you integrate Articulate 360 with Azure AD, you can:
+
+* Control in Azure AD who has access to Articulate 360.
+* Enable your users to be automatically signed-in to Articulate 360 with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
+
+## Prerequisites
+
+To get started, you need the following items:
+
+* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+* Articulate 360 single sign-on (SSO) enabled subscription.
+* In addition to the Cloud Application Administrator role, the Application Administrator role can also add or manage applications in Azure AD. For more information, see [Azure built-in roles](../roles/permissions-reference.md).
+
+## Scenario description
+
+In this tutorial, you configure and test Azure AD SSO in a test environment.
+
+* Articulate 360 supports **SP** and **IDP** initiated SSO.
+
+## Add Articulate 360 from the gallery
+
+To configure the integration of Articulate 360 into Azure AD, you need to add Articulate 360 from the gallery to your list of managed SaaS apps.
+
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
+1. On the left navigation pane, select the **Azure Active Directory** service.
+1. Navigate to **Enterprise Applications** and then select **All Applications**.
+1. To add a new application, select **New application**.
+1. In the **Add from the gallery** section, type **Articulate 360** in the search box.
+1. Select **Articulate 360** from the results panel and then add the app. Wait a few seconds while the app is added to your tenant.
+
+## Configure and test Azure AD SSO for Articulate 360
+
+Configure and test Azure AD SSO with Articulate 360 using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in Articulate 360.
+
+To configure and test Azure AD SSO with Articulate 360, perform the following steps:
+
+1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
+ 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
+ 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
+1. **[Configure Articulate 360 SSO](#configure-articulate-360-sso)** - to configure the single sign-on settings on the application side.
+ 1. **[Create Articulate 360 test user](#create-articulate-360-test-user)** - to have a counterpart of B.Simon in Articulate 360 that is linked to the Azure AD representation of the user.
+1. **[Test SSO](#test-sso)** - to verify whether the configuration works.
+
+## Configure Azure AD SSO
+
+Follow these steps to enable Azure AD SSO in the Azure portal.
+
+1. In the Azure portal, on the **Articulate 360** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
+
+ ![Screenshot shows to edit Basic S A M L Configuration.](common/edit-urls.png "Basic Configuration")
+
+1. In the **Basic SAML Configuration** section, perform the following steps:
+
+ a. In the **Identifier** textbox, type a URL using the following pattern:
+ `https://www.okta.com/saml2/service-provider/<SAMPLE>`
+
+ b. In the **Reply URL** textbox, type the URL:
+ `https://id.articulate.com/sso/saml2`
+
+1. Click **Set additional URLs** and perform the following step if you wish to configure the application in **SP** initiated mode:
+
+ In the **Sign-on URL** text box, type the URL:
+ `https://id.articulate.com/`
+
+ > [!Note]
+ > The Identifier value is not real. Update this value with the actual Identifier. Contact [Articulate 360 support team](mailto:enterprise@articulate.com) to get the value. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
+
+1. The Articulate 360 application expects the SAML assertions in a specific format, which requires you to add custom attribute mappings to your SAML token attributes configuration. The following screenshot shows the list of default attributes.
+
+ ![Screenshot shows the image of attributes.](common/default-attributes.png "Attributes")
+
+1. In addition to the above, the Articulate 360 application expects a few more attributes to be passed back in the SAML response; they are shown below. These attributes are also pre-populated, but you can review them against your requirements. (A sketch for inspecting these attributes in a captured SAML response follows this procedure.)
+
+ | Name | Source Attribute|
+ | -- | -- |
+ | firstName | user.givenname |
+ | lastName | user.surname |
+ | email | user.mail |
+
+1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Federation Metadata XML** and select **Download** to download the certificate and save it on your computer.
+
+ ![Screenshot shows the Certificate download link.](common/metadataxml.png "Certificate")
+
+1. On the **Set up Articulate 360** section, copy the appropriate URL(s) based on your requirement.
+
+ ![Screenshot shows to copy configuration appropriate U R L.](common/copy-configuration-urls.png "Attributes")
+
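+A quick way to confirm that the `firstName`, `lastName`, and `email` attributes actually arrive is to decode a captured `SAMLResponse` and list its attribute statements. The following Python sketch is illustrative and not part of the official procedure; `captured_samlresponse` is a placeholder for a base64-encoded response captured with your browser's developer tools.
+
+```python
+import base64
+import xml.etree.ElementTree as ET
+
+# Namespace defined by the SAML 2.0 assertion schema.
+NS = {"saml": "urn:oasis:names:tc:SAML:2.0:assertion"}
+
+def list_saml_attributes(encoded_response: str) -> dict:
+    """Decode a base64 SAMLResponse and map attribute names to their values."""
+    root = ET.fromstring(base64.b64decode(encoded_response))
+    return {
+        attr.get("Name"): [v.text for v in attr.findall("saml:AttributeValue", NS)]
+        for attr in root.findall(".//saml:AttributeStatement/saml:Attribute", NS)
+    }
+
+# Example check against the attribute table above:
+# attrs = list_saml_attributes(captured_samlresponse)
+# assert {"firstName", "lastName", "email"} <= set(attrs)
+```
+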
+### Create an Azure AD test user
+
+In this section, you'll create a test user in the Azure portal called B.Simon. If you prefer to script this step, a Microsoft Graph sketch follows the steps below.
+
+1. From the left pane in the Azure portal, select **Azure Active Directory**, select **Users**, and then select **All users**.
+1. Select **New user** at the top of the screen.
+1. In the **User** properties, follow these steps:
+ 1. In the **Name** field, enter `B.Simon`.
+ 1. In the **User name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`.
+ 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box.
+ 1. Click **Create**.
+
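+The same test user can be created with a single call to the Microsoft Graph `/users` endpoint. This is a minimal sketch under the assumption that you've already acquired a Graph access token with the `User.ReadWrite.All` permission; the token and password values are placeholders.
+
+```python
+import requests
+
+token = "<GRAPH-ACCESS-TOKEN>"  # placeholder: acquire via MSAL or the Azure CLI
+
+payload = {
+    "accountEnabled": True,
+    "displayName": "B.Simon",
+    "mailNickname": "B.Simon",
+    "userPrincipalName": "B.Simon@contoso.com",
+    "passwordProfile": {
+        "forceChangePasswordNextSignIn": True,
+        "password": "<INITIAL-PASSWORD>",  # placeholder
+    },
+}
+
+resp = requests.post(
+    "https://graph.microsoft.com/v1.0/users",
+    headers={"Authorization": f"Bearer {token}"},
+    json=payload,
+)
+resp.raise_for_status()
+print("Created user with object id:", resp.json()["id"])
+```
+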
+### Assign the Azure AD test user
+
+In this section, you'll enable B.Simon to use Azure single sign-on by granting access to Articulate 360.
+
+1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
+1. In the applications list, select **Articulate 360**.
+1. In the app's overview page, find the **Manage** section and select **Users and groups**.
+1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
+1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
+1. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you see the "Default Access" role selected.
+1. In the **Add Assignment** dialog, click the **Assign** button.
+
+## Configure Articulate 360 SSO
+
+To configure single sign-on on the **Articulate 360** side, you need to send the downloaded **Federation Metadata XML** and the appropriate copied URLs from the Azure portal to the [Articulate 360 support team](mailto:enterprise@articulate.com). They apply these settings so that the SAML SSO connection is set up properly on both sides.
+
+### Create Articulate 360 test user
+
+In this section, a user called B.Simon is created in Articulate 360. Articulate 360 supports just-in-time user provisioning, which is enabled by default. There is no action item for you in this section. If a user doesn't already exist in Articulate 360, a new one is created after authentication.
+
+## Test SSO
+
+In this section, you test your Azure AD single sign-on configuration with the following options.
+
+#### SP initiated:
+
+* Click on **Test this application** in the Azure portal. This will redirect you to the Articulate 360 Sign-On URL, where you can initiate the login flow.
+
+* Go to the Articulate 360 Sign-On URL directly and initiate the login flow from there.
+
+#### IDP initiated:
+
+* Click on **Test this application** in the Azure portal, and you should be automatically signed in to the Articulate 360 instance for which you set up SSO.
+
+You can also use Microsoft My Apps to test the application in any mode. When you click the Articulate 360 tile in My Apps, you're redirected to the application sign-on page to initiate the login flow if the app is configured in SP mode; if it's configured in IDP mode, you're automatically signed in to the Articulate 360 instance for which you set up SSO. For more information about My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md).
+
+## Next steps
+
+Once you configure Articulate 360, you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
active-directory Infrascale Cloud Backup Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/infrascale-cloud-backup-tutorial.md
+
+ Title: 'Tutorial: Azure AD SSO integration with Infrascale Cloud Backup'
+description: Learn how to configure single sign-on between Azure Active Directory and Infrascale Cloud Backup.
++++++++ Last updated : 06/24/2022++++
+# Tutorial: Azure AD SSO integration with Infrascale Cloud Backup
+
+In this tutorial, you'll learn how to integrate Infrascale Cloud Backup with Azure Active Directory (Azure AD). When you integrate Infrascale Cloud Backup with Azure AD, you can:
+
+* Control in Azure AD who has access to Infrascale Cloud Backup.
+* Enable your users to be automatically signed-in to Infrascale Cloud Backup with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
+
+## Prerequisites
+
+To get started, you need the following items:
+
+* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+* Infrascale Cloud Backup single sign-on (SSO) enabled subscription.
+* In addition to the Cloud Application Administrator role, the Application Administrator role can also add or manage applications in Azure AD. For more information, see [Azure built-in roles](../roles/permissions-reference.md).
+
+## Scenario description
+
+In this tutorial, you configure and test Azure AD SSO in a test environment.
+
+* Infrascale Cloud Backup supports **SP** initiated SSO.
+
+## Add Infrascale Cloud Backup from the gallery
+
+To configure the integration of Infrascale Cloud Backup into Azure AD, you need to add Infrascale Cloud Backup from the gallery to your list of managed SaaS apps.
+
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
+1. On the left navigation pane, select the **Azure Active Directory** service.
+1. Navigate to **Enterprise Applications** and then select **All Applications**.
+1. To add a new application, select **New application**.
+1. In the **Add from the gallery** section, type **Infrascale Cloud Backup** in the search box.
+1. Select **Infrascale Cloud Backup** from the results panel and then add the app. Wait a few seconds while the app is added to your tenant.
+
+## Configure and test Azure AD SSO for Infrascale Cloud Backup
+
+Configure and test Azure AD SSO with Infrascale Cloud Backup using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in Infrascale Cloud Backup.
+
+To configure and test Azure AD SSO with Infrascale Cloud Backup, perform the following steps:
+
+1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
+ 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
+ 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
+1. **[Configure Infrascale Cloud Backup SSO](#configure-infrascale-cloud-backup-sso)** - to configure the single sign-on settings on the application side.
+ 1. **[Create Infrascale Cloud Backup test user](#create-infrascale-cloud-backup-test-user)** - to have a counterpart of B.Simon in Infrascale Cloud Backup that is linked to the Azure AD representation of the user.
+1. **[Test SSO](#test-sso)** - to verify whether the configuration works.
+
+## Configure Azure AD SSO
+
+Follow these steps to enable Azure AD SSO in the Azure portal.
+
+1. In the Azure portal, on the **Infrascale Cloud Backup** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
+
+ ![Screenshot shows to edit Basic S A M L Configuration.](common/edit-urls.png "Basic Configuration")
+
+1. In the **Basic SAML Configuration** section, perform the following steps:
+
+ a. In the **Identifier** textbox, type a URL using the following pattern:
+ `https://dashboard.sosonlinebackup.com/<ID>`
+
+ b. In the **Reply URL** textbox, type the URL:
+ `https://dashboard.managedoffsitebackup.net/Account/AssertionConsumerService`
+
+ c. In the **Sign-on URL** text box, type one of the following URLs:
+
+ | **Sign-on URL** |
+ |--|
+ | `https://dashboard.avgonlinebackup.com/Account/SingleSignOn` |
+ | `https://dashboard.infrascale.com/Account/SingleSignOn` |
+ | `https://dashboard.managedoffsitebackup.net/Account/SingleSignOn` |
+ | `https://dashboard.sosonlinebackup.com/Account/SingleSignOn` |
+ | `https://dashboard.trustboxbackup.com/Account/SingleSignOn` |
+ | `https://radialpoint-dashboard.managedoffsitebackup.net/Account/SingleSignOn` |
+ | `https://dashboard-cw.infrascale.com/Account/SingleSignOn` |
+ | `https://dashboard.digicelcloudbackup.com/Account/SingleSignOn` |
+ | `https://dashboard-cw.sosonlinebackup.com/Account/SingleSignOn` |
+ | `https://dashboard.my-data.dk/Account/SingleSignOn` |
+ | `https://dashboard.beesafe.nu/Account/SingleSignOn` |
+ | `https://dashboard.bekcloud.com/Account/SingleSignOn` |
+ | `https://dashboard.alltimesecure.com/Account/SingleSignOn` |
+ | `https://dashboard-ec1.sosonlinebackup.com/Account/SingleSignOn` |
+ | `https://dashboard.glcsecurecloud.com/Account/SingleSignOn` |
+
+ > [!Note]
+ > The Identifier value is not real. Update this value with the actual Identifier URL. Contact [Infrascale Cloud Backup support team](mailto:support@infrascale.com) to get the value. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
+
+1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, select the copy button to copy the **App Federation Metadata Url**, and save it on your computer. (A sketch for inspecting this metadata follows these steps.)
+
+ ![Screenshot shows the Certificate download link.](common/copy-metadataurl.png "Certificate")
+
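+Before handing the **App Federation Metadata Url** to Infrascale, you can sanity-check it with a few lines of Python. This sketch is illustrative only; the URL below shows the general shape of the metadata URL, and you should substitute the exact value copied from the portal.
+
+```python
+import urllib.request
+import xml.etree.ElementTree as ET
+
+# Placeholder: paste the App Federation Metadata Url copied from the portal.
+METADATA_URL = (
+    "https://login.microsoftonline.com/<TENANT-ID>/federationmetadata/"
+    "2007-06/federationmetadata.xml?appid=<APP-ID>"
+)
+
+with urllib.request.urlopen(METADATA_URL) as resp:
+    root = ET.fromstring(resp.read())
+
+# The IdP signing certificate is carried in an XML-DSig X509Certificate element.
+DSIG = "{http://www.w3.org/2000/09/xmldsig#}"
+cert = root.find(f".//{DSIG}X509Certificate")
+print("Signing certificate (first 60 chars):", cert.text[:60])
+```
+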
+### Create an Azure AD test user
+
+In this section, you'll create a test user in the Azure portal called B.Simon.
+
+1. From the left pane in the Azure portal, select **Azure Active Directory**, select **Users**, and then select **All users**.
+1. Select **New user** at the top of the screen.
+1. In the **User** properties, follow these steps:
+ 1. In the **Name** field, enter `B.Simon`.
+ 1. In the **User name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`.
+ 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box.
+ 1. Click **Create**.
+
+### Assign the Azure AD test user
+
+In this section, you'll enable B.Simon to use Azure single sign-on by granting access to Infrascale Cloud Backup.
+
+1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
+1. In the applications list, select **Infrascale Cloud Backup**.
+1. In the app's overview page, find the **Manage** section and select **Users and groups**.
+1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
+1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
+1. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you see the "Default Access" role selected.
+1. In the **Add Assignment** dialog, click the **Assign** button.
+
+## Configure Infrascale Cloud Backup SSO
+
+1. Log in to your Infrascale Cloud Backup company site as an administrator.
+
+1. Go to **Settings** > **Single Sign-On** and select **Enable Single Sign-On (SSO)**.
+
+1. In the **Single Sign-On Settings** page, perform the following steps:
+
+ ![Screenshot that shows the Configuration Settings.](./media/infrascale-cloud-backup-tutorial/settings.png "Configuration")
+
+ a. Copy the **Service Provider EntityID** value and paste it into the **Identifier** text box in the **Basic SAML Configuration** section in the Azure portal.
+
+ b. Copy the **Reply URL** value and paste it into the **Reply URL** text box in the **Basic SAML Configuration** section in the Azure portal.
+
+ c. Select the **Via metadata URL** button under the **Identity Provider Settings** section.
+
+ d. Copy the **App Federation Metadata Url** from the Azure portal and paste it into the **Metadata URL** textbox.
+
+ e. Click **Save**.
+
+### Create Infrascale Cloud Backup test user
+
+In this section, you create a user called Britta Simon in Infrascale Cloud Backup. Work with [Infrascale Cloud Backup support team](mailto:support@infrascale.com) to add the users in the Infrascale Cloud Backup platform. Users must be created and activated before you use single sign-on.
+
+## Test SSO
+
+In this section, you test your Azure AD single sign-on configuration with the following options.
+
+* Click on **Test this application** in the Azure portal. This will redirect you to the Infrascale Cloud Backup Sign-On URL, where you can initiate the login flow.
+
+* Go to the Infrascale Cloud Backup Sign-On URL directly and initiate the login flow from there.
+
+* You can also use Microsoft My Apps. When you click the Infrascale Cloud Backup tile in My Apps, you're redirected to the Infrascale Cloud Backup Sign-On URL. For more information about My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md).
+
+## Next steps
+
+Once you configure Infrascale Cloud Backup, you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
active-directory Lines Elibrary Advance Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/lines-elibrary-advance-tutorial.md
+
+ Title: 'Tutorial: Azure AD SSO integration with Lines eLibrary Advance'
+description: Learn how to configure single sign-on between Azure Active Directory and Lines eLibrary Advance.
++++++++ Last updated : 06/27/2022++++
+# Tutorial: Azure AD SSO integration with Lines eLibrary Advance
+
+In this tutorial, you'll learn how to integrate Lines eLibrary Advance with Azure Active Directory (Azure AD). When you integrate Lines eLibrary Advance with Azure AD, you can:
+
+* Control in Azure AD who has access to Lines eLibrary Advance.
+* Enable your users to be automatically signed-in to Lines eLibrary Advance with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
+
+## Prerequisites
+
+To get started, you need the following items:
+
+* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+* Lines eLibrary Advance single sign-on (SSO) enabled subscription.
+* In addition to the Cloud Application Administrator role, the Application Administrator role can also add or manage applications in Azure AD. For more information, see [Azure built-in roles](../roles/permissions-reference.md).
+
+## Scenario description
+
+In this tutorial, you configure and test Azure AD SSO in a test environment.
+
+* Lines eLibrary Advance supports **SP** and **IDP** initiated SSO.
+
+## Add Lines eLibrary Advance from the gallery
+
+To configure the integration of Lines eLibrary Advance into Azure AD, you need to add Lines eLibrary Advance from the gallery to your list of managed SaaS apps.
+
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
+1. On the left navigation pane, select the **Azure Active Directory** service.
+1. Navigate to **Enterprise Applications** and then select **All Applications**.
+1. To add a new application, select **New application**.
+1. In the **Add from the gallery** section, type **Lines eLibrary Advance** in the search box.
+1. Select **Lines eLibrary Advance** from the results panel and then add the app. Wait a few seconds while the app is added to your tenant.
+
+## Configure and test Azure AD SSO for Lines eLibrary Advance
+
+Configure and test Azure AD SSO with Lines eLibrary Advance using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user at Lines eLibrary Advance.
+
+To configure and test Azure AD SSO with Lines eLibrary Advance, perform the following steps:
+
+1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
+ 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
+ 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
+1. **[Configure Lines eLibrary Advance SSO](#configure-lines-elibrary-advance-sso)** - to configure the single sign-on settings on the application side.
+ 1. **[Create Lines eLibrary Advance test user](#create-lines-elibrary-advance-test-user)** - to have a counterpart of B.Simon in Lines eLibrary Advance that is linked to the Azure AD representation of the user.
+1. **[Test SSO](#test-sso)** - to verify whether the configuration works.
+
+## Configure Azure AD SSO
+
+Follow these steps to enable Azure AD SSO in the Azure portal.
+
+1. In the Azure portal, on the **Lines eLibrary Advance** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
+
+ ![Screenshot shows to edit Basic SAML Configuration.](common/edit-urls.png "Basic Configuration")
+
+1. In the **Basic SAML Configuration** section, perform the following steps:
+
+ a. In the **Identifier** textbox, type a value using one of the following patterns:
+
+ | **Identifier** |
+ |--|
+ | `https://ela.education.ne.jp/students/gsso/metadata/gsuite/<SSOID>` |
+ | `https://ela.kodomo.ne.jp/students/gsso/metadata/gsuite/<SSOID>` |
+ | `https://ela.education.ne.jp/teachers/gsso/metadata/gsuite/<SSOID>` |
+ | `https://ela.kodomo.ne.jp/teachers/gsso/metadata/gsuite/<SSOID>` |
+
+ b. In the **Reply URL** textbox, type a URL using one of the following patterns:
+
+ | **Reply URL** |
+ |-|
+ | `https://ela.education.ne.jp/students/gsso/acs/gsuite/<SSOID>` |
+ | `https://ela.kodomo.ne.jp/students/gsso/acs/gsuite/<SSOID>` |
+ | `https://ela.education.ne.jp/teachers/gsso/acs/gsuite/<SSOID>` |
+ | `https://ela.kodomo.ne.jp/teachers/gsso/acs/gsuite/<SSOID>` |
+
+1. Click **Set additional URLs** and perform the following step, if you wish to configure the application in **SP** initiated mode:
+
+ In the **Sign-on URL** text box, type a URL using one of the following patterns:
+
+ | **Sign-on URL** |
+ |--|
+ | `https://fms.live.fm.ks.irdeto.com/` |
+ | `https://ela.education.ne.jp/students/gsso/login/azure/<SSOID>` |
+ | `https://ela.education.ne.jp/teachers/gsso/login/azure/<SSOID>` |
+ | `https://ela.kodomo.ne.jp/students/gsso/login/azure/<SSOID>` |
+ | `https://ela.kodomo.ne.jp/teachers/gsso/login/azure/<SSOID>` |
+
+ > [!Note]
+ > These values are not real. Update these values with the actual Identifier, Reply URL and Sign-on URL. Contact [Lines eLibrary Advance support team](mailto:tech@education.jp) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
+
+1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Certificate (Base64)** and select **Download** to download the certificate and save it on your computer. (A sketch for inspecting the downloaded certificate follows this procedure.)
+
+ ![Screenshot shows the Certificate download link.](common/certificatebase64.png "Certificate")
+
+1. On the **Set up Lines eLibrary Advance** section, copy the appropriate URL(s) based on your requirement.
+
+ ![Screenshot shows to copy configuration appropriate URL.](common/copy-configuration-urls.png "Metadata")
+
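+If you want to check the downloaded certificate before sending it, the sketch below prints its subject, expiry, and thumbprint. It assumes the third-party `cryptography` package is installed, and the file name is a placeholder for whatever the portal download produced; neither is part of the official procedure.
+
+```python
+# Assumes: pip install cryptography
+from cryptography import x509
+from cryptography.hazmat.primitives import hashes
+
+# Placeholder: the file downloaded from the SAML Signing Certificate section.
+with open("Lines eLibrary Advance.cer", "rb") as f:
+    pem_data = f.read()
+
+# The portal's "Certificate (Base64)" download is PEM-encoded.
+cert = x509.load_pem_x509_certificate(pem_data)
+print("Subject:   ", cert.subject.rfc4514_string())
+print("Expires:   ", cert.not_valid_after)
+print("Thumbprint:", cert.fingerprint(hashes.SHA1()).hex())
+```
+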
+### Create an Azure AD test user
+
+In this section, you'll create a test user in the Azure portal called B.Simon.
+
+1. From the left pane in the Azure portal, select **Azure Active Directory**, select **Users**, and then select **All users**.
+1. Select **New user** at the top of the screen.
+1. In the **User** properties, follow these steps:
+ 1. In the **Name** field, enter `B.Simon`.
+ 1. In the **User name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`.
+ 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box.
+ 1. Click **Create**.
+
+### Assign the Azure AD test user
+
+In this section, you'll enable B.Simon to use Azure single sign-on by granting access to Lines eLibrary Advance.
+
+1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
+1. In the applications list, select **Lines eLibrary Advance**.
+1. In the app's overview page, find the **Manage** section and select **Users and groups**.
+1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
+1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
+1. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you see the "Default Access" role selected.
+1. In the **Add Assignment** dialog, click the **Assign** button.
+
+## Configure Lines eLibrary Advance SSO
+
+To configure single sign-on on the **Lines eLibrary Advance** side, you need to send the downloaded **Certificate (Base64)** and the appropriate copied URLs from the Azure portal to the [Lines eLibrary Advance support team](mailto:tech@education.jp). They apply these settings so that the SAML SSO connection is set up properly on both sides.
+
+### Create Lines eLibrary Advance test user
+
+In this section, you create a user called Britta Simon at Lines eLibrary Advance. Work with [Lines eLibrary Advance support team](mailto:tech@education.jp) to add the users in the Lines eLibrary Advance platform. Users must be created and activated before you use single sign-on.
+
+## Test SSO
+
+In this section, you test your Azure AD single sign-on configuration with the following options.
+
+#### SP initiated:
+
+* Click on **Test this application** in the Azure portal. This will redirect you to the Lines eLibrary Advance Sign-On URL, where you can initiate the login flow.
+
+* Go to the Lines eLibrary Advance Sign-On URL directly and initiate the login flow from there.
+
+#### IDP initiated:
+
+* Click on **Test this application** in the Azure portal, and you should be automatically signed in to the Lines eLibrary Advance instance for which you set up SSO.
+
+You can also use Microsoft My Apps to test the application in any mode. When you click the Lines eLibrary Advance tile in My Apps, you're redirected to the application Sign-On page to initiate the login flow if the app is configured in SP mode; if it's configured in IDP mode, you're automatically signed in to the Lines eLibrary Advance instance for which you set up SSO. For more information about My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md).
+
+## Next steps
+
+Once you configure Lines eLibrary Advance, you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
active-directory Linkedin Learning Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/linkedin-learning-provisioning-tutorial.md
- Title: 'Tutorial: Configure LinkedIn Learning for automatic user provisioning with Azure Active Directory | Microsoft Docs'
-description: Learn how to automatically provision and de-provision user accounts from Azure AD to LinkedIn Learning.
--
-writer: twimmers
----- Previously updated : 06/30/2020---
-# Tutorial: Configure LinkedIn Learning for automatic user provisioning
-
-This tutorial describes the steps you need to perform in both LinkedIn Learning and Azure Active Directory (Azure AD) to configure automatic user provisioning. When configured, Azure AD automatically provisions and de-provisions users and groups to [LinkedIn Learning](https://learning.linkedin.com/) using the Azure AD Provisioning service. For important details on what this service does, how it works, and frequently asked questions, see [Automate user provisioning and deprovisioning to SaaS applications with Azure Active Directory](../app-provisioning/user-provisioning.md).
--
-## Capabilities supported
-> [!div class="checklist"]
-> * Create users in LinkedIn Learning
-> * Remove users in LinkedIn Learning when they do not require access anymore
-> * Keep user attributes synchronized between Azure AD and LinkedIn Learning
-> * Provision groups and group memberships in LinkedIn Learning
-> * [Single sign-on](linkedinlearning-tutorial.md) to LinkedIn Learning (recommended)
-
-## Prerequisites
-
-The scenario outlined in this tutorial assumes that you already have the following prerequisites:
-
-* [An Azure AD tenant](../develop/quickstart-create-new-tenant.md)
-* A user account in Azure AD with [permission](../roles/permissions-reference.md) to configure provisioning (for example, Application Administrator, Cloud Application Administrator, Application Owner, or Global Administrator).
-* Approval and SCIM enabled for LinkedIn Learning (contact by email).
-
-## Step 1. Plan your provisioning deployment
-1. Learn about [how the provisioning service works](../app-provisioning/user-provisioning.md).
-2. Determine who will be in [scope for provisioning](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-3. Determine what data to [map between Azure AD and LinkedIn Learning](../app-provisioning/customize-application-attributes.md).
-
-## Step 2. Configure LinkedIn Learning to support provisioning with Azure AD
-1. Sign in to [LinkedIn Learning Settings](https://www.linkedin.com/learning-admin/settings/global). Select **SCIM Setup**, then select **Add new SCIM configuration**.
-
- ![SCIM Setup configuration](./media/linkedin-learning-provisioning-tutorial/learning-scim-settings.png)
-
-2. Enter a name for the configuration, and set **Auto-assign licenses** to On. Then click **Generate token**.
-
- ![SCIM configuration name](./media/linkedin-learning-provisioning-tutorial/learning-scim-configuration.png)
-
-3. After the configuration is created, an **Access token** should be generated. Copy this token and keep it for later.
-
- ![SCIM access token](./media/linkedin-learning-provisioning-tutorial/learning-scim-token.png)
-
-4. You may reissue any existing configurations (which will generate a new token) or remove them.
-
-## Step 3. Add LinkedIn Learning from the Azure AD application gallery
-
-Add LinkedIn Learning from the Azure AD application gallery to start managing provisioning to LinkedIn Learning. If you have previously set up LinkedIn Learning for SSO, you can use the same application. However, it's recommended that you create a separate app when initially testing out the integration. Learn more about adding an application from the gallery [here](../manage-apps/add-application-portal.md).
-
-## Step 4. Define who will be in scope for provisioning
-
-The Azure AD provisioning service allows you to scope who will be provisioned based on assignment to the application, or based on attributes of the user or group. If you choose to scope who will be provisioned to your app based on assignment, you can use the following [steps](../manage-apps/assign-user-or-group-access-portal.md) to assign users and groups to the application. If you choose to scope who will be provisioned based solely on attributes of the user or group, you can use a scoping filter as described [here](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-
-* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-
-* If you need additional roles, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add new roles.
--
-## Step 5. Configure automatic user provisioning to LinkedIn Learning
-
-This section guides you through the steps to configure the Azure AD provisioning service to create, update, and disable users and/or groups in LinkedIn Learning based on user and/or group assignments in Azure AD.
-
-### To configure automatic user provisioning for LinkedIn Learning in Azure AD:
-
-1. Sign in to the [Azure portal](https://portal.azure.com). Select **Enterprise Applications**, then select **All applications**.
-
- ![Enterprise applications blade](common/enterprise-applications.png)
-
-2. In the applications list, select **LinkedIn Learning**.
-
- ![The LinkedIn Learning link in the Applications list](common/all-applications.png)
-
-3. Select the **Provisioning** tab.
-
- ![Screenshot of the Manage options with the Provisioning option called out.](common/provisioning.png)
-
-4. Set the **Provisioning Mode** to **Automatic**.
-
- ![Screenshot of the Provisioning Mode dropdown list with the Automatic option called out.](common/provisioning-automatic.png)
-
-5. Under the **Admin Credentials** section, input `https://api.linkedin.com/scim` in **Tenant URL**. Input the access token value retrieved earlier in **Secret Token**. Click **Test Connection** to ensure Azure AD can connect to LinkedIn Learning. If the connection fails, ensure your LinkedIn Learning account has Admin permissions and try again. (A sketch of the underlying SCIM calls follows this procedure.)
-
- ![Screenshot shows the Admin Credentials dialog box, where you can enter your Tenant U R L and Secret Token.](./media/linkedin-learning-provisioning-tutorial/provisioning.png)
-
-6. In the **Notification Email** field, enter the email address of a person or group who should receive the provisioning error notifications and select the **Send an email notification when a failure occurs** check box.
-
- ![Notification Email](common/provisioning-notification-email.png)
-
-7. Select **Save**.
-
-8. Under the **Mappings** section, select **Provision Azure Active Directory Users**.
-
-9. Review the user attributes that are synchronized from Azure AD to LinkedIn Learning in the **Attribute-Mapping** section. The attributes selected as **Matching** properties are used to match the user accounts in LinkedIn Learning for update operations. If you choose to change the [matching target attribute](../app-provisioning/customize-application-attributes.md), you will need to ensure that the LinkedIn Learning API supports filtering users based on that attribute. Select the **Save** button to commit any changes.
-
- |Attribute|Type|Supported for filtering|
- |--|--|--|
- |externalId|String|&check;|
- |userName|String|
- |name.givenName|String|
- |name.familyName|String|
- |displayName|String|
- |addresses[type eq "work"].locality|String|
- |title|String|
- |emails[type eq "work"].value|String|
- |urn:ietf:params:scim:schemas:extension:enterprise:2.0:User:manager|Reference|
- |urn:ietf:params:scim:schemas:extension:enterprise:2.0:User:department|String|
-
-10. Under the **Mappings** section, select **Provision Azure Active Directory Groups**.
-
-11. Review the group attributes that are synchronized from Azure AD to LinkedIn Learning in the **Attribute-Mapping** section. The attributes selected as **Matching** properties are used to match the groups in LinkedIn Learning for update operations. Select the **Save** button to commit any changes.
-
- |Attribute|Type|Supported for filtering|
- |--|--|--|
- |displayName|String|&check;|
- |members|Reference|
- |externalId|String|
-
-12. To configure scoping filters, refer to the following instructions provided in the [Scoping filter tutorial](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-
-13. To enable the Azure AD provisioning service for LinkedIn Learning, change the **Provisioning Status** to **On** in the **Settings** section.
-
- ![Provisioning Status Toggled On](common/provisioning-toggle-on.png)
-
-14. Define the users and/or groups that you would like to provision to LinkedIn Learning by choosing the desired values in **Scope** in the **Settings** section.
-
- ![Provisioning Scope](common/provisioning-scope.png)
-
-15. When you are ready to provision, click **Save**.
-
- ![Saving Provisioning Configuration](common/provisioning-configuration-save.png)
-
-This operation starts the initial synchronization cycle of all users and groups defined in **Scope** in the **Settings** section. The initial cycle takes longer to perform than subsequent cycles, which occur approximately every 40 minutes as long as the Azure AD provisioning service is running.
-
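-Under the hood, the provisioning service speaks plain SCIM 2.0 over HTTPS. The following sketch isn't part of the original tutorial: it performs a rough equivalent of **Test Connection** with the same Tenant URL and Secret Token, and shows the shape of user and group records under the attribute mappings above. The query parameters and record layout follow the SCIM 2.0 standard, and the sample field values are placeholders; whether the LinkedIn Learning endpoint accepts them exactly as shown is an assumption.
-
-```python
-import json
-import requests
-
-TENANT_URL = "https://api.linkedin.com/scim"  # Tenant URL from step 5
-token = "<SECRET-TOKEN>"                      # access token generated in step 2
-headers = {"Authorization": f"Bearer {token}"}
-
-# Rough equivalent of Test Connection: request a single user.
-resp = requests.get(f"{TENANT_URL}/Users", headers=headers, params={"count": 1})
-print("Connectivity check:", resp.status_code)
-
-# Shape of a SCIM 2.0 user under the attribute mappings in step 9.
-ENTERPRISE = "urn:ietf:params:scim:schemas:extension:enterprise:2.0:User"
-user = {
-    "schemas": ["urn:ietf:params:scim:schemas:core:2.0:User", ENTERPRISE],
-    "externalId": "b.simon",
-    "userName": "B.Simon@contoso.com",
-    "name": {"givenName": "B", "familyName": "Simon"},
-    "displayName": "B.Simon",
-    "title": "Tester",
-    "emails": [{"type": "work", "value": "B.Simon@contoso.com"}],
-    "addresses": [{"type": "work", "locality": "Redmond"}],
-    ENTERPRISE: {"department": "IT"},
-}
-
-# Groups (step 11) carry displayName, externalId, and a members list.
-group = {
-    "schemas": ["urn:ietf:params:scim:schemas:core:2.0:Group"],
-    "externalId": "sales",
-    "displayName": "Sales",
-    "members": [],
-}
-print(json.dumps(user, indent=2))
-```
-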
-## Step 6. Monitor your deployment
-Once you've configured provisioning, use the following resources to monitor your deployment:
-
-1. Use the [provisioning logs](../reports-monitoring/concept-provisioning-logs.md) to determine which users have been provisioned successfully or unsuccessfully.
-2. Check the [progress bar](../app-provisioning/application-provisioning-when-will-provisioning-finish-specific-user.md) to see the status of the provisioning cycle and how close it is to completion.
-3. If the provisioning configuration seems to be in an unhealthy state, the application will go into quarantine. Learn more about quarantine states [here](../app-provisioning/application-provisioning-quarantine-status.md).
-
-## Additional resources
-
-* [Managing user account provisioning for Enterprise Apps](../app-provisioning/configure-automatic-user-provisioning-portal.md)
-* [What is application access and single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)
-
-## Next steps
-
-* [Learn how to review logs and get reports on provisioning activity](../app-provisioning/check-status-user-account-provisioning.md)
active-directory Linkedinlearning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/linkedinlearning-tutorial.md
In this tutorial, you configure and test Azure AD SSO in a test environment.
* LinkedIn Learning supports **SP and IDP** initiated SSO.
* LinkedIn Learning supports **Just In Time** user provisioning.
-* LinkedIn Learning supports [Automated user provisioning](linkedin-learning-provisioning-tutorial.md).
## Add LinkedIn Learning from the gallery
Follow these steps to enable Azure AD SSO in the Azure portal.
> [!NOTE]
> These values are not real. You will update these values with the actual Identifier, Reply URL and Sign on URL which is explained later in the **Configure LinkedIn Learning SSO** section of the tutorial.
-1. LinkedIn Learning application expects the SAML assertions in a specific format, which requires you to add custom attribute mappings to your SAML token attributes configuration. The following screenshot shows the list of default attributes, where as **nameidentifier** is mapped with **user.userprincipalname**. LinkedIn Learning application expects **nameidentifier** to be mapped with **user.mail**, so you need to edit the attribute mapping by clicking on **Edit** icon and change the attribute mapping.
+1. LinkedIn Learning application expects the SAML assertions in a specific format, which requires you to add custom attribute mappings to your SAML token attributes configuration. The following screenshot shows the list of default attributes, whereas **nameidentifier** is mapped with **user.userprincipalname**. LinkedIn Learning application expects **nameidentifier** to be mapped with **user.mail**, so you need to edit the attribute mapping by clicking on **Edit** icon and change the attribute mapping.
![image](common/edit-attribute.png)
active-directory Lms And Education Management System Leaf Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/lms-and-education-management-system-leaf-tutorial.md
+
+ Title: 'Tutorial: Azure AD SSO integration with LMS and Education Management System Leaf'
+description: Learn how to configure single sign-on between Azure Active Directory and LMS and Education Management System Leaf.
++++++++ Last updated : 06/27/2022++++
+# Tutorial: Azure AD SSO integration with LMS and Education Management System Leaf
+
+In this tutorial, you'll learn how to integrate LMS and Education Management System Leaf with Azure Active Directory (Azure AD). When you integrate LMS and Education Management System Leaf with Azure AD, you can:
+
+* Control in Azure AD who has access to LMS and Education Management System Leaf.
+* Enable your users to be automatically signed-in to LMS and Education Management System Leaf with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
+
+## Prerequisites
+
+To get started, you need the following items:
+
+* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+* LMS and Education Management System Leaf single sign-on (SSO) enabled subscription.
+* In addition to the Cloud Application Administrator role, the Application Administrator role can also add or manage applications in Azure AD. For more information, see [Azure built-in roles](../roles/permissions-reference.md).
+
+## Scenario description
+
+In this tutorial, you configure and test Azure AD SSO in a test environment.
+
+* LMS and Education Management System Leaf supports **SP** initiated SSO.
+* LMS and Education Management System Leaf supports **Just In Time** user provisioning.
+
+## Add LMS and Education Management System Leaf from the gallery
+
+To configure the integration of LMS and Education Management System Leaf into Azure AD, you need to add LMS and Education Management System Leaf from the gallery to your list of managed SaaS apps.
+
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
+1. On the left navigation pane, select the **Azure Active Directory** service.
+1. Navigate to **Enterprise Applications** and then select **All Applications**.
+1. To add a new application, select **New application**.
+1. In the **Add from the gallery** section, type **LMS and Education Management System Leaf** in the search box.
+1. Select **LMS and Education Management System Leaf** from the results panel and then add the app. Wait a few seconds while the app is added to your tenant.
+
+## Configure and test Azure AD SSO for LMS and Education Management System Leaf
+
+Configure and test Azure AD SSO with LMS and Education Management System Leaf using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in LMS and Education Management System Leaf.
+
+To configure and test Azure AD SSO with LMS and Education Management System Leaf, perform the following steps:
+
+1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
+ 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
+ 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
+1. **[Configure LMS and Education Management System Leaf SSO](#configure-lms-and-education-management-system-leaf-sso)** - to configure the single sign-on settings on the application side.
+ 1. **[Create LMS and Education Management System Leaf test user](#create-lms-and-education-management-system-leaf-test-user)** - to have a counterpart of B.Simon in LMS and Education Management System Leaf that is linked to the Azure AD representation of the user.
+1. **[Test SSO](#test-sso)** - to verify whether the configuration works.
+
+## Configure Azure AD SSO
+
+Follow these steps to enable Azure AD SSO in the Azure portal.
+
+1. In the Azure portal, on the **LMS and Education Management System Leaf** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
+
+ ![Screenshot shows to edit Basic S A M L Configuration.](common/edit-urls.png "Basic Configuration")
+
+1. In the **Basic SAML Configuration** section, perform the following steps (a short sketch after this procedure shows how to derive these values from your subdomain):
+
+ a. In the **Identifier** textbox, type a URL using the following pattern:
+ `https://<SUBDOMAIN>.leaf-hrm.jp/`
+
+ b. In the **Reply URL** textbox, type a URL using the following pattern:
+ `https://<SUBDOMAIN>.leaf-hrm.jp/loginusers/acs`
+
+ c. In the **Sign on URL** text box, type a URL using the following pattern:
+ `https://<SUBDOMAIN>.leaf-hrm.jp/`
+
+ > [!Note]
+ > These values are not real. Update these values with the actual Identifier, Reply URL and Sign on URL. Contact [LMS and Education Management System Leaf support team](mailto:leaf-jimukyoku@insource.co.jp) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
+
+1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Federation Metadata XML** and select **Download** to download the certificate and save it on your computer.
+
+ ![Screenshot shows the Certificate download link.](common/metadataxml.png "Certificate")
+
+1. On the **Set up LMS and Education Management System Leaf** section, copy the appropriate URL(s) based on your requirement.
+
+ ![Screenshot shows to copy configuration appropriate U R L.](common/copy-configuration-urls.png "Attributes")
+
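+Because the three configuration values differ only in path, you can derive them all from your tenant's subdomain. The snippet below is a convenience sketch based on the patterns above, not official guidance; the subdomain is a placeholder that the support team provides.
+
+```python
+# Placeholder: the subdomain assigned to your Leaf tenant.
+subdomain = "<SUBDOMAIN>"
+
+base = f"https://{subdomain}.leaf-hrm.jp/"
+saml_settings = {
+    "Identifier": base,
+    "Reply URL": base + "loginusers/acs",
+    "Sign on URL": base,
+}
+for name, value in saml_settings.items():
+    print(f"{name}: {value}")
+```
+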
+### Create an Azure AD test user
+
+In this section, you'll create a test user in the Azure portal called B.Simon.
+
+1. From the left pane in the Azure portal, select **Azure Active Directory**, select **Users**, and then select **All users**.
+1. Select **New user** at the top of the screen.
+1. In the **User** properties, follow these steps:
+ 1. In the **Name** field, enter `B.Simon`.
+ 1. In the **User name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`.
+ 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box.
+ 1. Click **Create**.
+
+### Assign the Azure AD test user
+
+In this section, you'll enable B.Simon to use Azure single sign-on by granting access to LMS and Education Management System Leaf.
+
+1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
+1. In the applications list, select **LMS and Education Management System Leaf**.
+1. In the app's overview page, find the **Manage** section and select **Users and groups**.
+1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
+1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
+1. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you see the "Default Access" role selected.
+1. In the **Add Assignment** dialog, click the **Assign** button.
+
+## Configure LMS and Education Management System Leaf SSO
+
+To configure single sign-on on the **LMS and Education Management System Leaf** side, you need to send the downloaded **Federation Metadata XML** and the appropriate copied URLs from the Azure portal to the [LMS and Education Management System Leaf support team](mailto:leaf-jimukyoku@insource.co.jp). They apply these settings so that the SAML SSO connection is set up properly on both sides.
+
+### Create LMS and Education Management System Leaf test user
+
+In this section, a user called B.Simon is created in LMS and Education Management System Leaf. LMS and Education Management System Leaf supports just-in-time user provisioning, which is enabled by default. There is no action item for you in this section. If a user doesn't already exist in LMS and Education Management System Leaf, a new one is created after authentication.
+
+## Test SSO
+
+In this section, you test your Azure AD single sign-on configuration with the following options.
+
+* Click on **Test this application** in the Azure portal. This will redirect you to the LMS and Education Management System Leaf Sign-on URL, where you can initiate the login flow.
+
+* Go to the LMS and Education Management System Leaf Sign-on URL directly and initiate the login flow from there.
+
+* You can also use Microsoft My Apps. When you click the LMS and Education Management System Leaf tile in My Apps, you're redirected to the LMS and Education Management System Leaf Sign-on URL. For more information about My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md).
+
+## Next steps
+
+Once you configure LMS and Education Management System Leaf, you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
active-directory Risecom Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/risecom-tutorial.md
+
+ Title: 'Tutorial: Azure AD SSO integration with Rise.com'
+description: Learn how to configure single sign-on between Azure Active Directory and Rise.com.
++++++++ Last updated : 06/24/2022++++
+# Tutorial: Azure AD SSO integration with Rise.com
+
+In this tutorial, you'll learn how to integrate Rise.com with Azure Active Directory (Azure AD). When you integrate Rise.com with Azure AD, you can:
+
+* Control in Azure AD who has access to Rise.com.
+* Enable your users to be automatically signed-in to Rise.com with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
+
+## Prerequisites
+
+To get started, you need the following items:
+
+* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+* Rise.com single sign-on (SSO) enabled subscription.
+* In addition to the Cloud Application Administrator role, the Application Administrator role can also add or manage applications in Azure AD. For more information, see [Azure built-in roles](../roles/permissions-reference.md).
+
+## Scenario description
+
+In this tutorial, you configure and test Azure AD SSO in a test environment.
+
+* Rise.com supports **SP** and **IDP** initiated SSO.
+* Rise.com supports **Just In Time** user provisioning.
+
+> [!NOTE]
+> The Identifier of this application is a fixed string value, so only one instance can be configured in one tenant.
+
+## Add Rise.com from the gallery
+
+To configure the integration of Rise.com into Azure AD, you need to add Rise.com from the gallery to your list of managed SaaS apps.
+
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
+1. On the left navigation pane, select the **Azure Active Directory** service.
+1. Navigate to **Enterprise Applications** and then select **All Applications**.
+1. To add a new application, select **New application**.
+1. In the **Add from the gallery** section, type **Rise.com** in the search box.
+1. Select **Rise.com** from the results panel and then add the app. Wait a few seconds while the app is added to your tenant.
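+
+If you prefer to script this step, a minimal sketch using the Microsoft Graph `applicationTemplates` API through `az rest` follows; the display-name filter and the `<template-id>` placeholder are illustrative and must be adapted to your tenant.
+
+```azurecli
+# Look up the gallery template ID for Rise.com (illustrative filter value)
+az rest --method GET \
+  --url "https://graph.microsoft.com/v1.0/applicationTemplates?\$filter=displayName%20eq%20'Rise.com'" \
+  --query "value[0].id" -o tsv
+
+# Instantiate the gallery app from the template ID returned above
+az rest --method POST \
+  --url "https://graph.microsoft.com/v1.0/applicationTemplates/<template-id>/instantiate" \
+  --body '{"displayName": "Rise.com"}'
+```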
+
+## Configure and test Azure AD SSO for Rise.com
+
+Configure and test Azure AD SSO with Rise.com using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in Rise.com.
+
+To configure and test Azure AD SSO with Rise.com, perform the following steps:
+
+1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
+ 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
+ 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
+1. **[Configure Rise.com SSO](#configure-risecom-sso)** - to configure the single sign-on settings on the application side.
+ 1. **[Create Rise.com test user](#create-risecom-test-user)** - to have a counterpart of B.Simon in Rise.com that is linked to the Azure AD representation of the user.
+1. **[Test SSO](#test-sso)** - to verify whether the configuration works.
+
+## Configure Azure AD SSO
+
+Follow these steps to enable Azure AD SSO in the Azure portal.
+
+1. In the Azure portal, on the **Rise.com** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
+
+ ![Screenshot shows to edit Basic S A M L Configuration.](common/edit-urls.png "Basic Configuration")
+
+1. On the **Basic SAML Configuration** section, you don't need to perform any steps because the app is already pre-integrated with Azure.
+
+1. Click **Set additional URLs** and perform the following step if you wish to configure the application in **SP** initiated mode:
+
+ a. In the **Sign-on URL** text box, type the URL:
+ `https://id.rise.com/sso/saml2`
+
+ b. In the **Relay State** text box, type a URL using the following pattern:
+ `https://<CustomerDomainName>.rise.com`
+
+ > [!Note]
+ > This value is not real. Update this value with the actual Relay State URL. Contact the [Rise.com support team](mailto:Enterprise@rise.com) to get the value. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
+
+1. The Rise.com application expects the SAML assertions in a specific format, which requires you to add custom attribute mappings to your SAML token attributes configuration. The following screenshot shows the list of default attributes.
+
+ ![Screenshot shows the image of attributes.](common/default-attributes.png "Attributes")
+
+1. In addition to the attributes above, the Rise.com application expects a few more attributes to be passed back in the SAML response; these are shown below. These attributes are also prepopulated, but you can review them according to your requirements.
+
+ | Name | Source Attribute|
+ | | |
+ | firstName | user.givenname |
+ | lastName | user.surname |
+ | email | user.mail |
+
+1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Federation Metadata XML** and select **Download** to download the certificate and save it on your computer.
+
+ ![Screenshot shows the Certificate download link.](common/metadataxml.png "Certificate")
+
+1. On the **Set up Rise.com** section, copy the appropriate URL(s) based on your requirement.
+
+ ![Screenshot shows to copy configuration appropriate U R L.](common/copy-configuration-urls.png "Metadata")
+
+### Create an Azure AD test user
+
+In this section, you'll create a test user in the Azure portal called B.Simon.
+
+1. From the left pane in the Azure portal, select **Azure Active Directory**, select **Users**, and then select **All users**.
+1. Select **New user** at the top of the screen.
+1. In the **User** properties, follow these steps:
+ 1. In the **Name** field, enter `B.Simon`.
+ 1. In the **User name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`.
+ 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box.
+ 1. Click **Create**.
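+
+As an alternative to the portal steps, the following sketch creates the same test user with the Azure CLI; the UPN domain and password are placeholders you must replace with a verified domain in your tenant and a password that meets your policy.
+
+```azurecli
+# Create the B.Simon test user (placeholder UPN and password)
+az ad user create \
+  --display-name "B.Simon" \
+  --user-principal-name "B.Simon@contoso.com" \
+  --password "<strong-password>"
+```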
+
+### Assign the Azure AD test user
+
+In this section, you'll enable B.Simon to use Azure single sign-on by granting access to Rise.com.
+
+1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
+1. In the applications list, select **Rise.com**.
+1. In the app's overview page, find the **Manage** section and select **Users and groups**.
+1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
+1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
+1. If you expect a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you see the "Default Access" role selected.
+1. In the **Add Assignment** dialog, click the **Assign** button.
+
+## Configure Rise.com SSO
+
+To configure single sign-on on the **Rise.com** side, you need to send the downloaded **Federation Metadata XML** and the appropriate copied URLs from the Azure portal to the [Rise.com support team](mailto:Enterprise@rise.com). The support team uses them to configure the SAML SSO connection properly on both sides.
+
+### Create Rise.com test user
+
+In this section, a user called B.Simon is created in Rise.com. Rise.com supports just-in-time user provisioning, which is enabled by default. There is no action item for you in this section. If a user doesn't already exist in Rise.com, a new one is created after authentication.
+
+## Test SSO
+
+In this section, you test your Azure AD single sign-on configuration with the following options.
+
+#### SP initiated:
+
+* Click **Test this application** in the Azure portal. This redirects to the Rise.com Sign-On URL, where you can initiate the login flow.
+
+* Go to the Rise.com Sign-On URL directly and initiate the login flow from there.
+
+#### IDP initiated:
+
+* Click **Test this application** in the Azure portal, and you should be automatically signed in to the Rise.com instance for which you set up SSO.
+
+You can also use Microsoft My Apps to test the application in any mode. When you click the Rise.com tile in My Apps, if the app is configured in SP mode, you're redirected to the application sign-on page to initiate the login flow; if it's configured in IDP mode, you're automatically signed in to the Rise.com instance for which you set up SSO. For more information about My Apps, see [Introduction to My Apps](../user-help/my-apps-portal-end-user-access.md).
+
+## Next steps
+
+Once you configure Rise.com, you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
active-directory Rootly Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/rootly-tutorial.md
+
+ Title: 'Tutorial: Azure AD SSO integration with Rootly'
+description: Learn how to configure single sign-on between Azure Active Directory and Rootly.
++++++++ Last updated : 06/27/2022++++
+# Tutorial: Azure AD SSO integration with Rootly
+
+In this tutorial, you'll learn how to integrate Rootly with Azure Active Directory (Azure AD). When you integrate Rootly with Azure AD, you can:
+
+* Control in Azure AD who has access to Rootly.
+* Enable your users to be automatically signed-in to Rootly with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
+
+## Prerequisites
+
+To get started, you need the following items:
+
+* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+* Rootly single sign-on (SSO) enabled subscription.
+* In addition to Cloud Application Administrator, the Application Administrator role can also add or manage applications in Azure AD.
+For more information, see [Azure built-in roles](../roles/permissions-reference.md).
+
+## Scenario description
+
+In this tutorial, you configure and test Azure AD SSO in a test environment.
+
+* Rootly supports **SP** and **IDP** initiated SSO.
+* Rootly supports **Just In Time** user provisioning.
+
+> [!NOTE]
+> The Identifier of this application is a fixed string value, so only one instance can be configured in one tenant.
+
+## Add Rootly from the gallery
+
+To configure the integration of Rootly into Azure AD, you need to add Rootly from the gallery to your list of managed SaaS apps.
+
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
+1. On the left navigation pane, select the **Azure Active Directory** service.
+1. Navigate to **Enterprise Applications** and then select **All Applications**.
+1. To add a new application, select **New application**.
+1. In the **Add from the gallery** section, type **Rootly** in the search box.
+1. Select **Rootly** from the results panel and then add the app. Wait a few seconds while the app is added to your tenant.
+
+## Configure and test Azure AD SSO for Rootly
+
+Configure and test Azure AD SSO with Rootly using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in Rootly.
+
+To configure and test Azure AD SSO with Rootly, perform the following steps:
+
+1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
+ 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
+ 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
+1. **[Configure Rootly SSO](#configure-rootly-sso)** - to configure the single sign-on settings on the application side.
+ 1. **[Create Rootly test user](#create-rootly-test-user)** - to have a counterpart of B.Simon in Rootly that is linked to the Azure AD representation of the user.
+1. **[Test SSO](#test-sso)** - to verify whether the configuration works.
+
+## Configure Azure AD SSO
+
+Follow these steps to enable Azure AD SSO in the Azure portal.
+
+1. In the Azure portal, on the **Rootly** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
+
+ ![Screenshot shows to edit Basic S A M L Configuration.](common/edit-urls.png "Basic Configuration")
+
+1. On the **Basic SAML Configuration** section, you don't need to perform any steps because the app is already pre-integrated with Azure.
+
+1. Click **Set additional URLs** and perform the following step if you wish to configure the application in **SP** initiated mode:
+
+ In the **Sign-on URL** text box, type the URL:
+ `https://rootly.com/sso`
+
+1. The Rootly application expects the SAML assertions in a specific format, which requires you to add custom attribute mappings to your SAML token attributes configuration. The following screenshot shows the list of default attributes.
+
+ ![Screenshot shows the image of Rootly application.](common/default-attributes.png "Attributes")
+
+1. In addition to the attributes above, the Rootly application expects a few more attributes to be passed back in the SAML response; these are shown below. These attributes are also prepopulated, but you can review them according to your requirements.
+
+ | Name | Source Attribute|
+ | | |
+ | firstname | user.displayname |
+
+1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Certificate (PEM)** and select **Download** to download the certificate and save it on your computer.
+
+ ![Screenshot shows the Certificate download link.](common/certificate-base64-download.png "Certificate")
+
+1. On the **Set up Rootly** section, copy the appropriate URL(s) based on your requirement.
+
+ ![Screenshot shows to copy configuration appropriate U R L.](common/copy-configuration-urls.png "Metadata")
+
+### Create an Azure AD test user
+
+In this section, you'll create a test user in the Azure portal called B.Simon.
+
+1. From the left pane in the Azure portal, select **Azure Active Directory**, select **Users**, and then select **All users**.
+1. Select **New user** at the top of the screen.
+1. In the **User** properties, follow these steps:
+ 1. In the **Name** field, enter `B.Simon`.
+ 1. In the **User name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`.
+ 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box.
+ 1. Click **Create**.
+
+### Assign the Azure AD test user
+
+In this section, you'll enable B.Simon to use Azure single sign-on by granting access to Rootly.
+
+1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
+1. In the applications list, select **Rootly**.
+1. In the app's overview page, find the **Manage** section and select **Users and groups**.
+1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
+1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
+1. If you expect a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you see the "Default Access" role selected.
+1. In the **Add Assignment** dialog, click the **Assign** button.
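+
+If you want to script the assignment, a sketch using the Microsoft Graph `appRoleAssignedTo` API through `az rest` follows. It assumes the enterprise application's display name is **Rootly** and uses the all-zeros GUID, which Azure AD reserves for the Default Access role; adjust both if your app defines custom roles.
+
+```azurecli
+# Object IDs of the Rootly service principal and the test user
+SP_ID=$(az ad sp list --display-name "Rootly" --query "[0].id" -o tsv)
+USER_ID=$(az ad user show --id "B.Simon@contoso.com" --query id -o tsv)
+
+# Assign B.Simon to the app; the zero GUID is the Default Access app role
+az rest --method POST \
+  --url "https://graph.microsoft.com/v1.0/servicePrincipals/$SP_ID/appRoleAssignedTo" \
+  --body "{\"principalId\": \"$USER_ID\", \"resourceId\": \"$SP_ID\", \"appRoleId\": \"00000000-0000-0000-0000-000000000000\"}"
+```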
+
+## Configure Rootly SSO
+
+To configure single sign-on on the **Rootly** side, you need to send the downloaded **Certificate (PEM)** and the appropriate copied URLs from the Azure portal to the [Rootly support team](mailto:support@rootly.com). The support team uses them to configure the SAML SSO connection properly on both sides. For more information, see the [Rootly SSO documentation](https://docs.rootly.com/integrations/sso#sv-installation).
+
+### Create Rootly test user
+
+In this section, a user called B.Simon is created in Rootly. Rootly supports just-in-time user provisioning, which is enabled by default. There is no action item for you in this section. If a user doesn't already exist in Rootly, a new one is created after authentication.
+
+## Test SSO
+
+In this section, you test your Azure AD single sign-on configuration with the following options.
+
+#### SP initiated:
+
+* Click **Test this application** in the Azure portal. This redirects to the Rootly Sign-On URL, where you can initiate the login flow.
+
+* Go to the Rootly Sign-On URL directly and initiate the login flow from there.
+
+#### IDP initiated:
+
+* Click **Test this application** in the Azure portal, and you should be automatically signed in to the Rootly instance for which you set up SSO.
+
+You can also use Microsoft My Apps to test the application in any mode. When you click the Rootly tile in My Apps, if the app is configured in SP mode, you're redirected to the application sign-on page to initiate the login flow; if it's configured in IDP mode, you're automatically signed in to the Rootly instance for which you set up SSO. For more information about My Apps, see [Introduction to My Apps](../user-help/my-apps-portal-end-user-access.md).
+
+## Next steps
+
+Once you configure Rootly, you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
active-directory Tableau Online Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/tableau-online-provisioning-tutorial.md
The Azure AD provisioning service allows you to scope who will be provisioned ba
* If you need additional roles, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add new roles.
+### Recommendations
+Tableau Cloud stores only the highest-privileged role that is assigned to a user. In other words, if a user is assigned to two groups, the user's role reflects the highest-privileged role.
++
+To keep track of role assignments, you can create a purpose-specific group for each role. For example, you can create groups such as Tableau – Creator and Tableau – Explorer. Assignments would then look like:
+* Tableau – Creator: Creator
+* Tableau – Explorer: Explorer
+* And so on for the other roles.
+
+Once provisioning is set up, make role changes directly in Azure Active Directory. Otherwise, you might end up with role inconsistencies between Tableau Cloud and Azure Active Directory.
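+
+As an illustration, the following Azure CLI sketch creates two such groups; the display names and mail nicknames are placeholders you can adapt.
+
+```azurecli
+# Purpose-specific groups that map one-to-one to Tableau site roles
+az ad group create --display-name "Tableau - Creator" --mail-nickname "TableauCreator"
+az ad group create --display-name "Tableau - Explorer" --mail-nickname "TableauExplorer"
+```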
+
+### Valid Tableau site role values
+On the **Select a Role** page in the Azure Active Directory portal, the valid Tableau site role values are: **Creator, SiteAdministratorCreator, Explorer, SiteAdministratorExplorer, ExplorerCanPublish, Viewer, and Unlicensed**.
++
+If you select a role that isn't in this list, such as a legacy (pre-v2018.1) role, you'll get an error.
+ ## Step 5. Configure automatic user provisioning to Tableau Cloud
This section guides you through the steps to configure the Azure AD provisioning
This operation starts the initial synchronization cycle of all users and groups defined in **Scope** in the **Settings** section. The initial cycle takes longer to complete than next cycles, which occur approximately every 40 minutes as long as the Azure AD provisioning service is running.
-### Recommendations
-Tableau Cloud will only store the highest privileged role that is assigned to a user. In other words, if a user is assigned to two groups, the user's role will reflect the highest privileged role.
--
-To keep track of role assignments, you can create two purpose-specific groups for role assignments. For example, you can create groups such as Tableau – Creator, and Tableau – Explorer, etc. Assignment would then look like:
-* Tableau – Creator: Creator
-* Tableau – Explorer: Explorer
-* Etc.
-
-Once provisioning is set up, you will want to edit role changes directly in Azure Active Directory. Otherwise, you may end up with role inconsistencies between Tableau Cloud and Azure Active Directory.
-
-### Valid Tableau site role values
-On the **Select a Role** page in your Azure Active Directory portal, the Tableau Site Role values that are valid include the following: **Creator, SiteAdministratorCreator, Explorer, SiteAdministratorExplorer, ExplorerCanPublish, Viewer, or Unlicensed**.
--
-If you select a role that is not in the above list, such as a legacy (pre-v2018.1) role, you will experience an error.
### Update a Tableau Cloud application to use the Tableau Cloud SCIM 2.0 endpoint
active-directory Zdiscovery Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/zdiscovery-tutorial.md
+
+ Title: 'Tutorial: Azure AD SSO integration with ZDiscovery'
+description: Learn how to configure single sign-on between Azure Active Directory and ZDiscovery.
++++++++ Last updated : 06/27/2022++++
+# Tutorial: Azure AD SSO integration with ZDiscovery
+
+In this tutorial, you'll learn how to integrate ZDiscovery with Azure Active Directory (Azure AD). When you integrate ZDiscovery with Azure AD, you can:
+
+* Control in Azure AD who has access to ZDiscovery.
+* Enable your users to be automatically signed-in to ZDiscovery with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
+
+## Prerequisites
+
+To get started, you need the following items:
+
+* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+* ZDiscovery single sign-on (SSO) enabled subscription.
+* In addition to Cloud Application Administrator, the Application Administrator role can also add or manage applications in Azure AD.
+For more information, see [Azure built-in roles](../roles/permissions-reference.md).
+
+## Scenario description
+
+In this tutorial, you configure and test Azure AD SSO in a test environment.
+
+* ZDiscovery supports **SP** and **IDP** initiated SSO.
+
+## Add ZDiscovery from the gallery
+
+To configure the integration of ZDiscovery into Azure AD, you need to add ZDiscovery from the gallery to your list of managed SaaS apps.
+
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
+1. On the left navigation pane, select the **Azure Active Directory** service.
+1. Navigate to **Enterprise Applications** and then select **All Applications**.
+1. To add a new application, select **New application**.
+1. In the **Add from the gallery** section, type **ZDiscovery** in the search box.
+1. Select **ZDiscovery** from the results panel and then add the app. Wait a few seconds while the app is added to your tenant.
+
+## Configure and test Azure AD SSO for ZDiscovery
+
+Configure and test Azure AD SSO with ZDiscovery using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in ZDiscovery.
+
+To configure and test Azure AD SSO with ZDiscovery, perform the following steps:
+
+1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
+ 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
+ 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
+1. **[Configure ZDiscovery SSO](#configure-zdiscovery-sso)** - to configure the single sign-on settings on the application side.
+ 1. **[Create ZDiscovery test user](#create-zdiscovery-test-user)** - to have a counterpart of B.Simon in ZDiscovery that is linked to the Azure AD representation of the user.
+1. **[Test SSO](#test-sso)** - to verify whether the configuration works.
+
+## Configure Azure AD SSO
+
+Follow these steps to enable Azure AD SSO in the Azure portal.
+
+1. In the Azure portal, on the **ZDiscovery** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
+
+ ![Screenshot shows to edit Basic S A M L Configuration.](common/edit-urls.png "Basic Configuration")
+
+1. On the **Basic SAML Configuration** section, perform the following steps:
+
+ a. In the **Identifier** textbox, type a value using the following pattern:
+ `urn:auth0:<AUTH0_TENANT>:<CONNECTION_NAME>`
+
+ b. In the **Reply URL** textbox, type a URL using one of the following patterns:
+
+ | **Reply URL** |
+ |--|
+ | `https://zapproved.auth0.com/login/callback?connection=<YOUR_AUTH0_CONNECTION_NAME>` |
+ | `https://zapproved-sandbox.auth0.com/login/callback?connection=<YOUR_AUTH0_CONNECTION_NAME>` |
+ | `https://zapproved-preview.us.auth0.com/login/callback?connection=<YOUR_AUTH0_CONNECTION_NAME>` |
+
+1. Click **Set additional URLs** and perform the following step if you wish to configure the application in **SP** initiated mode:
+
+ In the **Sign-on URL** text box, type a URL using one of the following patterns:
+
+ | **Sign-on URL** |
+ ||
+ | `https://zdiscovery.io/<CustomerName>/` |
+ | `https://zdiscovery-sandbox.io/<CustomerName>` |
+ | `https://zdiscovery-preview.io/<CustomerName>` |
+
+ > [!Note]
+ > These values are not real. Update these values with the actual Identifier, Reply URL, and Sign-on URL. Contact the [ZDiscovery support team](mailto:support@zapproved.com) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
+
+1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Certificate (PEM)** and select **Download** to download the certificate and save it on your computer.
+
+ ![Screenshot shows the Certificate download link.](common/certificate-base64-download.png "Certificate")
+
+1. On the **Set up ZDiscovery** section, copy the appropriate URL(s) based on your requirement.
+
+ ![Screenshot shows to copy configuration appropriate U R L.](common/copy-configuration-urls.png "Metadata")
+
+### Create an Azure AD test user
+
+In this section, you'll create a test user in the Azure portal called B.Simon.
+
+1. From the left pane in the Azure portal, select **Azure Active Directory**, select **Users**, and then select **All users**.
+1. Select **New user** at the top of the screen.
+1. In the **User** properties, follow these steps:
+ 1. In the **Name** field, enter `B.Simon`.
+ 1. In the **User name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`.
+ 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box.
+ 1. Click **Create**.
+
+### Assign the Azure AD test user
+
+In this section, you'll enable B.Simon to use Azure single sign-on by granting access to ZDiscovery.
+
+1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
+1. In the applications list, select **ZDiscovery**.
+1. In the app's overview page, find the **Manage** section and select **Users and groups**.
+1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
+1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
+1. If you expect a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you see the "Default Access" role selected.
+1. In the **Add Assignment** dialog, click the **Assign** button.
+
+## Configure ZDiscovery SSO
+
+To configure single sign-on on the **ZDiscovery** side, you need to send the downloaded **Certificate (PEM)** and the appropriate copied URLs from the Azure portal to the [ZDiscovery support team](mailto:support@zapproved.com). The support team uses them to configure the SAML SSO connection properly on both sides.
+
+### Create ZDiscovery test user
+
+In this section, you create a user called Britta Simon in ZDiscovery. Work with the [ZDiscovery support team](mailto:support@zapproved.com) to add the users to the ZDiscovery platform. Users must be created and activated before you use single sign-on.
+
+## Test SSO
+
+In this section, you test your Azure AD single sign-on configuration with the following options.
+
+#### SP initiated:
+
+* Click **Test this application** in the Azure portal. This redirects to the ZDiscovery Sign-On URL, where you can initiate the login flow.
+
+* Go to the ZDiscovery Sign-On URL directly and initiate the login flow from there.
+
+#### IDP initiated:
+
+* Click **Test this application** in the Azure portal, and you should be automatically signed in to the ZDiscovery instance for which you set up SSO.
+
+You can also use Microsoft My Apps to test the application in any mode. When you click the ZDiscovery tile in My Apps, if the app is configured in SP mode, you're redirected to the application sign-on page to initiate the login flow; if it's configured in IDP mode, you're automatically signed in to the ZDiscovery instance for which you set up SSO. For more information about My Apps, see [Introduction to My Apps](../user-help/my-apps-portal-end-user-access.md).
+
+## Next steps
+
+Once you configure ZDiscovery, you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
aks Quick Kubernetes Deploy Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/learn/quick-kubernetes-deploy-cli.md
Title: 'Quickstart: Deploy an AKS cluster by using Azure CLI'
description: Learn how to quickly create a Kubernetes cluster, deploy an application, and monitor performance in Azure Kubernetes Service (AKS) using the Azure CLI. Previously updated : 04/29/2022 Last updated : 06/28/2022 #Customer intent: As a developer or cluster operator, I want to quickly create an AKS cluster and deploy an application so that I can see how to run and monitor applications using the managed Kubernetes service in Azure.
To learn more about creating a Windows Server node pool, see [Create an AKS clus
- This article requires version 2.0.64 or later of the Azure CLI. If using Azure Cloud Shell, the latest version is already installed. -- The identity you are using to create your cluster has the appropriate minimum permissions. For more details on access and identity for AKS, see [Access and identity options for Azure Kubernetes Service (AKS)](../concepts-identity.md).
+- The identity you are using to create your cluster has the appropriate minimum permissions. For more details on access and identity for AKS, see [Access and identity options for Azure Kubernetes Service (AKS)][aks-identity-concepts].
- If you have multiple Azure subscriptions, select the appropriate subscription ID in which the resources should be billed using the
-[az account](/cli/azure/account) command.
+[az account][az-account] command.
-- Verify *Microsoft.OperationsManagement* and *Microsoft.OperationalInsights* are registered on your subscription. To check the registration status:
+- Verify *Microsoft.OperationsManagement* and *Microsoft.OperationalInsights* providers are registered on your subscription. These are Azure resource providers required to support [Container insights][azure-monitor-containers]. To check the registration status, run the following commands:
- ```azurecli-interactive
+ ```azurecli
az provider show -n Microsoft.OperationsManagement -o table az provider show -n Microsoft.OperationalInsights -o table ```
- If they are not registered, register *Microsoft.OperationsManagement* and *Microsoft.OperationalInsights* using:
+ If they are not registered, register *Microsoft.OperationsManagement* and *Microsoft.OperationalInsights* using the following commands:
- ```azurecli-interactive
+ ```azurecli
az provider register --namespace Microsoft.OperationsManagement az provider register --namespace Microsoft.OperationalInsights ```
To learn more about creating a Windows Server node pool, see [Create an AKS clus
## Create a resource group
-An [Azure resource group](../../azure-resource-manager/management/overview.md) is a logical group in which Azure resources are deployed and managed. When you create a resource group, you are prompted to specify a location. This location is:
+An [Azure resource group][azure-resource-group] is a logical group in which Azure resources are deployed and managed. When you create a resource group, you are prompted to specify a location. This location is:
* The storage location of your resource group metadata. * Where your resources will run in Azure if you don't specify another region during resource creation.
The following output example resembles successful creation of the resource group
## Create AKS cluster
-Create an AKS cluster using the [az aks create][az-aks-create] command with the *--enable-addons monitoring* parameter to enable [Container insights][azure-monitor-containers]. The following example creates a cluster named *myAKSCluster* with one node:
+Create an AKS cluster using the [az aks create][az-aks-create] command with the *--enable-addons monitoring* parameter to enable [Container insights][azure-monitor-containers]. The following example creates a cluster named *myAKSCluster* with one node and enables a system-assigned managed identity:
```azurecli-interactive
-az aks create --resource-group myResourceGroup --name myAKSCluster --node-count 1 --enable-addons monitoring --generate-ssh-keys
+az aks create -g myResourceGroup -n myAKSCluster --enable-managed-identity --node-count 1 --enable-addons monitoring
``` After a few minutes, the command completes and returns JSON-formatted information about the cluster.
To manage a Kubernetes cluster, use the Kubernetes command-line client, [kubectl
az aks install-cli ```
-2. Configure `kubectl` to connect to your Kubernetes cluster using the [az aks get-credentials][az-aks-get-credentials] command. The following command:
- * Downloads credentials and configures the Kubernetes CLI to use them.
- * Uses `~/.kube/config`, the default location for the [Kubernetes configuration file][kubeconfig-file]. Specify a different location for your Kubernetes configuration file using *--file* argument.
+2. Configure `kubectl` to connect to your Kubernetes cluster using the [az aks get-credentials][az-aks-get-credentials] command. The following command:
+
+ * Downloads credentials and configures the Kubernetes CLI to use them.
+ * Uses `~/.kube/config`, the default location for the [Kubernetes configuration file][kubeconfig-file]. Specify a different location for your Kubernetes configuration file using the *--file* argument.
```azurecli-interactive az aks get-credentials --resource-group myResourceGroup --name myAKSCluster
Two [Kubernetes Services][kubernetes-service] are also created:
* An internal service for the Redis instance. * An external service to access the Azure Vote application from the internet.
-1. Create a file named `azure-vote.yaml`.
- * If you use the Azure Cloud Shell, this file can be created using `code`, `vi`, or `nano` as if working on a virtual or physical system
-1. Copy in the following YAML definition:
+1. Create a file named `azure-vote.yaml` and copy in the following manifest.
+
+ * If you use the Azure Cloud Shell, this file can be created using `code`, `vi`, or `nano` as if working on a virtual or physical system.
```yaml apiVersion: apps/v1
This quickstart is for introductory purposes. For guidance on a creating full so
<!-- LINKS - internal --> [kubernetes-concepts]: ../concepts-clusters-workloads.md [aks-monitor]: ../../azure-monitor/containers/container-insights-onboard.md
+[aks-identity-concepts]: ../concepts-identity.md
[aks-tutorial]: ../tutorial-kubernetes-prepare-app.md
+[azure-resource-group]: ../../azure-resource-manager/management/overview.md
+[az-account]: /cli/azure/account
[az-aks-browse]: /cli/azure/aks#az-aks-browse [az-aks-create]: /cli/azure/aks#az-aks-create [az-aks-get-credentials]: /cli/azure/aks#az-aks-get-credentials
aks Rdp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/rdp.md
This article shows you how to create an RDP connection with an AKS node using th
## Before you begin
+### [Azure CLI](#tab/azure-cli)
+ This article assumes that you have an existing AKS cluster with a Windows Server node. If you need an AKS cluster, see the article on [creating an AKS cluster with a Windows container using the Azure CLI][aks-quickstart-windows-cli]. You need the Windows administrator username and password for the Windows Server node you want to troubleshoot. You also need an RDP client such as [Microsoft Remote Desktop][rdp-mac]. If you need to reset the password you can use `az aks update` to change the password.
If you need to reset both the username and password, see [Reset Remote Desktop S
You also need the Azure CLI version 2.0.61 or later installed and configured. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI][install-azure-cli].
+### [Azure PowerShell](#tab/azure-powershell)
+
+This article assumes that you have an existing AKS cluster with a Windows Server node. If you need an AKS cluster, see the article on [creating an AKS cluster with a Windows container using the Azure PowerShell][aks-quickstart-windows-powershell]. You need the Windows administrator username and password for the Windows Server node you want to troubleshoot. You also need an RDP client such as [Microsoft Remote Desktop][rdp-mac].
+
+If you need to reset the password, you can use `Set-AzAksCluster` to change the password.
+
+```azurepowershell-interactive
+$cluster = Get-AzAksCluster -ResourceGroupName myResourceGroup -Name myAKSCluster
+$cluster.WindowsProfile.AdminPassword = $WINDOWS_ADMIN_PASSWORD
+$cluster | Set-AzAksCluster
+```
+
+If you need to reset both the username and password, see [Reset Remote Desktop Services or its administrator password in a Windows VM](/troubleshoot/azure/virtual-machines/reset-rdp).
+
+You also need Azure PowerShell version 7.5.0 or later installed and configured. Run `Get-InstalledModule -Name Az` to find the version. If you need to install or upgrade, see [Install Azure PowerShell][install-azure-powershell].
+++ ## Deploy a virtual machine to the same subnet as your cluster The Windows Server nodes of your AKS cluster don't have externally accessible IP addresses. To make an RDP connection, you can deploy a virtual machine with a publicly accessible IP address to the same subnet as your Windows Server nodes. The following example creates a virtual machine named *myVM* in the *myResourceGroup* resource group.
-First, get the subnet used by your Windows Server node pool. To get the subnet id, you need the name of the subnet. To get the name of the subnet, you need the name of the vnet. Get the vnet name by querying your cluster for its list of networks. To query the cluster, you need its name. You can get all of these by running the following in the Azure Cloud Shell:
+### [Azure CLI](#tab/azure-cli)
+
+First, get the subnet used by your Windows Server node pool. To get the subnet ID, you need the name of the subnet. To get the name of the subnet, you need the name of the VNet. Get the VNet name by querying your cluster for its list of networks. To query the cluster, you need its name. You can get all of these by running the following in the Azure Cloud Shell:
```azurecli-interactive CLUSTER_RG=$(az aks show -g myResourceGroup -n myAKSCluster --query nodeResourceGroup -o tsv)
The following example output shows the VM has been successfully created and disp
Record the public IP address of the virtual machine. You will use this address in a later step.
+### [Azure PowerShell](#tab/azure-powershell)
+
+First, get the subnet used by your Windows Server node pool. You need the name of the subnet and its address prefix. To get the name of the subnet, you need the name of the VNet. Get the VNet name by querying your cluster for its list of networks. To query the cluster, you need its name. You can get all of these by running the following in the Azure Cloud Shell:
+
+```azurepowershell-interactive
+$CLUSTER_RG = (Get-AzAksCluster -ResourceGroupName myResourceGroup -Name myAKSCluster).nodeResourceGroup
+$VNET_NAME = (Get-AzVirtualNetwork -ResourceGroupName $CLUSTER_RG).Name
+$ADDRESS_PREFIX = (Get-AzVirtualNetwork -ResourceGroupName $CLUSTER_RG).AddressSpace | Select-Object -ExpandProperty AddressPrefixes
+$SUBNET_NAME = (Get-AzVirtualNetwork -ResourceGroupName $CLUSTER_RG).Subnets[0].Name
+$SUBNET_ADDRESS_PREFIX = (Get-AzVirtualNetwork -ResourceGroupName $CLUSTER_RG).Subnets[0] | Select-Object -ExpandProperty AddressPrefix
+```
+
+Now that you have the VNet and subnet details, run the following commands in the same Azure Cloud Shell window to create the public IP address and VM:
+
+```azurepowershell-interactive
+$ipParams = @{
+ Name = 'myPublicIP'
+ ResourceGroupName = 'myResourceGroup'
+ Location = 'eastus'
+ AllocationMethod = 'Dynamic'
+ IpAddressVersion = 'IPv4'
+}
+New-AzPublicIpAddress @ipParams
+
+$vmParams = @{
+ ResourceGroupName = 'myResourceGroup'
+ Name = 'myVM'
+ Image = 'win2019datacenter'
+ Credential = Get-Credential azureuser
+ VirtualNetworkName = $VNET_NAME
+ AddressPrefix = $ADDRESS_PREFIX
+ SubnetName = $SUBNET_NAME
+ SubnetAddressPrefix = $SUBNET_ADDRESS_PREFIX
+ PublicIpAddressName = 'myPublicIP'
+}
+New-AzVM @vmParams
+
+(Get-AzPublicIpAddress -ResourceGroupName myResourceGroup -Name myPublicIP).IpAddress
+```
+
+The following example output shows the VM has been successfully created and displays the public IP address of the virtual machine.
+
+```console
+13.62.204.18
+```
+
+Record the public IP address of the virtual machine. You will use this address in a later step.
+++ ## Allow access to the virtual machine AKS node pool subnets are protected with NSGs (Network Security Groups) by default. To get access to the virtual machine, you'll have to enable access in the NSG.
AKS node pool subnets are protected with NSGs (Network Security Groups) by defau
> The NSGs are controlled by the AKS service. Any change you make to the NSG will be overwritten at any time by the control plane. >
-First, get the resource group and nsg name of the nsg to add the rule to:
+### [Azure CLI](#tab/azure-cli)
+
+First, get the resource group and name of the NSG to add the rule to:
```azurecli-interactive CLUSTER_RG=$(az aks show -g myResourceGroup -n myAKSCluster --query nodeResourceGroup -o tsv)
Then, create the NSG rule:
az network nsg rule create --name tempRDPAccess --resource-group $CLUSTER_RG --nsg-name $NSG_NAME --priority 100 --destination-port-range 3389 --protocol Tcp --description "Temporary RDP access to Windows nodes" ```
+### [Azure PowerShell](#tab/azure-powershell)
+
+First, get the resource group and name of the NSG to add the rule to:
+
+```azurepowershell-interactive
+$CLUSTER_RG = (Get-AzAksCluster -ResourceGroupName myResourceGroup -Name myAKSCluster).nodeResourceGroup
+$NSG_NAME = (Get-AzNetworkSecurityGroup -ResourceGroupName $CLUSTER_RG).Name
+```
+
+Then, create the NSG rule:
+
+```azurepowershell-interactive
+$nsgRuleParams = @{
+ Name = 'tempRDPAccess'
+ Access = 'Allow'
+ Direction = 'Inbound'
+ Priority = 100
+ SourceAddressPrefix = 'Internet'
+ SourcePortRange = '*'
+ DestinationAddressPrefix = '*'
+ DestinationPortRange = '3389'
+ Protocol = 'Tcp'
+ Description = 'Temporary RDP access to Windows nodes'
+}
+Get-AzNetworkSecurityGroup -Name $NSG_NAME -ResourceGroupName $CLUSTER_RG | Add-AzNetworkSecurityRuleConfig @nsgRuleParams | Set-AzNetworkSecurityGroup
+```
+++ ## Get the node address
+### [Azure CLI](#tab/azure-cli)
+ To manage a Kubernetes cluster, you use [kubectl][kubectl], the Kubernetes command-line client. If you use Azure Cloud Shell, `kubectl` is already installed. To install `kubectl` locally, use the [az aks install-cli][az-aks-install-cli] command:
-```azurecli-interactive
+```azurecli
az aks install-cli ```
To configure `kubectl` to connect to your Kubernetes cluster, use the [az aks ge
az aks get-credentials --resource-group myResourceGroup --name myAKSCluster ```
+### [Azure PowerShell](#tab/azure-powershell)
+
+To manage a Kubernetes cluster, you use [kubectl][kubectl], the Kubernetes command-line client. If you use Azure Cloud Shell, `kubectl` is already installed. To install `kubectl` locally, use the [Install-AzAksKubectl][install-azakskubectl] cmdlet:
+
+```azurepowershell
+Install-AzAksKubectl
+```
+
+To configure `kubectl` to connect to your Kubernetes cluster, use the [Import-AzAksCredential][import-azakscredential] cmdlet. This command downloads credentials and configures the Kubernetes CLI to use them.
+
+```azurepowershell-interactive
+Import-AzAksCredential -ResourceGroupName myResourceGroup -Name myAKSCluster
+```
+++ List the internal IP address of the Windows Server nodes using the [kubectl get][kubectl-get] command: ```console kubectl get nodes -o wide ```
-The follow example output shows the internal IP addresses of all the nodes in the cluster, including the Windows Server nodes.
+The following example output shows the internal IP addresses of all the nodes in the cluster, including the Windows Server nodes.
```console $ kubectl get nodes -o wide
You can now run any troubleshooting commands in the *cmd* window. Since Windows
## Remove RDP access
+### [Azure CLI](#tab/azure-cli)
+ When done, exit the RDP connection to the Windows Server node then exit the RDP session to the virtual machine. After you exit both RDP sessions, delete the virtual machine with the [az vm delete][az-vm-delete] command: ```azurecli-interactive
NSG_NAME=$(az network nsg list -g $CLUSTER_RG --query [].name -o tsv)
az network nsg rule delete --resource-group $CLUSTER_RG --nsg-name $NSG_NAME --name tempRDPAccess ```
+### [Azure PowerShell](#tab/azure-powershell)
+
+When done, exit the RDP connection to the Windows Server node then exit the RDP session to the virtual machine. After you exit both RDP sessions, delete the virtual machine with the [Remove-AzVM][remove-azvm] command:
+
+```azurepowershell-interactive
+Remove-AzVM -ResourceGroupName myResourceGroup -Name myVM
+```
+
+Then, remove the NSG rule:
+
+```azurepowershell-interactive
+$CLUSTER_RG = (Get-AzAksCluster -ResourceGroupName myResourceGroup -Name myAKSCluster).nodeResourceGroup
+$NSG_NAME = (Get-AzNetworkSecurityGroup -ResourceGroupName $CLUSTER_RG).Name
+```
+
+```azurepowershell-interactive
+Get-AzNetworkSecurityGroup -Name $NSG_NAME -ResourceGroupName $CLUSTER_RG | Remove-AzNetworkSecurityRuleConfig -Name tempRDPAccess | Set-AzNetworkSecurityGroup
+```
+++ ## Next steps If you need additional troubleshooting data, you can [view the Kubernetes master node logs][view-master-logs] or [Azure Monitor][azure-monitor-containers].
If you need additional troubleshooting data, you can [view the Kubernetes master
<!-- INTERNAL LINKS --> [aks-quickstart-windows-cli]: ./learn/quick-windows-container-deploy-cli.md
+[aks-quickstart-windows-powershell]: ./learn/quick-windows-container-deploy-powershell.md
[az-aks-install-cli]: /cli/azure/aks#az_aks_install_cli
+[install-azakskubectl]: /powershell/module/az.aks/install-azakskubectl
[az-aks-get-credentials]: /cli/azure/aks#az_aks_get_credentials
+[import-azakscredential]: /powershell/module/az.aks/import-azakscredential
[az-vm-delete]: /cli/azure/vm#az_vm_delete
+[remove-azvm]: /powershell/module/az.compute/remove-azvm
[azure-monitor-containers]: ../azure-monitor/containers/container-insights-overview.md [install-azure-cli]: /cli/azure/install-azure-cli
+[install-azure-powershell]: /powershell/azure/install-az-ps
[ssh-steps]: ssh.md
-[view-master-logs]: view-master-logs.md
+[view-master-logs]: view-master-logs.md
aks Windows Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/windows-faq.md
Instead of service principals, use managed identities, which are essentially wra
## How do I change the administrator password for Windows Server nodes on my cluster?
+### [Azure CLI](#tab/azure-cli)
+ When you create your AKS cluster, you specify the `--windows-admin-password` and `--windows-admin-username` parameters to set the administrator credentials for any Windows Server nodes on the cluster. If you didn't specify administrator credentials when you created a cluster by using the Azure portal or when setting `--vm-set-type VirtualMachineScaleSets` and `--network-plugin azure` by using the Azure CLI, the username defaults to *azureuser* and a randomized password. To change the administrator password, use the `az aks update` command:
az aks update \
> > When you're changing `--windows-admin-password`, the new password must be at least 14 characters and meet [Windows Server password requirements][windows-server-password].
+### [Azure PowerShell](#tab/azure-powershell)
+
+When you create your AKS cluster, you specify the `-WindowsProfileAdminUserPassword` and `-WindowsProfileAdminUserName` parameters to set the administrator credentials for any Windows Server nodes on the cluster. If you didn't specify administrator credentials when you created a cluster by using the Azure portal or when setting `-NodeVmSetType VirtualMachineScaleSets` and `-NetworkPlugin azure` by using Azure PowerShell, the username defaults to *azureuser* and a randomized password.
+
+To change the administrator password, use the `Set-AzAksCluster` command:
+
+```azurepowershell
+$cluster = Get-AzAksCluster -ResourceGroupName $RESOURCE_GROUP -Name $CLUSTER_NAME
+$cluster.WindowsProfile.AdminPassword = $NEW_PW
+$cluster | Set-AzAksCluster
+```
+
+> [!IMPORTANT]
+> Performing the `Set-AzAksCluster` operation upgrades only Windows Server node pools. Linux node pools are not affected.
+>
+> When you're changing the Windows administrator password, the new password must be at least 14 characters and meet [Windows Server password requirements][windows-server-password].
+++ ## How many node pools can I create? The AKS cluster can have a maximum of 100 node pools. You can have a maximum of 1,000 nodes across those node pools. For more information, see [Node pool limitations][nodepool-limitations].
api-management Api Management Howto Use Managed Service Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-howto-use-managed-service-identity.md
To configure an access policy using the portal:
### <a name="use-ssl-tls-certificate-from-azure-key-vault"></a>Obtain a custom TLS/SSL certificate for the API Management instance from Azure Key Vault You can use the system-assigned identity of an API Management instance to retrieve custom TLS/SSL certificates stored in Azure Key Vault. You can then assign these certificates to custom domains in the API Management instance. Keep these considerations in mind: -- The content type of the secret must be *application/x-pkcs12*.
+- The content type of the secret must be *application/x-pkcs12*. Learn more about custom domain [certificate requirements](configure-custom-domain.md?tabs=key-vault#domain-certificate-options).
- Use the Key Vault certificate secret endpoint, which contains the secret. > [!Important] > If you don't provide the object version of the certificate, API Management will automatically obtain the newer version of the certificate within four hours after it's updated in Key Vault.
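+
+As an illustration, the following Azure CLI sketch imports a PFX certificate, which produces a secret with the *application/x-pkcs12* content type, and then reads the certificate's secret identifier (`sid`), which is the Key Vault certificate secret endpoint that API Management references. The vault, certificate, and file names are placeholders.
+
+```azurecli
+# Import a PFX; Key Vault stores its secret with content type application/x-pkcs12
+az keyvault certificate import \
+  --vault-name contoso-kv \
+  --name contoso-gateway \
+  --file contoso-gateway.pfx
+
+# The secret identifier (sid) is the endpoint to reference from API Management
+az keyvault certificate show \
+  --vault-name contoso-kv \
+  --name contoso-gateway \
+  --query sid -o tsv
+```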
-The following example shows an Azure Resource Manager template that contains the following steps:
+The following example shows an Azure Resource Manager template that uses the system-assigned managed identity of an API Management service instance to retrieve a custom domain certificate from Key Vault.
+
+#### Prerequisites
+
+* An API Management service instance configured with a system-assigned managed identity. To create the instance, you can use an [Azure Quickstart Template](https://azure.microsoft.com/resources/templates/api-management-create-with-msi/).
+* An Azure Key Vault instance in the same resource group, hosting a certificate that will be used as a custom domain certificate in API Management.
+
+The template performs the following steps:
+
+1. Update the access policies of the Azure Key Vault instance and allow the API Management instance to obtain secrets from it.
+1. Update the API Management instance by setting a custom domain name through the certificate from the Key Vault instance.
-1. Create an API Management instance with a managed identity.
-2. Update the access policies of an Azure Key Vault instance and allow the API Management instance to obtain secrets from it.
-3. Update the API Management instance by setting a custom domain name through a certificate from the Key Vault instance.
+When you run the template, provide parameter values appropriate for your environment.
```json {
- "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
- "contentVersion": "1.0.0.0",
- "parameters": {
- "publisherEmail": {
- "type": "string",
- "minLength": 1,
- "metadata": {
- "description": "The email address of the owner of the service"
- }
- },
- "publisherName": {
+ "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
+ "contentVersion": "1.0.0.0",
+ "parameters": {
+ "apiManagementServiceName": {
"type": "string",
- "defaultValue": "Contoso",
- "minLength": 1,
- "metadata": {
- "description": "The name of the owner of the service"
- }
- },
- "sku": {
- "type": "string",
- "allowedValues": ["Developer",
- "Standard",
- "Premium"],
- "defaultValue": "Developer",
- "metadata": {
- "description": "The pricing tier of this API Management instance"
- }
- },
- "skuCount": {
- "type": "int",
- "defaultValue": 1,
- "metadata": {
- "description": "The instance size of this API Management instance."
+ "minLength": 8,
+ "metadata":{
+ "description": "The name of the API Management service"
} },
+ "publisherEmail": {
+ "type": "string",
+ "minLength": 1,
+ "metadata": {
+ "description": "The email address of the owner of the service"
+ }
+ },
+ "publisherName": {
+ "type": "string",
+ "minLength": 1,
+ "metadata": {
+ "description": "The name of the owner of the service"
+ }
+ },
+ "sku": {
+ "type": "string",
+ "allowedValues": ["Developer",
+ "Standard",
+ "Premium"],
+ "defaultValue": "Developer",
+ "metadata": {
+ "description": "The pricing tier of this API Management service"
+ }
+ },
+ "skuCount": {
+ "type": "int",
+ "defaultValue": 1,
+ "metadata": {
+ "description": "The instance size of this API Management service."
+ }
+ },
"keyVaultName": { "type": "string", "metadata": {
- "description": "Name of the vault"
- }
- },
- "proxyCustomHostname1": {
- "type": "string",
- "metadata": {
- "description": "Gateway custom hostname."
+ "description": "Name of the key vault"
} },
- "keyVaultIdToCertificate": {
- "type": "string",
- "metadata": {
- "description": "Reference to the Key Vault certificate. https://contoso.vault.azure.net/secrets/contosogatewaycertificate."
- }
- }
- },
- "variables": {
- "apiManagementServiceName": "[concat('apiservice', uniqueString(resourceGroup().id))]",
- "apimServiceIdentityResourceId": "[concat(resourceId('Microsoft.ApiManagement/service', variables('apiManagementServiceName')),'/providers/Microsoft.ManagedIdentity/Identities/default')]"
- },
- "resources": [{
+ "proxyCustomHostname1": {
+ "type": "string",
+ "metadata": {
+ "description": "Gateway custom hostname 1. Example: api.contoso.com"
+ }
+ },
+ "keyVaultIdToCertificate": {
+ "type": "string",
+ "metadata": {
+ "description": "Reference to the key vault certificate. Example: https://contoso.vault.azure.net/secrets/contosogatewaycertificate"
+ }
+ }
+ },
+ "variables": {
+ "apimServiceIdentityResourceId": "[concat(resourceId('Microsoft.ApiManagement/service', parameters('apiManagementServiceName')),'/providers/Microsoft.ManagedIdentity/Identities/default')]"
+ },
+ "resources": [
+ {
"apiVersion": "2021-08-01",
- "name": "[variables('apiManagementServiceName')]",
+ "name": "[parameters('apiManagementServiceName')]",
"type": "Microsoft.ApiManagement/service", "location": "[resourceGroup().location]", "tags": {
The following example shows an Azure Resource Manager template that contains the
{ "type": "Microsoft.KeyVault/vaults/accessPolicies", "name": "[concat(parameters('keyVaultName'), '/add')]",
- "apiVersion": "2015-06-01",
- "dependsOn": [
- "[resourceId('Microsoft.ApiManagement/service', variables('apiManagementServiceName'))]"
- ],
+ "apiVersion": "2018-02-14",
"properties": { "accessPolicies": [{
- "tenantId": "[reference(variables('apimServiceIdentityResourceId'), '2015-08-31-PREVIEW').tenantId]",
- "objectId": "[reference(variables('apimServiceIdentityResourceId'), '2015-08-31-PREVIEW').principalId]",
+ "tenantId": "[reference(variables('apimServiceIdentityResourceId'), '2018-11-30').tenantId]",
+ "objectId": "[reference(variables('apimServiceIdentityResourceId'), '2018-11-30').principalId]",
"permissions": { "secrets": ["get", "list"] } }] } },
- {
- "apiVersion": "2017-05-10",
+ {
+ "apiVersion": "2021-04-01",
+ "type": "Microsoft.Resources/deployments",
"name": "apimWithKeyVault",
- "type": "Microsoft.Resources/deployments",
- "dependsOn": [
- "[resourceId('Microsoft.ApiManagement/service', variables('apiManagementServiceName'))]"
+ "dependsOn": [
+ "[resourceId('Microsoft.ApiManagement/service', parameters('apiManagementServiceName'))]"
], "properties": { "mode": "incremental",
- "templateLink": {
- "uri": "https://raw.githubusercontent.com/solankisamir/arm-templates/master/basicapim.keyvault.json",
- "contentVersion": "1.0.0.0"
- },
- "parameters": {
- "publisherEmail": { "value": "[parameters('publisherEmail')]"},
- "publisherName": { "value": "[parameters('publisherName')]"},
- "sku": { "value": "[parameters('sku')]"},
- "skuCount": { "value": "[parameters('skuCount')]"},
- "proxyCustomHostname1": {"value" : "[parameters('proxyCustomHostname1')]"},
- "keyVaultIdToCertificate": {"value" : "[parameters('keyVaultIdToCertificate')]"}
- }
- }
- }]
+ "template": {
+ "$schema": "http://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
+ "contentVersion": "1.0.0.0",
+ "parameters": {},
+ "resources": [{
+ "apiVersion": "2021-08-01",
+ "name": "[parameters('apiManagementServiceName')]",
+ "type": "Microsoft.ApiManagement/service",
+ "location": "[resourceGroup().location]",
+ "tags": {
+ },
+ "sku": {
+ "name": "[parameters('sku')]",
+ "capacity": "[parameters('skuCount')]"
+ },
+ "properties": {
+ "publisherEmail": "[parameters('publisherEmail')]",
+ "publisherName": "[parameters('publisherName')]",
+ "hostnameConfigurations": [{
+ "type": "Proxy",
+ "hostName": "[parameters('proxyCustomHostname1')]",
+ "keyVaultId": "[parameters('keyVaultIdToCertificate')]"
+ }]
+ },
+ "identity": {
+ "type": "systemAssigned"
+ }
+ }]
+ }
+ }
+ }
+]
} ```
API Management is a trusted Microsoft service to the following resources. This a
|Azure Key Vault | [Trusted-access-to-azure-key-vault](../key-vault/general/overview-vnet-service-endpoints.md#trusted-services)|
|Azure Storage | [Trusted-access-to-azure-storage](../storage/common/storage-network-security.md?tabs=azure-portal#trusted-access-based-on-system-assigned-managed-identity)|
|Azure Service Bus | [Trusted-access-to-azure-service-bus](../service-bus-messaging/service-bus-ip-filtering.md#trusted-microsoft-services)|
-|Azure Event Hub | [Trused-access-to-azure-event-hub](../event-hubs/event-hubs-ip-filtering.md#trusted-microsoft-services)|
+|Azure Event Hubs | [Trusted-access-to-azure-event-hubs](../event-hubs/event-hubs-ip-filtering.md#trusted-microsoft-services)|
## Create a user-assigned managed identity
Keep these considerations in mind:
For the complete template, see [API Management with Key Vault based SSL using User Assigned Identity](https://github.com/Azure/azure-quickstart-templates/blob/master/quickstarts/microsoft.apimanagement/api-management-key-vault-create/azuredeploy.json).
-In this template, you will deploy:
+In this template, you'll deploy:
* Azure API Management instance * Azure user-assigned managed identity * Azure Key Vault for storing the SSL/TLS certificate
-To run the deployment automatically, click the following button:
+To run the deployment automatically, select the following button:
[![Deploy to Azure](../media/template-deployments/deploy-to-azure.svg)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2Fquickstarts%2Fmicrosoft.apimanagement%2Fapi-management-key-vault-create%2Fazuredeploy.json)
api-management Graphql Policies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/graphql-policies.md
This article provides a reference for API Management policies to validate and re
The `validate-graphql-request` policy validates the GraphQL request and authorizes access to specific query paths. An invalid query is a "request error". Authorization is only done for valid requests. -- **Permissions** Because GraphQL queries use a flattened schema: * Permissions may be applied at any leaf node of an output type:
Because GraphQL queries use a flattened schema:
* Fragments * Unions * Interfaces
- * The schema element
+ * The schema element
**Authorize element** Configure the `authorize` element to set an appropriate authorization rule for one or more paths.
Configure the `authorize` element to set an appropriate authorization rule for o
**Introspection system** The policy for path=`/__*` is the [introspection](https://graphql.org/learn/introspection/) system. You can use it to reject introspection requests (`__schema`, `__type`, etc.). + ### Policy statement ```xml
The `set-graphql-resolver` policy retrieves or sets data for a GraphQL field in
<set-graphql-resolver parent-type="type" field="field"> <http-data-source> <http-request>
- <set-method>HTTP method</set-method>
+ <set-method>...set-method policy configuration...</set-method>
<set-url>URL</set-url>
- [...]
+ <set-header>...set-header policy configuration...</set-header>
+ <set-body>...set-body policy configuration...</set-body>
+ <authentication-certificate>...authentication-certificate policy configuration...</authentication-certificate>
</http-request> <http-response>
- [...]
+ <json-to-xml>...json-to-xml policy configuration...</json-to-xml>
+ <xml-to-json>...xml-to-json policy configuration...</xml-to-json>
+ <find-and-replace>...find-and-replace policy configuration...</find-and-replace>
</http-response> </http-data-source> </set-graphql-resolver>
type User {
| | | -- | | `set-graphql-resolver` | Root element. | Yes | | `http-data-source` | Configures the HTTP request and optionally the HTTP response that are used to resolve data for the given `parent-type` and `field`. | Yes |
-| `http-request` | Specifies a URL and child policies to configure the resolver's HTTP request. Each of the following policies can be specified at most once in the element. <br/><br/>Required policy: [set-method](api-management-advanced-policies.md#SetRequestMethod)<br/><br/>Optional policies: [set-header](api-management-transformation-policies.md#SetHTTPheader), [set-body](api-management-transformation-policies.md#SetBody), [authentication-certificate](api-management-authentication-policies.md#ClientCertificate) | Yes |
-| `set-url` | The URL of the resolver's HTTP request. | Yes |
-| `http-response` | Optionally specifies child policies to configure the resolver's HTTP response. If not specified, the response is returned as a raw string. Each of the following policies can be specified at most once. <br/><br/>Optional policies: [set-body](api-management-transformation-policies.md#SetBody), [json-to-xml](api-management-transformation-policies.md#ConvertJSONtoXML), [xml-to-json](api-management-transformation-policies.md#ConvertXMLtoJSON), [find-and-replace](api-management-transformation-policies.md#Findandreplacestringinbody) | No |
+| `http-request` | Specifies a URL and child policies to configure the resolver's HTTP request. Each child element can be specified at most once. | Yes |
+| `set-method`| Method of the resolver's HTTP request, configured using the [set-method](api-management-advanced-policies.md#SetRequestMethod) policy. | Yes |
+| `set-url` | URL of the resolver's HTTP request. | Yes |
+| `set-header` | Header set in the resolver's HTTP request, configured using the [set-header](api-management-transformation-policies.md#SetHTTPheader) policy. | No |
+| `set-body` | Body set in the resolver's HTTP request, configured using the [set-body](api-management-transformation-policies.md#SetBody) policy. | No |
+| `authentication-certificate` | Client certificate presented in the resolver's HTTP request, configured using the [authentication-certificate](api-management-authentication-policies.md#ClientCertificate) policy. | No |
+| `http-response` | Optionally specifies child policies to configure the resolver's HTTP response. If not specified, the response is returned as a raw string. Each child element can be specified at most once. | No |
+| `json-to-xml` | Transforms the resolver's HTTP response using the [json-to-xml](api-management-transformation-policies.md#ConvertJSONtoXML) policy. | No |
+| `xml-to-json` | Transforms the resolver's HTTP response using the [xml-to-json](api-management-transformation-policies.md#ConvertXMLtoJSON) policy. | No |
+| `find-and-replace` | Transforms the resolver's HTTP response using the [find-and-replace](api-management-transformation-policies.md#Findandreplacestringinbody) policy. | No |
+ ### Attributes
api-management How To Deploy Self Hosted Gateway Docker https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/how-to-deploy-self-hosted-gateway-docker.md
This article provides the steps for deploying self-hosted gateway component of A
docker run -d -p 80:8080 -p 443:8081 --name <gateway-name> --env-file env.conf mcr.microsoft.com/azure-api-management/gateway:<tag> ```
-9. Execute the command. The command instructs your Docker environment to run the container using a [container image](https://aka.ms/apim/sputnik/registry-portal) from the Microsoft Artifact Registry, and to map the container's HTTP (8080) and HTTPS (8081) ports to ports 80 and 443 on the host.
+9. Execute the command. The command instructs your Docker environment to run the container using a [container image](https://aka.ms/apim/shgw/registry-portal) from the Microsoft Artifact Registry, and to map the container's HTTP (8080) and HTTPS (8081) ports to ports 80 and 443 on the host.
10. Run the below command to check if the gateway container is running: ```console docker ps
api-management How To Deploy Self Hosted Gateway Kubernetes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/how-to-deploy-self-hosted-gateway-kubernetes.md
This article describes the steps for deploying the self-hosted gateway component
6. Select the **\<gateway-name\>.yml** file link and download the YAML file. 7. Select the **copy** icon at the lower-right corner of the **Deploy** text box to save the `kubectl` commands to the clipboard. 8. Paste commands to the terminal (or command) window. The first command creates a Kubernetes secret that contains the access token generated in step 4. The second command applies the configuration file downloaded in step 6 to the Kubernetes cluster and expects the file to be in the current directory.
-9. Run the commands to create the necessary Kubernetes objects in the [default namespace](https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/) and start self-hosted gateway pods from the [container image](https://aka.ms/apim/sputnik/registry-portal) downloaded from the Microsoft Artifact Registry.
+9. Run the commands to create the necessary Kubernetes objects in the [default namespace](https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/) and start self-hosted gateway pods from the [container image](https://aka.ms/apim/shgw/registry-portal) downloaded from the Microsoft Artifact Registry.
10. Run the following command to check if the deployment succeeded. Note that it might take a little time for all the objects to be created and for the pods to initialize. ```console
api-management Self Hosted Gateway Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/self-hosted-gateway-overview.md
Deploying self-hosted gateways into the same environments where the backend API
## Packaging and features
-The self-hosted gateway is a containerized, functionally equivalent version of the managed gateway deployed to Azure as part of every API Management service. The self-hosted gateway is available as a Linux-based Docker [container image](https://aka.ms/apim/sputnik/registry-portal) from the Microsoft Artifact Registry. It can be deployed to Docker, Kubernetes, or any other container orchestration solution running on a server cluster on premises, cloud infrastructure, or for evaluation and development purposes, on a personal computer. You can also deploy the self-hosted gateway as a cluster extension to an [Azure Arc-enabled Kubernetes cluster](./how-to-deploy-self-hosted-gateway-azure-arc.md).
+The self-hosted gateway is a containerized, functionally equivalent version of the managed gateway deployed to Azure as part of every API Management service. The self-hosted gateway is available as a Linux-based Docker [container image](https://aka.ms/apim/shgw/registry-portal) from the Microsoft Artifact Registry. It can be deployed to Docker, Kubernetes, or any other container orchestration solution running on a server cluster on premises, cloud infrastructure, or for evaluation and development purposes, on a personal computer. You can also deploy the self-hosted gateway as a cluster extension to an [Azure Arc-enabled Kubernetes cluster](./how-to-deploy-self-hosted-gateway-azure-arc.md).
### Known limitations
api-management Set Edit Policies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/set-edit-policies.md
To configure a policy:
</on-error> </policies> ```
+ > [!NOTE]
+ > Set a policy's elements and child elements in the order provided in the policy statement.
+ 1. Select **Save** to propagate changes to the API Management gateway immediately. The **ip-filter** policy now appears in the **Inbound processing** section.
app-service Manage Backup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/manage-backup.md
There are two types of backups in App Service. Automatic backups made for your a
| [Storage account](../storage/index.yml) required | No. | Yes. | | Backup frequency | Hourly, not configurable. | Configurable. | | Retention | 30 days, not configurable. | 0-30 days or indefinite. |
-| Donwloadable | No. | Yes, as Azure Storage blobs. |
+| Downloadable | No. | Yes, as Azure Storage blobs. |
| Partial backups | Not supported. | Supported. | <!-
app-service Manage Move Across Regions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/manage-move-across-regions.md
Certain resources, such as imported certificates or hybrid connections, contain
1. [Create a backup of the source app](manage-backup.md). 1. [Create an app in a new App Service plan, in the target region](app-service-plan-manage.md#create-an-app-service-plan). 2. [Restore the backup in the target app](manage-backup.md).
-2. If you use a custom domain, [bind it preemptively to the target app](manage-custom-dns-migrate-domain.md#bind-the-domain-name-preemptively) with `awverify.` and [enable the domain in the target app](manage-custom-dns-migrate-domain.md#enable-the-domain-for-your-app).
+2. If you use a custom domain, [bind it preemptively to the target app](manage-custom-dns-migrate-domain.md#bind-the-domain-name-preemptively) with `asuid.` and [enable the domain in the target app](manage-custom-dns-migrate-domain.md#enable-the-domain-for-your-app).
3. Configure everything else in your target app to be the same as the source app and verify your configuration. 4. When you're ready for the custom domain to point to the target app, [remap the domain name](manage-custom-dns-migrate-domain.md#remap-the-active-dns-name).
app-service Quickstart Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/quickstart-python.md
Having issues? [Let us know](https://aka.ms/PythonAppServiceQuickstartFeedback).
To host your application in Azure, you need to create an Azure App Service web app. You can create a web app using the [Azure portal](https://portal.azure.com/), [VS Code](https://code.visualstudio.com/) using the [Azure Tools extension pack](https://marketplace.visualstudio.com/items?itemName=ms-vscode.vscode-node-azure-pack), or the Azure CLI.
-### [Azure portal](#tab/azure-portal)
-
-Sign in to the [Azure portal](https://portal.azure.com/) and follow these steps to create your Azure App Service resources.
+### [Azure CLI](#tab/azure-cli)
-| Instructions | Screenshot |
-|:-|--:|
-| [!INCLUDE [Create app service step 1](<./includes/quickstart-python/create-app-service-azure-portal-1.md>)] | :::image type="content" source="./media/quickstart-python/create-app-service-azure-portal-1-240px.png" alt-text="A screenshot of how to use the search box in the top tool bar to find App Services in Azure." lightbox="./media/quickstart-python/create-app-service-azure-portal-1.png"::: |
-| [!INCLUDE [Create app service step 1](<./includes/quickstart-python/create-app-service-azure-portal-2.md>)] | :::image type="content" source="./media/quickstart-python/create-app-service-azure-portal-2-240px.png" alt-text="A screenshot of the location of the Create button on the App Services page in the Azure portal." lightbox="./media/quickstart-python/create-app-service-azure-portal-2.png"::: |
-| [!INCLUDE [Create app service step 1](<./includes/quickstart-python/create-app-service-azure-portal-3.md>)] | :::image type="content" source="./media/quickstart-python/create-app-service-azure-portal-3-240px.png" alt-text="A screenshot of how to fill out the form to create a new App Service in the Azure portal." lightbox="./media/quickstart-python/create-app-service-azure-portal-3.png"::: |
-| [!INCLUDE [Create app service step 1](<./includes/quickstart-python/create-app-service-azure-portal-4.md>)] | :::image type="content" source="./media/quickstart-python/create-app-service-azure-portal-4-240px.png" alt-text="A screenshot of how to select the basic app service plan in the Azure portal." lightbox="./media/quickstart-python/create-app-service-azure-portal-4.png"::: |
-| [!INCLUDE [Create app service step 1](<./includes/quickstart-python/create-app-service-azure-portal-5.md>)] | :::image type="content" source="./media/quickstart-python/create-app-service-azure-portal-5-240px.png" alt-text="A screenshot of the location of the Review plus Create button in the Azure portal." lightbox="./media/quickstart-python/create-app-service-azure-portal-5.png"::: |
### [VS Code](#tab/vscode-aztools)
code .
| [!INCLUDE [Create app service step 8](<./includes/quickstart-python/create-app-service-visual-studio-code-8.md>)] | :::image type="content" source="./media/quickstart-python/create-app-service-visual-studio-code-8-240-px.png" alt-text="A screenshot of a dialog box in VS Code asking if you want to update your workspace to run build commands." lightbox="./media/quickstart-python/create-app-service-visual-studio-code-8.png"::: | | [!INCLUDE [Create app service step 9](<./includes/quickstart-python/create-app-service-visual-studio-code-9.md>)] | :::image type="content" source="./media/quickstart-python/create-app-service-visual-studio-code-9-240-px.png" alt-text="A screenshot showing the confirmation dialog when the app code has been deployed to Azure." lightbox="./media/quickstart-python/create-app-service-visual-studio-code-9.png"::: |
-### [Azure CLI](#tab/azure-cli)
+### [Azure portal](#tab/azure-portal)
+Sign in to the [Azure portal](https://portal.azure.com/) and follow these steps to create your Azure App Service resources.
+
+| Instructions | Screenshot |
+|:-|--:|
+| [!INCLUDE [Create app service step 1](<./includes/quickstart-python/create-app-service-azure-portal-1.md>)] | :::image type="content" source="./media/quickstart-python/create-app-service-azure-portal-1-240px.png" alt-text="A screenshot of how to use the search box in the top tool bar to find App Services in Azure." lightbox="./media/quickstart-python/create-app-service-azure-portal-1.png"::: |
+| [!INCLUDE [Create app service step 1](<./includes/quickstart-python/create-app-service-azure-portal-2.md>)] | :::image type="content" source="./media/quickstart-python/create-app-service-azure-portal-2-240px.png" alt-text="A screenshot of the location of the Create button on the App Services page in the Azure portal." lightbox="./media/quickstart-python/create-app-service-azure-portal-2.png"::: |
+| [!INCLUDE [Create app service step 1](<./includes/quickstart-python/create-app-service-azure-portal-3.md>)] | :::image type="content" source="./media/quickstart-python/create-app-service-azure-portal-3-240px.png" alt-text="A screenshot of how to fill out the form to create a new App Service in the Azure portal." lightbox="./media/quickstart-python/create-app-service-azure-portal-3.png"::: |
+| [!INCLUDE [Create app service step 1](<./includes/quickstart-python/create-app-service-azure-portal-4.md>)] | :::image type="content" source="./media/quickstart-python/create-app-service-azure-portal-4-240px.png" alt-text="A screenshot of how to select the basic app service plan in the Azure portal." lightbox="./media/quickstart-python/create-app-service-azure-portal-4.png"::: |
+| [!INCLUDE [Create app service step 1](<./includes/quickstart-python/create-app-service-azure-portal-5.md>)] | :::image type="content" source="./media/quickstart-python/create-app-service-azure-portal-5-240px.png" alt-text="A screenshot of the location of the Review plus Create button in the Azure portal." lightbox="./media/quickstart-python/create-app-service-azure-portal-5.png"::: |
applied-ai-services Try Sample Label Tool https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/quickstarts/try-sample-label-tool.md
Train a custom model to analyze and extract data from forms and documents specif
### Prerequisites for training a custom form model
-* An Azure Storage blob container that contains a set of training data. Make sure all the training documents are of the same format. If you have forms in multiple formats, organize them into subfolders based on common format. For this project, you can use our [sample data set](https://github.com/Azure-Samples/cognitive-services-REST-api-samples/blob/master/curl/form-recognizer/sample_data_without_labels.zip). If you don't know how to create an Azure storage account with a container, follow the [Azure Storage quickstart for Azure portal](../../../storage/blobs/storage-quickstart-blobs-portal.md).
+* An Azure Storage blob container that contains a set of training data. Make sure all the training documents are of the same format. If you have forms in multiple formats, organize them into subfolders based on common format. For this project, you can use our [sample data set](https://github.com/Azure-Samples/cognitive-services-REST-api-samples/blob/master/curl/form-recognizer/sample_data_without_labels.zip).
+
+* If you don't know how to create an Azure storage account with a container, follow the [Azure Storage quickstart for Azure portal](../../../storage/blobs/storage-quickstart-blobs-portal.md).
* Configure CORS
applied-ai-services Try V3 Form Recognizer Studio https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/quickstarts/try-v3-form-recognizer-studio.md
In the following example, we use the General Documents feature. The steps to use
1. This step is a one-time process unless you've already selected the service resource from prior use. Select your Azure subscription, resource group, and resource. (You can change the resources anytime in "Settings" in the top menu.) Review and confirm your selections.
-1. Select the Analyze command to run analysis on the sample document or try your document by using the Add command.
+1. Select the Analyze button to run analysis on the sample document or try your document by using the Add command.
1. Use the controls at the bottom of the screen to zoom in and out and rotate the document view.
applied-ai-services Try V3 Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/quickstarts/try-v3-rest-api.md
Previously updated : 06/06/2022 Last updated : 06/28/2022
To learn more about Form Recognizer features and development options, visit our
**Document Analysis**
-* 🆕 Read—Analyze and extract printed (typeface) and handwritten text lines, words, locations, and detected languages.
+* 🆕 Read—Analyze and extract printed (typeface) and handwritten text lines, words, locations, and detected languages.
* 🆕General document—Analyze and extract text, tables, structure, key-value pairs, and named entities. * Layout—Analyze and extract tables, lines, words, and selection marks from documents, without the need to train a model. **Prebuilt Models**
-* 🆕 W-2—Analyze and extract fields from W-2 tax documents, using a pre-trained W-2 model.
+* 🆕 W-2—Analyze and extract fields from US W-2 tax documents (used to report income), using a pre-trained W-2 model.
* Invoices—Analyze and extract common fields from invoices, using a pre-trained invoice model. * Receipts—Analyze and extract common fields from receipts, using a pre-trained receipt model. * ID documents—Analyze and extract common fields from ID documents like passports or driver's licenses, using a pre-trained ID documents model.
To learn more about Form Recognizer features and development options, visit our
* Azure subscription - [Create one for free](https://azure.microsoft.com/free/cognitive-services)
-* [cURL](https://curl.haxx.se/windows/) installed.
+* The curl command-line tool installed.
+
+ * [Windows](https://curl.haxx.se/windows/)
+ * [Mac or Linux](https://learn2torials.com/thread/how-to-install-curl-on-mac-or-linux-(ubuntu)-or-windows)
* [PowerShell version 7.*+](/powershell/scripting/install/installing-powershell-on-windows?view=powershell-7.2&preserve-view=true), or a similar command-line application. To check your PowerShell version, type `Get-Host | Select-Object Version`.
-* A Cognitive Services or Form Recognizer resource. Once you have your Azure subscription, create a [single-service](https://portal.azure.com/#create/Microsoft.CognitiveServicesFormRecognizer) or [multi-service](https://portal.azure.com/#create/Microsoft.CognitiveServicesAllInOne) Form Recognizer resource in the Azure portal to get your key and endpoint. You can use the free pricing tier (`F0`) to try the service, and upgrade later to a paid tier for production.
+* A Form Recognizer (single-service) or Cognitive Services (multi-service) resource. Once you have your Azure subscription, create a [single-service](https://portal.azure.com/#create/Microsoft.CognitiveServicesFormRecognizer) or [multi-service](https://portal.azure.com/#create/Microsoft.CognitiveServicesAllInOne) Form Recognizer resource in the Azure portal to get your key and endpoint. You can use the free pricing tier (`F0`) to try the service, and upgrade later to a paid tier for production.
> [!TIP] > Create a Cognitive Services resource if you plan to access multiple cognitive services under a single endpoint/key. For Form Recognizer access only, create a Form Recognizer resource. Please note that you'll need a single-service resource if you intend to use [Azure Active Directory authentication](../../../active-directory/authentication/overview-authentication.md).
To learn more about Form Recognizer features and development options, visit our
* After your resource deploys, select **Go to resource**. You need the key and endpoint from the resource you create to connect your application to the Form Recognizer API. You'll paste your key and endpoint into the code below later in the quickstart: :::image type="content" source="../media/containers/keys-and-endpoint.png" alt-text="Screenshot: keys and endpoint location in the Azure portal.":::
-
+ ## Analyze documents and get results
- Form Recognizer v3.0 consolidates the analyze document (POST) and get result (GET) requests into single operations. The `modelId` is used for POST and `resultId` for GET operations.
+ A POST request is used to analyze documents with a prebuilt or custom model. A GET request is used to retrieve the result of a document analysis call. The `modelId` is used with POST and `resultId` with GET operations.
### Analyze document (POST Request)
curl -v -i POST "{endpoint}/formrecognizer/documentModels/{modelID}:analyze?api-
| ID Documents | prebuilt-idDocument | [Sample ID document](https://raw.githubusercontent.com/Azure-Samples/cognitive-services-REST-api-samples/master/curl/form-recognizer/rest-api/identity_documents.png) | | Business Cards | prebuilt-businessCard | [Sample business card](https://raw.githubusercontent.com/Azure-Samples/cognitive-services-REST-api-samples/de5e0d8982ab754823c54de47a47e8e499351523/curl/form-recognizer/rest-api/business_card.jpg) |
-#### Operation-Location
+#### POST response
-You'll receive a `202 (Success)` response that includes an **Operation-Location** header. The value of this header contains a `resultID` that can be queried to get the status of the asynchronous operation:
+You'll receive a `202 (Success)` response that includes an **Operation-location** header. The value of this header contains a `resultID` that can be queried to get the status of the asynchronous operation:
:::image type="content" source="../media/quickstarts/operation-location-result-id.png" alt-text="{alt-text}":::
You'll receive a `202 (Success)` response that includes an **Operation-Location*
After you've called the [**Analyze document**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-06-30-preview/operations/AnalyzeDocument) API, call the [**Get analyze result**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-06-30-preview/operations/GetAnalyzeDocumentResult) API to get the status of the operation and the extracted data. Before you run the command, make these changes:
+1. Replace `{POST response}` with the **Operation-location** header value from the [POST response](#post-response).
-1. Replace `{endpoint}` with the endpoint value from your Form Recognizer instance in the Azure portal.
1. Replace `{key}` with the key value from your Form Recognizer instance in the Azure portal.
-1. Replace `{modelID}` with the same modelID you used to analyze your document.
-1. Replace `{resultID}` with the result ID from the [Operation-Location](#operation-location) header.
+ <!-- markdownlint-disable MD024 --> #### GET request ```bash
-curl -v -X GET "{endpoint}/formrecognizer/documentModels/{modelID}/analyzeResults/{resultId}?api-version=2022-06-30-preview" -H "Ocp-Apim-Subscription-Key: {key}"
+curl -v -X GET "{POST response}" -H "Ocp-Apim-Subscription-Key: {key}"
``` #### Examine the response
automation Source Control Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/source-control-integration.md
Use this procedure to configure source control using the Azure portal.
|Publish Runbook | Setting of On if runbooks are automatically published after synchronization from source control, and Off otherwise. | |Description | Text specifying additional details about the source control. |
- <sup>1</sup> To enable Auto Sync when configuring source control integration with Azure DevOps, you must be a Project Administrator.
+ <sup>1</sup> To enable Auto Sync when configuring source control integration with Azure DevOps, you must be a Project Administrator.<br/>
+ Auto Sync does not work with Automation Private Link. If you enable Private Link, source control webhook invocations fail because the webhook traffic originates from outside the network.
:::image type="content" source="./media/source-control-integration/source-control-summary-inline.png" alt-text="Screenshot that describes the Source control summary." lightbox="./media/source-control-integration/source-control-summary-expanded.png"::: > [!NOTE]
-> The login for your source control repository might be different from your login for the Azure portal. Ensure that you are logged in with the correct account for your source control repository when configuring source control. If there is a doubt, open a new tab in your browser, log out from **dev.azure.com**, **visualstudio.com**, or **github.com**, and try reconnecting to source control.
+> The login for your source control repository might be different from your login for the Azure portal. Ensure that you are logged in with the correct account for your source control repository when configuring source control. If there is a doubt, open a new tab in your browser, log out from **dev.azure.com**, **visualstudio.com**, or **github.com**, and try reconnecting to source control.
### Configure source control in PowerShell
azure-arc Create Data Controller Using Kubernetes Native Tools https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/create-data-controller-using-kubernetes-native-tools.md
Title: Create a Data Controller using Kubernetes tools
-description: Create a Data Controller using Kubernetes tools
+ Title: Create a data controller using Kubernetes tools
+description: Create a data controller using Kubernetes tools
Last updated 11/03/2021
-# Create Azure Arc data controller using Kubernetes tools
+# Create Azure Arc-enabled data controller using Kubernetes tools
+A data controller manages Azure Arc-enabled data services for a Kubernetes cluster. This article describes how to use Kubernetes tools to create a data controller.
## Prerequisites Review the topic [Plan an Azure Arc-enabled data services deployment](plan-azure-arc-data-services.md) for overview information.
-To create the Azure Arc data controller using Kubernetes tools you will need to have the Kubernetes tools installed. The examples in this article will use `kubectl`, but similar approaches could be used with other Kubernetes tools such as the Kubernetes dashboard, `oc`, or `helm` if you are familiar with those tools and Kubernetes yaml/json.
+To create the data controller using Kubernetes tools, you need to have those tools installed. The examples in this article use `kubectl`, but similar approaches could be used with other Kubernetes tools such as the Kubernetes dashboard, `oc`, or `helm` if you are familiar with those tools and Kubernetes yaml/json.
[Install the kubectl tool](https://kubernetes.io/docs/tasks/tools/install-kubectl/) > [!NOTE]
-> Some of the steps to create the Azure Arc data controller that are indicated below require Kubernetes cluster administrator permissions. If you are not a Kubernetes cluster administrator, you will need to have the Kubernetes cluster administrator perform these steps on your behalf.
+> Some of the steps to create the data controller that are indicated below require Kubernetes cluster administrator permissions. If you are not a Kubernetes cluster administrator, you will need to have the Kubernetes cluster administrator perform these steps on your behalf.
### Cleanup from past installations
-If you installed the Azure Arc data controller in the past on the same cluster and deleted the Azure Arc data controller, there may be some cluster level objects that would still need to be deleted.
+If you installed the data controller in the past on the same cluster and deleted the data controller, there may be some cluster level objects that would still need to be deleted.
For some of the tasks, you'll need to replace `{namespace}` with the value for your namespace. Substitute the name of the namespace the data controller was deployed in into `{namespace}`. If unsure, get the name of the `mutatingwebhookconfiguration` using `kubectl get mutatingwebhookconfiguration`.
-Run the following commands to delete the Azure Arc data controller cluster level objects:
+Run the following commands to delete the data controller cluster level objects:
```console # Cleanup azure arc data service artifacts
kubectl delete mutatingwebhookconfiguration arcdata.microsoft.com-webhook-{names
## Overview
-Creating the Azure Arc data controller has the following high level steps:
+Creating the data controller has the following high-level steps:
- > [!IMPORTANT]
- > Some of the steps below require Kubernetes cluster administrator permissions.
-
-1. Create the custom resource definitions for the Arc data controller, Azure SQL managed instance, and PostgreSQL Hyperscale.
-1. Create a namespace in which the data controller will be created.
+1. Create a namespace in which the data controller will be created.
+1. Create the deployer service account.
1. Create the bootstrapper service including the replica set, service account, role, and role binding. 1. Create a secret for the data controller administrator username and password.
-1. Create the webhook deployment job, cluster role and cluster role binding.
1. Create the data controller.
-## Create the custom resource definitions
-
-Run the following command to create the custom resource definitions.
-
- > [!IMPORTANT]
- > Requires Kubernetes cluster administrator permissions.
-
-```console
-kubectl create -f https://raw.githubusercontent.com/microsoft/azure_arc/main/arc_data_services/deploy/yaml/custom-resource-definitions.yaml
-```
- ## Create a namespace in which the data controller will be created Run a command similar to the following to create a new, dedicated namespace in which the data controller will be created. In this example and the remainder of the examples in this article, a namespace name of `arc` will be used. If you choose to use a different name, then use the same name throughout.
openshift.io/sa.scc.supplemental-groups: 1000700001/10000
openshift.io/sa.scc.uid-range: 1000700001/10000 ```
-If other people will be using this namespace that are not cluster administrators, we recommend creating a namespace admin role and granting that role to those users through a role binding. The namespace admin should have full permissions on the namespace. More granular roles and example role bindings can be found on the [Azure Arc GitHub repository](https://github.com/microsoft/azure_arc/tree/main/arc_data_services/deploy/yaml/rbac).
+If other people who are not cluster administrators will be using this namespace, create a namespace admin role and grant that role to those users through a role binding. The namespace admin should have full permissions on the namespace. More granular roles and example role bindings can be found on the [Azure Arc GitHub repository](https://github.com/microsoft/azure_arc/tree/main/arc_data_services/deploy/yaml/rbac).
++
+## Create the deployer service account
+
+ > [!IMPORTANT]
+ > Requires Kubernetes permissions for creating service account, role binding, cluster role, cluster role binding, and all the RBAC permissions being granted to the service account.
+
+Save a copy of [arcdata-deployer.yaml](https://raw.githubusercontent.com/microsoft/azure_arc/release-arc-data/arc_data_services/arcdata-deployer.yaml), and replace the placeholder `{{NAMESPACE}}` in the file with the namespace created in the previous step, for example: `arc`. Run the following command to create the deployer service account with the edited file.
+
+```console
+kubectl apply --namespace arc -f arcdata-deployer.yaml
+```
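
To confirm the apply succeeded, you can list the service accounts in the namespace; the account name itself is defined inside arcdata-deployer.yaml:

```console
kubectl get serviceaccounts --namespace arc
```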
+ ## Create the bootstrapper service
-The bootstrapper service handles incoming requests for creating, editing, and deleting custom resources such as a data controller, SQL managed instances, or PostgreSQL Hyperscale server groups.
+The bootstrapper service handles incoming requests for creating, editing, and deleting custom resources such as a data controller.
-Run the following command to create a bootstrapper service, a service account for the bootstrapper service, and a role and role binding for the bootstrapper service account.
+Run the following command to create a "bootstrap" job to install the bootstrapper along with related cluster-scope and namespaced objects, such as custom resource definitions (CRDs), the service account, and the bootstrapper role.
```console
-kubectl create --namespace arc -f https://raw.githubusercontent.com/microsoft/azure_arc/main/arc_data_services/deploy/yaml/bootstrapper.yaml
+kubectl apply --namespace arc -f https://raw.githubusercontent.com/microsoft/azure_arc/release-arc-data/arc_data_services/deploy/yaml/bootstrap.yaml
```
-Verify that the bootstrapper pod is running using the following command. You may need to run it a few times until the status changes to `Running`.
+Use [uninstall.yaml](https://raw.githubusercontent.com/microsoft/azure_arc/release-arc-data/arc_data_services/deploy/yaml/uninstall.yaml) to uninstall the bootstrapper and related Kubernetes objects, except the CRDs.
+
+Verify that the bootstrapper pod is running using the following command.
```console
-kubectl get pod --namespace arc
+kubectl get pod --namespace arc -l app=bootstrapper
```
-The bootstrapper.yaml template file defaults to pulling the bootstrapper container image from the Microsoft Container Registry (MCR). If your environment does not have access directly to the Microsoft Container Registry, you can do the following:
+If the status is not _Running_, rerun the command until the status changes to _Running_.
+
+The bootstrap.yaml template file defaults to pulling the bootstrapper container image from the Microsoft Container Registry (MCR). If your environment can't directly access the Microsoft Container Registry, you can do the following:
- Follow the steps to [pull the container images from the Microsoft Container Registry and push them to a private container registry](offline-deployment.md).-- [Create an image pull secret](https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/#create-a-secret-by-providing-credentials-on-the-command-lin) for your private container registry.-- Add an image pull secret to the bootstrapper container. See example below.-- Change the image location for the bootstrapper image. See example below.-
-The example below assumes that you created a image pull secret name `arc-private-registry`.
-
-```yaml
-#Just showing only the relevant part of the bootstrapper.yaml template file here
- spec:
- serviceAccountName: sa-bootstrapper
- nodeSelector:
- kubernetes.io/os: linux
- imagePullSecrets:
- - name: arc-private-registry #Create this image pull secret if you are using a private container registry
- containers:
- - name: bootstrapper
- image: mcr.microsoft.com/arcdata/arc-bootstrapper:v1.1.0_2021-11-02 #Change this registry location if you are using a private container registry.
- imagePullPolicy: Always
-```
+- [Create an image pull secret](https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/#create-a-secret-by-providing-credentials-on-the-command-line) named `arc-private-registry` for your private container registry (see the sketch after this list).
+- Change the image URL for the bootstrapper image in the bootstrap.yaml file.
+- Replace `arc-private-registry` in the bootstrap.yaml file if a different name was used for the image pull secret.
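
A minimal sketch of creating the image pull secret named `arc-private-registry`; the registry server and credential values are placeholders for your private container registry:

```console
kubectl create secret docker-registry arc-private-registry \
  --docker-server=<private-registry-login-server> \
  --docker-username=<username> \
  --docker-password=<password> \
  --namespace arc
```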
## Create secrets for the metrics and logs dashboards
kubectl create --namespace arc -f C:\arc-data-services\controller-login-secret.y
Optionally, you can create SSL/TLS certificates for the logs and metrics dashboards. Follow the instructions at [Specify during Kubernetes native tools deployment](monitor-certificates.md).
-## Create the webhook deployment job, cluster role and cluster role binding
-
-First, create a copy of the [template file](https://raw.githubusercontent.com/microsoft/azure_arc/main/arc_data_services/deploy/yaml/web-hook.yaml) locally on your computer so that you can modify some of the settings.
-
-Edit the file and replace `{{namespace}}` in all places with the name of the namespace you created in the previous step. **Save the file.**
-
-Run the following command to create the cluster role and cluster role bindings.
-
- > [!IMPORTANT]
- > Requires Kubernetes cluster administrator permissions.
-
-```console
-kubectl create -n arc -f <path to the edited template file on your computer>
-```
- ## Create the data controller Now you are ready to create the data controller itself.
-First, create a copy of the [template file](https://raw.githubusercontent.com/microsoft/azure_arc/main/arc_data_services/deploy/yaml/data-controller.yaml) locally on your computer so that you can modify some of the settings.
+First, create a copy of the [template file](https://raw.githubusercontent.com/microsoft/azure_arc/release-arc-data/arc_data_services/deploy/yaml/data-controller.yaml) locally on your computer so that you can modify some of the settings.
Edit the following as needed:
Edit the following as needed:
- **name**: The default name of the data controller is `arc`, but you can change it if you want. - **displayName**: Set this to the same value as the name attribute at the top of the file. - **registry**: The Microsoft Container Registry is the default. If you are pulling the images from the Microsoft Container Registry and [pushing them to a private container registry](offline-deployment.md), enter the IP address or DNS name of your registry here.-- **dockerRegistry**: The image pull secret to use to pull the images from a private container registry if required.
+- **dockerRegistry**: The secret to use to pull the images from a private container registry if required.
- **repository**: The default repository on the Microsoft Container Registry is `arcdata`. If you are using a private container registry, enter the path the folder/repository containing the Azure Arc-enabled data services container images. - **imageTag**: The current latest version tag is defaulted in the template, but you can change it if you want to use an older version. - **logsui-certificate-secret**: The name of the secret created on the Kubernetes cluster for the logs UI certificate.
azure-arc Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/overview.md
Currently, the following Azure Arc-enabled data services are available:
For an introduction to how Azure Arc-enabled data services supports your hybrid work environment, see this introductory video:
-> [!VIDEO https://docs.microsoft.com/Shows//Inside-Azure-for-IT/Choose-the-right-data-solution-for-your-hybrid-environment/player?format=ny]
+> [!VIDEO https://docs.microsoft.com/Shows/Inside-Azure-for-IT/Choose-the-right-data-solution-for-your-hybrid-environment/player?format=ny]
## Always current
azure-arc Preview Testing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/preview-testing.md
+
+ Title: Azure Arc-enabled data services - Pre-release testing
+description: Experience pre-release versions of Azure Arc-enabled data services
++++++ Last updated : 06/28/2022++
+#Customer intent: As a data professional, I want to validate upcoming releases.
++
+# Pre-release testing
+
+To provide an opportunity for customers and partners to provide pre-release feedback, pre-release versions of Azure Arc-enabled data services are made available on a predictable schedule. This article describes how to install pre-release versions of Azure Arc-enabled data services and provide feedback to Microsoft.
+
+## Pre-release testing schedule
+
+Azure Arc-enabled data services releases on the second Tuesday of each month, commonly known as "Patch Tuesday". The pre-release versions are made available on a predictable schedule in alignment with that release date.
+
+- 14 days before the release date, the *test* pre-release version is made available.
+- 7 days before the release date, the *preview* pre-release version is made available.
+
+The main difference between the test and preview pre-release versions is usually quality and stability, but in exceptional cases new features may be introduced between the test and preview releases.
+
+Normally, pre-release version binaries are available around 10:00 AM Pacific Time. Documentation follows later in the day.
+
+## Artifacts for a pre-release version
+
+Each pre-release version ships with a set of artifacts that are designed to work together:
+
+- Container images hosted on the Microsoft Container Registry (MCR)
+ - `mcr.microsoft.com/arcdata/preview` is the repository that hosts the **preview** pre-release builds
+ - `mcr.microsoft.com/arcdata/test` is the repository that hosts the **test** pre-release builds (see the pull example after this list)
+
+ > [!NOTE]
+ > `mcr.microsoft.com/arcdata/` will continue to be the repository that hosts the final release builds.
+
+- Azure CLI extension hosted on Azure Blob Storage
+- Azure Data Studio extension hosted on Azure Blob Storage
+
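As a sketch, the pre-release repository names above combine with an image name and tag the same way as the release repository does. The image name and tag below are illustrative placeholders taken from elsewhere in this article, not a guaranteed artifact:

```console
docker pull mcr.microsoft.com/arcdata/test/arc-bootstrapper:v1.8.0_2022-06-07_5ba6b837
```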
+In addition to the above installable artifacts, the following are updated in Azure as needed:
+
+- New version of ARM API (occasionally)
+- New Azure portal accessible via a special URL query string parameter (see below for details)
+- New Arc-enabled Kubernetes extension version for Arc-enabled data services (applies to direct connectivity mode only)
+- Documentation updates on this page that describe the location and details of the above artifacts, the new features available, and any pre-release "read me" documentation
+
+## Installing pre-release versions
+
+### Install prerequisite tools
+
+To install a pre-release version, follow these prerequisite instructions:
+
+If you use the Azure CLI extension:
+
+- Uninstall the Azure CLI extension (`az extension remove -n arcdata`).
+- Download the latest pre-release Azure CLI extension `.whl` file from [https://aka.ms/az-cli-arcdata-ext](https://aka.ms/az-cli-arcdata-ext).
+- Install the latest pre-release Azure CLI extension (`az extension add -s <location of downloaded .whl file>`). See the combined example after this list.
+
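Combined, and assuming the downloaded file is named `arcdata-x.y.z-py2.py3-none-any.whl` (a placeholder), the sequence looks like:

```console
az extension remove -n arcdata
az extension add -s ./arcdata-x.y.z-py2.py3-none-any.whl
```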
+If you use the Azure Data Studio extension to install:
+
+- Uninstall the Azure Data Studio extension. Select the Extensions panel, select the **Azure Arc** extension, and then select **Uninstall**.
+- Download the latest pre-release Azure Data Studio extension .vsix file from [https://aka.ms/ads-arcdata-ext](https://aka.ms/ads-arcdata-ext).
+- Install the extension by selecting File -> Install Extension from VSIX package and then browsing to the download location of the .vsix file.
+
+### Install using Azure CLI
+
+> [!NOTE]
+> Deploying pre-release builds using direct connectivity mode from Azure CLI is not supported.
+
+#### Indirect connectivity mode
+
+If you install using the Azure CLI, follow the instructions to [create a custom configuration profile](create-custom-configuration-template.md). Once created, edit the custom configuration profile file and enter the `docker` property values as required, based on the information provided in the version history table on this page.
+
+For example:
+
+```json
+
+ "docker": {
+ "registry": "mcr.microsoft.com",
+ "repository": "arcdata/test",
+ "imageTag": "v1.8.0_2022-06-07_5ba6b837",
+ "imagePullPolicy": "Always"
+ },
+```
+
+Once the file is edited, use the command `az arcdata dc create` as explained in [create a custom configuration profile](create-custom-configuration-template.md).
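
A minimal sketch of that command, assuming the edited profile is in a folder named `./custom-profile` and the data controller is named `arc` (both placeholders); verify the exact parameters with `az arcdata dc create --help`:

```console
az arcdata dc create --path ./custom-profile --name arc --k8s-namespace arc --connectivity-mode indirect --use-k8s
```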
+
+### Install using Azure Data Studio
+
+> [!NOTE]
+> Deploying pre-release builds using direct connectivity mode from Azure Data Studio is not supported.
+
+#### Indirect connectivity mode
+
+If you use Azure Data Studio to install, complete the data controller deployment wizard as normal, except select **Script to notebook** at the end instead of **Deploy**. In the generated notebook, edit the `Set variables` cell to *add* the following lines:
+
+```python
+# choose between arcdata/test or arcdata/preview as appropriate
+os.environ["AZDATA_DOCKER_REPOSITORY"] = "arcdata/test"
+os.environ["AZDATA_DOCKER_TAG"] = "v1.8.0_2022-06-07_5ba6b837"
+```
+
+Run the notebook by selecting **Run All**.
+
+### Install using Azure portal
+
+Follow the instructions to [Arc-enable the Kubernetes cluster](create-data-controller-direct-prerequisites.md) as normal.
+
+Open the Azure portal by using this special URL: [https://portal.azure.com/?Microsoft_Azure_HybridData_Platform=BugBash](https://portal.azure.com/?Microsoft_Azure_HybridData_Platform=BugBash).
+
+Follow the instructions to [Create the Azure Arc data controller from Azure portal - Direct connectivity mode](create-data-controller-direct-azure-portal.md) except that when choosing a deployment profile, select **Custom template** in the **Kubernetes configuration template** drop-down. Set the repository to either `arcdata/test` or `arcdata/preview` as appropriate and enter the desired tag in the **Image tag** field. Fill out the rest of the custom cluster configuration template fields as normal.
+
+Complete the rest of the wizard as normal.
+
+When you deploy with this method, the most recent pre-release version will always be used.
+
+## Current preview release information
++
+## Provide feedback
+
+At this time, pre-release testing is supported for certain customers and partners that have established agreements with Microsoft. Participants have points of contact on the product engineering team. Email your points of contact with any issues that are found during pre-release testing.
+
+## Next steps
+
+[Release notes - Azure Arc-enabled data services](release-notes.md)
azure-arc Upgrade Data Controller Indirect Kubernetes Tools https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/upgrade-data-controller-indirect-kubernetes-tools.md
Title: Upgrade indirectly connected Azure Arc data controller using Kubernetes tools
-description: Article describes how to upgrade an indirectly connected Azure Arc data controller using Kubernetes tools
+ Title: Upgrade indirectly connected data controller for Azure Arc using Kubernetes tools
+description: Article describes how to upgrade an indirectly connected data controller for Azure Arc using Kubernetes tools
Last updated 05/27/2022
-# Upgrade an indirectly connected Azure Arc data controller using Kubernetes tools
+# Upgrade an indirectly connected Azure Arc-enabled data controller using Kubernetes tools
This article explains how to upgrade an indirectly connected Azure Arc-enabled data controller with Kubernetes tools.
During a data controller upgrade, portions of the data control plane such as Cus
In this article, you'll apply a .yaml file to:
-1. Specify a service account.
-1. Set the cluster roles.
-1. Set the cluster role bindings.
-1. Set the job.
+1. Create the service account for running upgrade.
+1. Upgrade the bootstrapper.
+1. Upgrade the data controller.
> [!NOTE] > Some of the data services tiers and modes are generally available and some are in preview.
In this article, you'll apply a .yaml file to:
## Prerequisites
-Prior to beginning the upgrade of the Azure Arc data controller, you'll need:
+Prior to beginning the upgrade of the data controller, you'll need:
- To connect and authenticate to a Kubernetes cluster - An existing Kubernetes context selected
You need an indirectly connected data controller with the `imageTag: v1.0.0_2021
## Install tools
-To upgrade the Azure Arc data controller using Kubernetes tools, you need to have the Kubernetes tools installed.
+To upgrade the data controller using Kubernetes tools, you need to have the Kubernetes tools installed.
The examples in this article use `kubectl`, but similar approaches could be used with other Kubernetes tools such as the Kubernetes dashboard, `oc`, or helm if you're familiar with those tools and Kubernetes yaml/json.
Found 2 valid versions. The current datacontroller version is <current-version>
... ```
-## Create or download .yaml file
-
-To upgrade the data controller, you'll apply a yaml file to the Kubernetes cluster. The example file for the upgrade is available in GitHub at <https://github.com/microsoft/azure_arc/blob/main/arc_data_services/upgrade/yaml/upgrade-indirect-k8s.yaml>.
-
-You can download the file - and other Azure Arc related demonstration files - by cloning the repository. For example:
-
-```azurecli
-git clone https://github.com/microsoft/azure-arc
-```
-
-For more information, see [Cloning a repository](https://docs.github.com/en/repositories/creating-and-managing-repositories/cloning-a-repository) in the GitHub docs.
-
-The following steps use files from the repository.
-
-In the yaml file, you'll replace ```{{namespace}}``` with your namespace.
- ## Upgrade data controller This section shows how to upgrade an indirectly connected data controller.
This section shows how to upgrade an indirectly connected data controller.
### Upgrade
-You'll need to connect and authenticate to a Kubernetes cluster and have an existing Kubernetes context selected prior to beginning the upgrade of the Azure Arc data controller.
-
-### Specify the service account
-
-The upgrade requires an elevated service account for the upgrade job.
-
-To specify the service account:
-
-1. Describe the service account in a .yaml file. The following example sets a name for `ServiceAccount` as `sa-arc-upgrade-worker`:
-
- :::code language="yaml" source="~/azure_arc_sample/arc_data_services/upgrade/yaml/upgrade-indirect-k8s.yaml" range="2-4":::
-
-1. Edit the file as needed.
+You'll need to connect and authenticate to a Kubernetes cluster and have an existing Kubernetes context selected prior to beginning the upgrade of the data controller.
-### Set the cluster roles
-A cluster role (`ClusterRole`) grants the service account permission to perform the upgrade.
+### Create the service account for running the upgrade
-1. Describe the cluster role and rules in a .yaml file. The following example defines a cluster role for `arc:cr-upgrade-worker` and allows all API groups, resources, and verbs.
+ > [!IMPORTANT]
+ > Requires Kubernetes permissions for creating service account, role binding, cluster role, cluster role binding, and all the RBAC permissions being granted to the service account.
- :::code language="yaml" source="~/azure_arc_sample/arc_data_services/upgrade/yaml/upgrade-indirect-k8s.yaml" range="7-9":::
+Save a copy of [arcdata-deployer.yaml](https://raw.githubusercontent.com/microsoft/azure_arc/release-arc-data/arc_data_services/arcdata-deployer.yaml), and replace the placeholder `{{NAMESPACE}}` in the file with the namespace of the data controller, for example: `arc`. Run the following command to create the deployer service account with the edited file.
-1. Edit the file as needed.
-
-### Set the cluster role binding
-
-A cluster role binding (`ClusterRoleBinding`) links the service account and the cluster role.
-
-1. Describe the cluster role binding in a .yaml file. The following example describes a cluster role binding for the service account.
-
- :::code language="yaml" source="~/azure_arc_sample/arc_data_services/upgrade/yaml/upgrade-indirect-k8s.yaml" range="20-21":::
-
-1. Edit the file as needed.
-
-### Specify the job
+```console
+kubectl apply --namespace arc -f arcdata-deployer.yaml
+```
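To confirm the deployer service account was created, you can list the service accounts in the namespace. This is a quick sanity check; the exact account name comes from the downloaded `arcdata-deployer.yaml`:

```console
kubectl get serviceaccounts --namespace arc
```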
-A job creates a pod to execute the upgrade.
-1. Describe the job in a .yaml file. The following example creates a job called `arc-bootstrapper-upgrade-job`.
+### Upgrade the bootstrapper
- :::code language="yaml" source="~/azure_arc_sample/arc_data_services/upgrade/yaml/upgrade-indirect-k8s.yaml" range="31-48":::
+The following command creates a job for upgrading the bootstrapper and related Kubernetes objects.
-1. Edit the file for your environment.
+```console
+kubectl apply --namespace arc -f https://raw.githubusercontent.com/microsoft/azure_arc/release-arc-data/arc_data_services/upgrade/yaml/bootstrapper-upgrade-job.yaml
+```
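You can watch the job until it completes. A sketch; the job name is defined in `bootstrapper-upgrade-job.yaml` (earlier versions of the upgrade yaml used `arc-bootstrapper-upgrade-job`, so replace the name if yours differs):

```console
# Wait for the upgrade job to report COMPLETIONS 1/1
kubectl get jobs --namespace arc

# Inspect the job's pod logs if it doesn't complete
kubectl logs job/arc-bootstrapper-upgrade-job --namespace arc
```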
### Upgrade the data controller
-Specify the image tag to upgrade the data controller to.
-
- :::code language="yaml" source="~/azure_arc_sample/arc_data_services/upgrade/yaml/upgrade-indirect-k8s.yaml" range="50-56":::
-
-### Apply the resources
+The following command patches the image tag to upgrade the data controller.
-Run the following kubectl command to apply the resources to your cluster.
-
-``` bash
-kubectl apply -n <namespace> -f upgrade-indirect-k8s.yaml
+```console
+kubectl apply --namespace arc -f https://raw.githubusercontent.com/microsoft/azure_arc/release-arc-data/arc_data_services/upgrade/yaml/data-controller-upgrade.yaml
```

## Monitor the upgrade status

You can monitor the progress of the upgrade with kubectl.
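For example, you can watch the data controller custom resource until its state shows that the upgrade finished; a sketch, assuming the namespace is `arc`:

```console
kubectl get datacontrollers --namespace arc -w
```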
azure-arc Troubleshoot Resource Bridge https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/resource-bridge/troubleshoot-resource-bridge.md
Title: Troubleshoot Azure Arc resource bridge (preview) issues description: This article tells how to troubleshoot and resolve issues with the Azure Arc resource bridge (preview) when trying to deploy or connect to the service. Previously updated : 06/21/2022 Last updated : 06/27/2022
This article provides information on troubleshooting and resolving issues that may occur while attempting to deploy, use, or remove the Azure Arc resource bridge (preview). The resource bridge is a packaged virtual machine, which hosts a *management* Kubernetes cluster. For general information, see [Azure Arc resource bridge (preview) overview](./overview.md).
-## Logs
+## General issues
+
+### Logs
For any issues encountered with the Azure Arc resource bridge, you can collect logs for further investigation. To collect the logs, use the Azure CLI [`az arcappliance logs`](/cli/azure/arcappliance/logs) command. This command needs to be run from the client machine from which you've deployed the Azure Arc resource bridge.
The `az arcappliance logs` command requires SSH to the Azure Arc resource bridge
$HOME\.KVA\.ssh\logkey.pub
$HOME\.KVA\.ssh\logkey
```
+To run the `az arcappliance logs` command, the path to the kubeconfig must be provided. The kubeconfig is generated after successful completion of the `az arcappliance deploy` command and is placed in the same directory as the CLI command in ./kubeconfig or as specified in `--outfile` (if the parameter was passed).
+
+If `az arcappliance deploy` was not completed, then the kubeconfig file may exist but may be empty or missing data, so it can't be used for logs collection. In this case, the Appliance VM IP address can be used to collect logs instead. The Appliance VM IP is assigned when the `az arcappliance deploy` command is run, after Control Plane Endpoint reconciliation. For example, if the message displayed in the command window reads "Appliance IP is 10.97.176.27", the command to use for logs collection would be:
+
+```azurecli
+az arcappliance logs hci --out-dir c:\logs --ip 10.97.176.27
+```
To view the logs, run the following command:

```azurecli
To specify the IP address of the Azure Arc resource bridge virtual machine, run
az arcappliance logs <provider> --out-dir <path to specified output directory> --ip XXX.XXX.XXX.XXX
```
-## `az arcappliance prepare` fails when deploying to VMware
+### Remote PowerShell is not supported
-The `arcappliance` extension for Azure CLI enables a [prepare](/cli/azure/arcappliance/prepare) command, which enables you to download an OVA template to your vSphere environment. This OVA file is used to deploy the Azure Arc resource bridge. The `az arcappliance prepare` command uses the vSphere SDK and can result in the following error:
+If you run `az arcappliance` CLI commands for Arc Resource Bridge via remote PowerShell, you may experience various problems. For instance, you might see an [EOF error when using the `logs` command](#logs-command-fails-with-eof-error), or an [authentication handshake failure error when trying to install the resource bridge on an Azure Stack HCI cluster](#authentication-handshake-failure).
+
+Using `az arcappliance` commands from remote PowerShell is not currently supported. Instead, sign in to the node through Remote Desktop Protocol (RDP) or use a console session.
+
+### Resource bridge cannot be updated
+
+In this release, all the parameters are specified at time of creation. To update the Azure Arc resource bridge, you must delete it and redeploy it.
+
+For example, if you specified the wrong location or subscription during deployment, the resource creation fails later. If you only try to recreate the resource without redeploying the resource bridge VM, you'll see the status stuck at `WaitForHeartBeat`.
+
+To resolve this issue, delete the appliance and update the appliance YAML file. Then redeploy and create the resource bridge.
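A sketch of that flow with the Azure CLI, assuming a VMware provider and an appliance configuration file named `appliance.yaml` (substitute your own provider and file paths):

```console
# Delete the existing appliance
az arcappliance delete vmware --config-file appliance.yaml

# After editing the YAML, redeploy and recreate the resource bridge
az arcappliance deploy vmware --config-file appliance.yaml
az arcappliance create vmware --config-file appliance.yaml --kubeconfig ./kubeconfig
```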
+
+### Failure due to previous failed deployments
+
+If an Arc resource bridge deployment fails, subsequent deployments may fail due to residual cached folders remaining on the machine.
+
+To prevent this from happening, be sure to run the `az arcappliance delete` command after any failed deployment. This command must be run with the latest `arcappliance` Azure CLI extension. To ensure that you have the latest version installed on your machine, run the following command:
```azurecli
-$ az arcappliance prepare vmware --config-file <path to config>
+az extension update --name arcappliance
+```
-Error: Error in reading OVA file: failed to parse ovf: strconv.ParseInt: parsing "3670409216":
-value out of range.
+If the failed deployment is not successfully removed, residual cached folders may cause future Arc resource bridge deployments to fail. This may cause the error message `Unavailable desc = connection closed before server preface received` to surface when various `az arcappliance` commands are run, including `prepare` and `delete`.
+
+To resolve this error, delete the .wssd\python and .wssd\kva folders in the user profile directory on the machine where the Arc resource bridge CLI commands are being run. You can delete them manually by navigating to the user profile directory (typically C:\Users\<username>) and deleting the .wssd\python and/or .wssd\kva folders. After they're deleted, try the command again.
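A sketch of removing those cached folders from a Command Prompt; the paths assume the default user profile location:

```console
rmdir /s /q "%USERPROFILE%\.wssd\python"
rmdir /s /q "%USERPROFILE%\.wssd\kva"
```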
+
+### Token refresh error
+
+When you run the Azure CLI commands, the following error may be returned: *The refresh token has expired or is invalid due to sign-in frequency checks by conditional access.* The error occurs because when you sign in to Azure, the token has a maximum lifetime. When that lifetime is exceeded, you need to sign in to Azure again by using the `az login` command.
+
+### `logs` command fails with EOF error
+
+When running the `az arcappliance logs` Azure CLI command, you may see an error: `Appliance logs command failed with error: EOF when reading a line.` This may occur in scenarios similar to the following:
+
+```azurecli
+az arcappliance logs hci --kubeconfig .\kubeconfig --out-dir c:\temp --ip 192.168.200.127
++ CategoryInfo          : NotSpecified: (WARNING: Comman...s/CLI_refstatus:String) [], RemoteException
++ FullyQualifiedErrorId : NativeCommandError
+Please enter cloudservice FQDN/IP: Appliance logs command failed with error: EOF when reading a line
+[v-Host1]: PS C:\Users\AzureStackAdminD\Documents> az arcappliance logs hci --kubeconfig .\kubeconfig --out-dir c:\temp --ip 192.168.200.127
++ CategoryInfo          : NotSpecified: (WARNING: Comman...s/CLI_refstatus:String) [], RemoteException
++ FullyQualifiedErrorId : NativeCommandError
+Please enter cloudservice FQDN/IP: Appliance logs command failed with error: EOF when reading a line
```
-### Cause
+The `az arcappliance logs` CLI command runs in interactive mode, meaning that it prompts the user for parameters. If the command is run in a scenario where it can't prompt the user for parameters, this error will occur. This is especially common when trying to use remote PowerShell to run the command.
-This error occurs when you run the Azure CLI commands in a 32-bit context, which is the default behavior. The vSphere SDK only supports running in a 64-bit context. The specific error returned from the vSphere SDK is `Unable to import ova of size 6GB using govc`. When you install the Azure CLI, it's a 32-bit Windows Installer package. However, the Azure CLI `az arcappliance` extension needs to run in a 64-bit context.
+To avoid this error, use Remote Desktop Protocol (RDP) or a console session to sign directly in to the node and locally run the `logs` command (or any `az arcappliance` command). Remote PowerShell is not currently supported by Azure Arc resource bridge.
-### Resolution
+You can also avoid this error by pre-populating the values that the `logs` command prompts for. The example below places those values into a variable, which is then piped to the `logs` command. Be sure to replace the values in `$loginValues` with your cloud service IP address and the full path to your token credentials.
-Perform the following steps to configure your client machine with the Azure CLI 64-bit version.
+```azurecli
+$loginValues="192.168.200.2
+C:\kvatoken.tok"
-1. Uninstall the current version of the Azure CLI on Windows following these [steps](/cli/azure/install-azure-cli-windows#uninstall).
-1. Install version 3.6 or higher of [Python](https://www.python.org/downloads/windows/) (64-bit).
+$user_in = ""
+foreach ($val in $loginValues) { $user_in = $user_in + $val + "`n" }
- > [!NOTE]
- > It is important after installing Python to confirm that its path is added to the PATH environmental variable.
+$user_in | az arcappliance logs hci --kubeconfig C:\Users\AzureStackAdminD\.kube\config
+```
-1. Install the [pip](https://pypi.org/project/pip/) package installer for Python.
-1. Verify Python is installed correctly by running `py` in a Command Prompt.
-1. From an elevated PowerShell console, run `pip install azure-cli` to install the Azure CLI from PyPI.
+### Default host resource pools are unavailable for deployment
+
+When using the `az arcappliance createConfig` or `az arcappliance run` command, an interactive experience shows the list of VMware entities where you can select where to deploy the virtual appliance. This list shows all user-created resource pools along with default cluster resource pools, but default host resource pools aren't listed.
+
+When the appliance is deployed to a host resource pool, there is no high availability if the host hardware fails. Because of this, we recommend that you don't try to deploy the appliance in a host resource pool.
-After you complete these steps, in a new PowerShell console you can get started using the Azure Arc appliance CLI extension.
+## Networking issues
-## Azure Arc resource bridge (preview) is unreachable
+### Azure Arc resource bridge is unreachable
Azure Arc resource bridge (preview) runs a Kubernetes cluster, and its control plane requires a static IP address. The IP address is specified in the `infra.yaml` file. If the IP address is assigned from a DHCP server, the address can change if not reserved. Rebooting the Azure Arc resource bridge (preview) or VM can trigger an IP address change, resulting in failing services. Intermittently, the resource bridge (preview) can lose the reserved IP configuration. This is due to the behavior described in [loss of VIPs when systemd-networkd is restarted](https://github.com/acassen/keepalived/issues/1385). When the IP address isn't assigned to the Azure Arc resource bridge (preview) VM, any call to the resource bridge API server will fail. As a result, you can't create any new resource through the resource bridge (preview), such as connecting to the Azure Arc private cloud, creating a custom location, or creating a VM.
-Another possible cause is slow disk access. Azure Arc resource bridge uses etcd which requires 10ms latency or less per [recommendation](https://docs.openshift.com/container-platform/4.6/scalability_and_performance/recommended-host-practices.html#recommended-etcd-practices_). If the underlying disk has low performance, it can impact the operations, and causing failures.
+Another possible cause is slow disk access. Azure Arc resource bridge uses etcd, which requires 10 ms latency or less per [recommendation](https://docs.openshift.com/container-platform/4.6/scalability_and_performance/recommended-host-practices.html#recommended-etcd-practices_). If the underlying disk has low performance, it can impact operations and cause failures.
-### Resolution
+To resolve this issue, reboot the resource bridge (preview) VM, and it should recover its IP address. If the address is assigned from a DHCP server, reserve the IP address associated with the resource bridge (preview).
-Reboot the resource bridge (preview) VM and it should recover its IP address. If the address is assigned from a DHCP server, reserve the IP address associated with the resource bridge (preview).
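After the reboot, you can confirm that the resource bridge API server responds again. A minimal check, assuming you still have the kubeconfig generated during deployment:

```console
kubectl get pods -A --kubeconfig ./kubeconfig
```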
+### SSL proxy configuration issues
-## Resource bridge cannot be updated
+Azure Arc resource bridge must be configured for proxy so that it can connect to the Azure services. This configuration is handled automatically. However, proxy configuration of the client machine isn't configured by the Azure Arc resource bridge.
-In this release, all the parameters are specified at time of creation. To update the Azure Arc resource bridge, you must delete it and redeploy it again.
+There are only two certificates that should be relevant when deploying the Arc resource bridge behind an SSL proxy: the SSL certificate for your SSL proxy (so that the host and guest trust your proxy FQDN and can establish an SSL connection to it), and the SSL certificate of the Microsoft download servers. This certificate must be trusted by your proxy server itself, as the proxy is the one establishing the final connection and needs to trust the endpoint. Non-Windows machines may not trust this second certificate by default, so you may need to ensure that it's trusted.
-For example, if you specified the wrong location, or subscription during deployment, later the resource creation fails. If you only try to recreate the resource without redeploying the resource bridge VM, you'll see the status stuck at `WaitForHeartBeat`.
+## Azure Arc-enabled VMs on Azure Stack HCI issues
+
+For general help resolving issues related to Azure Arc-enabled VMs on Azure Stack HCI, see [Troubleshoot Azure Arc-enabled virtual machines](/azure-stack/hci/manage/troubleshoot-arc-enabled-vms).
+
+### Authentication handshake failure
+
+When running an `az arcappliance` command, you may see a connection error: `authentication handshake failed: x509: certificate signed by unknown authority`.
+
+This is usually caused by trying to run commands from remote PowerShell, which isn't supported by Azure Arc resource bridge.
+
+To install Azure Arc resource bridge on an Azure Stack HCI cluster, `az arcappliance` commands must be run locally on a node in the cluster. Sign in to the node through Remote Desktop Protocol (RDP) or use a console session to run these commands.
+
+## Azure Arc-enabled VMware vCenter issues
+
+### `az arcappliance prepare` failure
+
+The `arcappliance` extension for Azure CLI enables a [prepare](/cli/azure/arcappliance/prepare) command, which enables you to download an OVA template to your vSphere environment. This OVA file is used to deploy the Azure Arc resource bridge. The `az arcappliance prepare` command uses the vSphere SDK and can result in the following error:
+
+```azurecli
+$ az arcappliance prepare vmware --config-file <path to config>
+
+Error: Error in reading OVA file: failed to parse ovf: strconv.ParseInt: parsing "3670409216":
+value out of range.
+```
+
+This error occurs when you run the Azure CLI commands in a 32-bit context, which is the default behavior. The vSphere SDK only supports running in a 64-bit context. The specific error returned from the vSphere SDK is `Unable to import ova of size 6GB using govc`. When you install the Azure CLI, it's a 32-bit Windows Installer package. However, the Azure CLI `az arcappliance` extension needs to run in a 64-bit context.
+
+To resolve this issue, perform the following steps to configure your client machine with the Azure CLI 64-bit version:
-### Resolution
+1. Uninstall the current version of the Azure CLI on Windows following these [steps](/cli/azure/install-azure-cli-windows#uninstall).
+1. Install version 3.6 or higher of [Python](https://www.python.org/downloads/windows/) (64-bit).
-Delete the appliance, update the appliance YAML file, then redeploy and create the resource bridge.
+ > [!IMPORTANT]
+ > After you install Python, make sure to confirm that its path is added to the PATH environmental variable.
-## Token refresh error
+1. Install the [pip](https://pypi.org/project/pip/) package installer for Python.
+1. Verify Python is installed correctly by running `py` in a Command Prompt.
+1. From an elevated PowerShell console, run `pip install azure-cli` to install the Azure CLI from PyPI.
-When you run the Azure CLI commands, the following error may be returned: *The refresh token has expired or is invalid due to sign-in frequency checks by conditional access.* The error occurs because when you sign into Azure, the token has a maximum lifetime. When that lifetime is exceeded, you need to sign in to Azure again.
+After you complete these steps, you can get started using the Azure Arc appliance CLI extension in a new PowerShell console.
-### Resolution
+### Error during host configuration
-Sign into Azure again using the `az login` command.
+When you deploy the resource bridge on VMware vCenter, if you have been using the same template to deploy and delete the appliance multiple times, you may encounter the following error:
+
+```
+Appliance cluster deployment failed with error:
+Error: An error occurred during host configuration
+```
+
+To resolve this issue, delete the existing template manually. Then run [`az arcappliance prepare`](/cli/azure/arcappliance/prepare) to download a new template for deployment.
+
+### Unable to find folders
+
+When deploying the resource bridge on VMware vCenter, you specify the folder in which the template and VM will be created. The folder must be of the VM and template folder type. Other folder types, such as storage folders, network folders, or host and cluster folders, can't be used for the resource bridge deployment.
+
+### Insufficient permissions
+
+When deploying the resource bridge on VMware vCenter, you may get an error saying that you have insufficient permissions. To resolve this issue, make sure that your user account has all of the following privileges in VMware vCenter, and then try again.
+
+```
+"Datastore.AllocateSpace"
+"Datastore.Browse"
+"Datastore.DeleteFile"
+"Datastore.FileManagement"
+"Folder.Create"
+"Folder.Delete"
+"Folder.Move"
+"Folder.Rename"
+"InventoryService.Tagging.CreateTag"
+"Sessions.ValidateSession"
+"Network.Assign"
+"Resource.ApplyRecommendation"
+"Resource.AssignVMToPool"
+"Resource.HotMigrate"
+"Resource.ColdMigrate"
+"StorageViews.View"
+"System.Anonymous"
+"System.Read"
+"System.View"
+"VirtualMachine.Config.AddExistingDisk"
+"VirtualMachine.Config.AddNewDisk"
+"VirtualMachine.Config.AddRemoveDevice"
+"VirtualMachine.Config.AdvancedConfig"
+"VirtualMachine.Config.Annotation"
+"VirtualMachine.Config.CPUCount"
+"VirtualMachine.Config.ChangeTracking"
+"VirtualMachine.Config.DiskExtend"
+"VirtualMachine.Config.DiskLease"
+"VirtualMachine.Config.EditDevice"
+"VirtualMachine.Config.HostUSBDevice"
+"VirtualMachine.Config.ManagedBy"
+"VirtualMachine.Config.Memory"
+"VirtualMachine.Config.MksControl"
+"VirtualMachine.Config.QueryFTCompatibility"
+"VirtualMachine.Config.QueryUnownedFiles"
+"VirtualMachine.Config.RawDevice"
+"VirtualMachine.Config.ReloadFromPath"
+"VirtualMachine.Config.RemoveDisk"
+"VirtualMachine.Config.Rename"
+"VirtualMachine.Config.ResetGuestInfo"
+"VirtualMachine.Config.Resource"
+"VirtualMachine.Config.Settings"
+"VirtualMachine.Config.SwapPlacement"
+"VirtualMachine.Config.ToggleForkParent"
+"VirtualMachine.Config.UpgradeVirtualHardware"
+"VirtualMachine.GuestOperations.Execute"
+"VirtualMachine.GuestOperations.Modify"
+"VirtualMachine.GuestOperations.ModifyAliases"
+"VirtualMachine.GuestOperations.Query"
+"VirtualMachine.GuestOperations.QueryAliases"
+"VirtualMachine.Hbr.ConfigureReplication"
+"VirtualMachine.Hbr.MonitorReplication"
+"VirtualMachine.Hbr.ReplicaManagement"
+"VirtualMachine.Interact.AnswerQuestion"
+"VirtualMachine.Interact.Backup"
+"VirtualMachine.Interact.ConsoleInteract"
+"VirtualMachine.Interact.CreateScreenshot"
+"VirtualMachine.Interact.CreateSecondary"
+"VirtualMachine.Interact.DefragmentAllDisks"
+"VirtualMachine.Interact.DeviceConnection"
+"VirtualMachine.Interact.DisableSecondary"
+"VirtualMachine.Interact.DnD"
+"VirtualMachine.Interact.EnableSecondary"
+"VirtualMachine.Interact.GuestControl"
+"VirtualMachine.Interact.MakePrimary"
+"VirtualMachine.Interact.Pause"
+"VirtualMachine.Interact.PowerOff"
+"VirtualMachine.Interact.PowerOn"
+"VirtualMachine.Interact.PutUsbScanCodes"
+"VirtualMachine.Interact.Record"
+"VirtualMachine.Interact.Replay"
+"VirtualMachine.Interact.Reset"
+"VirtualMachine.Interact.SESparseMaintenance"
+"VirtualMachine.Interact.SetCDMedia"
+"VirtualMachine.Interact.SetFloppyMedia"
+"VirtualMachine.Interact.Suspend"
+"VirtualMachine.Interact.TerminateFaultTolerantVM"
+"VirtualMachine.Interact.ToolsInstall"
+"VirtualMachine.Interact.TurnOffFaultTolerance"
+"VirtualMachine.Inventory.Create"
+"VirtualMachine.Inventory.CreateFromExisting"
+"VirtualMachine.Inventory.Delete"
+"VirtualMachine.Inventory.Move"
+"VirtualMachine.Inventory.Register"
+"VirtualMachine.Inventory.Unregister"
+"VirtualMachine.Namespace.Event"
+"VirtualMachine.Namespace.EventNotify"
+"VirtualMachine.Namespace.Management"
+"VirtualMachine.Namespace.ModifyContent"
+"VirtualMachine.Namespace.Query"
+"VirtualMachine.Namespace.ReadContent"
+"VirtualMachine.Provisioning.Clone"
+"VirtualMachine.Provisioning.CloneTemplate"
+"VirtualMachine.Provisioning.CreateTemplateFromVM"
+"VirtualMachine.Provisioning.Customize"
+"VirtualMachine.Provisioning.DeployTemplate"
+"VirtualMachine.Provisioning.DiskRandomAccess"
+"VirtualMachine.Provisioning.DiskRandomRead"
+"VirtualMachine.Provisioning.FileRandomAccess"
+"VirtualMachine.Provisioning.GetVmFiles"
+"VirtualMachine.Provisioning.MarkAsTemplate"
+"VirtualMachine.Provisioning.MarkAsVM"
+"VirtualMachine.Provisioning.ModifyCustSpecs"
+"VirtualMachine.Provisioning.PromoteDisks"
+"VirtualMachine.Provisioning.PutVmFiles"
+"VirtualMachine.Provisioning.ReadCustSpecs"
+"VirtualMachine.State.CreateSnapshot"
+"VirtualMachine.State.RemoveSnapshot"
+"VirtualMachine.State.RenameSnapshot"
+"VirtualMachine.State.RevertToSnapshot"
+```
## Next steps
azure-fluid-relay Customer Managed Keys https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-fluid-relay/concepts/customer-managed-keys.md
+
+ Title: Customer-managed keys for Azure Fluid Relay encryption
+description: Learn how data encryption works with customer-managed keys (CMK) in Azure Fluid Relay.
++ Last updated : 10/08/2021++++
+# Customer-managed keys for Azure Fluid Relay encryption
+
+You can use your own encryption key to protect the data in your Azure Fluid Relay resource. When you specify a customer-managed key (CMK), that key is used to protect and control access to the key that encrypts your data. CMK offers greater flexibility to manage access controls.
+
+You must use one of the following Azure key stores to store your CMK:
+- [Azure Key Vault](../../key-vault/general/overview.md)
+- [Azure Key Vault Managed Hardware Security Module (HSM)](../../key-vault/managed-hsm/overview.md)
+
+You must create a new Azure Fluid Relay resource to enable CMK. You can't enable or disable CMK on an existing Fluid Relay resource.
+
+Also, CMK for Fluid Relay relies on managed identity: you need to assign a managed identity to the Fluid Relay resource when enabling CMK. Only a user-assigned identity is allowed for Fluid Relay resource CMK. For more information about managed identities, see [here](../../active-directory/managed-identities-azure-resources/overview.md#managed-identity-types).
+
+Configuring a Fluid Relay resource with CMK can't be done through the Azure portal yet.
+
+When you configure the Fluid Relay resource with CMK, the Azure Fluid Relay service configures the appropriate CMK encrypted settings on the Azure Storage account scope where your Fluid session artifacts are stored. For more information about CMK in Azure Storage, see [here](../../storage/common/customer-managed-keys-overview.md).
+
+To verify that a Fluid Relay resource is using CMK, send a GET request for the resource and check that it has a valid, non-empty `encryption.customerManagedKeyEncryption` property.
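For example, with the Azure CLI you can query the resource and inspect that property directly; a sketch, using the same api-version as the requests below:

```console
az rest --method get --url "https://management.azure.com/subscriptions/<subscription ID>/resourceGroups/<resource group name>/providers/Microsoft.FluidRelay/fluidRelayServers/<Fluid Relay resource name>?api-version=2022-06-01" --query "properties.encryption.customerManagedKeyEncryption"
```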
+
+## Prerequisites
+
+Before configuring CMK on your Azure Fluid Relay resource, the following prerequisites must be met:
+- Keys must be stored in an Azure Key Vault.
+- Keys must be RSA keys, not EC keys, since EC keys don't support WRAP and UNWRAP.
+- A user-assigned managed identity must be created with the necessary permissions (GET, WRAP, and UNWRAP) on the key vault. For more information, see [here](../../active-directory/managed-identities-azure-resources/tutorial-windows-vm-access-nonaad.md). Grant GET, WRAP, and UNWRAP under Key Permissions in the key vault.
+- Azure Key Vault, the user-assigned identity, and the Fluid Relay resource must be in the same region and in the same Azure Active Directory (Azure AD) tenant.
+
+## Create a Fluid Relay resource with CMK
+
+```
+PUT https://management.azure.com/subscriptions/<subscription ID>/resourceGroups/<resource group name>/providers/Microsoft.FluidRelay/fluidRelayServers/<Fluid Relay resource name>?api-version=2022-06-01 @"<path to request payload>"
+```
+
+Request payload format:
+
+```
+{
+ "location": "<the region you selected for Fluid Relay resource>",
+ "identity": {
+ "type": "UserAssigned",
+ "userAssignedIdentities": {
+      "<User assigned identity resource ID>": {}
+ }
+ },
+ "properties": {
+ "encryption": {
+ "customerManagedKeyEncryption": {
+ "keyEncryptionKeyIdentity": {
+ "identityType": "UserAssigned",
+ "userAssignedIdentityResourceId": "<User assigned identity resource ID>"
+ },
+ "keyEncryptionKeyUrl": "<key identifier>"
+ }
+ }
+ }
+}
+```
+
+Example `userAssignedIdentities` key and `userAssignedIdentityResourceId`:
+`/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/testGroup/providers/Microsoft.ManagedIdentity/userAssignedIdentities/testUserAssignedIdentity`
+
+Example `keyEncryptionKeyUrl`: `https://test-key-vault.vault.azure.net/keys/testKey/testKeyVersionGuid`
+
+Notes:
+- `Identity.type` must be UserAssigned. It's the identity type of the managed identity that is assigned to the Fluid Relay resource.
+- `Properties.encryption.customerManagedKeyEncryption.keyEncryptionKeyIdentity.identityType` must be UserAssigned. It's the identity type of the managed identity that should be used for CMK.
+- Although you can specify more than one identity in `Identity.userAssignedIdentities`, only the one user-assigned identity you specify will be used by CMK to access the key vault for encryption.
+- `Properties.encryption.customerManagedKeyEncryption.keyEncryptionKeyIdentity.userAssignedIdentityResourceId` is the resource ID of the user-assigned identity that should be used for CMK. It must be one of the identities in `Identity.userAssignedIdentities` (you must assign the identity to the Fluid Relay resource before it can be used for CMK), and it must have the necessary permissions on the key (provided by `keyEncryptionKeyUrl`).
+- `Properties.encryption.customerManagedKeyEncryption.keyEncryptionKeyUrl` is the key identifier used for CMK.
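A sketch of sending the PUT request above with the Azure CLI, assuming the request payload is saved locally as `payload.json`:

```console
az rest --method put --url "https://management.azure.com/subscriptions/<subscription ID>/resourceGroups/<resource group name>/providers/Microsoft.FluidRelay/fluidRelayServers/<Fluid Relay resource name>?api-version=2022-06-01" --body @payload.json
```

The same approach works for the PATCH request shown later in this article; swap `--method put` for `--method patch` and supply the update payload.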
+
+## Update CMK settings of an existing Fluid Relay resource
+
+You can update the following CMK settings on existing Fluid Relay resource:
+- Change the identity that is used for accessing the key encryption key.
+- Change the key encryption key identifier (key URL).
+- Change the key version of the key encryption key.
+
+Note that you can't disable CMK on an existing Fluid Relay resource once it's enabled.
+
+Request URL:
+
+```
+PATCH https://management.azure.com/subscriptions/<subscription id>/resourceGroups/<resource group name>/providers/Microsoft.FluidRelay/fluidRelayServers/<fluid relay server name>?api-version=2022-06-01 @"path to request payload"
+```
+
+Request payload example for updating key encryption key URL:
+
+```
+{
+ "properties": {
+ "encryption": {
+ "customerManagedKeyEncryption": {
+ "keyEncryptionKeyUrl": "https://test_key_vault.vault.azure.net/keys/testKey /xxxxxxxxxxxxxxxx"
+ }
+ }
+ }
+}
+```
+
+## See also
+
+- [Overview of Azure Fluid Relay architecture](architecture.md)
+- [Data storage in Azure Fluid Relay](../concepts/data-storage.md)
+- [Data encryption in Azure Fluid Relay](../concepts/data-encryption.md)
azure-fluid-relay Data Encryption https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-fluid-relay/concepts/data-encryption.md
Microsoft has a set of internal guidelines for encryption key rotation which Azu
### Can I use my own encryption keys?
-No, this feature is not available yet. Keep an eye out for more updates on this.
+Yes. For more information, refer to [Customer-managed keys for Azure Fluid Relay encryption](../concepts/customer-managed-keys.md).
### What regions have encryption turned on?
azure-functions Create First Function Cli Typescript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/create-first-function-cli-typescript.md
Before you use Core Tools to deploy your project to Azure, you create a producti
1. Use the following command to prepare your TypeScript project for deployment:

    ```console
- npm run build:production
+ npm run build
    ```

1. With the necessary resources in place, you're now ready to deploy your local functions project to the function app in Azure by using the [func azure functionapp publish](functions-run-local.md#project-file-deployment) command. In the following example, replace `<APP_NAME>` with the name of your app.
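    A sketch of that command:

    ```console
    func azure functionapp publish <APP_NAME>
    ```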
azure-functions Durable Functions Task Hubs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/durable/durable-functions-task-hubs.md
Title: Task hubs in Durable Functions - Azure
description: Learn what a task hub is in the Durable Functions extension for Azure Functions. Learn how to configure task hubs. Previously updated : 05/10/2022 Last updated : 06/28/2022
A *task hub* in [Durable Functions](durable-functions-overview.md) is a logical
> > For more information on the various storage provider options and how they compare, see the [Durable Functions storage providers](durable-functions-storage-providers.md) documentation.
-If multiple function apps share a storage account, each function app *must* be configured with a separate task hub name. A storage account can contain multiple task hubs. This restriction generally applies to other storage providers as well.
+If multiple function apps share a storage account, each function app *must* be configured with a separate task hub name. This requirement also applies to staging slots: each staging slot must be configured with a unique task hub name. A single storage account can contain multiple task hubs. This restriction generally applies to other storage providers as well.
> [!NOTE]
> The exception to the task hub sharing rule is if you are configuring your app for regional disaster recovery. See the [disaster recovery and geo-distribution](durable-functions-disaster-recovery-geo-distribution.md) article for more information.
The task hub name will be set to the value of the `MyTaskHub` app setting. The f
} ```
+> [!NOTE]
+> When using deployment slots, it's a best practice to configure the task hub name using app settings. If you want to ensure that a particular slot always uses a particular task hub, use ["slot-sticky" app settings](../functions-deployment-slots.md#create-a-deployment-setting).
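For instance, a sketch of making the task hub app setting slot-sticky with the Azure CLI; the app, resource group, slot, and setting values here are placeholders:

```console
az functionapp config appsettings set --name <APP_NAME> --resource-group <RESOURCE_GROUP> --slot staging --slot-settings MyTaskHub=MyStagingTaskHub
```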
In addition to **host.json**, task hub names can also be configured in [orchestration client binding](durable-functions-bindings.md#orchestration-client) metadata. This is useful if you need to access orchestrations or entities that live in a separate function app. The following code demonstrates how to write a function that uses the [orchestration client binding](durable-functions-bindings.md#orchestration-client) to work with a task hub that is configured as an App Setting:

# [C#](#tab/csharp)
azure-functions Functions Bindings Azure Sql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-azure-sql.md
You can add the preview extension bundle by adding or replacing the following co
## Functions runtime

> [!NOTE]
-> Python language support for the SQL bindings extension is only available for v4 of the [functions runtime](./set-runtime-version.md#view-and-update-the-current-runtime-version) and requires runtime v4.5.0 or greater for deployment in Azure. Learn more about determining the runtime in the [functions runtime](./set-runtime-version.md#view-and-update-the-current-runtime-version) documentation. Please see the tracking [GitHub issue](https://github.com/Azure/azure-functions-sql-extension/issues/250) for the latest update on availability.
-
-The functions runtime required for local development and testing of Python functions isn't included in the current release of functions core tools and must be installed independently. The latest instructions on installing a preview version of functions core tools are available in the tracking [GitHub issue](https://github.com/Azure/azure-functions-sql-extension/issues/250).
-
-Alternatively, a VS Code [development container](https://code.visualstudio.com/docs/remote/containers) definition can be used to expedite your environment setup. The definition components are available in the SQL bindings [GitHub repository](https://github.com/Azure/azure-functions-sql-extension/tree/main/samples/samples-python/.devcontainer).
+> Python language support for the SQL bindings extension is available starting with v4.5.0 of the [functions runtime](./set-runtime-version.md#view-and-update-the-current-runtime-version). You may need to update your install of Azure Functions [Core Tools](functions-run-local.md) for local development. Learn more about determining the runtime in Azure regions from the [functions runtime](./set-runtime-version.md#view-and-update-the-current-runtime-version) documentation. Please see the tracking [GitHub issue](https://github.com/Azure/azure-functions-sql-extension/issues/250) for the latest update on availability.
## Install bundle
azure-functions Functions Bindings Kafka Output https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-kafka-output.md
Title: Apache Kafka output binding for Azure Functions description: Use Azure Functions to write messages to an Apache Kafka stream.- Last updated 05/14/2022- zone_pivot_groups: programming-languages-set-functions-lang-workers
azure-functions Functions Bindings Kafka Trigger https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-kafka-trigger.md
Title: Apache Kafka trigger for Azure Functions description: Use Azure Functions to run your code based on events from an Apache Kafka stream.- Last updated 05/14/2022- zone_pivot_groups: programming-languages-set-functions-lang-workers
azure-functions Functions Bindings Kafka https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-kafka.md
Title: Apache Kafka bindings for Azure Functions description: Learn to integrate Azure Functions with an Apache Kafka stream.- Last updated 05/14/2022- zone_pivot_groups: programming-languages-set-functions-lang-workers
azure-functions Functions Create Function Linux Custom Image https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-create-function-linux-custom-image.md
A function app on Azure manages the execution of your functions in your hosting
With the image deployed to your function app in Azure, you can now invoke the function as before through HTTP requests. In your browser, navigate to the following URL: `https://<APP_NAME>.azurewebsites.net/api/HttpExample?name=Functions` ::: zone-end ::: zone pivot="programming-language-csharp"
azure-functions Functions Create Maven Eclipse https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-create-maven-eclipse.md
Title: Create an Azure function app with Java and Eclipse description: How-to guide to create and publish a simple HTTP triggered serverless app using Java and Eclipse to Azure Functions.- Last updated 07/01/2018- ms.devlang: java
azure-functions Functions Create Private Site Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-create-private-site-access.md
Title: Enable private site access to Azure Functions description: Learn to set up Azure virtual network private site access for Azure Functions.-- Last updated 06/17/2020
azure-functions Functions Debug Event Grid Trigger Local https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-debug-event-grid-trigger-local.md
Title: Azure Functions Event Grid local debugging description: Learn to locally debug Azure Functions triggered by an Event Grid event- Last updated 10/18/2018- # Azure Function Event Grid Trigger Local Debugging
azure-functions Functions Deployment Slots https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-deployment-slots.md
Title: Azure Functions deployment slots description: Learn to create and use deployment slots with Azure Functions- Last updated 03/02/2022- # Azure Functions deployment slots
azure-functions Functions Dotnet Dependency Injection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-dotnet-dependency-injection.md
Title: Use dependency injection in .NET Azure Functions description: Learn how to use dependency injection for registering and using services in .NET functions- ms.devlang: csharp Last updated 03/24/2021- # Use dependency injection in .NET Azure Functions
azure-functions Functions Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-get-started.md
Title: Getting started with Azure Functions description: Take the first steps toward working with Azure Functions.- Last updated 11/19/2020- zone_pivot_groups: programming-languages-set-functions-lang-workers
azure-functions Functions How To Github Actions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-how-to-github-actions.md
Title: Use GitHub Actions to make code updates in Azure Functions description: Learn how to use GitHub Actions to define a workflow to build and deploy Azure Functions projects in GitHub.- Last updated 10/07/2020-
azure-functions Functions Idempotent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-idempotent.md
Title: Designing Azure Functions for identical input description: Building Azure Functions to be idempotent-- Last updated 06/09/2022
azure-functions Functions Manually Run Non Http https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-manually-run-non-http.md
Title: Manually run a non HTTP-triggered Azure Functions description: Use an HTTP request to run a non-HTTP triggered Azure Functions- Last updated 04/23/2020- # Manually run a non HTTP-triggered function
azure-functions Functions Monitor Log Analytics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-monitor-log-analytics.md
Title: Monitoring Azure Functions with Azure Monitor Logs description: Learn how to use Azure Monitor Logs with Azure Functions to monitor function executions.- Last updated 04/15/2020- # Customer intent: As a developer, I want to monitor my functions so I can know if they're running correctly.
azure-functions Functions Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-overview.md
Title: Azure Functions Overview description: Learn how Azure Functions can help build robust serverless apps.- ms.assetid: 01d6ca9f-ca3f-44fa-b0b9-7ffee115acd4 Last updated 05/27/2022-
azure-functions Functions Reference Csharp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-reference-csharp.md
Title: Azure Functions C# script developer reference description: Understand how to develop Azure Functions using C# script.- Last updated 12/12/2017- # Azure Functions C# script (.csx) developer reference
azure-functions Functions Reliable Event Processing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-reliable-event-processing.md
Title: Azure Functions reliable event processing description: Avoid missing Event Hub messages in Azure Functions- Last updated 10/01/2020- # Azure Functions reliable event processing
azure-functions Functions Triggers Bindings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-triggers-bindings.md
Title: Triggers and bindings in Azure Functions description: Learn to use triggers and bindings to connect your Azure Function to online events and cloud-based services.- Last updated 05/25/2022-
azure-functions Functions Twitter Email https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-twitter-email.md
Title: Create a function that integrates with Azure Logic Apps description: Create a function integrate with Azure Logic Apps and Azure Cognitive Services. The resulting workflow categorizes tweet sentiments sends email notifications.- ms.assetid: 60495cc5-1638-4bf0-8174-52786d227734 Last updated 04/10/2021- ms.devlang: csharp
azure-functions Pricing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/pricing.md
Title: Azure Functions pricing description: Learn how billing works for Azure Functions.-- Last updated 11/20/2020
azure-functions Shift Expressjs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/shift-expressjs.md
Title: Shifting from Express.js to Azure Functions description: Learn to refactor Express.js endpoints to Azure Functions.- Last updated 07/31/2020- ms.devlang: javascript
azure-government Connect With Azure Pipelines https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-government/connect-with-azure-pipelines.md
description: Configure continuous deployment to your applications hosted in Azur
Previously updated : 03/02/2022
+recommendations: false
Last updated : 06/27/2022 # Deploy an app in Azure Government with Azure Pipelines
-This article helps you use Azure Pipelines to set up continuous integration (CI) and continuous deployment (CD) of your web app running in Azure Government. CI/CD automates the build of your code from a repo along with the deployment (release) of the built code artifacts to a service or set of services in Azure Government. In this tutorial, you'll build a web app and deploy it to an Azure Governments app service. This build and release process is triggered by a change to a code file in the repo.
+This how-to guide helps you use Azure Pipelines to set up continuous integration (CI) and continuous delivery (CD) of your web app running in Azure Government. CI/CD automates the build of your code from a repository along with the deployment (release) of the built code artifacts to a service or set of services in Azure Government. In this how-to guide, you'll build a web app and deploy it to an Azure Government App Service. The build and release process is triggered by a change to a code file in the repository.
-[Azure Pipelines](/azure/devops/pipelines/get-started/what-is-azure-pipelines) is used by teams to configure continuous deployment for applications hosted in Azure subscriptions. We can use this service for applications running in Azure Government by defining [service connections](/azure/devops/pipelines/library/service-endpoints) for Azure Government.
+> [!NOTE]
+> [Azure DevOps](/azure/devops/) isn't available on Azure Government. While this how-to guide shows how to configure the CI/CD capabilities of Azure Pipelines to deploy an app to a service inside Azure Government, be aware that Azure Pipelines runs its pipelines outside of Azure Government. Research your organization's security and service policies before using it as part of your deployment tools. For guidance on how to use Azure DevOps Server to create a DevOps experience inside a private network on Azure Government, see [Azure DevOps Server on Azure Government](https://devblogs.microsoft.com/azuregov/azure-devops-server-in-azure-government/).
+
+[Azure Pipelines](/azure/devops/pipelines/get-started/what-is-azure-pipelines) is used by development teams to configure continuous deployment for applications hosted in Azure subscriptions. We can use this service for applications running in Azure Government by defining [service connections](/azure/devops/pipelines/library/service-endpoints) for Azure Government.
[!INCLUDE [updated-for-az](../../includes/updated-for-az.md)] ## Prerequisites
-Before starting this tutorial, you must complete the following prerequisites:
+Before starting this how-to guide, you must complete the following prerequisites:
-+ [Create an organization in Azure DevOps](/azure/devops/organizations/accounts/create-organization)
-+ [Create and add a project to the Azure DevOps organization](/azure/devops/organizations/projects/create-project?;bc=%2fazure%2fdevops%2fuser-guide%2fbreadcrumb%2ftoc.json&tabs=new-nav&toc=%2fazure%2fdevops%2fuser-guide%2ftoc.json)
-+ Install and set up [Azure PowerShell](/powershell/azure/install-az-ps)
+- [Create an organization in Azure DevOps](/azure/devops/organizations/accounts/create-organization)
+- [Create and add a project to the Azure DevOps organization](/azure/devops/organizations/projects/create-project)
+- Install and set up [Azure PowerShell](/powershell/azure/install-az-ps)
If you don't have an active Azure Government subscription, create a [free account](https://azure.microsoft.com/global-infrastructure/government/request/) before you begin.
-## Create Azure Government app service
-
-[Create an App service in your Azure Government subscription](documentation-government-howto-deploy-webandmobile.md).
-The following steps will set up a CD process to deploy to this Web App.
-
-## Set up Build and Source control integration
-
-Follow through one of the quickstarts below to set up a Build for your specific type of app:
-
-- [ASP.NET 4 app](/azure/devops/pipelines/apps/aspnet/build-aspnet-4)
-- [ASP.NET Core app](/azure/devops/pipelines/ecosystems/dotnet-core)
-- [Node.js app with Gulp](/azure/devops/pipelines/ecosystems/javascript)
-
-## Generate a service principal
-
-1. Download or copy and paste the [service principal creation](https://github.com/yujhongmicrosoft/spncreationn/blob/master/spncreation.ps1) PowerShell script into an IDE or editor.
-
- > [!NOTE]
- > This script will be updated to use the Azure Az PowerShell module instead of the deprecated AzureRM PowerShell module.
-
-2. Open up the file and navigate to the `param` parameter. Replace the `$environmentName` variable with
-AzureUSGovernment." This action sets the service principal to be created in Azure Government.
-
-3. Open your PowerShell window and run the following command. This command sets a policy that enables running local files.
+## Create Azure Government App Service app
+
+Follow [Tutorial: Deploy an Azure App Service app](./documentation-government-howto-deploy-webandmobile.md) to learn how to deploy an Azure App Service app to Azure Government. The following steps will set up a CD process to deploy to your web app.
+
+## Set up build and source control integration
+
+Review one of the following quickstarts to set up a build for your specific type of app:
+
+- [ASP.NET 4](/azure/devops/pipelines/apps/aspnet/build-aspnet-4)
+- [.NET Core](/azure/devops/pipelines/ecosystems/dotnet-core)
+- [Node.js](/azure/devops/pipelines/ecosystems/javascript)
+
+## Generate a service principal
+
+1. Copy and paste the following service principal creation PowerShell script into an IDE or editor, and then save the script. This code is compatible only with Azure Az PowerShell v7.0.0 or higher.
+
+ ```powershell
+ param
+ (
+ [Parameter(Mandatory=$true, HelpMessage="Enter Azure subscription name - you need to be subscription admin to execute the script")]
+ [string] $subscriptionName,
+
+ [Parameter(Mandatory=$false, HelpMessage="Provide SPN role assignment")]
+ [string] $spnRole = "owner",
+
+ [Parameter(Mandatory=$false, HelpMessage="Provide Azure environment name for your subscription")]
+ [string] $environmentName = "AzureUSGovernment"
+ )
+
+ # Initialize
+ $ErrorActionPreference = "Stop"
+ $VerbosePreference = "SilentlyContinue"
+ $userName = ($env:USERNAME).Replace(' ', '')
+ $newguid = [guid]::NewGuid()
+ $displayName = [String]::Format("AzDevOps.{0}.{1}", $userName, $newguid)
+ $homePage = "http://" + $displayName
+ $identifierUri = $homePage
+
+ # Check for Azure Az PowerShell module
+ $isAzureModulePresent = Get-Module -Name Az -ListAvailable
+ if ([String]::IsNullOrEmpty($isAzureModulePresent) -eq $true)
+ {
+ Write-Output "Script requires Azure PowerShell modules to be present. Obtain Azure PowerShell from https://docs.microsoft.com//powershell/azure/install-az-ps" -Verbose
+ return
+ }
+
+ Import-Module -Name Az.Accounts
+ Write-Output "Provide your credentials to access your Azure subscription $subscriptionName" -Verbose
+ Connect-AzAccount -Subscription $subscriptionName -Environment $environmentName
+ $azureSubscription = Get-AzSubscription -SubscriptionName $subscriptionName
+ $connectionName = $azureSubscription.Name
+ $tenantId = $azureSubscription.TenantId
+ $id = $azureSubscription.SubscriptionId
+
+ # Create new Azure AD application
+ Write-Output "Creating new application in Azure AD (App URI - $identifierUri)" -Verbose
+ $azureAdApplication = New-AzADApplication -DisplayName $displayName -HomePage $homePage -Verbose
+ $appId = $azureAdApplication.AppId
+ $objectId = $azureAdApplication.Id
+ Write-Output "Azure AD application creation completed successfully (Application Id: $appId) and (Object Id: $objectId)" -Verbose
+
+ # Add secret to Azure AD application
+ Write-Output "Creating new secret for Azure AD application"
+ $secret = New-AzADAppCredential -ObjectId $objectId -EndDate (Get-Date).AddYears(2)
+ Write-Output "Secret created successfully" -Verbose
+
+ # Create new SPN
+ Write-Output "Creating new SPN" -Verbose
+ $spn = New-AzADServicePrincipal -ApplicationId $appId
+ $spnName = $spn.DisplayName
+ Write-Output "SPN creation completed successfully (SPN Name: $spnName)" -Verbose
+
+ # Assign role to SPN
+ Write-Output "Waiting for SPN creation to reflect in directory before role assignment"
+ Start-Sleep 20
+ Write-Output "Assigning role ($spnRole) to SPN app ($appId)" -Verbose
+ New-AzRoleAssignment -RoleDefinitionName $spnRole -ApplicationId $spn.AppId
+ Write-Output "SPN role assignment completed successfully" -Verbose
+
+ # Print values
+ Write-Output "`nCopy and paste below values for service connection" -Verbose
+ Write-Output "***************************************************************************"
+ Write-Output "Connection Name: $connectionName(SPN)"
+ Write-Output "Environment: $environmentName"
+ Write-Output "Subscription Id: $id"
+ Write-Output "Subscription Name: $connectionName"
+ Write-Output "Service Principal Id: $appId"
+ Write-Output "Tenant Id: $tenantId"
+ Write-Output "***************************************************************************"
+ ```
+
+2. Open your PowerShell window and run the following command, which sets a policy that enables running local files:
`Set-ExecutionPolicy -Scope Process -ExecutionPolicy Bypass`
- When you're asked whether you want to change the execution policy, enter "A" (for "Yes to All").
+ When asked whether you want to change the execution policy, enter "A" (for "Yes to All").
-4. Navigate to the directory that has the edited script above.
+3. Navigate to the directory where you saved the service principal creation PowerShell script.
-5. Edit the following command with the name of your script and run:
+4. Edit the following command with the name of your script and run:
`./<name of script file you saved>`
-6. The "subscriptionName" parameter can be found by logging into your Azure Government subscription via `Connect-AzAccount -EnvironmentName AzureUSGovernment` and then running `Get-AzureSubscription`.
-
-7. When prompted for the "password" parameter, enter your desired password.
+5. The "subscriptionName" parameter can be found by signing in to your Azure Government subscription via `Connect-AzAccount -EnvironmentName AzureUSGovernment` and then running `Get-AzSubscription`.
-8. After providing your Azure Government subscription credentials, you should see the following message:
+6. After providing your Azure Government subscription credentials, you should see the following message:
- > [!NOTE]
- > The Environment variable should be `AzureUSGovernment`.
+ `The Environment variable should be AzureUSGovernment`
-9. After the script has run, you should see your service connection values. Copy these values as we'll need them when setting up our endpoint.
+7. After the script has run, you should see your service connection values. Copy these values as we'll need them when setting up our endpoint.
- ![ps4](./media/documentation-government-vsts-img11.png)
+ :::image type="content" source="./media/documentation-government-vsts-img11.png" alt-text="Service connection values displayed after running the PowerShell script." border="false":::
## Configure the Azure Pipelines service connection
-Follow the instructions in [Service connections for builds and releases](/azure/devops/pipelines/library/service-endpoints) to set up the Azure Pipelines service connection.
+Follow [Manage service connections](/azure/devops/pipelines/library/service-endpoints) to set up the Azure Pipelines service connection.
+
+Make one change specific to Azure Government:
-Make one change specific to Azure Government: In step #3 of [Service connections for builds and releases](/azure/devops/pipelines/library/service-endpoints), click on "use the full version of the service connection catalog" and set **Environment** to **AzureUSGovernment**.
+- In step #3 of [Manage service connections: Create a service connection](/azure/devops/pipelines/library/service-endpoints#create-a-service-connection), click on *Use the full version of the service connection catalog* and set **Environment** to **AzureUSGovernment**.
## Define a release process
-Follow [Deploy a web app to Azure App Services](/azure/devops/pipelines/apps/cd/deploy-webdeploy-webapps) instructions to set up your release pipeline and deploy to your application in Azure Government.
+Follow [Deploy an Azure Web App](/azure/devops/pipelines/targets/webapp) instructions to set up your release pipeline and deploy to your application in Azure Government.
## Q&A **Do I need a build agent?** <br/>
-You need at least one [agent](/azure/devops/pipelines/agents/agents) to run your deployments. By default, the build and deployment processes are configured to use the [hosted agents](/azure/devops/pipelines/agents/agents#microsoft-hosted-agents). Configuring a private agent would limit data sharing outside of Azure Government.
+You need at least one [agent](/azure/devops/pipelines/agents/agents) to run your deployments. By default, the build and deployment processes are configured to use [hosted agents](/azure/devops/pipelines/agents/agents#microsoft-hosted-agents). Configuring a private agent would limit data sharing outside of Azure Government.
-**I use Team Foundation Server on premises. Can I configure CD on my server to target Azure Government?** <br/>
-Currently, Team Foundation Server can't be used to deploy to an Azure Government Cloud.
+**Can I configure CD on Azure DevOps Server (formerly Team Foundation Server) to target Azure Government?** <br/>
+You can set up Azure DevOps Server in Azure Government. For guidance on how to use Azure DevOps Server to create a DevOps experience inside a private network on Azure Government, see [Azure DevOps Server on Azure Government](https://devblogs.microsoft.com/azuregov/azure-devops-server-in-azure-government/).
## Next steps

-- Subscribe to the [Azure Government blog](https://devblogs.microsoft.com/azuregov/)
-- Get help on Stack Overflow by using the "[azure-gov](https://stackoverflow.com/questions/tagged/azure-gov)" tag
+For more information, see the following resources:
+
+- [Sign up for Azure Government trial](https://azure.microsoft.com/global-infrastructure/government/request/?ReqType=Trial)
+- [Acquiring and accessing Azure Government](https://azure.microsoft.com/offers/azure-government/)
+- [Ask questions via the azure-gov tag on StackOverflow](https://stackoverflow.com/tags/azure-gov)
+- [Azure Government blog](https://devblogs.microsoft.com/azuregov/)
+- [What is Infrastructure as Code? – Azure DevOps](/devops/deliver/what-is-infrastructure-as-code)
+- [DevSecOps for infrastructure as code (IaC) – Azure Architecture Center](/azure/architecture/solution-ideas/articles/devsecops-infrastructure-as-code)
+- [Testing your application and Azure environment – Microsoft Azure Well-Architected Framework](/azure/architecture/framework/devops/release-engineering-testing)
+- [Azure Government overview](./documentation-government-welcome.md)
+- [Azure Government security](./documentation-government-plan-security.md)
+- [Compare Azure Government and global Azure](./compare-azure-government-global-azure.md)
+- [Azure Government compliance](./documentation-government-plan-compliance.md)
+- [Azure compliance](../compliance/index.yml)
azure-government Documentation Government Ase Disa Cap https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-government/documentation-government-ase-disa-cap.md
Title: ASE deployment with DISA CAP
-description: This document provides a comparison of features and guidance on developing applications for Azure Government
-
-cloud: gov
-
+description: This article explains the baseline App Service Environment configuration for customers who use DISA CAP to connect to Azure Government.
- Previously updated : 11/29/2018
+recommendations: false
Last updated : 06/27/2022

# App Service Environment reference for DoD customers connected to the DISA CAP
-This article explains the baseline configuration of an App Service Environment (ASE) with an internal load balancer (ILB) for customers who use the DISA CAP to connect to Azure Government.
+This article explains the baseline configuration of an App Service Environment (ASE) with an internal load balancer (ILB) for customers who use the Defense Information Systems Agency (DISA) Cloud Access Point (CAP) to connect to Azure Government.
## Environment configuration

### Assumptions
-The customer has deployed an ASE with an ILB and has implemented an ExpressRoute connection to the DISA Cloud Access Point (CAP).
+You've deployed an ASE with an ILB and have implemented an ExpressRoute connection to the DISA CAP.
### Route table
-When creating the ASE via the portal, a route table with a default route of 0.0.0.0/0 and next hop "Internet" is created.
-However, since DISA advertises a default route out the ExpressRoute circuit, the User Defined Route (UDR) should either be deleted, or remove the default route to internet.
+When you create the ASE via the Azure Government portal, a route table with a default route of 0.0.0.0/0 and next hop "Internet" is created. However, since DISA advertises a default route out of the ExpressRoute circuit, you should either delete the User Defined Route (UDR) or remove its default route to the Internet.
-You will need to create new routes in the UDR for the management addresses in order to keep the ASE healthy. For Azure Government ranges, see [App Service Environment management addresses](../app-service/environment/management-addresses.md).
+You'll need to create new routes in the UDR for the management addresses to keep the ASE healthy. For Azure Government ranges, see [App Service Environment management addresses](../app-service/environment/management-addresses.md).
-- 23.97.29.209/32 --> Internet
-- 13.72.53.37/32 --> Internet
-- 13.72.180.105/32 --> Internet
-- 52.181.183.11/32 --> Internet
-- 52.227.80.100/32 --> Internet
-- 52.182.93.40/32 --> Internet
-- 52.244.79.34/32 --> Internet
-- 52.238.74.16/32 --> Internet
+- 23.97.29.209/32 -> Internet
+- 13.72.53.37/32 -> Internet
+- 13.72.180.105/32 -> Internet
+- 52.181.183.11/32 -> Internet
+- 52.227.80.100/32 -> Internet
+- 52.182.93.40/32 -> Internet
+- 52.244.79.34/32 -> Internet
+- 52.238.74.16/32 -> Internet
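
As an illustration only, here's a sketch of adding these management routes to an existing UDR with the Az PowerShell module (the resource group and route table names are hypothetical):

```powershell
# Add each ASE management address as a /32 route with next hop Internet.
$mgmtPrefixes = @(
    '23.97.29.209/32','13.72.53.37/32','13.72.180.105/32','52.181.183.11/32',
    '52.227.80.100/32','52.182.93.40/32','52.244.79.34/32','52.238.74.16/32'
)
$routeTable = Get-AzRouteTable -ResourceGroupName 'ase-rg' -Name 'ase-udr'   # hypothetical names
$i = 0
foreach ($prefix in $mgmtPrefixes) {
    $routeTable = Add-AzRouteConfig -RouteTable $routeTable -Name "ase-mgmt-$i" `
        -AddressPrefix $prefix -NextHopType Internet
    $i++
}
Set-AzRouteTable -RouteTable $routeTable | Out-Null
```
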
Make sure the UDR is applied to the subnet your ASE is deployed to.

### Network security group (NSG)
-The ASE will be created with inbound and outbound security rules as shown below. The inbound security rules MUST allow ports 454-455 with an ephemeral source port range (*).
-
-The images below describe the default NSG rules created during the ASE creation. For more information, see [Networking considerations for an App Service Environment](../app-service/environment/network-info.md#network-security-groups)
+The ASE will be created with the following inbound and outbound security rules. The inbound security rules **must** allow ports 454-455 with an ephemeral source port range (*). The following images describe the default NSG rules generated during the ASE creation. For more information, see [Networking considerations for an App Service Environment](../app-service/environment/network-info.md#network-security-groups).
-![Default inbound NSG security rules for an ILB ASE](media/documentation-government-ase-disacap-inbound-route-table.png)
-![Default outbound NSG security rules for an ILB ASE](media/documentation-government-ase-disacap-outbound-route-table.png)
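
For illustration, here's a hedged Az PowerShell sketch of an inbound rule allowing the management ports (the resource names are hypothetical, and using the `AppServiceManagement` service tag as the source is an assumption; scope the source as your environment requires):

```powershell
# Allow inbound ASE management traffic on TCP 454-455 from any source port.
$nsg = Get-AzNetworkSecurityGroup -ResourceGroupName 'ase-rg' -Name 'ase-nsg'   # hypothetical names
Add-AzNetworkSecurityRuleConfig -NetworkSecurityGroup $nsg -Name 'Inbound-management' `
    -Priority 100 -Direction Inbound -Access Allow -Protocol Tcp `
    -SourceAddressPrefix AppServiceManagement -SourcePortRange '*' `
    -DestinationAddressPrefix '*' -DestinationPortRange '454-455'
Set-AzNetworkSecurityGroup -NetworkSecurityGroup $nsg | Out-Null
```
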
-### Service Endpoints
+### Service endpoints
-Depending on the storage you use, you will be required to enable Service Endpoints for SQL and Azure Storage to access them without going back down to the DISA BCAP. You also need to enable EventHub Service Endpoint for ASE logs. [Learn more](../app-service/environment/network-info.md#service-endpoints).
+Depending on the storage you use, you need to enable service endpoints for Azure SQL Database and Azure Storage to access them without going back to the DISA CAP. You also need to enable the Event Hubs service endpoint for ASE logs. For more information, see [Networking considerations for App Service Environment: Service endpoints](../app-service/environment/network-info.md#service-endpoints).
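
Here's a minimal sketch of enabling those service endpoints on the ASE subnet with Az PowerShell (the VNet, subnet, and address prefix are hypothetical):

```powershell
# Enable SQL, Storage, and Event Hubs service endpoints on the ASE subnet.
$vnet = Get-AzVirtualNetwork -ResourceGroupName 'ase-rg' -Name 'ase-vnet'   # hypothetical names
Set-AzVirtualNetworkSubnetConfig -VirtualNetwork $vnet -Name 'ase-subnet' `
    -AddressPrefix '10.0.1.0/24' `
    -ServiceEndpoint 'Microsoft.Sql','Microsoft.Storage','Microsoft.EventHub'
Set-AzVirtualNetwork -VirtualNetwork $vnet | Out-Null
```
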
## FAQs
-Some configuration changes may take some time to take effect. Allow for several hours for changes to routing, NSGs, ASE Health, etc. to propagate and take effect, or optionally you can reboot the ASE.
+**How long will it take for configuration changes to take effect?** </br>
+Some configuration changes may take time to become effective. Allow several hours for changes to routing, NSGs, ASE Health, and so on, to propagate and take effect. Otherwise, you can optionally reboot the ASE.
-## Resource manager template sample
+## Azure Resource Manager template sample
> [!NOTE]
-> In order to deploy non-RFC 1918 IP addresses in the portal you must pre-stage the VNet and Subnet for the ASE. You can use a Resource Manager Template to deploy the ASE with non-RFC1918 IPs as well.
-
+> To deploy non-RFC 1918 IP addresses in the portal, you must pre-stage the VNet and subnet for the ASE. You can use an Azure Resource Manager template to deploy the ASE with non-RFC1918 IPs as well.
+
+</br>
+ <a href="https://portal.azure.us/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2FApp-Service-Environment-AzFirewall%2Fazuredeploy.json" target="_blank"> <img src="https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/1-CONTRIBUTION-GUIDE/images/deploytoazuregov.png" alt="Button to deploy to Azure Gov" /> </a>
-This template deploys an **ILB ASE** into the Azure Government or Azure DoD regions.
+This template deploys an **ILB ASE** into the Azure Government or DoD regions.
## Next steps
-[Azure Government overview](documentation-government-welcome.md)
+
+- [Sign up for Azure Government trial](https://azure.microsoft.com/global-infrastructure/government/request/?ReqType=Trial)
+- [Acquiring and accessing Azure Government](https://azure.microsoft.com/offers/azure-government/)
+- [Ask questions via the azure-gov tag on StackOverflow](https://stackoverflow.com/tags/azure-gov)
+- [Azure Government blog](https://devblogs.microsoft.com/azuregov/)
+- [Azure Government overview](./documentation-government-welcome.md)
+- [Azure Government security](./documentation-government-plan-security.md)
+- [Azure Government compliance](./documentation-government-plan-compliance.md)
+- [Secure Azure computing architecture](./compliance/secure-azure-computing-architecture.md)
+- [Azure Policy overview](../governance/policy/overview.md)
+- [Azure Policy regulatory compliance built-in initiatives](../governance/policy/samples/index.md#regulatory-compliance)
+- [Azure Government services by audit scope](./compliance/azure-services-in-fedramp-auditscope.md#azure-government-services-by-audit-scope)
+- [Azure Government isolation guidelines for Impact Level 5 workloads](./documentation-government-impact-level-5.md)
+- [Azure Government DoD overview](./documentation-government-overview-dod.md)
azure-monitor Asp Net Troubleshoot No Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/asp-net-troubleshoot-no-data.md
- Title: Troubleshooting no data - Application Insights for .NET
-description: Not seeing data in Azure Application Insights? Try here.
-- Previously updated : 05/21/2020
-# Troubleshooting no data - Application Insights for .NET/.NET Core
--
-## Some of my telemetry is missing
-*In Application Insights, I only see a fraction of the events that are being generated by my app.*
-
-* If you're consistently seeing the same fraction, it's probably because of adaptive [sampling](../../azure-monitor/app/sampling.md). To confirm this, open Search (from the **Overview** in the portal on the left) and look at an instance of a Request or other event. To see the full property details, select the ellipsis (**...**) at the bottom of the **Properties** section. If Request Count > 1, sampling is in operation.
-* It's possible that you're hitting a [data rate limit](../service-limits.md#application-insights) for your pricing plan. These limits are applied per minute.
-
-*I'm randomly experiencing data loss.*
-
-* Check whether you're experiencing data loss at [Telemetry Channel](telemetry-channels.md#does-the-application-insights-channel-guarantee-telemetry-delivery-if-not-what-are-the-scenarios-in-which-telemetry-can-be-lost).
-
-* Check for any known issues in Telemetry Channel [GitHub repo](https://github.com/Microsoft/ApplicationInsights-dotnet/issues).
-
-*I'm experiencing data loss in Console App or on Web App when app is about to stop.*
-
-* The SDK channel keeps telemetry in a buffer and sends it in batches. If the application is shutting down, you might need to explicitly call [Flush()](api-custom-events-metrics.md#flushing-data). The behavior of `Flush()` depends on the actual [channel](telemetry-channels.md#built-in-telemetry-channels) used.
-
-* As described in [.NET Core/.NET Framework Console application](worker-service.md#net-corenet-framework-console-application), console apps must explicitly call Flush() followed by a sleep; see the sketch after this list.
-
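A minimal C# sketch of that shutdown pattern (it assumes the Microsoft.ApplicationInsights package; the five-second delay is an arbitrary allowance for an asynchronous channel flush):

```csharp
using System;
using System.Threading.Tasks;
using Microsoft.ApplicationInsights;
using Microsoft.ApplicationInsights.Extensibility;

class Program
{
    static void Main()
    {
        var configuration = TelemetryConfiguration.CreateDefault();
        var client = new TelemetryClient(configuration);

        client.TrackTrace("Console app is shutting down");

        // Push buffered telemetry to the channel before the process exits.
        client.Flush();
        // ServerTelemetryChannel sends asynchronously, so give the flush time to complete.
        Task.Delay(TimeSpan.FromSeconds(5)).Wait();
    }
}
```
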
-## Request count collected by Application Insights SDK doesn't match the IIS log count for my application
-
-Internet Information Services (IIS) logs the count of all requests reaching IIS, which can inherently differ from the total requests reaching an application. Because of this, it isn't guaranteed that the request count collected by the SDKs will match the total IIS log count.
-
-## No data from my server
*I installed my app on my web server, and now I don't see any telemetry from it. It worked OK on my dev machine.*
-* A firewall issue is most likely the cause. [Set firewall exceptions for Application Insights to send data](../../azure-monitor/app/ip-addresses.md).
-
-*I [installed Azure Monitor Application Insights Agent](./status-monitor-v2-overview.md) on my web server to monitor existing apps. I don't see any results.*
-
-* See [Troubleshooting Status Monitor](./status-monitor-v2-troubleshoot.md).
-
-## <a id="SSL"></a>Check TLS/SSL client settings (ASP.NET)
-
-If you have an ASP.NET application hosted in Azure App Service or in IIS on a virtual machine, your application could fail to connect to the Snapshot Debugger service due to a missing SSL security protocol.
-
-[The Snapshot Debugger endpoint requires TLS version 1.2](snapshot-debugger-upgrade.md). The set of SSL security protocols is one of the quirks enabled by the httpRuntime targetFramework value in the system.web section of web.config.
-If the httpRuntime targetFramework is 4.5.2 or lower, then TLS 1.2 isn't included by default.
-
-> [!NOTE]
-> The httpRuntime targetFramework value is independent of the target framework used when building your application.
-
-To check the setting, open your web.config file and find the system.web section. Ensure that the `targetFramework` for `httpRuntime` is set to 4.6 or above.
-
- ```xml
- <system.web>
- ...
- <httpRuntime targetFramework="4.7.2" />
- ...
- </system.web>
- ```
-
-> [!NOTE]
-> Modifying the httpRuntime targetFramework value changes the runtime quirks applied to your application and can cause other, subtle behavior changes. Be sure to test your application thoroughly after making this change. For a full list of compatibility changes, see [Retargeting changes](/dotnet/framework/migration-guide/application-compatibility#retargeting-changes).
-
-> [!NOTE]
-> If the targetFramework is 4.7 or above then Windows determines the available protocols. In Azure App Service, TLS 1.2 is available. However, if you are using your own virtual machine, you may need to enable TLS 1.2 in the OS.
--
-## FileNotFoundException: "Could not load file or assembly Microsoft.AspNet TelemetryCorrelation"
-
-For more information on this error, see [GitHub issue 1610](https://github.com/microsoft/ApplicationInsights-dotnet/issues/1610).
-
-When upgrading from an SDK version older than 2.4, you need to make sure the following changes are applied to `web.config` and `ApplicationInsights.config`:
-
-1. Two http modules instead of one. In `web.config`, you should have two http modules. Order is important for some scenarios:
-
- ``` xml
- <system.webServer>
- <modules>
- <add name="TelemetryCorrelationHttpModule" type="Microsoft.AspNet.TelemetryCorrelation.TelemetryCorrelationHttpModule, Microsoft.AspNet.TelemetryCorrelation" preCondition="integratedMode,managedHandler" />
- <add name="ApplicationInsightsHttpModule" type="Microsoft.ApplicationInsights.Web.ApplicationInsightsHttpModule, Microsoft.AI.Web" preCondition="managedHandler" />
- </modules>
- </system.webServer>
- ```
-
-2. In `ApplicationInsights.config`, in addition to `RequestTrackingTelemetryModule`, you should have the following telemetry module:
-
- ``` xml
- <TelemetryModules>
- <Add Type="Microsoft.ApplicationInsights.Web.AspNetDiagnosticTelemetryModule, Microsoft.AI.Web"/>
- </TelemetryModules>
- ```
-
-***Failure to upgrade properly may lead to unexpected exceptions or telemetry not being collected.***
--
-## <a name="q01"></a>No 'Add Application Insights' option in Visual Studio
-*When I right-click an existing project in Solution Explorer, I don't see any Application Insights options.*
-
-* Not all types of .NET project are supported by the tools. Web and WCF projects are supported. For other project types such as desktop or service applications, you can still [add an Application Insights SDK to your project manually](./windows-desktop.md).
-* Make sure you have [Visual Studio 2013 Update 3 or later](/visualstudio/releasenotes/vs2013-update3-rtm-vs). It comes pre-installed with Developer Analytics tools, which provide the Application Insights SDK.
-* Select **Tools**, **Extensions and Updates** and check that **Developer Analytics Tools** is installed and enabled. If so, select **Updates** to see if there's an update available.
-* Open the New Project dialog and choose ASP.NET Web application. If you see the Application Insights option there, then the tools are installed. If not, try uninstalling and then reinstalling the Developer Analytics Tools.
-
-## <a name="q02"></a>Adding Application Insights failed
-*When I try to add Application Insights to an existing project, I see an error message.*
-
-Likely causes:
-
-* Communication with the Application Insights portal failed; or
-* There's a problem with your Azure account;
-* You only have [read access to the subscription or group where you were trying to create the new resource](./resources-roles-access-control.md).
-
-Fix:
-
-* Check that you provided sign-in credentials for the right Azure account.
-* In your browser, check that you have access to the [Azure portal](https://portal.azure.com). Open Settings and see if there's any restriction.
-* [Add Application Insights to your existing project](./asp-net.md): In Solution Explorer, right-click your project and choose "Add Application Insights."
-
-## <a name="NuGetBuild"></a> "NuGet package(s) are missing" on my build server
-*Everything builds OK when I'm debugging on my development machine, but I get a NuGet error on the build server.*
-
-See [NuGet Package Restore](https://docs.nuget.org/Consume/Package-Restore)
-and [Automatic Package Restore](https://docs.nuget.org/Consume/package-restore/migrating-to-automatic-package-restore).
-
-## Missing menu command to open Application Insights from Visual Studio
-*When I right-click my project Solution Explorer, I don't see any Application Insights commands, or I don't see an Open Application Insights command.*
-
-Likely causes:
-
-* You created the Application Insights resource manually.
-* The project is of a type that isn't supported by the Application Insights tools.
-* The Developer Analytics tools are disabled in your Visual Studio.
-* Your Visual Studio is older than 2013 Update 3.
-
-Fix:
-
-* Make sure your Visual Studio version is 2013 update 3 or later.
-* Select **Tools**, **Extensions and Updates** and check that **Developer Analytics tools** is installed and enabled. If so, select **Updates** to see if there's an update available.
-* Right-click your project in Solution Explorer. If you see the command **Application Insights > Configure Application Insights**, use it to connect your project to the resource in the Application Insights service.
-
-Otherwise, your project type isn't directly supported by the Developer Analytics tools. To see your telemetry, sign in to the [Azure portal](https://portal.azure.com), choose Application Insights on the left navigation bar, and select your application.
-
-## 'Access denied' on opening Application Insights from Visual Studio
-*The 'Open Application Insights' menu command takes me to the Azure portal, but I get an 'access denied' error.*
-
-The Microsoft sign-in that you last used on your default browser doesn't have access to [the resource that was created when Application Insights was added to this app](./asp-net.md). There are two likely reasons:
-
-* More than one Microsoft account - maybe a work and a personal Microsoft account? The sign-in that you last used on your default browser was for a different account than the one that has access to [add Application Insights to the project](./asp-net.md).
- * Fix: Select your name at top right of the browser window, and sign out. Then sign in with the account that has access. Then on the left navigation bar, select Application Insights and select your app.
-* Someone else added Application Insights to the project, and they forgot to give you [access to the resource group](./resources-roles-access-control.md) in which it was created.
- * Fix: If they used an organizational account, they can add you to the team; or they can grant you individual access to the resource group.
-
-## 'Asset not found' on opening Application Insights from Visual Studio
-*The 'Open Application Insights' menu command takes me to the Azure portal, but I get an 'asset not found' error.*
-
-Likely causes:
-
-* The Application Insights resource for your application has been deleted; or
-* The [connection string](./sdk-connection-string.md) was set or changed in ApplicationInsights.config by editing it directly, without updating the project file.
-
-The [connection string](./sdk-connection-string.md) in ApplicationInsights.config controls where the telemetry is sent. A line in the project file controls which resource is opened when you use the command in Visual Studio.
-
-Fix:
-
-* In Solution Explorer, right-click the project and choose Application Insights, Configure Application Insights. In the dialog, you can either choose to send telemetry to an existing resource, or create a new one. Or:
-* Open the resource directly. Sign in to [the Azure portal](https://portal.azure.com), select Application Insights on the left navigation bar, and then select your app.
-
-## Where do I find my telemetry?
-*I signed in to the [Microsoft Azure portal](https://portal.azure.com), and I'm looking at the Azure home dashboard. So where do I find my Application Insights data?*
-
-* On the left navigation bar, select Application Insights, then your app name. If you don't have any projects there, you need to [add or configure Application Insights in your web project](./asp-net.md).
- There you'll see some summary charts. You can select through them to see more detail.
-* In Visual Studio, while you're debugging your app, select the Application Insights button.
-
-## <a name="q03"></a> No server data (or no data at all)
-*I ran my app and then opened the Application Insights service in Microsoft Azure, but all the charts show 'Learn how to collect...' or 'Not configured.'* Or, *only Page View and user data, but no server data.*
-
-* Run your application in debug mode in Visual Studio (F5). Use the application to generate some telemetry. Check that you can see events logged in the Visual Studio output window.
- ![Screenshot that shows running your application in debug mode in Visual Studio.](./media/asp-net-troubleshoot-no-data/output-window.png)
-* In the Application Insights portal, open [Diagnostic Search](./diagnostic-search.md). Data usually appears here first.
-* Select the Refresh button. The blade refreshes itself periodically, but you can also do it manually. The refresh interval is longer for larger time ranges.
-* Verify the [connection strings](./sdk-connection-string.md) match. On the main blade for your app in the Application Insights portal, in the **Essentials** drop-down, look at **Connection string**. Then, in your project in Visual Studio, open ApplicationInsights.config and find the `<ConnectionString>`. Check that the two strings are equal. If not:
- * In the portal, select Application Insights and look for the app resource with the right string; or
- * In Visual Studio Solution Explorer, right-click the project and choose Application Insights, Configure. Reset the app to send telemetry to the right resource.
- * If you can't find the matching strings, check that you're using the same sign-in credentials in Visual Studio as in the portal.
-* In the [Microsoft Azure home dashboard](https://portal.azure.com), look at the Service Health map. If there are some alert indications, wait until they've returned to OK and then close and reopen your Application Insights application blade.
-* Check also [our status blog](https://techcommunity.microsoft.com/t5/azure-monitor-status/bg-p/AzureMonitorStatusBlog).
-* Did you write any code for the [server-side SDK](./api-custom-events-metrics.md) that might change the [connection string](./sdk-connection-string.md) in `TelemetryClient` instances or in `TelemetryContext`? Or did you write a [filter or sampling configuration](./api-filtering-sampling.md) that might be filtering out too much?
-* If you edited ApplicationInsights.config, carefully check the configuration of [TelemetryInitializers and TelemetryProcessors](./api-filtering-sampling.md). An incorrectly named type or parameter can cause the SDK to send no data.
-
-## <a name="q04"></a>No data on Page Views, Browsers, Usage
-*I see data in Server Response Time and Server Requests charts, but no data in Page View Load time, or in the Browser or Usage blades.*
-
-The data comes from scripts in the web pages.
-
-* If you added Application Insights to an existing web project, [you have to add the scripts by hand](./javascript.md).
-* Make sure Internet Explorer isn't displaying your site in Compatibility mode.
-* Use the browser's debug feature (F12 on some browsers, then choose Network) to verify that data is being sent to `dc.services.visualstudio.com`.
-
-## No dependency or exception data
-See [dependency telemetry](./asp-net-dependencies.md) and [exception telemetry](asp-net-exceptions.md).
-
-## No performance data
-Performance data (CPU, IO rate, and so on) is available for [Java web services](java-2x-collectd.md), [Windows desktop apps](./windows-desktop.md), [IIS web apps and services if you install Application Insights Agent](./status-monitor-v2-overview.md), and [Azure Cloud Services](./app-insights-overview.md). You'll find it under Settings, Servers.
-
-## No (server) data since I published the app to my server
-* Check that you copied all the Microsoft.ApplicationInsights DLLs to the server, together with Microsoft.Diagnostics.Instrumentation.Extensions.Intercept.dll.
-* In your firewall, you might have to [open some TCP ports](./ip-addresses.md).
-* If you have to use a proxy to send out of your corporate network, set [defaultProxy](/previous-versions/dotnet/netframework-1.1/aa903360(v=vs.71)) in Web.config
-* Windows Server 2008: Make sure you've installed the following updates: [KB2468871](https://support.microsoft.com/kb/2468871), [KB2533523](https://support.microsoft.com/kb/2533523), [KB2600217](https://www.microsoft.com/download/details.aspx?id=28936).
-
-## I used to see data, but it has stopped
-* Have you hit your monthly quota of data points? Open the Settings/Quota and Pricing to find out. If so, you can upgrade your plan, or pay for more capacity. See the [pricing scheme](https://azure.microsoft.com/pricing/details/application-insights/).
-
-## I don't see all the data I'm expecting
-If your application sends considerable data and you're using the Application Insights SDK for ASP.NET version 2.0.0-beta3 or later, the [adaptive sampling](./sampling.md) feature may operate and send only a percentage of your telemetry.
-
-You can disable it, but doing so isn't recommended. Sampling is designed so that related telemetry is correctly transmitted, for diagnostic purposes.
-
-## Client IP address is 0.0.0.0
-
-On February 5, 2018, we announced that we removed logging of the client IP address. This change doesn't affect geolocation.
-
-> [!NOTE]
-> If you need the first 3 octets of the IP address, you can use a [telemetry initializer](./api-filtering-sampling.md#addmodify-properties-itelemetryinitializer) to add a custom attribute.
-> This does not affect data collected prior to February 5, 2018.
-
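For illustration, here's a hedged C# sketch of such an initializer (the class name and the "client-ip-prefix" property are hypothetical examples):

```csharp
using Microsoft.ApplicationInsights.Channel;
using Microsoft.ApplicationInsights.Extensibility;

// Copies the first three octets of the client IP into a custom property.
// It only sees an IP that an earlier initializer (such as the Web SDK's
// built-in client IP initializer) has already populated on the telemetry item.
public class ClientIpPrefixInitializer : ITelemetryInitializer
{
    public void Initialize(ITelemetry telemetry)
    {
        var ip = telemetry.Context.Location.Ip;
        if (string.IsNullOrEmpty(ip)) return;

        var octets = ip.Split('.');
        if (octets.Length == 4)
        {
            telemetry.Context.GlobalProperties["client-ip-prefix"] =
                $"{octets[0]}.{octets[1]}.{octets[2]}.0";
        }
    }
}
```

You'd register it like any other initializer, for example in `ApplicationInsights.config` under `<TelemetryInitializers>`.
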
-## Wrong geographical data in user telemetry
-The city, region, and country dimensions are derived from IP addresses and aren't always accurate. These IP addresses are processed for location first and then changed to 0.0.0.0 to be stored.
-
-## Exception "method not found" on running in Azure Cloud Services
-Did you build for .NET [LTS](https://dotnet.microsoft.com/platform/support/policy/dotnet-core)? Earlier versions aren't automatically supported in Azure Cloud Services roles. [Install LTS on each role](../../cloud-services/cloud-services-dotnet-install-dotnet.md) before running your app.
-
-## Troubleshooting Logs
-
-Follow these instructions to capture troubleshooting logs for your framework.
-
-### .NET Framework
-
-> [!NOTE]
-> Starting in version 2.14, the [Microsoft.AspNet.ApplicationInsights.HostingStartup](https://www.nuget.org/packages/Microsoft.AspNet.ApplicationInsights.HostingStartup) package is no longer necessary; SDK logs are now collected with the [Microsoft.ApplicationInsights](https://www.nuget.org/packages/Microsoft.ApplicationInsights/) package. No additional package is required.
-
-1. Modify your applicationinsights.config file to include the following XML:
-
- ```xml
- <TelemetryModules>
- <Add Type="Microsoft.ApplicationInsights.Extensibility.Implementation.Tracing.FileDiagnosticsTelemetryModule, Microsoft.ApplicationInsights">
- <Severity>Verbose</Severity>
- <LogFileName>mylog.txt</LogFileName>
- <LogFilePath>C:\\SDKLOGS</LogFilePath>
- </Add>
- </TelemetryModules>
- ```
- Your application must have Write permissions to the configured location.
-
-2. Restart the process so that the SDK picks up these new settings.
-
-3. Revert these changes when you're finished.
-
-### .NET Core
-
-1. Install the [Application Insights SDK NuGet package for ASP.NET Core](https://nuget.org/packages/Microsoft.ApplicationInsights.AspNetCore) from NuGet. The version you install must match the currently installed version of `Microsoft.ApplicationInsights`.
-
- For example, Microsoft.ApplicationInsights.AspNetCore 2.14.0 references Microsoft.ApplicationInsights version 2.14.0; if that's your installed version of Microsoft.ApplicationInsights, install Microsoft.ApplicationInsights.AspNetCore 2.14.0.
-
-2. Modify the `ConfigureServices` method in your `Startup.cs` class:
-
- ```csharp
- services.AddSingleton<ITelemetryModule, FileDiagnosticsTelemetryModule>();
- services.ConfigureTelemetryModule<FileDiagnosticsTelemetryModule>( (module, options) => {
- module.LogFilePath = "C:\\SDKLOGS";
- module.LogFileName = "mylog.txt";
- module.Severity = "Verbose";
- } );
- ```
- Your application must have Write permissions to the configured location.
-
-3. Restart the process so that the SDK picks up these new settings.
-
-4. Revert these changes when you're finished.
--
-## <a name="PerfView"></a> Collect logs with PerfView
-[PerfView](https://github.com/Microsoft/perfview) is a free tool that helps isolate CPU, memory, and other issues.
-
-The Application Insights SDK logs EventSource self-troubleshooting events that PerfView can capture.
-
-To collect logs, download PerfView and run this command:
-```cmd
-PerfView.exe collect -MaxCollectSec:300 -NoGui /onlyProviders=*Microsoft-ApplicationInsights-Core,*Microsoft-ApplicationInsights-Data,*Microsoft-ApplicationInsights-WindowsServer-TelemetryChannel,*Microsoft-ApplicationInsights-Extensibility-AppMapCorrelation-Dependency,*Microsoft-ApplicationInsights-Extensibility-AppMapCorrelation-Web,*Microsoft-ApplicationInsights-Extensibility-DependencyCollector,*Microsoft-ApplicationInsights-Extensibility-HostingStartup,*Microsoft-ApplicationInsights-Extensibility-PerformanceCollector,*Microsoft-ApplicationInsights-Extensibility-EventCounterCollector,*Microsoft-ApplicationInsights-Extensibility-PerformanceCollector-QuickPulse,*Microsoft-ApplicationInsights-Extensibility-Web,*Microsoft-ApplicationInsights-Extensibility-WindowsServer,*Microsoft-ApplicationInsights-WindowsServer-Core,*Microsoft-ApplicationInsights-LoggerProvider,*Microsoft-ApplicationInsights-Extensibility-EventSourceListener,*Microsoft-ApplicationInsights-AspNetCore,*Redfield-Microsoft-ApplicationInsights-Core,*Redfield-Microsoft-ApplicationInsights-Data,*Redfield-Microsoft-ApplicationInsights-WindowsServer-TelemetryChannel,*Redfield-Microsoft-ApplicationInsights-Extensibility-AppMapCorrelation-Dependency,*Redfield-Microsoft-ApplicationInsights-Extensibility-AppMapCorrelation-Web,*Redfield-Microsoft-ApplicationInsights-Extensibility-DependencyCollector,*Redfield-Microsoft-ApplicationInsights-Extensibility-PerformanceCollector,*Redfield-Microsoft-ApplicationInsights-Extensibility-EventCounterCollector,*Redfield-Microsoft-ApplicationInsights-Extensibility-PerformanceCollector-QuickPulse,*Redfield-Microsoft-ApplicationInsights-Extensibility-Web,*Redfield-Microsoft-ApplicationInsights-Extensibility-WindowsServer,*Redfield-Microsoft-ApplicationInsights-LoggerProvider,*Redfield-Microsoft-ApplicationInsights-Extensibility-EventSourceListener,*Redfield-Microsoft-ApplicationInsights-AspNetCore
-```
-
-You can modify these parameters as needed:
-- **MaxCollectSec**. Set this parameter to prevent PerfView from running indefinitely and affecting the performance of your server.
-- **OnlyProviders**. Set this parameter to only collect logs from the SDK. You can customize this list based on your specific investigations.
-- **NoGui**. Set this parameter to collect logs without the GUI.
-For more information, see:
-- [Recording performance traces with PerfView](https://github.com/dotnet/roslyn/wiki/Recording-performance-traces-with-PerfView)
-- [Application Insights Event Sources](https://github.com/microsoft/ApplicationInsights-dotnet/tree/develop/troubleshooting/ETW)
-## Collect logs with dotnet-trace
-
-Alternatively, you can use the cross-platform .NET Core tool [`dotnet-trace`](/dotnet/core/diagnostics/dotnet-trace) to collect logs that can further help in troubleshooting. This tool may be helpful for Linux-based environments.
-
-After installation of [`dotnet-trace`](/dotnet/core/diagnostics/dotnet-trace), execute the command below in bash.
-
-```bash
-dotnet-trace collect --process-id <PID> --providers Microsoft-ApplicationInsights-Core,Microsoft-ApplicationInsights-Data,Microsoft-ApplicationInsights-WindowsServer-TelemetryChannel,Microsoft-ApplicationInsights-Extensibility-AppMapCorrelation-Dependency,Microsoft-ApplicationInsights-Extensibility-AppMapCorrelation-Web,Microsoft-ApplicationInsights-Extensibility-DependencyCollector,Microsoft-ApplicationInsights-Extensibility-HostingStartup,Microsoft-ApplicationInsights-Extensibility-PerformanceCollector,Microsoft-ApplicationInsights-Extensibility-EventCounterCollector,Microsoft-ApplicationInsights-Extensibility-PerformanceCollector-QuickPulse,Microsoft-ApplicationInsights-Extensibility-Web,Microsoft-ApplicationInsights-Extensibility-WindowsServer,Microsoft-ApplicationInsights-WindowsServer-Core,Microsoft-ApplicationInsights-LoggerProvider,Microsoft-ApplicationInsights-Extensibility-EventSourceListener,Microsoft-ApplicationInsights-AspNetCore,Redfield-Microsoft-ApplicationInsights-Core,Redfield-Microsoft-ApplicationInsights-Data,Redfield-Microsoft-ApplicationInsights-WindowsServer-TelemetryChannel,Redfield-Microsoft-ApplicationInsights-Extensibility-AppMapCorrelation-Dependency,Redfield-Microsoft-ApplicationInsights-Extensibility-AppMapCorrelation-Web,Redfield-Microsoft-ApplicationInsights-Extensibility-DependencyCollector,Redfield-Microsoft-ApplicationInsights-Extensibility-PerformanceCollector,Redfield-Microsoft-ApplicationInsights-Extensibility-EventCounterCollector,Redfield-Microsoft-ApplicationInsights-Extensibility-PerformanceCollector-QuickPulse,Redfield-Microsoft-ApplicationInsights-Extensibility-Web,Redfield-Microsoft-ApplicationInsights-Extensibility-WindowsServer,Redfield-Microsoft-ApplicationInsights-LoggerProvider,Redfield-Microsoft-ApplicationInsights-Extensibility-EventSourceListener,Redfield-Microsoft-ApplicationInsights-AspNetCore
-```
-
-## How to remove Application Insights
-
-Learn how to remove Application Insights in Visual Studio by following the steps provided in the [remove Application Insights article](./remove-application-insights.md).
-
-## Still not working...
-* [Microsoft Q&A question page for Application Insights](/answers/topics/azure-monitor.html)
azure-monitor Asp Net https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/asp-net.md
For the template-based ASP.NET MVC app from this article, the file that you need
## Troubleshooting
+See the dedicated [troubleshooting article](https://docs.microsoft.com/troubleshoot/azure/azure-monitor/app-insights/asp-net-troubleshoot-no-data).
+ There's a known issue in the current version of Visual Studio 2019: storing the instrumentation key or connection string in a user secret is broken for .NET Framework-based apps. The key ultimately has to be hardcoded into the *applicationinsights.config* file to work around this bug. This article is designed to avoid this issue entirely, by not using user secrets.

## Open-source SDK
For the latest updates and bug fixes, [consult the release notes](./release-note
## Next steps

* Add synthetic transactions to test that your website is available from all over the world with [availability monitoring](monitor-web-app-availability.md).
-* [Configure sampling](sampling.md) to help reduce telemetry traffic and data storage costs.
--
+* [Configure sampling](sampling.md) to help reduce telemetry traffic and data storage costs.
azure-monitor Availability Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/availability-overview.md
You can create up to 100 availability tests per Application Insights resource.
## Troubleshooting
-See the dedicated [troubleshooting article](troubleshoot-availability.md).
+See the dedicated [troubleshooting article](https://docs.microsoft.com/troubleshoot/azure/azure-monitor/app-insights/troubleshoot-availability).
## Next steps
azure-monitor Data Retention Privacy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/data-retention-privacy.md
Yes, certain Telemetry Channels will persist data locally if an endpoint cannot
Telemetry channels that utilize local storage create temp files in the TEMP or APPDATA directories, which are restricted to the specific account running your application. This may happen when an endpoint was temporarily unavailable or you hit the throttling limit. Once this issue is resolved, the telemetry channel will resume sending all the new and persisted data.
-This persisted data is not encrypted locally. If this is a concern, review the data and restrict the collection of private data. (For more information, see [How to export and delete private data](../logs/personal-data-mgmt.md#how-to-export-and-delete-private-data).)
+This persisted data is not encrypted locally. If this is a concern, review the data and restrict the collection of private data. (For more information, see [How to export and delete private data](../logs/personal-data-mgmt.md#exporting-and-deleting-personal-data).)
If a customer needs to configure this directory with specific security requirements, it can be configured per framework. Please make sure that the process running your application has write access to this directory, but also make sure this directory is protected to avoid telemetry being read by unintended users.
AzureLogHandler(
## How do I send data to Application Insights using TLS 1.2?
-To insure the security of data in transit to the Application Insights endpoints, we strongly encourage customers to configure their application to use at least Transport Layer Security (TLS) 1.2. Older versions of TLS/Secure Sockets Layer (SSL) have been found to be vulnerable and while they still currently work to allow backwards compatibility, they are **not recommended**, and the industry is quickly moving to abandon support for these older protocols.
+To ensure the security of data in transit to the Application Insights endpoints, we strongly encourage customers to configure their application to use at least Transport Layer Security (TLS) 1.2. Older versions of TLS/Secure Sockets Layer (SSL) have been found to be vulnerable and while they still currently work to allow backwards compatibility, they are **not recommended**, and the industry is quickly moving to abandon support for these older protocols.
The [PCI Security Standards Council](https://www.pcisecuritystandards.org/) has set a [deadline of June 30, 2018](https://www.pcisecuritystandards.org/pdfs/PCI_SSC_Migrating_from_SSL_and_Early_TLS_Resource_Guide.pdf) to disable older versions of TLS/SSL and upgrade to more secure protocols. Once Azure drops legacy support, if your application/clients cannot communicate over at least TLS 1.2 you would not be able to send data to Application Insights. The approach you take to test and validate your application's TLS support will vary depending on the operating system/platform as well as the language/framework your application uses.
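
For older .NET Framework apps, one common opt-in looks like the following sketch (newer frameworks negotiate TLS 1.2 by default, so treat this as an example rather than a required step):

```csharp
using System.Net;

static class TlsSetup
{
    // Call once at startup, before any telemetry is sent.
    public static void EnableTls12()
    {
        // |= preserves any protocols the runtime already enables.
        ServicePointManager.SecurityProtocol |= SecurityProtocolType.Tls12;
    }
}
```
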
azure-monitor Java 2X Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/java-2x-get-started.md
Application Insights can test your website at regular intervals to check that it
[Learn more about how to set up availability web tests.][availability]
-## Questions? Problems?
-[Troubleshooting Java](java-2x-troubleshoot.md)
+## Troubleshooting
+
+See the dedicated [troubleshooting article](https://docs.microsoft.com/troubleshoot/azure/azure-monitor/app-insights/java-2x-troubleshoot).
## Next steps * [Monitor dependency calls](java-2x-agent.md)
azure-monitor Java 2X Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/java-2x-troubleshoot.md
- Title: Troubleshoot Application Insights in a Java web project
-description: Troubleshooting guide - monitoring live Java apps with Application Insights.
- Previously updated : 03/14/2019
-# Troubleshooting and Q and A for Application Insights for Java SDK
-
-> [!CAUTION]
-> This document applies to Application Insights Java 2.x which is no longer recommended.
->
-> Documentation for the latest version can be found at [Application Insights Java 3.x](./java-in-process-agent.md).
-
-Questions or problems with [Azure Application Insights in Java][java]? Here are some tips.
-
-## Build errors
-**In Eclipse or Intellij Idea, when adding the Application Insights SDK via Maven or Gradle, I get build or checksum validation errors.**
-
-* If the dependency `<version>` element is using a pattern with wildcard characters (for example, (Maven) `<version>[2.0,)</version>` or (Gradle) `version:'2.+'`), try specifying a specific version instead, like `2.6.4`; see the sketch after this item.
-
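For example, a pinned Maven dependency might look like the following sketch (the `applicationinsights-web` artifact is one of the 2.x SDK artifacts; substitute whichever artifact and version you've validated):

```xml
<dependency>
    <groupId>com.microsoft.azure</groupId>
    <artifactId>applicationinsights-web</artifactId>
    <version>2.6.4</version>
</dependency>
```
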
-## No data
-**I added Application Insights successfully and ran my app, but I've never seen data in the portal.**
-
-* Wait a minute and click Refresh. The charts refresh themselves periodically, but you can also refresh manually. The refresh interval depends on the time range of the chart.
-* Check that you have an instrumentation key defined in the ApplicationInsights.xml file (in the resources folder in your project) or configured as an environment variable.
-* Verify that there is no `<DisableTelemetry>true</DisableTelemetry>` node in the xml file.
-* In your firewall, you might have to open TCP ports 80 and 443 for outgoing traffic to dc.services.visualstudio.com. See the [full list of firewall exceptions](./ip-addresses.md)
-* In the Microsoft Azure start board, look at the service status map. If there are some alert indications, wait until they have returned to OK and then close and re-open your Application Insights application blade.
-* [Turn on logging](#debug-data-from-the-sdk) by adding an `<SDKLogger />` element under the root node in the ApplicationInsights.xml file (in the resources folder in your project), and check for entries prefaced with AI: INFO/WARN/ERROR for any suspicious logs.
-* Make sure that the correct ApplicationInsights.xml file has been successfully loaded by the Java SDK, by looking at the console's output messages for a "Configuration file has been successfully found" statement.
-* If the config file is not found, check the output messages to see where the config file is being searched for, and make sure that the ApplicationInsights.xml is located in one of those search locations. As a rule of thumb, you can place the config file near the Application Insights SDK JARs. For example: in Tomcat, this would mean the WEB-INF/classes folder. During development you can place ApplicationInsights.xml in resources folder of your web project.
-* Please also look at [GitHub issues page](https://github.com/microsoft/ApplicationInsights-Java/issues) for known issues with the SDK.
-* Please ensure that you use the same version of the Application Insights core, web, agent, and logging appenders to avoid version conflict issues.
--
-#### I used to see data, but it has stopped
-* Have you hit your monthly quota of data points? Open Settings/Quota and Pricing to find out. If so, you can upgrade your plan, or pay for additional capacity. See the [pricing scheme](https://azure.microsoft.com/pricing/details/application-insights/).
-* Have you recently upgraded your SDK? Please ensure that only unique SDK jars are present inside the project directory; there should not be two different versions of the SDK present.
-* Are you looking at the correct AI resource? Please match the iKey of your application to the resource where you are expecting telemetry. They should be the same.
-
-#### I don't see all the data I'm expecting
-* Open the Usage and estimated cost page and check whether [sampling](./sampling.md) is in operation. (100% transmission means that sampling isn't in operation.) The Application Insights service can be set to accept only a fraction of the telemetry that arrives from your app. This helps you keep within your monthly quota of telemetry.
-* Do you have SDK Sampling turned on? If yes, data would be sampled at the rate specified for all the applicable types.
-* Are you running an older version of the Java SDK? Starting with version 2.0.1, we introduced a fault tolerance mechanism to handle intermittent network and backend failures, as well as data persistence on local drives.
-* Are you getting throttled due to excessive telemetry? If you turn on INFO logging, you will see a log message "App is throttled". Our current limit is 32k telemetry items/second.
-
-### Java Agent cannot capture dependency data
-* Have you configured the Java agent by following [Configure Java Agent](java-2x-agent.md)?
-* Make sure both the Java agent jar and the AI-Agent.xml file are placed in the same folder.
-* Make sure that the dependency you are trying to auto-collect is supported for auto collection. Currently we only support MySQL, MsSQL, Oracle DB and Azure Cache for Redis dependency collection.
-
-## No usage data
-**I see data about requests and response times, but no page view, browser, or user data.**
-
-You successfully set up your app to send telemetry from the server. Now your next step is to [set up your web pages to send telemetry from the web browser][usage].
-
-Alternatively, if your client is an app in a [phone or other device][platforms], you can send telemetry from there.
-
-Use the same instrumentation key to set up both your client and server telemetry. The data will appear in the same Application Insights resource, and you'll be able to correlate events from client and server.
-
-## Disabling telemetry
-**How can I disable telemetry collection?**
-
-In code:
-
-```Java
-
- TelemetryConfiguration config = TelemetryConfiguration.getActive();
- config.setTrackingIsDisabled(true);
-```
-
-**Or**
-
-Update ApplicationInsights.xml (in the resources folder in your project). Add the following under the root node:
-
-```xml
-
- <DisableTelemetry>true</DisableTelemetry>
-```
-
-Using the XML method, you have to restart the application when you change the value.
-
-## Changing the target
-**How can I change which Azure resource my project sends data to?**
-
-* [Get the instrumentation key of the new resource.][java]
-* If you added Application Insights to your project using the Azure Toolkit for Eclipse, right click your web project, select **Azure**, **Configure Application Insights**, and change the key.
-* If you configured the instrumentation key as an environment variable, update the value of the environment variable with the new iKey.
-* Otherwise, update the key in ApplicationInsights.xml in the resources folder in your project.
-
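As a further option, the 2.x SDK also lets you point telemetry at a resource from code; here's a hedged sketch (the key shown is a placeholder):

```java
import com.microsoft.applicationinsights.TelemetryClient;
import com.microsoft.applicationinsights.TelemetryConfiguration;

public class ChangeTarget {
    public static void main(String[] args) {
        // Point the active configuration at the new resource's iKey.
        TelemetryConfiguration configuration = TelemetryConfiguration.getActive();
        configuration.setInstrumentationKey("00000000-0000-0000-0000-000000000000");

        TelemetryClient client = new TelemetryClient(configuration);
        client.trackEvent("TargetResourceChanged");
    }
}
```
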
-## Debug data from the SDK
-
-**How can I find out what the SDK is doing?**
-
-To get more information about what's happening in the API, add `<SDKLogger/>` under the root node of the ApplicationInsights.xml configuration file.
-
-### ApplicationInsights.xml
-
-You can also instruct the logger to output to a file:
-
-```xml
- <SDKLogger type="FILE"><!-- or "CONSOLE" to print to stderr -->
- <Level>TRACE</Level>
- <UniquePrefix>AI</UniquePrefix>
- <BaseFolderPath>C:/agent/AISDK</BaseFolderPath>
-</SDKLogger>
-```
-
-### Spring Boot Starter
-
-To enable SDK logging with Spring Boot Apps using the Application Insights Spring Boot Starter, add the following to the `application.properties` file:
-
-```yaml
-azure.application-insights.logger.type=file
-azure.application-insights.logger.base-folder-path=C:/agent/AISDK
-azure.application-insights.logger.level=trace
-```
-
-or to print to standard error:
-
-```yaml
-azure.application-insights.logger.type=console
-azure.application-insights.logger.level=trace
-```
-
-### Java Agent
-
-To enable JVM Agent Logging update the [AI-Agent.xml file](java-2x-agent.md):
-
-```xml
-<AgentLogger type="FILE"><!-- or "CONSOLE" to print to stderr -->
- <Level>TRACE</Level>
- <UniquePrefix>AI</UniquePrefix>
- <BaseFolderPath>C:/agent/AIAGENT</BaseFolderPath>
-</AgentLogger>
-```
-
-### Java Command Line Properties
-_Since version 2.4.0_
-
-To enable logging using command line options, without changing configuration files:
-
-```
-java -Dapplicationinsights.logger.file.level=trace -Dapplicationinsights.logger.file.uniquePrefix=AI -Dapplicationinsights.logger.baseFolderPath="C:/my/log/dir" -jar MyApp.jar
-```
-
-or to print to standard error:
-
-```
-java -Dapplicationinsights.logger.console.level=trace -jar MyApp.jar
-```
-
-## The Azure start screen
-**I'm looking at [the Azure portal](https://portal.azure.com). Does the map tell me something about my app?**
-
-No, it shows the health of Azure servers around the world.
-
-*From the Azure start board (home screen), how do I find data about my app?*
-
-Assuming you [set up your app for Application Insights][java], click Browse, select Application Insights, and select the app resource you created for your app. To get there faster in future, you can pin your app to the start board.
-
-## Intranet servers
-**Can I monitor a server on my intranet?**
-
-Yes, provided your server can send telemetry to the Application Insights portal through the public internet.
-
-You may need to [open some outgoing ports in your server's firewall](./ip-addresses.md#outgoing-ports)
-to allow the SDK to send data to the portal.
-
-## Data retention
-**How long is data retained in the portal? Is it secure?**
-
-See [Data retention and privacy][data].
-
-## Debug logging
-Application Insights uses `org.apache.http`. This is relocated within Application Insights core jars under the namespace `com.microsoft.applicationinsights.core.dependencies.http`. This enables Application Insights to handle scenarios where different versions of the same `org.apache.http` exist in one code base.
-
->[!NOTE]
->If you enable DEBUG level logging for all namespaces in the app, it will be honored by all executing modules including `org.apache.http` renamed as `com.microsoft.applicationinsights.core.dependencies.http`. Application Insights will not be able to apply filtering for these calls because the log call is being made by the Apache library. DEBUG level logging produces a considerable amount of log data and is not recommended for live production instances.
-
-## Next steps
-**I set up Application Insights for my Java server app. What else can I do?**
-
-* [Monitor availability of your web pages][availability]
-* [Monitor web page usage][usage]
-* [Track usage and diagnose issues in your device apps][platforms]
-* [Write code to track usage of your app][track]
-* [Capture diagnostic logs][javalogs]
-
-## Get help
-* [Stack Overflow](https://stackoverflow.com/questions/tagged/ms-application-insights)
-* [File an issue on GitHub](https://github.com/microsoft/ApplicationInsights-Java/issues)
-
-<!--Link references-->
-
-[availability]: ./monitor-web-app-availability.md
-[data]: ./data-retention-privacy.md
-[java]: java-2x-get-started.md
-[javalogs]: java-2x-trace-logs.md
-[platforms]: ./platforms.md
-[track]: ./api-custom-events-metrics.md
-[usage]: javascript.md
-
azure-monitor Java In Process Agent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/java-in-process-agent.md
If you want to attach custom dimensions to your logs, use [Log4j 1.2 MDC](https:
## Troubleshooting
-For help with troubleshooting, see [Troubleshooting](java-standalone-troubleshoot.md).
+See the dedicated [troubleshooting article](https://docs.microsoft.com/troubleshoot/azure/azure-monitor/app-insights/java-standalone-troubleshoot).
## Release notes
azure-monitor Java Standalone Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/java-standalone-troubleshoot.md
- Title: Troubleshooting Azure Monitor Application Insights for Java
-description: Learn how to troubleshoot the Java agent for Azure Monitor Application Insights
- Previously updated : 11/30/2020
-# Troubleshooting guide: Azure Monitor Application Insights for Java
-
-In this article, we cover some of the common issues that you might face while instrumenting a Java application by using the Java agent for Application Insights. We also cover the steps to resolve these issues. Application Insights is a feature of the Azure Monitor platform service.
-
-## Check the self-diagnostic log file
-
-By default, Application Insights Java 3.x produces a log file named `applicationinsights.log` in the same directory
-that holds the `applicationinsights-agent-3.3.0.jar` file.
-
-This log file is the first place to check for hints to any issues you might be experiencing.
-
-If no log file is generated, check that your Java application has write permission to the directory that holds the
-`applicationinsights-agent-3.3.0.jar` file.
-
-If still no log file is generated, check the stdout log from your Java application. Application Insights Java 3.x
-should log any errors to stdout that would prevent it from logging to its normal location.
-
-## JVM fails to start
-
-If the JVM fails to start with "Error opening zip file or JAR manifest missing",
-try re-downloading the agent jar file because it may have been corrupted during file transfer.
-
-## Upgrade from the Application Insights Java 2.x SDK
-
-If you're already using the Application Insights Java 2.x SDK in your application, you can keep using it.
-The Application Insights Java 3.x agent will detect it,
-and capture and correlate any custom telemetry you're sending via the 2.x SDK,
-while suppressing any auto-collection performed by the 2.x SDK to prevent duplicate telemetry.
-For more information, see [Upgrade from the Java 2.x SDK](./java-standalone-upgrade-from-2x.md).
-
-## Upgrade from Application Insights Java 3.0 Preview
-
-If you're upgrading from the Java 3.0 Preview agent, review all of the [configuration options](./java-standalone-config.md) carefully. The JSON structure has completely changed in the 3.0 general availability (GA) release.
-
-These changes include:
-
-- The configuration file name has changed from `ApplicationInsights.json` to `applicationinsights.json`.
-- The `instrumentationSettings` node is no longer present. All content in `instrumentationSettings` is moved to the root level.
-- Configuration nodes like `sampling`, `jmxMetrics`, `instrumentation`, and `heartbeat` are moved out of `preview` to the root level, as sketched after this list.
-
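As a rough sketch of that restructuring (the connection string is a placeholder, and the exact preview-era layout shown here is an assumption for illustration, not a definitive reference), a preview file shaped like this:

```json
{
  "instrumentationSettings": {
    "connectionString": "InstrumentationKey=00000000-0000-0000-0000-000000000000",
    "preview": {
      "sampling": { "percentage": 100 }
    }
  }
}
```

would be flattened in the GA schema to:

```json
{
  "connectionString": "InstrumentationKey=00000000-0000-0000-0000-000000000000",
  "sampling": { "percentage": 100 }
}
```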
-## Some logging is not auto-collected
-
-Logging is only captured if it first meets the level that is configured for the logging framework,
-and second, also meets the level that is configured for Application Insights.
-
-For example, if your logging framework is configured to log `WARN` (and above) from package `com.example`,
-and Application Insights is configured to capture `INFO` (and above),
-then Application Insights will only capture `WARN` (and above) from package `com.example`.
-
-The best way to know whether a particular logging statement meets the logging framework's configured threshold
-is to confirm that it shows up in your normal application log (for example, a file or the console).
-
-Also note that if an exception object is passed to the logger, then the log message (and exception object details)
-will show up in the Azure portal under the `exceptions` table instead of the `traces` table.
-
-See the [auto-collected logging configuration](./java-standalone-config.md#auto-collected-logging) for more details.
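To make the two thresholds and the `traces`/`exceptions` split concrete, here's a minimal sketch; it assumes the SLF4J API is on the classpath (substitute your own logging framework) and uses the example configuration above, with the framework at `WARN` for `com.example` and Application Insights capturing `INFO` and above:

```java
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class LevelDemo {
    private static final Logger logger = LoggerFactory.getLogger("com.example.LevelDemo");

    public static void main(String[] args) {
        // Dropped by the logging framework itself (INFO < WARN), so it never reaches Application Insights
        logger.info("routine detail");

        // Passes both thresholds, so it's captured and lands in the traces table
        logger.warn("something looks off");

        // Because an exception object is passed, this entry lands in the exceptions table instead
        logger.error("operation failed", new RuntimeException("example failure"));
    }
}
```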
-
-## Import SSL certificates
-
-This section helps you to troubleshoot and possibly fix the exceptions related to SSL certificates when using the Java agent.
-
-There are two different paths below for resolving this issue:
-* If using a default Java keystore
-* If using a custom Java keystore
-
-If you aren't sure which path to follow, check to see if you have a JVM arg `-Djavax.net.ssl.trustStore=...`.
-If you _don't_ have such a JVM arg, then you are probably using the default Java keystore.
-If you _do_ have such a JVM arg, then you are probably using a custom keystore,
-and the JVM arg will point you to your custom keystore.
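If you'd rather check from code than inspect JVM args, this minimal sketch (an illustration only) prints the custom keystore path, or `null` when the default Java keystore is in use:

```java
public class TrustStoreCheck {
    public static void main(String[] args) {
        // null means no -Djavax.net.ssl.trustStore was set, so the default keystore applies
        System.out.println(System.getProperty("javax.net.ssl.trustStore"));
    }
}
```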
-
-### If using the default Java keystore:
-
-Typically, the default Java keystore already has all of the CA root certificates. However, there might be exceptions; for example, the ingestion endpoint certificate might be signed by a different root certificate. We recommend the following three steps to resolve this issue:
-
-1. Check if the SSL certificate that was used to sign the Application Insights endpoint is already present in the default keystore. The trusted CA certificates, by default, are stored in `$JAVA_HOME/jre/lib/security/cacerts`. To list certificates in a Java keystore, use the following command:
- > `keytool -list -v -keystore $PATH_TO_KEYSTORE_FILE`
-
- You can redirect the output to a temp file like this (it will be easier to search later):
- > `keytool -list -v -keystore $JAVA_HOME/jre/lib/security/cacerts > temp.txt`
-
-2. Once you have the list of certificates, follow these [steps](#steps-to-download-ssl-certificate) to download the SSL certificate that was used to sign the Application Insights endpoint.
-
- Once you have downloaded the certificate, generate an SHA-1 hash on it by using the following command:
- > `keytool -printcert -v -file "your_downloaded_ssl_certificate.cer"`
-
- Copy the SHA-1 value and check whether it's present in the "temp.txt" file you saved previously. If you can't find the SHA-1 value in the temp file, the downloaded SSL certificate is missing from the default Java keystore.
--
-3. Import the SSL certificate to the default Java keystore using the following command:
- > `keytool -import -file "the cert file" -alias "some meaningful name" -keystore "path to cacerts file"`
-
- In this case, the command is:
-
- > `keytool -import -file "your downloaded ssl cert file" -alias "some meaningful name" $JAVA_HOME/jre/lib/security/cacerts`
--
-### If using a custom Java keystore:
-
-If you are using a custom Java keystore, you may need to import the Application Insights endpoint(s) SSL certificate(s) into it.
-We recommend the following two steps to resolve this issue:
-1. Follow these [steps](#steps-to-download-ssl-certificate) to download the SSL certificate from the Application Insights endpoint.
-2. Use the following command to import the SSL certificate to the custom Java keystore:
- > `keytool -importcert -alias your_ssl_certificate -file "your downloaded SSL certificate name.cer" -keystore "Your KeyStore name" -storepass "Your keystore password" -noprompt`
-
-### Steps to download SSL certificate
-
-1. Open your favorite browser and go to the URL from which you want to download the SSL certificate.
-
-2. Select the **View site information** (lock) icon in the browser, and then select the **Certificate** option.
-
- :::image type="content" source="media/java-ipa/troubleshooting/certificate-icon-capture.png" alt-text="Screenshot of the Certificate option in site information." lightbox="media/java-ipa/troubleshooting/certificate-icon-capture.png":::
-
-3. Select **Certificate Path**, select the root certificate, and then select **View Certificate**. This opens a new certificate dialog, and you can download the certificate from it.
-
- :::image type="content" source="media/java-ipa/troubleshooting/root-certificate.png" alt-text="Screenshot of how to select the root certificate." lightbox="media/java-ipa/troubleshooting/root-certificate.png":::
-
-4. Go to the **Details** tab and select **Copy to file**.
-5. Select the **Next** button, select **Base-64 encoded X.509 (.CER)** format, and then select **Next** again.
-
- :::image type="content" source="media/java-ipa/troubleshooting/certificate-export-wizard.png" alt-text="Screenshot of the Certificate Export Wizard, with a format selected." lightbox="media/java-ipa/troubleshooting/certificate-export-wizard.png":::
-
-6. Specify the file where you want to save the SSL certificate. Then select **Next** > **Finish**. You should see a "The export was successful" message.
-
-> [!WARNING]
-> You'll need to repeat these steps to get the new certificate before the current certificate expires. You can find the expiration information on the **Details** tab of the **Certificate** dialog box.
->
-> :::image type="content" source="media/java-ipa/troubleshooting/certificate-details.png" alt-text="Screenshot that shows SSL certificate details." lightbox="media/java-ipa/troubleshooting/certificate-details.png":::
-
-## Understanding UnknownHostException
-
-If you see this exception after upgrading to a Java agent version later than 3.2.0, updating your network configuration to allow access to the new endpoint shown in the exception might fix the issue. The reason for the difference between Application Insights versions is that versions later than 3.2.0 point to the new ingestion endpoint `v2.1/track`, compared with the older `v2/track`. The new ingestion endpoint automatically redirects you to the ingestion endpoint nearest to the storage for your Application Insights resource (the new endpoint shown in the exception).
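As a quick check of whether your network can resolve the new endpoint, here's a minimal sketch (the default host below is a placeholder; pass the exact host shown in your exception):

```java
import java.net.InetAddress;

public class ResolveEndpoint {
    public static void main(String[] args) throws Exception {
        // Replace with the endpoint host from your UnknownHostException
        String host = args.length > 0 ? args[0] : "example-endpoint.example.com";
        // Throws UnknownHostException if the name can't be resolved on this network
        System.out.println(host + " -> " + InetAddress.getByName(host).getHostAddress());
    }
}
```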
-
-## Missing cipher suites
-
-If the Application Insights Java agent detects that you do not have any of the cipher suites that are supported by the endpoints it connects to, it will alert you and link you here.
-
-### Background on cipher suites:
-Cipher suites come into play before a client application and server exchange information over an SSL/TLS connection. The client application initiates an SSL handshake. Part of that process involves notifying the server which cipher suites it supports. The server receives that information and compares the cipher suites supported by the client application with the algorithms it supports. If it finds a match, the server notifies the client application and a secure connection is established. If it does not find a match, the server refuses the connection.
-
-#### How to determine client side cipher suites:
-In this case, the client is the JVM on which your instrumented application is running. Starting from 3.2.5, Application Insights Java will log a warning message if missing cipher suites could be causing connection failures to one of the service endpoints.
-
-If using an earlier version of Application Insights Java, compile and run the following Java program to get the list of supported cipher suites in your JVM:
-
-```java
-import javax.net.ssl.SSLServerSocketFactory;
-
-public class Ciphers {
-    public static void main(String[] args) {
-        SSLServerSocketFactory ssf = (SSLServerSocketFactory) SSLServerSocketFactory.getDefault();
-        // The default cipher suites are the ones this JVM offers during the SSL/TLS handshake
-        String[] defaultCiphers = ssf.getDefaultCipherSuites();
-        System.out.println("Default\tCipher");
-        for (int i = 0; i < defaultCiphers.length; ++i) {
-            System.out.print('*');
-            System.out.print('\t');
-            System.out.println(defaultCiphers[i]);
-        }
-    }
-}
-```
-Following are the cipher suites that are generally supported by the Application Insights endpoints:
-- TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384
-- TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256
-- TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384
-- TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256
-
-#### How to determine server side cipher suites:
-In this case, the server side is the Application Insights ingestion endpoint or the Application Insights Live Metrics endpoint. You can use an online tool like [SSLLABS](https://www.ssllabs.com/ssltest/analyze.html) to determine the expected cipher suites based on the endpoint URL.
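Alternatively, you can check the handshake directly from the JVM. Here's a minimal sketch (the default host is a placeholder; pass the host of the ingestion or Live Metrics endpoint you care about):

```java
import javax.net.ssl.SSLSocket;
import javax.net.ssl.SSLSocketFactory;

public class NegotiatedCipher {
    public static void main(String[] args) throws Exception {
        String host = args.length > 0 ? args[0] : "example-endpoint.example.com";
        SSLSocketFactory factory = (SSLSocketFactory) SSLSocketFactory.getDefault();
        try (SSLSocket socket = (SSLSocket) factory.createSocket(host, 443)) {
            // Fails with a handshake exception if client and server share no cipher suite
            socket.startHandshake();
            System.out.println("Negotiated: " + socket.getSession().getCipherSuite());
        }
    }
}
```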
-
-#### How to add the missing cipher suites:
-
-If you're using Java 9 or later, check whether the JVM includes the `jdk.crypto.cryptoki` module in its jmods folder. If you're building a custom Java runtime by using `jlink`, make sure to include the same module.
-
-Otherwise, these cipher suites should already be part of modern Java 8+ distributions,
-so we recommend checking where you installed your Java distribution from, and investigating why the security
-providers in that Java distribution's `java.security` configuration file differ from standard Java distributions.
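To compare the security providers your distribution actually registers against a standard distribution, here's a minimal sketch:

```java
import java.security.Provider;
import java.security.Security;

public class ListProviders {
    public static void main(String[] args) {
        // Providers are consulted in this registration order, which java.security controls
        for (Provider provider : Security.getProviders()) {
            System.out.println(provider.getName() + ": " + provider.getInfo());
        }
    }
}
```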
azure-monitor Opencensus Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/opencensus-python.md
As shown, there are three different Azure Monitor exporters that support OpenCen
Each exporter accepts the same arguments for configuration, passed through the constructors. You can see details about each one here:

- `connection_string`: The connection string used to connect to your Azure Monitor resource. Takes priority over `instrumentation_key`.
+- `credential`: Credential class used by AAD authentication. See `Authentication` section below.
- `enable_standard_metrics`: Used for `AzureMetricsExporter`. Signals the exporter to send [performance counter](../essentials/app-insights-metrics.md#performance-counters) metrics automatically to Azure Monitor. Defaults to `True`.
-- `export_interval`: Used to specify the frequency in seconds of exporting.
+- `export_interval`: Used to specify the frequency in seconds of exporting. Defaults to 15s.
+- `grace_period`: Used to specify the timeout for shutdown of exporters in seconds. Defaults to 5s.
- `instrumentation_key`: The instrumentation key used to connect to your Azure Monitor resource.
-- `logging_sampling_rate`: Used for `AzureLogHandler`. Provides a sampling rate [0,1.0] for exporting logs. Defaults to 1.0.
+- `logging_sampling_rate`: Used for `AzureLogHandler` and `AzureEventHandler`. Provides a sampling rate [0,1.0] for exporting logs/events. Defaults to 1.0.
- `max_batch_size`: Specifies the maximum size of telemetry that's exported at once.
- `proxies`: Specifies a sequence of proxies to use for sending data to Azure Monitor. For more information, see [proxies](https://requests.readthedocs.io/en/latest/user/advanced/#proxies).
- `storage_path`: A path to where the local storage folder exists (unsent telemetry). As of `opencensus-ext-azure` v1.0.3, the default path is the OS temp directory + `opencensus-python` + `your-ikey`. Prior to v1.0.3, the default path is $USER + `.opencensus` + `.azure` + `python-file-name`.
+- `timeout`: Specifies the networking timeout to send telemetry to the ingestion service in seconds. Defaults to 10s.
## Integrate with Azure Functions
azure-monitor Status Monitor V2 Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/status-monitor-v2-overview.md
Application Insights Agent is located here: https://www.powershellgallery.com/pa
- [Start-ApplicationInsightsMonitoringTrace](./status-monitor-v2-api-reference.md#start-applicationinsightsmonitoringtrace)

## Troubleshooting
-- [Troubleshooting](status-monitor-v2-troubleshoot.md)
-- [Known issues](status-monitor-v2-troubleshoot.md#known-issues)
+See the dedicated [troubleshooting article](https://docs.microsoft.com/troubleshoot/azure/azure-monitor/app-insights/status-monitor-v2-troubleshoot).
## FAQ
azure-monitor Status Monitor V2 Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/status-monitor-v2-troubleshoot.md
- Title: Azure Application Insights Agent troubleshooting and known issues | Microsoft Docs
-description: The known issues of Application Insights Agent and troubleshooting examples. Monitor website performance without redeploying the website. Works with ASP.NET web apps hosted on-premises, in VMs, or on Azure.
- Previously updated : 04/23/2019---
-# Troubleshooting Application Insights Agent (formerly named Status Monitor v2)
-
-When you enable monitoring, you might experience issues that prevent data collection.
-This article lists all known issues and provides troubleshooting examples.
-
-## Known issues
-
-### Conflicting DLLs in an app's bin directory
-
-If any of these DLLs are present in the bin directory, monitoring might fail:
-
-- Microsoft.ApplicationInsights.dll
-- Microsoft.AspNet.TelemetryCorrelation.dll
-- System.Diagnostics.DiagnosticSource.dll
-
-Some of these DLLs are included in the Visual Studio default app templates, even if your app doesn't use them.
-You can use troubleshooting tools to see symptomatic behavior:
-
-- PerfView:
- ```
- ThreadID="7,500"
- ProcessorNumber="0"
- msg="Found 'System.Diagnostics.DiagnosticSource, Version=4.0.2.1, Culture=neutral, PublicKeyToken=cc7b13ffcd2ddd51' assembly, skipping attaching redfield binaries"
- ExtVer="2.8.13.5972"
- SubscriptionId=""
- AppName=""
- FormattedMessage="Found 'System.Diagnostics.DiagnosticSource, Version=4.0.2.1, Culture=neutral, PublicKeyToken=cc7b13ffcd2ddd51' assembly, skipping attaching redfield binaries"
- ```
-
-- IISReset and app load (without telemetry). Investigate with Sysinternals (Handle.exe and ListDLLs.exe):
- ```
- .\handle64.exe -p w3wp | findstr /I "InstrumentationEngine AI. ApplicationInsights"
- E54: File (R-D) C:\Program Files\WindowsPowerShell\Modules\Az.ApplicationMonitor\content\Runtime\Microsoft.ApplicationInsights.RedfieldIISModule.dll
-
- .\Listdlls64.exe w3wp | findstr /I "InstrumentationEngine AI ApplicationInsights"
- 0x0000000009be0000 0x127000 C:\Program Files\WindowsPowerShell\Modules\Az.ApplicationMonitor\content\Instrumentation64\MicrosoftInstrumentationEngine_x64.dll
- 0x0000000009b90000 0x4f000 C:\Program Files\WindowsPowerShell\Modules\Az.ApplicationMonitor\content\Instrumentation64\Microsoft.ApplicationInsights.ExtensionsHost_x64.dll
- 0x0000000004d20000 0xb2000 C:\Program Files\WindowsPowerShell\Modules\Az.ApplicationMonitor\content\Instrumentation64\Microsoft.ApplicationInsights.Extensions.Base_x64.dll
- ```
-
-### PowerShell Versions
-This product was written and tested using PowerShell v5.1.
-This module isn't compatible with PowerShell versions 6 or 7.
-We recommend using PowerShell v5.1 alongside newer versions.
-For more information, see [Using PowerShell 7 side by side with PowerShell 5.1](/powershell/scripting/install/migrating-from-windows-powershell-51-to-powershell-7#using-powershell-7-side-by-side-with-windows-powershell-51).
-
-### Conflict with IIS shared configuration
-
-If you have a cluster of web servers, you might be using a [shared configuration](/iis/web-hosting/configuring-servers-in-the-windows-web-platform/shared-configuration_211).
-The HttpModule can't be injected into this shared configuration.
-Run the Enable command on each web server to install the DLL into each server's GAC.
-
-After you run the Enable command, complete these steps:
-1. Go to the shared configuration directory and find the applicationHost.config file.
-2. Add this line to the modules section of your configuration:
- ```
- <modules>
- <!-- Registered global managed http module handler. The 'Microsoft.AppInsights.IIS.ManagedHttpModuleHelper.dll' must be installed in the GAC before this config is applied. -->
- <add name="ManagedHttpModuleHelper" type="Microsoft.AppInsights.IIS.ManagedHttpModuleHelper.ManagedHttpModuleHelper, Microsoft.AppInsights.IIS.ManagedHttpModuleHelper, Version=1.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35" preCondition="managedHandler,runtimeVersionv4.0" />
- </modules>
- ```
-
-### IIS Nested Applications
-
-We don't instrument nested applications in IIS in version 1.0.
-
-### Advanced SDK configuration isn't available
-
-The SDK configuration isn't exposed to the end user in version 1.0.
-
-
-
-## Troubleshooting
-
-### Troubleshooting PowerShell
-
-#### Determine which modules are available
-You can use the `Get-Module -ListAvailable` command to determine which modules are installed.
-
-#### Import a module into the current session
-If a module hasn't been loaded into a PowerShell session, you can manually load it by using the `Import-Module <path to psd1>` command.
--
-### Troubleshooting the Application Insights Agent module
-
-#### List the commands available in the Application Insights Agent module
-Run the command `Get-Command -Module Az.ApplicationMonitor` to get the available commands:
-
-```
-CommandType     Name                                       Version    Source
------------     ----                                       -------    ------
-Cmdlet          Disable-ApplicationInsightsMonitoring      0.4.0      Az.ApplicationMonitor
-Cmdlet          Disable-InstrumentationEngine              0.4.0      Az.ApplicationMonitor
-Cmdlet          Enable-ApplicationInsightsMonitoring       0.4.0      Az.ApplicationMonitor
-Cmdlet          Enable-InstrumentationEngine               0.4.0      Az.ApplicationMonitor
-Cmdlet          Get-ApplicationInsightsMonitoringConfig    0.4.0      Az.ApplicationMonitor
-Cmdlet          Get-ApplicationInsightsMonitoringStatus    0.4.0      Az.ApplicationMonitor
-Cmdlet          Set-ApplicationInsightsMonitoringConfig    0.4.0      Az.ApplicationMonitor
-Cmdlet          Start-ApplicationInsightsMonitoringTrace   0.4.0      Az.ApplicationMonitor
-```
-
-#### Determine the current version of the Application Insights Agent module
-Run the `Get-ApplicationInsightsMonitoringStatus -PowerShellModule` command to display the following information about the module:
- - PowerShell module version
- - Application Insights SDK version
- - File paths of the PowerShell module
-
-Review the [API reference](status-monitor-v2-api-reference.md) for a detailed description of how to use this cmdlet.
--
-### Troubleshooting running processes
-
-You can inspect the processes on the instrumented computer to determine if all DLLs are loaded and environment variables are set.
-If monitoring is working, at least 12 DLLs should be loaded.
-
-* Use the `Get-ApplicationInsightsMonitoringStatus -InspectProcess` command to check the DLLs.
-* Use the `(Get-Process -id {PID}).StartInfo.EnvironmentVariables` command to check the environment variables. The following environment variables are set in the worker process or dotnet core process:
-
-```
-COR_ENABLE_PROFILING=1
-COR_PROFILER={324F817A-7420-4E6D-B3C1-143FBED6D855}
-COR_PROFILER_PATH_32=Path to MicrosoftInstrumentationEngine_x86.dll
-COR_PROFILER_PATH_64=Path to MicrosoftInstrumentationEngine_x64.dll
-MicrosoftInstrumentationEngine_Host={CA487940-57D2-10BF-11B2-A3AD5A13CBC0}
-MicrosoftInstrumentationEngine_HostPath_32=Path to Microsoft.ApplicationInsights.ExtensionsHost_x86.dll
-MicrosoftInstrumentationEngine_HostPath_64=Path to Microsoft.ApplicationInsights.ExtensionsHost_x64.dll
-MicrosoftInstrumentationEngine_ConfigPath32_Private=Path to Microsoft.InstrumentationEngine.Extensions.config
-MicrosoftInstrumentationEngine_ConfigPath64_Private=Path to Microsoft.InstrumentationEngine.Extensions.config
-MicrosoftAppInsights_ManagedHttpModulePath=Path to Microsoft.ApplicationInsights.RedfieldIISModule.dll
-MicrosoftAppInsights_ManagedHttpModuleType=Microsoft.ApplicationInsights.RedfieldIISModule.RedfieldIISModule
-ASPNETCORE_HOSTINGSTARTUPASSEMBLIES=Microsoft.ApplicationInsights.StartupBootstrapper
-DOTNET_STARTUP_HOOKS=Path to Microsoft.ApplicationInsights.StartupHook.dll
-```
-
-Review the [API reference](status-monitor-v2-api-reference.md) for a detailed description of how to use this cmdlet.
--
-### Collect ETW logs by using PerfView
-
-#### Setup
-
-1. Download PerfView.exe and PerfView64.exe from [GitHub](https://github.com/Microsoft/perfview/releases).
-2. Start PerfView64.exe.
-3. Expand **Advanced Options**.
-4. Clear these check boxes:
- - **Zip**
- - **Merge**
- - **.NET Symbol Collection**
-5. Set these **Additional Providers**: `61f6ca3b-4b5f-5602-fa60-759a2a2d1fbd,323adc25-e39b-5c87-8658-2c1af1a92dc5,925fa42b-9ef6-5fa7-10b8-56449d7a2040,f7d60e07-e910-5aca-bdd2-9de45b46c560,7c739bb9-7861-412e-ba50-bf30d95eae36,252e28f4-43f9-5771-197a-e8c7e750a984,f9c04365-1d1f-5177-1cdc-a0b0554b6903`
--
-#### Collecting logs
-
-1. In a command console with Admin privileges, run the `iisreset /stop` command to turn off IIS and all web apps.
-2. In PerfView, select **Start Collection**.
-3. In a command console with Admin privileges, run the `iisreset /start` command to start IIS.
-4. Try to browse to your app.
-5. After your app is loaded, return to PerfView and select **Stop Collection**.
-
-## Next steps
-
-- Review the [API reference](status-monitor-v2-overview.md#powershell-api-reference) to learn about parameters you might have missed.
azure-monitor Troubleshoot Availability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/troubleshoot-availability.md
- Title: Troubleshoot your Azure Application Insights availability tests
-description: Troubleshoot web tests in Azure Application Insights. Get alerts if a website becomes unavailable or responds slowly.
- Previously updated : 02/14/2021---
-# Troubleshooting
-
-This article will help you to troubleshoot common issues that may occur when using availability monitoring.
-
-## Troubleshooting report steps for ping tests
-
-The Troubleshooting Report allows you to easily diagnose common problems that cause your **ping tests** to fail.
-
-![Animation of navigating from the availability tab by selecting a failure to the end-to-end transaction details to view the troubleshooting report](./media/troubleshoot-availability/availability-to-troubleshooter.gif)
-
-1. On the availability tab of your Application Insights resource, select overall or one of the availability tests.
-2. Either select **Failed** then a test under "Drill into" on the left or select one of the points on the scatter plot.
-3. On the end-to-end transaction detail page, select an event then under "Troubleshooting report summary" select **[Go to step]** to see the troubleshooting report.
-
-> [!NOTE]
-> If the connection re-use step is present, then DNS resolution, connection establishment, and TLS transport steps will not be present.
-
-|Step | Error message | Possible cause |
-|--||-|
-| Connection reuse | n/a | The request reused a previously established connection, so the DNS resolution, connection establishment, and TLS transport steps aren't required. |
-| DNS resolution | The remote name could not be resolved: "your URL" | The DNS resolution process failed, most likely due to misconfigured DNS records or temporary DNS server failures. |
-| Connection establishment | A connection attempt failed because the connected party did not properly respond after a period of time. | In general, it means your server is not responding to the HTTP request. A common cause is that our test agents are being blocked by a firewall on your server. If you would like to test within an Azure Virtual Network, you should add the Availability service tag to your environment.|
-| TLS transport | The client and server cannot communicate because they do not possess a common algorithm.| Only TLS 1.0, 1.1, and 1.2 are supported. SSL is not supported. This step doesn't validate SSL certificates; it only establishes a secure connection. This step only shows up when an error occurs. |
-| Receiving response header | Unable to read data from the transport connection. The connection was closed. | Your server committed a protocol error in the response header. For example, the connection was closed by your server before the response headers were fully sent. |
-| Receiving response body | Unable to read data from the transport connection: The connection was closed. | Your server committed a protocol error in the response body. For example, the connection was closed by your server before the response was fully read, or the chunk size was wrong in the chunked response body. |
-| Redirect limit validation | This webpage has too many redirects. This loop will be terminated here since this request exceeded the limit for auto redirects. | There's a limit of 10 redirects per test. |
-| Status code validation | `200 - OK` does not match the expected status `400 - BadRequest`. | The returned status code doesn't match the status code that's counted as a success. 200 is the code that indicates that a normal web page has been returned. |
-| Content validation | The required text 'hello' did not appear in the response. | The test looks for an exact, case-sensitive match of the string in the response (for example, the string "Welcome!"). The string must be plain text, without wildcard characters (for example, an asterisk). If your page content changes, you might have to update the string. Only English characters are supported with content match. |
-
-## Common troubleshooting questions
-
-### The site looks okay, but I see test failures. Why is Application Insights alerting me?
-
- * Does your test have **Parse dependent requests** enabled? That results in a strict check on resources such as scripts, images, and so on. These types of failures might not be noticeable in a browser. Check all the images, scripts, style sheets, and any other files loaded by the page. If any of them fails, the test is reported as failed, even if the main HTML page loads without issue. To desensitize the test to such resource failures, uncheck **Parse dependent requests** in the test configuration.
-
- * To reduce the odds of noise from transient network blips and similar issues, ensure the **Enable retries for test failures** configuration is checked. You can also test from more locations and manage the alert rule threshold accordingly to prevent location-specific issues from causing undue alerts.
-
- * Click any of the red dots in the availability scatter plot experience, or any availability failure from the Search explorer, to see the details of why we reported the failure. The test result, along with the correlated server-side telemetry (if enabled), should help you understand why the test failed. Common causes of transient issues are network or connection issues.
-
- * Did the test time out? We abort tests after 2 minutes. If your ping or multi-step test takes longer than 2 minutes, we report it as a failure. Consider breaking the test into multiple tests that can complete in shorter durations.
-
- * Did all locations report failure, or only some of them? If only some reported failures, it may be due to network/CDN issues. Again, clicking on the red dots should help understand why the location reported failures.
-
-### I didn't get an email when the alert triggered, or resolved, or both
-
-Check the alert's action group configuration to confirm that your email is directly listed, or that a distribution list you're on is configured to receive notifications. If it is, check the distribution list configuration to confirm it can receive external emails. Also check whether your mail administrator has any policies configured that might cause this issue.
-
-### Why didn't I receive the webhook notification?
-
-Check to ensure that the application receiving the webhook notification is available and successfully processes the webhook requests. For more information, see [this article](../alerts/alerts-log-webhook.md).
-
-### I'm getting 403 Forbidden errors. What does this mean?
-
-This error indicates that you need to add firewall exceptions to allow the availability agents to test your target url. For a full list of agent IP addresses to allow, consult the [IP exception article](./ip-addresses.md#availability-tests).
-
-### Intermittent test failure with a protocol violation error?
-
-The error ("protocol violation..CR must be followed by LF") indicates an issue with the server (or its dependencies). It happens when malformed headers are set in the response, which can be caused by load balancers or CDNs. Specifically, some headers might not be using CRLF to indicate the end of a line, which violates the HTTP specification and therefore fails validation at the .NET WebRequest level. Inspect the response to spot headers that might be in violation.
-
-> [!NOTE]
-> The URL may not fail on browsers that have a relaxed validation of HTTP headers. See this blog post for a detailed explanation of this issue: http://mehdi.me/a-tale-of-debugging-the-linkedin-api-net-and-http-protocol-violations/
-
-### I don't see any related server-side telemetry to diagnose test failures
-
-If you have Application Insights set up for your server-side application, the missing telemetry might be because [sampling](./sampling.md) is in operation. Select a different availability result.
-
-### Can I call code from my web test?
-
-No. The steps of the test must be in the .webtest file, and you can't call other web tests or use loops. But there are several plug-ins that you might find helpful.
--
-### Is there a difference between "web tests" and "availability tests"?
-
-The two terms may be referenced interchangeably. Availability tests is a more generic term that includes the single URL ping tests in addition to the multi-step web tests.
-
-### I'd like to use availability tests on our internal server that runs behind a firewall.
-
- There are two possible solutions:
-
- * Configure your firewall to permit incoming requests from the [IP addresses of our web test agents](./ip-addresses.md).
- * Write your own code to periodically test your internal server. Run the code as a background process on a test server behind your firewall. Your test process can send its results to Application Insights by using [TrackAvailability()](/dotnet/api/microsoft.applicationinsights.telemetryclient.trackavailability) API in the core SDK package. This requires your test server to have outgoing access to the Application Insights ingestion endpoint, but that is a much smaller security risk than the alternative of permitting incoming requests. The results will appear in the availability web tests blades though the experience will be slightly simplified from what is available for tests created via the portal. Custom availability tests will also appear as availability results in Analytics, Search, and Metrics.
-
-### Uploading a multi-step web test fails
-
-Some reasons this might happen:
- * There's a size limit of 300 K.
- * Loops aren't supported.
- * References to other web tests aren't supported.
- * Data sources aren't supported.
-
-### My multi-step test doesn't complete
-
-There's a limit of 100 requests per test. Also, the test is stopped if it runs longer than two minutes.
-
-### How can I run a test with client certificates?
-
-This is currently not supported.
-
-## Next steps
-
-* [Multi-step web testing](availability-multistep.md)
-* [URL ping tests](monitor-web-app-availability.md)
azure-monitor Usage Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/usage-troubleshoot.md
- Title: Troubleshoot user analytics tools - Application Insights
-description: Troubleshooting guide - analyzing site and app usage with Application Insights.
- Previously updated : 07/30/2021---
-# Troubleshoot user behavior analytics tools in Application Insights
-Have questions about the [user behavior analytics tools in Application Insights](usage-overview.md): [Users, Sessions, Events](usage-segmentation.md), [Funnels](usage-funnels.md), [User Flows](usage-flows.md), [Retention](usage-retention.md), or Cohorts? Here are some answers.
-
-## Counting Users
-**The user behavior analytics tools show that my app has one user/session, but I know my app has many users/sessions. How can I fix these incorrect counts?**
-
-All telemetry events in Application Insights have an [anonymous user ID](./data-model-context.md#anonymous-user-id) and a [session ID](./data-model-context.md#session-id) as two of their standard properties. By default, all of the usage analytics tools count users and sessions based on these IDs. If these standard properties aren't being populated with unique IDs for each user and session of your app, you'll see an incorrect count of users and sessions in the usage analytics tools.
-
-If you're monitoring a web app, the easiest solution is to add the [Application Insights JavaScript SDK](./javascript.md) to your app, and make sure the script snippet is loaded on each page you want to monitor. The JavaScript SDK automatically generates anonymous user and session IDs, then populates telemetry events with these IDs as they're sent from your app.
-
-If you're monitoring a web service (no user interface), [create a telemetry initializer that populates the anonymous user ID and session ID properties](./usage-overview.md) according to your service's notions of unique users and sessions.
-
-If your app is sending [authenticated user IDs](./api-custom-events-metrics.md#authenticated-users), you can count based on authenticated user IDs in the Users tool. In the "Show" dropdown, choose "Authenticated users."
-
-The user behavior analytics tools don't currently support counting users or sessions based on properties other than anonymous user ID, authenticated user ID, or session ID.
-
-## Naming Events
-**My app has thousands of different page view and custom event names. It's hard to distinguish between them, and the user behavior analytics tools often become unresponsive. How can I fix these naming issues?**
-
-Page view and custom event names are used throughout the user behavior analytics tools. Naming events well is critical to getting value from these tools. The goal is a balance between having too few, overly generic names ("Button clicked") and having too many, overly specific names ("Edit button clicked on http:\//www.contoso.com/index").
-
-To make any changes to the page view and custom event names your app is sending, you need to change your app's source code and redeploy. **All telemetry data in Application Insights is stored for 90 days and cannot be deleted**, so changes you make to event names will take 90 days to fully manifest. For the 90 days after making name changes, both the old and new event names will show up in your telemetry, so adjust queries and communicate within your teams, accordingly.
-
-If your app is sending too many page view names, check whether these page view names are specified manually in code or if they're being sent automatically by the Application Insights JavaScript SDK:
-
-* If the page view names are manually specified in code using the [`trackPageView` API](https://github.com/Microsoft/ApplicationInsights-JS/blob/master/API-reference.md), change the name to be less specific. Avoid common mistakes like putting the URL in the name of the page view. Instead, use the URL parameter of the `trackPageView` API. Move other details from the page view name into custom properties.
-
-* If the Application Insights JavaScript SDK is automatically sending page view names, you can either change your pages' titles or switch to manually sending page view names. The SDK sends the [title](https://developer.mozilla.org/docs/Web/HTML/Element/title) of each page as the page view name, by default. You could change your titles to be more general, but be mindful of SEO and other impacts this change could have. Manually specifying page view names with the `trackPageView` API overrides the automatically collected names, so you could send more general names in telemetry without changing page titles.
-
-If your app is sending too many custom event names, change the name in the code to be less specific. Again, avoid putting URLs and other per-page or dynamic information in the custom event names directly. Instead, move these details into custom properties of the custom event with the `trackEvent` API. For example, instead of `appInsights.trackEvent("Edit button clicked on http://www.contoso.com/index")`, we suggest something like `appInsights.trackEvent("Edit button clicked", { "Source URL": "http://www.contoso.com/index" })`.
-
-## Next steps
-
-* [User behavior analytics tools overview](usage-overview.md)
-
-## Get help
-* [Stack Overflow](https://stackoverflow.com/questions/tagged/ms-application-insights)
azure-monitor Container Insights Logging V2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-logging-v2.md
Title: Configure ContainerLogv2 schema (preview) for Container Insights
-description: Switch your ContainerLog table to the ContainerLogv2 schema
+ Title: Configure the ContainerLogV2 schema (preview) for Container Insights
+description: Switch your ContainerLog table to the ContainerLogV2 schema.
Last updated 05/11/2022
-# Enable ContainerLogV2 schema (preview)
-Azure Monitor Container Insights is now in Public Preview of new schema for container logs called ContainerLogV2. As part of this schema, there are new fields to make common queries to view AKS (Azure Kubernetes Service) and Azure Arc enabled Kubernetes data. In addition, this schema is compatible as a part of [Basic Logs](../logs/basic-logs-configure.md), which offer a low cost alternative to standard analytics logs.
+# Enable the ContainerLogV2 schema (preview)
+Azure Monitor Container insights is now in public preview of a new schema for container logs, called ContainerLogV2. As part of this schema, there are new fields to make common queries to view Azure Kubernetes Service (AKS) and Azure Arc-enabled Kubernetes data. In addition, this schema is compatible with [Basic Logs](../logs/basic-logs-configure.md), which offers a low-cost alternative to standard analytics logs.
-> [!NOTE]
-> The ContainerLogv2 schema is currently a preview feature, Container Insights does not yet support the "View in Analytics" option, however the data is still available when queried directly from the [Log Analytics](./container-insights-log-query.md) interface.
+The ContainerLogV2 schema is a preview feature. Container insights does not yet support the **View in Analytics** option, but the data is available when it's queried directly from the [Log Analytics](./container-insights-log-query.md) interface.
->[!NOTE]
->The new fields are:
->* ContainerName
->* PodName
->* PodNamespace
+The new fields are:
+* `ContainerName`
+* `PodName`
+* `PodNamespace`
## ContainerLogV2 schema

```kusto
Azure Monitor Container Insights is now in Public Preview of new schema for cont
LogSource: string, TimeGenerated: datetime
```
-## Enable ContainerLogV2 schema
-1. Customers can enable ContainerLogV2 schema at cluster level.
-2. To enable ContainerLogV2 schema, configure the cluster's configmap, Learn more about [configmap](https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/) in Kubernetes documentation & [Azure Monitor configmap](./container-insights-agent-config.md#configmap-file-settings-overview).
-3. Follow the instructions accordingly when configuring an existing ConfigMap or using a new one.
+## Enable the ContainerLogV2 schema
+Customers can enable the ContainerLogV2 schema at the cluster level. To enable the ContainerLogV2 schema, configure the cluster's ConfigMap. Learn more about ConfigMap in [Kubernetes documentation](https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/) and in [Azure Monitor documentation](./container-insights-agent-config.md#configmap-file-settings-overview).
+Follow the instructions to configure an existing ConfigMap or to use a new one.
-### Configuring an existing ConfigMap
-If your ConfigMap doesn't yet have the "[log_collection_settings.schema]" field, you'll need to append the following section in your existing ConfigMap yaml file:
+### Configure an existing ConfigMap
+If your ConfigMap doesn't yet have the `log_collection_settings.schema` field, you'll need to append the following section in your existing ConfigMap .yaml file:
```yaml [log_collection_settings.schema]
- # In the absence of this configmap, default value for containerlog_schema_version is "v1"
+ # In the absence of this ConfigMap, the default value for containerlog_schema_version is "v1"
   # Supported values for this setting are "v1","v2"
   # See documentation at https://aka.ms/ContainerLogv2 for benefits of v2 schema over v1 schema before opting for "v2" schema
   containerlog_schema_version = "v2"
```
-### Configuring a new ConfigMap
-1. Download the new ConfigMap from [here](https://aka.ms/container-azm-ms-agentconfig). For the newly downloaded configmapdefault, the value for containerlog_schema_version is "v1"
-1. Update the "containerlog_schema_version = "v2""
+### Configure a new ConfigMap
+1. [Download the new ConfigMap](https://aka.ms/container-azm-ms-agentconfig). For the newly downloaded ConfigMap, the default value for `containerlog_schema_version` is `"v1"`.
+1. Update `containerlog_schema_version` to `"v2"`:
-```yaml
-[log_collection_settings.schema]
- # In the absence of this configmap, default value for containerlog_schema_version is "v1"
- # Supported values for this setting are "v1","v2"
- # See documentation at https://aka.ms/ContainerLogv2 for benefits of v2 schema over v1 schema before opting for "v2" schema
- containerlog_schema_version = "v2"
-```
+ ```yaml
+ [log_collection_settings.schema]
+ # In the absence of this ConfigMap, the default value for containerlog_schema_version is "v1"
+ # Supported values for this setting are "v1","v2"
+ # See documentation at https://aka.ms/ContainerLogv2 for benefits of v2 schema over v1 schema before opting for "v2" schema
+ containerlog_schema_version = "v2"
+ ```
-1. Once you have finished configuring the configmap, run the following kubectl command: kubectl apply -f `<configname>`
+3. After you finish configuring the ConfigMap, run the following kubectl command: `kubectl apply -f <configname>`.
->[!TIP]
->Example: kubectl apply -f container-azm-ms-agentconfig.yaml.
+ Example: `kubectl apply -f container-azm-ms-agentconfig.yaml`
>[!NOTE]
->* The configuration change can take a few minutes to complete before taking effect, all omsagent pods in the cluster will restart.
->* The restart is a rolling restart for all omsagent pods, it will not restart all of them at the same time.
+>* The configuration change can take a few minutes to complete before it takes effect. All OMS agent pods in the cluster will restart.
+>* The restart is a rolling restart for all OMS agent pods. It won't restart all of them at the same time.
## Next steps
-* Configure [Basic Logs](../logs/basic-logs-configure.md) for ContainerLogv2
+* Configure [Basic Logs](../logs/basic-logs-configure.md) for ContainerLogv2.
azure-monitor Activity Log https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/activity-log.md
Title: Azure Activity log
-description: View the Azure Activity log and send it to Azure Monitor Logs, Azure Event Hubs, and Azure Storage.
+ Title: Azure activity log
+description: View the Azure Monitor activity log and send it to Azure Monitor Logs, Azure Event Hubs, and Azure Storage.
Last updated 09/09/2021
-# Azure Activity log
-The Activity log is a [platform log](./platform-logs-overview.md) in Azure that provides insight into subscription-level events. Activity log includes such information as when a resource is modified or when a virtual machine is started. You can view the Activity log in the Azure portal or retrieve entries with PowerShell and CLI. This article provides details on viewing the Activity log and sending it to different destinations.
+# Azure Monitor activity log
-For more functionality, you should create a diagnostic setting to send the Activity log to one or more of these locations for the following reasons:
-- to [Azure Monitor Logs](../logs/data-platform-logs.md) for more complex querying and alerting, and longer retention (up to two years)
-- to Azure Event Hubs to forward outside of Azure
-- to Azure Storage for cheaper, long-term archiving
+The Azure Monitor activity log is a [platform log](./platform-logs-overview.md) in Azure that provides insight into subscription-level events. The activity log includes information like when a resource is modified or a virtual machine is started. You can view the activity log in the Azure portal or retrieve entries with PowerShell and the Azure CLI. This article provides information on how to view the activity log and send it to different destinations.
-See [Create diagnostic settings to send platform logs and metrics to different destinations](./diagnostic-settings.md) for details on creating a diagnostic setting.
+For more functionality, create a diagnostic setting to send the activity log to one or more of these locations for the following reasons:
+
+- Send to [Azure Monitor Logs](../logs/data-platform-logs.md) for more complex querying and alerting and for longer retention, up to two years.
+- Send to Azure Event Hubs to forward outside of Azure.
+- Send to Azure Storage for cheaper, long-term archiving.
+
+For details on how to create a diagnostic setting, see [Create diagnostic settings to send platform logs and metrics to different destinations](./diagnostic-settings.md).
> [!NOTE]
-> Entries in the Activity Log are system generated and cannot be changed or deleted.
+> Entries in the activity log are system generated and can't be changed or deleted.
+
+## Retention period
+
+Activity log events are retained in Azure for *90 days* and then deleted. There's no charge for entries during this time regardless of volume. For more functionality, such as longer retention, create a diagnostic setting and route the entries to another location based on your needs. See the criteria in the preceding section.
-## Retention Period
+## View the activity log
-Activity log events are retained in Azure for **90 days** and then deleted. There's no charge for entries during this time regardless of volume. For more functionality such as longer retention, you should create a diagnostic setting and route the entires to another location based on your needs. See the criteria in the earlier section of this article.
+You can access the activity log from most menus in the Azure portal. The menu that you open it from determines its initial filter. If you open it from the **Monitor** menu, the only filter is on the subscription. If you open it from a resource's menu, the filter is set to that resource. You can always change the filter to view all other entries. Select **Add Filter** to add more properties to the filter.
-## View the Activity log
-You can access the Activity log from most menus in the Azure portal. The menu that you open it from determines its initial filter. If you open it from the **Monitor** menu, then the only filter will be on the subscription. If you open it from a resource's menu, then the filter is set to that resource. You can always change the filter though to view all other entries. Select **Add Filter** to add more properties to the filter.
+![Screenshot that shows the activity log.](./media/activity-log/view-activity-log.png)
-![View Activity Log](./media/activity-log/view-activity-log.png)
+For a description of activity log categories, see [Azure activity log event schema](activity-log-schema.md#categories).
-For a description of Activity log categories see [Azure Activity Log event schema](activity-log-schema.md#categories).
+## Download the activity log
-## Download the Activity log
Select **Download as CSV** to download the events in the current view.
-![Download Activity log](media/activity-log/download-activity-log.png)
+![Screenshot that shows downloading the activity log.](media/activity-log/download-activity-log.png)
### View change history
-For some events, you can view the Change history, which shows what changes happened during that event time. Select an event from the Activity Log you want to look deeper into. Select the **Change history (Preview)** tab to view any associated changes with that event.
+For some events, you can view the change history, which shows what changes happened during that event time. Select an event from the activity log you want to look at more deeply. Select the **Change history (Preview)** tab to view any associated changes with that event.
-![Change history list for an event](media/activity-log/change-history-event.png)
+![Screenshot that shows the Change history list for an event.](media/activity-log/change-history-event.png)
-If there are any associated changes with the event, you'll see a list of changes that you can select. This opens up the **Change history (Preview)** page. On this page, you see the changes to the resource. In the following example, you can see not only that the VM changed sizes, but what the previous VM size was before the change and what it was changed to. To learn more about change history, see [Get resource changes](../../governance/resource-graph/how-to/get-resource-changes.md).
+If any changes are associated with the event, you'll see a list of changes that you can select. Selecting a change opens the **Change history (Preview)** page. This page displays the changes to the resource. In the following example, you can see that the VM changed sizes. The page displays the VM size before the change and after the change. To learn more about change history, see [Get resource changes](../../governance/resource-graph/how-to/get-resource-changes.md).
-![Change history page showing differences](media/activity-log/change-history-event-details.png)
+![Screenshot that shows the Change history page showing differences.](media/activity-log/change-history-event-details.png)
+### Other methods to retrieve activity log events
-### Other methods to retrieve Activity log events
-You can also access Activity log events using the following methods:
-
-- Use the [Get-AzLog](/powershell/module/az.monitor/get-azlog) cmdlet to retrieve the Activity Log from PowerShell. See [Azure Monitor PowerShell samples](../powershell-samples.md#retrieve-activity-log).
-- Use [az monitor activity-log](/cli/azure/monitor/activity-log) to retrieve the Activity Log from CLI. See [Azure Monitor CLI samples](../cli-samples.md#view-activity-log).
-- Use the [Azure Monitor REST API](/rest/api/monitor/) to retrieve the Activity Log from a REST client.
+You can also access activity log events by using the following methods:
+- Use the [Get-AzLog](/powershell/module/az.monitor/get-azlog) cmdlet to retrieve the activity log from PowerShell. See [Azure Monitor PowerShell samples](../powershell-samples.md#retrieve-activity-log).
+- Use [az monitor activity-log](/cli/azure/monitor/activity-log) to retrieve the activity log from the CLI. See [Azure Monitor CLI samples](../cli-samples.md#view-activity-log).
+- Use the [Azure Monitor REST API](/rest/api/monitor/) to retrieve the activity log from a REST client.
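For the REST route, here's a minimal sketch using Java 11's `HttpClient` against the activity log list operation (the subscription ID and bearer token values are placeholders, and the `$filter` value is only an example):

```java
import java.net.URI;
import java.net.URLEncoder;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.charset.StandardCharsets;

public class ActivityLogQuery {
    public static void main(String[] args) throws Exception {
        String subscriptionId = "00000000-0000-0000-0000-000000000000"; // placeholder
        String token = "REPLACE_WITH_TOKEN"; // for example, from `az account get-access-token`
        String filter = URLEncoder.encode(
                "eventTimestamp ge '2022-06-01T00:00:00Z'", StandardCharsets.UTF_8);

        URI uri = URI.create("https://management.azure.com/subscriptions/" + subscriptionId
                + "/providers/Microsoft.Insights/eventtypes/management/values"
                + "?api-version=2015-04-01&$filter=" + filter);

        HttpRequest request = HttpRequest.newBuilder(uri)
                .header("Authorization", "Bearer " + token)
                .GET()
                .build();

        // Prints the HTTP status and the JSON payload of activity log events
        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode());
        System.out.println(response.body());
    }
}
```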
## Send to Log Analytics workspace
- Send the Activity log to a Log Analytics workspace to enable the features of [Azure Monitor Logs](../logs/data-platform-logs.md) which includes the following:
-- Correlate Activity log data with other monitoring data collected by Azure Monitor.
+ Send the activity log to a Log Analytics workspace to enable the [Azure Monitor Logs](../logs/data-platform-logs.md) feature, where you:
+
+- Correlate activity log data with other monitoring data collected by Azure Monitor.
- Consolidate log entries from multiple Azure subscriptions and tenants into one location for analysis together.
-- Use log queries to perform complex analysis and gain deep insights on Activity Log entries.
-- Use log alerts with Activity entries allowing for more complex alerting logic.
-- Store Activity log entries for longer than the Activity Log retention period.
-- No data ingestion charges for Activity log data stored in a Log Analytics workspace.
-- No data retention charges for the first 90 days for Activity log data stored in a Log Analytics workspace.
+- Use log queries to perform complex analysis and gain deep insights on activity log entries.
+- Use log alerts with Activity entries for more complex alerting logic.
+- Store activity log entries for longer than the activity log retention period.
+- Incur no data ingestion charges for activity log data stored in a Log Analytics workspace.
+- Incur no data retention charges for the first 90 days for activity log data stored in a Log Analytics workspace.
- Select **Export Activity Logs**.
+ Select **Export Activity Logs** to send the activity log to a Log Analytics workspace.
- ![Export activity logs](media/activity-log/diagnostic-settings-export.png)
+ ![Screenshot that shows exporting activity logs.](media/activity-log/diagnostic-settings-export.png)
-to send the Activity log to a Log Analytics workspace. You can send the Activity log from any single subscription to up to five workspaces.
+You can send the activity log from any single subscription to up to five workspaces.
-Activity log data in a Log Analytics workspace is stored in a table called *AzureActivity* that you can retrieve with a [log query](../logs/log-query-overview.md) in [Log Analytics](../logs/log-analytics-tutorial.md). The structure of this table varies depending on the [category of the log entry](activity-log-schema.md). For a description of the table properties, see the [Azure Monitor data reference](/azure/azure-monitor/reference/tables/azureactivity).
+Activity log data in a Log Analytics workspace is stored in a table called `AzureActivity` that you can retrieve with a [log query](../logs/log-query-overview.md) in [Log Analytics](../logs/log-analytics-tutorial.md). The structure of this table varies depending on the [category of the log entry](activity-log-schema.md). For a description of the table properties, see the [Azure Monitor data reference](/azure/azure-monitor/reference/tables/azureactivity).
-For example, to view a count of Activity log records for each category, use the following query:
+For example, to view a count of activity log records for each category, use the following query:
```kusto
AzureActivity
| where CategoryValue == "Administrative"
```
-
## Send to Azure Event Hubs
-Send the Activity Log to Azure Event Hubs to send entries outside of Azure, for example to a third-party SIEM or other log analytics solutions. Activity log events from Event Hubs are consumed in JSON format with a `records` element containing the records in each payload. The schema depends on the category and is described in [Schema from Storage Account and Event Hubs](activity-log-schema.md).
-Following is sample output data from Event Hubs for an Activity log:
+Send the activity log to Azure Event Hubs to send entries outside of Azure, for example, to a third-party SIEM or other log analytics solutions. Activity log events from event hubs are consumed in JSON format with a `records` element that contains the records in each payload. The schema depends on the category and is described in [Azure activity log event schema](activity-log-schema.md).
+
+The following sample output data is from event hubs for an activity log:
``` JSON {
Following is sample output data from Event Hubs for an Activity log:
} ```
-## Send to Azure storage
-Send the Activity Log to an Azure Storage Account if you want to retain your log data longer than 90 days for audit, static analysis, or backup. If you only must retain your events for 90 days or less you don't need to set up archival to a Storage Account, since Activity Log events are retained in the Azure platform for 90 days.
+## Send to Azure Storage
+
+Send the activity log to an Azure Storage account if you want to retain your log data longer than 90 days for audit, static analysis, or backup. If you're required to retain your events for 90 days or less, you don't need to set up archival to a storage account. Activity log events are retained in the Azure platform for 90 days.
-When you send the Activity log to Azure, a storage container is created in the Storage Account as soon as an event occurs. The blobs in the container use the following naming convention:
+When you send the activity log to Azure, a storage container is created in the storage account as soon as an event occurs. The blobs in the container use the following naming convention:
```
insights-activity-logs/resourceId=/SUBSCRIPTIONS/{subscription ID}/y={four-digit numeric year}/m={two-digit numeric month}/d={two-digit numeric day}/h={two-digit 24-hour clock hour}/m=00/PT1H.json
```
-For example, a particular blob might have a name similar to the following:
+For example, a particular blob might have a name similar to:
```
insights-logs-networksecuritygrouprulecounter/resourceId=/SUBSCRIPTIONS/00000000-0000-0000-0000-000000000000/y=2020/m=06/d=08/h=18/m=00/PT1H.json
```
-Each PT1H.json blob contains a JSON blob of events that occurred within the hour specified in the blob URL (for example, h=12). During the present hour, events are appended to the PT1H.json file as they occur. The minute value (m=00) is always 00, since resource log events are broken into individual blobs per hour.
+Each PT1H.json blob contains a JSON blob of events that occurred within the hour specified in the blob URL, for example, h=12. During the present hour, events are appended to the PT1H.json file as they occur. The minute value (m=00) is always 00 because resource log events are broken into individual blobs per hour.
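If you want to pull these blobs down for local analysis, a minimal PowerShell sketch like the following can work. It assumes the Az.Storage module and an authenticated session; the account name, date filter, and destination folder are illustrative placeholders:

```powershell
# A minimal sketch, assuming the Az.Storage module and a signed-in
# session (Connect-AzAccount). Account name, date filter, and destination
# folder are illustrative placeholders.
$ctx = New-AzStorageContext -StorageAccountName "mystorageaccount" -UseConnectedAccount

# List the hourly PT1H.json blobs for one day and download them locally.
Get-AzStorageBlob -Container "insights-activity-logs" -Context $ctx |
    Where-Object { $_.Name -like "*y=2020/m=06/d=08*" } |
    Get-AzStorageBlobContent -Destination "C:\activity-logs\" -Context $ctx
```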
-Each event is stored in the PT1H.json file with the following format that uses a common top-level schema but is otherwise unique for each category as described in [Activity log schema](activity-log-schema.md).
+Each event is stored in the PT1H.json file with the following format. This format uses a common top-level schema but is otherwise unique for each category, as described in [Activity log schema](activity-log-schema.md).
``` JSON
{
    "time": "2020-06-12T13:07:46.766Z",
    "resourceId": "/SUBSCRIPTIONS/00000000-0000-0000-0000-000000000000/RESOURCEGROUPS/MY-RESOURCE-GROUP/PROVIDERS/MICROSOFT.COMPUTE/VIRTUALMACHINES/MV-VM-01",
    "correlationId": "0f0cb6b4-804b-4129-b893-70aeeb63997e",
    "operationName": "Microsoft.Resourcehealth/healthevent/Updated/action",
    "level": "Information",
    "resultType": "Updated",
    "category": "ResourceHealth",
    "properties": {
        "eventCategory": "ResourceHealth",
        "eventProperties": {
            "title": "This virtual machine is starting as requested by an authorized user or process. It will be online shortly.",
            "details": "VirtualMachineStartInitiatedByControlPlane",
            "currentHealthStatus": "Unknown",
            "previousHealthStatus": "Unknown",
            "type": "Downtime",
            "cause": "UserInitiated"
        }
    }
}
```

## Legacy collection methods
-This section describes legacy methods for collecting the Activity log that were used prior to diagnostic settings. If you're using these methods, you should consider transitioning to diagnostic settings that provide better functionality and consistency with resource logs.
+
+This section describes legacy methods for collecting the activity log that were used prior to diagnostic settings. If you're using these methods, consider transitioning to diagnostic settings that provide better functionality and consistency with resource logs.
### Log profiles
-Log profiles are the legacy method for sending the Activity log to Azure storage or Event Hubs. Use the following procedure to continue working with a log profile or to disable it in preparation for migrating to a diagnostic setting.
-1. From the **Azure Monitor** menu in the Azure portal, select **Activity log**.
-3. Select **Export Activity Logs**.
+Log profiles are the legacy method for sending the activity log to storage or event hubs. Use the following procedure to continue working with a log profile or to disable it in preparation for migrating to a diagnostic setting.
- ![Export activity logs](media/activity-log/diagnostic-settings-export.png)
+1. From the **Azure Monitor** menu in the Azure portal, select **Activity log**.
+1. Select **Export Activity Logs**.
-4. Select the purple banner for the legacy experience.
+ ![Screenshot that shows exporting activity logs.](media/activity-log/diagnostic-settings-export.png)
- ![Legacy experience](media/activity-log/legacy-experience.png)
+1. Select the purple banner for the legacy experience.
+ ![Screenshot that shows the legacy experience.](media/activity-log/legacy-experience.png)
-### Configure log profile using PowerShell
+### Configure a log profile by using PowerShell
-If a log profile already exists, you first must remove the existing log profile and then create new one.
+If a log profile already exists, you first must remove the existing log profile and then create a new one.
-1. Use `Get-AzLogProfile` to identify if a log profile exists. If a log profile does exist, note the *name* property.
+1. Use `Get-AzLogProfile` to identify if a log profile exists. If a log profile exists, note the `Name` property.
-1. Use `Remove-AzLogProfile` to remove the log profile using the value from the *name* property.
+1. Use `Remove-AzLogProfile` to remove the log profile by using the value from the `Name` property.
```powershell
# For example, if the log profile name is 'default'
Remove-AzLogProfile -Name "default"
```
-3. Use `Add-AzLogProfile` to create a new log profile:
+1. Use `Add-AzLogProfile` to create a new log profile:
```powershell
Add-AzLogProfile -Name my_log_profile -StorageAccountId /subscriptions/s1/resourceGroups/myrg1/providers/Microsoft.Storage/storageAccounts/my_storage -serviceBusRuleId /subscriptions/s1/resourceGroups/Default-ServiceBus-EastUS/providers/Microsoft.ServiceBus/namespaces/mytestSB/authorizationrules/RootManageSharedAccessKey -Location global,westus,eastus -RetentionInDays 90 -Category Write,Delete,Action
```
| Property | Required | Description |
| --- | --- | --- |
| Name |Yes |Name of your log profile. |
- | StorageAccountId |No |Resource ID of the Storage Account where the Activity Log should be saved. |
- | serviceBusRuleId |No |Service Bus Rule ID for the Service Bus namespace you would like to have Event Hubs created in. This is a string with the format: `{service bus resource ID}/authorizationrules/{key name}`. |
- | Location |Yes |Comma-separated list of regions for which you would like to collect Activity Log events. |
- | RetentionInDays |Yes |Number of days for which events should be retained in the Storage Account, from 1 through 365. A value of zero stores the logs indefinitely. |
- | Category |No |Comma-separated list of event categories that should be collected. Possible values are _Write_, _Delete_, and _Action_. |
+ | StorageAccountId |No |Resource ID of the storage account where the activity log should be saved. |
+ | serviceBusRuleId |No |Service Bus Rule ID for the Service Bus namespace where you want to have event hubs created. This string has the format `{service bus resource ID}/authorizationrules/{key name}`. |
+ | Location |Yes |Comma-separated list of regions for which you want to collect activity log events. |
+ | RetentionInDays |Yes |Number of days for which events should be retained in the storage account, from 1 through 365. A value of zero stores the logs indefinitely. |
+ | Category |No |Comma-separated list of event categories to be collected. Possible values are Write, Delete, and Action. |
### Example script
-Following is a sample PowerShell script to create a log profile that writes the Activity Log to both a Storage Account and an Event Hub.
+
+The following sample PowerShell script is used to create a log profile that writes the activity log to both a storage account and an event hub.
```powershell
# Settings needed for the new log profile
# ... (variable definitions elided in this view)

Add-AzLogProfile -Name $logProfileName -Location $locations -StorageAccountId $storageAccountId -ServiceBusRuleId $serviceBusRuleId
```
-### Configure log profile using Azure CLI
+### Configure a log profile by using the Azure CLI
If a log profile already exists, you first must remove the existing log profile and then create a log profile.

1. Use `az monitor log-profiles list` to identify if a log profile exists.
-2. Use `az monitor log-profiles delete --name "<log profile name>` to remove the log profile using the value from the *name* property.
-3. Use `az monitor log-profiles create` to create a log profile:
+1. Use `az monitor log-profiles delete --name "<log profile name>"` to remove the log profile by using the value from the `name` property.
+1. Use `az monitor log-profiles create` to create a log profile:
```azurecli-interactive
az monitor log-profiles create --name "default" --location null --locations "global" "eastus" "westus" --categories "Delete" "Write" "Action" --enabled false --days 0 --service-bus-rule-id "/subscriptions/<YOUR SUBSCRIPTION ID>/resourceGroups/<RESOURCE GROUP NAME>/providers/Microsoft.EventHub/namespaces/<Event Hub NAME SPACE>/authorizationrules/RootManageSharedAccessKey"
```
| Property | Required | Description |
| --- | --- | --- |
| name |Yes |Name of your log profile. |
- | storage-account-id |Yes |Resource ID of the Storage Account to which Activity Logs should be saved. |
- | locations |Yes |Space-separated list of regions for which you would like to collect Activity Log events. You can view a list of all regions for your subscription using `az account list-locations --query [].name`. |
- | days |Yes |Number of days for which events should be retained, from 1 through 365. A value of zero will store the logs indefinitely (forever). If zero, then the enabled parameter should be set to false. |
- |enabled | Yes |True or False. Used to enable or disable the retention policy. If True, then the days parameter must be a value greater than 0.
+ | storage-account-id |Yes |Resource ID of the storage account to which activity logs should be saved. |
+ | locations |Yes |Space-separated list of regions for which you want to collect activity log events. View a list of all regions for your subscription by using `az account list-locations --query [].name`. |
+ | days |Yes |Number of days for which events should be retained, from 1 through 365. A value of zero stores the logs indefinitely (forever). If zero, then the enabled parameter should be set to False. |
+ |enabled | Yes |True or False. Used to enable or disable the retention policy. If True, then the `days` parameter must be a value greater than zero.
| categories |Yes |Space-separated list of event categories that should be collected. Possible values are Write, Delete, and Action. |

### Log Analytics workspace
-The legacy method for sending the Activity log into a Log Analytics workspace is connecting the sign in the workspace configuration.
-1. From the **Log Analytics workspaces** menu in the Azure portal, select the workspace to collect the Activity Log.
-1. In the **Workspace Data Sources** section of the workspace's menu, select **Azure Activity log**.
-1. Select the subscription that you want to connect.
+The legacy method for sending the activity log into a Log Analytics workspace is to connect the log in the workspace configuration.
- ![Screenshot shows Log Analytics workspace with an Azure Activity log selected.](media/activity-log/workspaces.png)
+1. From the **Log Analytics workspaces** menu in the Azure portal, select the workspace to collect the activity log.
+1. In the **Workspace Data Sources** section of the workspace's menu, select **Azure Activity log**.
+1. Select the subscription that you want to connect to.
-2. Select **Connect** to connect the Activity sign in the subscription to the selected workspace. If the subscription is already connected to another workspace, select **Disconnect** first to disconnect it.
+ ![Screenshot that shows Log Analytics workspace with Azure Activity log selected.](media/activity-log/workspaces.png)
- ![Connect Workspaces](media/activity-log/connect-workspace.png)
+1. Select **Connect** to connect the activity log in the subscription to the selected workspace. If the subscription is already connected to another workspace, select **Disconnect** first to disconnect it.
+ ![Screenshot that shows connecting workspaces.](media/activity-log/connect-workspace.png)
-To disable the setting, perform the same procedure and select **Disconnect** to remove the subscription from the workspace.
+To disable the setting, follow the same procedure and select **Disconnect** to remove the subscription from the workspace.
### Data structure changes
-The Export activity logs experience, sends the same data as the legacy method used to send the Activity log with some changes to the structure of the *AzureActivity* table.
-The columns in the following table have been deprecated in the updated schema. They still exist in *AzureActivity* but they have no data. The replacements for these columns aren't new, but they contain the same data as the deprecated column. They are in a different format, so you might need to modify log queries that use them.
+The Export activity logs experience sends the same data as the legacy method used to send the activity log with some changes to the structure of the `AzureActivity` table.
-|Activity Log JSON | Log Analytics column name<br/>*(older deprecated)* | New Log Analytics column name | Notes |
+The columns in the following table have been deprecated in the updated schema. They still exist in `AzureActivity`, but they have no data. The replacements for these columns aren't new, but they contain the same data as the deprecated column. They're in a different format, so you might need to modify log queries that use them.
+
+|Activity log JSON | Log Analytics column name<br/>*(older deprecated)* | New Log Analytics column name | Notes |
|:|:|:|:|
|category | Category | CategoryValue ||
-|status<br/><br/>*values are (success, start, accept, failure)* |ActivityStatus <br/><br/>*values same as JSON* |ActivityStatusValue<br/><br/>*values change to (succeeded, started, accepted, failed)* |The valid values change as shown|
+|status<br/><br/>Values are success, start, accept, failure |ActivityStatus <br/><br/>Values same as JSON |ActivityStatusValue<br/><br/>Values change to succeeded, started, accepted, failed |The valid values change as shown.|
|subStatus |ActivitySubstatus |ActivitySubstatusValue||
-|operationName | OperationName | OperationNameValue |REST API localizes operation name value. Log Analytics UI always shows English. |
+|operationName | OperationName | OperationNameValue |REST API localizes the operation name value. Log Analytics UI always shows English. |
|resourceProviderName | ResourceProvider | ResourceProviderValue ||

> [!Important]
-> In some cases, the values in these columns may be in all uppercase. If you have a query that includes these columns, you should use the [=~ operator](/azure/kusto/query/datatypes-string-operators) to do a case insensitive comparison.
-The following columns have been added to *AzureActivity* in the updated schema:
+> In some cases, the values in these columns might be all uppercase. If you have a query that includes these columns, use the [=~ operator](/azure/kusto/query/datatypes-string-operators) to do a case-insensitive comparison.
+
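For example, here's a hedged sketch of such a case-insensitive query, run from PowerShell with the Az.OperationalInsights module (the workspace GUID is a placeholder):

```powershell
# Sketch, assuming the Az.OperationalInsights module. The =~ operator makes
# the comparison case-insensitive, so "Administrative" and "ADMINISTRATIVE"
# both match. The workspace GUID is a placeholder.
$query = 'AzureActivity | where CategoryValue =~ "administrative" | summarize count() by OperationNameValue'
Invoke-AzOperationalInsightsQuery -WorkspaceId "<workspace-guid>" -Query $query
```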
+The following columns have been added to `AzureActivity` in the updated schema:
- Authorization_d
- Claims_d
## Activity log insights
-Activity log insights let you view information about changes to resources and resource groups in a subscription. The dashboards also present data about which users or services performed activities in the subscription and the activities' status. This article explains how to view Activity log insights in the Azure portal.
+Activity log insights let you view information about changes to resources and resource groups in a subscription. The dashboards also present data about which users or services performed activities in the subscription and the activities' status. This article explains how to view activity log insights in the Azure portal.
-Before using Activity log insights, you'll have to [enable sending logs to your Log Analytics workspace](./diagnostic-settings.md).
+Before you use activity log insights, you must [enable sending logs to your Log Analytics workspace](./diagnostic-settings.md).
-### How does Activity log insights work?
+### How do activity log insights work?
-Activity logs you send to a [Log Analytics workspace](../logs/log-analytics-workspace-overview.md) are stored in a table called AzureActivity.
+Activity logs you send to a [Log Analytics workspace](../logs/log-analytics-workspace-overview.md) are stored in a table called `AzureActivity`.
-Activity log insights are a curated [Log Analytics workbook](../visualize/workbooks-overview.md) with dashboards that visualize the data in the AzureActivity table. For example, which administrators deleted, updated or created resources, and whether the activities failed or succeeded.
+Activity log insights are a curated [Log Analytics workbook](../visualize/workbooks-overview.md) with dashboards that visualize the data in the `AzureActivity` table. For example, data might include which administrators deleted, updated, or created resources and whether the activities failed or succeeded.
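Those dashboards boil down to aggregations over that table. A sketch of one such query run from PowerShell with the Az.OperationalInsights module (the workspace GUID is a placeholder):

```powershell
# Sketch, assuming the Az.OperationalInsights module: the kind of
# aggregation the insights dashboards chart, such as activity counts
# by status per hour. The workspace GUID is a placeholder.
$query = 'AzureActivity | summarize count() by bin(TimeGenerated, 1h), ActivityStatusValue'
Invoke-AzOperationalInsightsQuery -WorkspaceId "<workspace-guid>" -Query $query
```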
-### View Activity log insights - Resource group / Subscription level
+### View activity log insights: Resource group or subscription level
-To view Activity log insights on a resource group or a subscription level:
+To view activity log insights on a resource group or a subscription level:
1. In the Azure portal, select **Monitor** > **Workbooks**.
-1. Select **Activity Logs Insights** in the **Insights** section.
+1. In the **Insights** section, select **Activity Logs Insights**.
- :::image type="content" source="media/activity-log/open-activity-log-insights-workbook.png" lightbox= "media/activity-log/open-activity-log-insights-workbook.png" alt-text="A screenshot showing how to locate and open the Activity logs insights workbook on a scale level.":::
+ :::image type="content" source="media/activity-log/open-activity-log-insights-workbook.png" lightbox= "media/activity-log/open-activity-log-insights-workbook.png" alt-text="Screenshot that shows how to locate and open the Activity Logs Insights workbook on a scale level.":::
1. At the top of the **Activity Logs Insights** page, select:

    1. One or more subscriptions from the **Subscriptions** dropdown.
    1. Resources and resource groups from the **CurrentResource** dropdown.
    1. A time range for which to view data from the **TimeRange** dropdown.
-### View Activity log insights on any Azure resource
+
+### View activity log insights on any Azure resource
>[!Note]
-> * Currently Applications Insights resources are not supported for this workbook.
+> Currently, Application Insights resources aren't supported for this workbook.
-To view Activity log insights on a resource level:
+To view activity log insights on a resource level:
-1. In the Azure portal, go to your resource, select **Workbooks**.
-1. Select **Activity Logs Insights** in the **Activity Logs Insights** section.
+1. In the Azure portal, go to your resource and select **Workbooks**.
+1. In the **Activity Logs Insights** section, select **Activity Logs Insights**.
- :::image type="content" source="media/activity-log/activity-log-resource-level.png" lightbox= "media/activity-log/activity-log-resource-level.png" alt-text="A screenshot showing how to locate and open the Activity logs insights workbook on a resource level.":::
+ :::image type="content" source="media/activity-log/activity-log-resource-level.png" lightbox= "media/activity-log/activity-log-resource-level.png" alt-text="Screenshot that shows how to locate and open the Activity Logs Insights workbook on a resource level.":::
-1. At the top of the **Activity Logs Insights** page, select:
-
- 1. A time range for which to view data from the **TimeRange** dropdown.
- * **Azure Activity Log Entries** shows the count of Activity log records in each activity log category.
+1. At the top of the **Activity Logs Insights** page, select a time range for which to view data from the **TimeRange** dropdown:
+
+ * **Azure Activity Log Entries** shows the count of activity log records in each activity log category.
- :::image type="content" source="media/activity-log/activity-logs-insights-category-value.png" lightbox= "media/activity-log/activity-logs-insights-category-value.png" alt-text="Screenshot of Azure Activity Logs by Category Value":::
+ :::image type="content" source="media/activity-log/activity-logs-insights-category-value.png" lightbox= "media/activity-log/activity-logs-insights-category-value.png" alt-text="Screenshot that shows Azure activity logs by category value.":::
- * **Activity Logs by Status** shows the count of Activity log records in each status.
+ * **Activity Logs by Status** shows the count of activity log records in each status.
- :::image type="content" source="media/activity-log/activity-logs-insights-status.png" lightbox= "media/activity-log/activity-logs-insights-status.png" alt-text="Screenshot of Azure Activity Logs by Status":::
+ :::image type="content" source="media/activity-log/activity-logs-insights-status.png" lightbox= "media/activity-log/activity-logs-insights-status.png" alt-text="Screenshot that shows Azure activity logs by status.":::
- * At the subscription and resource group level, **Activity Logs by Resource** and **Activity Logs by Resource Provider** show the count of Activity log records for each resource and resource provider.
-
- :::image type="content" source="media/activity-log/activity-logs-insights-resource.png" lightbox= "media/activity-log/activity-logs-insights-resource.png" alt-text="Screenshot of Azure Activity Logs by Resource":::
+ * At the subscription and resource group level, **Activity Logs by Resource** and **Activity Logs by Resource Provider** show the count of activity log records for each resource and resource provider.
+ :::image type="content" source="media/activity-log/activity-logs-insights-resource.png" lightbox= "media/activity-log/activity-logs-insights-resource.png" alt-text="Screenshot that shows Azure activity logs by resource.":::
+ ## Next steps
+
+ * [Read an overview of platform logs](./platform-logs-overview.md)
-* [Review Activity log event schema](activity-log-schema.md)
-* [Create diagnostic setting to send Activity logs to other destinations](./diagnostic-settings.md)
+* [Review activity log event schema](activity-log-schema.md)
+* [Create a diagnostic setting to send activity logs to other destinations](./diagnostic-settings.md)
azure-monitor Data Platform Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/data-platform-metrics.md
# Azure Monitor Metrics overview
-Azure Monitor Metrics is a feature of Azure Monitor that collects numeric data from [monitored resources](../monitor-reference.md) into a time series database. Metrics are numerical values that are collected at regular intervals and describe some aspect of a system at a particular time.
-Metrics in Azure Monitor are lightweight and capable of supporting near real-time scenarios, so they're useful for alerting and fast detection of issues. You can analyze them interactively by using Metrics Explorer, be proactively notified with an alert when a value crosses a threshold, or visualize them in a workbook or dashboard.
+Azure Monitor Metrics is a feature of Azure Monitor that collects numeric data from [monitored resources](../monitor-reference.md) into a time-series database. Metrics are numerical values that are collected at regular intervals and describe some aspect of a system at a particular time.
+
+Metrics in Azure Monitor are lightweight and capable of supporting near-real-time scenarios. For these reasons, they're useful for alerting and fast detection of issues. You can:
+
+- Analyze them interactively by using Metrics Explorer.
+- Be proactively notified with an alert when a value crosses a threshold.
+- Visualize them in a workbook or dashboard.
> [!NOTE]
-> Azure Monitor Metrics is one half of the data platform that supports Azure Monitor. The other is [Azure Monitor Logs](../logs/data-platform-logs.md), which collects and organizes log and performance data and allows that data to be analyzed with a rich query language.
+> Azure Monitor Metrics is one half of the data platform that supports Azure Monitor. The other half is [Azure Monitor Logs](../logs/data-platform-logs.md), which collects and organizes log and performance data. You can analyze that data by using a rich query language.
>
-> The Metrics feature can only store numeric data in a particular structure, whereas the Logs feature can store a variety of datatypes (each with its own structure). You can also perform complex analysis on log data by using log queries, which you can't use for analysis of metric data.
+> The Azure Monitor Metrics feature can only store numeric data in a particular structure. The Azure Monitor Logs feature can store a variety of datatypes, each with its own structure. You can also perform complex analysis on log data by using log queries, which you can't use for analysis of metric data.
## What can you do with Azure Monitor Metrics?
-The following table lists the ways that you can use the Metrics feature in Azure Monitor.
-| | Description |
+The following table lists the ways that you can use the Azure Monitor Metrics feature.
+
+| Uses | Description |
|:|:|
-| **Analyze** | Use [Metrics Explorer](metrics-charts.md) to analyze collected metrics on a chart and compare metrics from various resources. |
-| **Alert** | Configure a [metric alert rule](../alerts/alerts-metric.md) that sends a notification or takes [automated action](../alerts/action-groups.md) when the metric value crosses a threshold. |
-| **Visualize** | Pin a chart from Metrics Explorer to an [Azure dashboard](../app/tutorial-app-dashboards.md).<br>Create a [workbook](../visualize/workbooks-overview.md) to combine with multiple sets of data in an interactive report. Export the results of a query to [Grafana](../visualize/grafana-plugin.md) to use its dashboarding and combine with other data sources. |
-| **Automate** | Use [Autoscale](../autoscale/autoscale-overview.md) to increase or decrease resources based on a metric value crossing a threshold. |
-| **Retrieve** | Access metric values from a:<ul><li>Command line via the [Azure CLI](/cli/azure/monitor/metrics) or [Azure PowerShell cmdlets](/powershell/module/az.monitor).</li><li>Custom app via the [REST API](./rest-api-walkthrough.md) or client library for [.NET](/dotnet/api/overview/azure/Monitor.Query-readme), [Java](/java/api/overview/azure/monitor-query-readme), [JavaScript](/javascript/api/overview/azure/monitor-query-readme), or [Python](/python/api/overview/azure/monitor-query-readme).</li></ul> |
-| **Export** | [Route metrics to logs](./resource-logs.md#send-to-azure-storage) to analyze data in Azure Monitor Metrics together with data in Azure Monitor Logs and to store metric values for longer than 93 days.<br>Stream metrics to an [event hub](./stream-monitoring-data-event-hubs.md) to route them to external systems. |
-| **Archive** | [Archive](./platform-logs-overview.md) the performance or health history of your resource for compliance, auditing, or offline reporting purposes. |
+| Analyze | Use [Metrics Explorer](metrics-charts.md) to analyze collected metrics on a chart and compare metrics from various resources. |
+| Alert | Configure a [metric alert rule](../alerts/alerts-metric.md) that sends a notification or takes [automated action](../alerts/action-groups.md) when the metric value crosses a threshold. |
+| Visualize | Pin a chart from Metrics Explorer to an [Azure dashboard](../app/tutorial-app-dashboards.md).<br>Create a [workbook](../visualize/workbooks-overview.md) to combine with multiple sets of data in an interactive report. <br>Export the results of a query to [Grafana](../visualize/grafana-plugin.md) to use its dashboards and combine with other data sources. |
+| Automate | Use [Autoscale](../autoscale/autoscale-overview.md) to increase or decrease resources based on a metric value crossing a threshold. |
+| Retrieve | Access metric values from a:<ul><li>Command line via the [Azure CLI](/cli/azure/monitor/metrics) or [Azure PowerShell cmdlets](/powershell/module/az.monitor).</li><li>Custom app via the [REST API](./rest-api-walkthrough.md) or client library for [.NET](/dotnet/api/overview/azure/Monitor.Query-readme), [Java](/java/api/overview/azure/monitor-query-readme), [JavaScript](/javascript/api/overview/azure/monitor-query-readme), or [Python](/python/api/overview/azure/monitor-query-readme).</li></ul> |
+| Export | [Route metrics to logs](./resource-logs.md#send-to-azure-storage) to analyze data in Azure Monitor Metrics together with data in Azure Monitor Logs and to store metric values for longer than 93 days.<br>Stream metrics to an [event hub](./stream-monitoring-data-event-hubs.md) to route them to external systems. |
+| Archive | [Archive](./platform-logs-overview.md) the performance or health history of your resource for compliance, auditing, or offline reporting purposes. |
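As a concrete illustration of the Retrieve row, the following sketch pulls a platform metric with the Az.Monitor PowerShell module (the resource ID is a placeholder):

```powershell
# Sketch, assuming the Az.Monitor module: retrieve the 'Percentage CPU'
# platform metric for a VM at one-minute grain over the last hour.
$resourceId = "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.Compute/virtualMachines/<vm-name>"
Get-AzMetric -ResourceId $resourceId -MetricName "Percentage CPU" `
    -TimeGrain 00:01:00 -StartTime (Get-Date).AddHours(-1) -EndTime (Get-Date)
```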
![Diagram that shows sources and uses of metrics.](media/data-platform-metrics/metrics-overview.png)

## Data collection
-Azure Monitor collects metrics from the following sources. After these metrics are collected in the Azure Monitor metric database, they can be evaluated together regardless of their source.
-
-- **Azure resources**. Platform metrics are created by Azure resources and give you visibility into their health and performance. Each type of resource creates a [distinct set of metrics](./metrics-supported.md) without any configuration required. Platform metrics are collected from Azure resources at one-minute frequency unless specified otherwise in the metric's definition.
-- **Applications**. Application Insights creates metrics for your monitored applications to help you detect performance issues and track trends in how your application is being used. Values include _Server response time_ and _Browser exceptions_.
+Azure Monitor collects metrics from the following sources. After these metrics are collected in the Azure Monitor metric database, they can be evaluated together regardless of their source:
-- **Virtual machine agents**. Metrics are collected from the guest operating system of a virtual machine. You can enable guest OS metrics for Windows virtual machines by using the [Windows diagnostic extension](../agents/diagnostics-extension-overview.md) and for Linux virtual machines by using the [InfluxData Telegraf agent](https://www.influxdata.com/time-series-platform/telegraf/).
-
-- **Custom metrics**. You can define metrics in addition to the standard metrics that are automatically available. You can [define custom metrics in your application](../app/api-custom-events-metrics.md) that's monitored by Application Insights or create custom metrics for an Azure service by using the [custom metrics API](./metrics-store-custom-rest-api.md).
+- **Azure resources**: Platform metrics are created by Azure resources and give you visibility into their health and performance. Each type of resource creates a [distinct set of metrics](./metrics-supported.md) without any configuration required. Platform metrics are collected from Azure resources at one-minute frequency unless specified otherwise in the metric's definition.
+- **Applications**: Application Insights creates metrics for your monitored applications to help you detect performance issues and track trends in how your application is being used. Values include _Server response time_ and _Browser exceptions_.
+- **Virtual machine agents**: Metrics are collected from the guest operating system of a virtual machine. You can enable guest OS metrics for Windows virtual machines by using the [Windows diagnostic extension](../agents/diagnostics-extension-overview.md) and for Linux virtual machines by using the [InfluxData Telegraf agent](https://www.influxdata.com/time-series-platform/telegraf/).
+- **Custom metrics**: You can define metrics in addition to the standard metrics that are automatically available. You can [define custom metrics in your application](../app/api-custom-events-metrics.md) that's monitored by Application Insights. You can also create custom metrics for an Azure service by using the [custom metrics API](./metrics-store-custom-rest-api.md).
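To check which platform metrics a particular resource exposes by default, one option is a quick Az.Monitor query like this sketch (the resource ID is a placeholder):

```powershell
# Sketch, assuming the Az.Monitor module: list the metric definitions
# (name and unit) that a resource emits without any configuration.
Get-AzMetricDefinition -ResourceId "<resource-id>" |
    Select-Object @{ n = 'Metric'; e = { $_.Name.Value } }, Unit
```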
For a complete list of data sources that can send data to Azure Monitor Metrics, see [What is monitored by Azure Monitor?](../monitor-reference.md).

## Metrics Explorer

Use [Metrics Explorer](metrics-charts.md) to interactively analyze the data in your metric database and chart the values of multiple metrics over time. You can pin the charts to a dashboard to view them with other visualizations. You can also retrieve metrics by using the [Azure monitoring REST API](./rest-api-walkthrough.md).
-![Screenshot of an example graph in Metrics Explorer that shows server requests, server response time, and failed requests.](media/data-platform-metrics/metrics-explorer.png)
+![Screenshot that shows an example graph in Metrics Explorer that displays server requests, server response time, and failed requests.](media/data-platform-metrics/metrics-explorer.png)
For more information, see [Getting started with Azure Monitor Metrics Explorer](./metrics-getting-started.md).

## Data structure

Data that Azure Monitor Metrics collects is stored in a time-series database that's optimized for analyzing time-stamped data. Each set of metric values is a time series with the following properties:
-* The time that the value was collected.
+* The time when the value was collected.
* The resource that the value is associated with.
* A namespace that acts like a category for the metric.
* A metric name.
* The value itself.
-* [Multiple dimensions](#multi-dimensional-metrics) when they're present. Note that custom metrics are limited to 10 dimensions.
+* [Multiple dimensions](#multi-dimensional-metrics) when they're present. Custom metrics are limited to 10 dimensions.
## Multi-dimensional metrics
-One of the challenges to metric data is that it often has limited information to provide context for collected values. Azure Monitor addresses this challenge with multi-dimensional metrics.
-Dimensions of a metric are name/value pairs that carry additional data to describe the metric value. For example, a metric called _Available disk space_ might have a dimension called _Drive_ with values _C:_ and _D:_. That dimension would allow viewing available disk space across all drives or for each drive individually.
+One of the challenges to metric data is that it often has limited information to provide context for collected values. Azure Monitor addresses this challenge with multi-dimensional metrics.
+
+Dimensions of a metric are name/value pairs that carry more data to describe the metric value. For example, a metric called _Available disk space_ might have a dimension called _Drive_ with values _C:_ and _D:_. That dimension would allow viewing available disk space across all drives or for each drive individually.
The following example illustrates two datasets for a hypothetical metric called _Network throughput_. The first dataset has no dimensions. The second dataset shows the values with two dimensions, _IP_ and _Direction_.

### Network throughput
-| Timestamp | Metric Value |
+| Timestamp | Metric value |
| - |:-|
| 8/9/2017 8:14 | 1,331.8 Kbps |
| 8/9/2017 8:15 | 1,141.4 Kbps |
This nondimensional metric can only answer a basic question like "What was my network throughput?"
### Network throughput and two dimensions ("IP" and "Direction")
-| Timestamp | Dimension "IP" | Dimension "Direction" | Metric Value|
+| Timestamp | Dimension "IP" | Dimension "Direction" | Metric value|
| - |:--|:- |:--|
| 8/9/2017 8:14 | IP="192.168.5.2" | Direction="Send" | 646.5 Kbps |
| 8/9/2017 8:14 | IP="192.168.5.2" | Direction="Receive" | 420.1 Kbps |
| 8/9/2017 8:15 | IP="10.24.2.15" | Direction="Send" | 155.0 Kbps |
| 8/9/2017 8:15 | IP="10.24.2.15" | Direction="Receive" | 100.1 Kbps |
-This metric can answer questions such as "What was the network throughput for each IP address?" and "How much data was sent versus received?" Multi-dimensional metrics carry additional analytical and diagnostic value compared to nondimensional metrics.
+This metric can answer questions such as "What was the network throughput for each IP address?" and "How much data was sent versus received?" Multi-dimensional metrics carry more analytical and diagnostic value compared to nondimensional metrics.
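As a hedged sketch of retrieving such a dimensional metric programmatically, the Az.Monitor module can split a storage account's _Transactions_ metric by its _ApiName_ dimension (the resource ID is a placeholder; this real metric stands in for the hypothetical example above):

```powershell
# Sketch, assuming the Az.Monitor module. The filter "ApiName eq '*'"
# requests the Transactions metric split across every ApiName value.
Get-AzMetric -ResourceId "<storage-account-resource-id>" -MetricName "Transactions" `
    -MetricFilter "ApiName eq '*'" -AggregationType Total
```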
+
+### View multi-dimensional performance counter metrics in Metrics Explorer
-### Viewing multi-dimensional performance counter metrics in Metrics Explorer
It's not possible to send performance counter metrics that contain an asterisk (\*) to Azure Monitor via the Classic Guest Metrics API. This API can't display metrics that contain an asterisk because it's a multi-dimensional metric, which classic metrics don't support.

To configure and view multi-dimensional guest OS performance counter metrics by using the Azure Diagnostic extension:

1. Go to the **Diagnostic settings** page for your virtual machine.
-2. Select the **Performance counters** tab.
-3. Select **Custom** to configure the performance counters that you want to collect.
+1. Select the **Performance counters** tab.
+1. Select **Custom** to configure the performance counters that you want to collect.
- ![Screenshot of the performance counters section of the diagnostic settings page.](media/data-platform-metrics/azure-monitor-perf-counter.png)
+ ![Screenshot that shows the performance counters section of the Diagnostic settings page.](media/data-platform-metrics/azure-monitor-perf-counter.png)
-4. Select **Sinks**. Then select **Enabled** to send your data to Azure Monitor.
+1. Select **Sinks**. Then select **Enabled** to send your data to Azure Monitor.
- ![Screenshot of the sinks section of the diagnostic settings page.](media/data-platform-metrics/azure-monitor-sink.png)
+ ![Screenshot that shows the Sinks section of the Diagnostic settings page.](media/data-platform-metrics/azure-monitor-sink.png)
-5. To view your metric in Azure Monitor, select **Virtual Machine Guest** in the **Metric Namespace** dropdown list.
+1. To view your metric in Azure Monitor, select **Virtual Machine Guest** in the **Metric Namespace** dropdown.
- ![Screenshot of metric namespace.](media/data-platform-metrics/vm-guest-namespace.png)
+ ![Screenshot that shows the Metric Namespace dropdown.](media/data-platform-metrics/vm-guest-namespace.png)
-6. Select **Apply splitting** and fill in the details to split the metric by instance. You can then see the metric broken down by each of the possible values represented by the asterisk in the configuration. In this example, the asterisk represents the logical disk volumes plus the total.
+1. Select **Apply splitting** and fill in the details to split the metric by instance. You can then see the metric broken down by each of the possible values represented by the asterisk in the configuration. In this example, the asterisk represents the logical disk volumes plus the total.
- ![Screenshot of splitting a metric by instance.](media/data-platform-metrics/split-by-instance.png)
+ ![Screenshot that shows splitting a metric by instance.](media/data-platform-metrics/split-by-instance.png)
## Retention of metrics

For most resources in Azure, platform metrics are stored for 93 days. There are some exceptions:

-- **Classic guest OS metrics**: These are performance counters collected by the [Windows diagnostic extension](../agents/diagnostics-extension-overview.md) or the [Linux diagnostic extension](../../virtual-machines/extensions/diagnostics-linux.md) and routed to an Azure storage account. Retention for these metrics is guaranteed to be at least 14 days, though no expiration date is written to the storage account.
+- **Classic guest OS metrics**: These performance counters are collected by the [Windows diagnostic extension](../agents/diagnostics-extension-overview.md) or the [Linux diagnostic extension](../../virtual-machines/extensions/diagnostics-linux.md) and routed to an Azure Storage account. Retention for these metrics is guaranteed to be at least 14 days, although no expiration date is written to the storage account.
- For performance reasons, the portal limits how much data it displays based on volume. Therefore, the actual number of days that the portal retrieves can be longer than 14 days if the volume of data being written is not large.
+ For performance reasons, the portal limits how much data it displays based on volume. So, the actual number of days that the portal retrieves can be longer than 14 days if the volume of data being written isn't large.
-- **Guest OS metrics sent to Azure Monitor Metrics**: These are performance counters collected by the [Windows diagnostic extension](../agents/diagnostics-extension-overview.md) and sent to the [Azure Monitor data sink](../agents/diagnostics-extension-overview.md#data-destinations), or the [InfluxData Telegraf agent](https://www.influxdata.com/time-series-platform/telegraf/) on Linux machines, or the newer [Azure Monitor agent](../agents/azure-monitor-agent-overview.md) via data-collection rules. Retention for these metrics is 93 days.
+- **Guest OS metrics sent to Azure Monitor Metrics**: These performance counters are collected by the [Windows diagnostic extension](../agents/diagnostics-extension-overview.md) and sent to the [Azure Monitor data sink](../agents/diagnostics-extension-overview.md#data-destinations), or the [InfluxData Telegraf agent](https://www.influxdata.com/time-series-platform/telegraf/) on Linux machines, or the newer [Azure Monitor agent](../agents/azure-monitor-agent-overview.md) via data-collection rules. Retention for these metrics is 93 days.
-- **Guest OS metrics collected by the Log Analytics agent**: These are performance counters collected by the Log Analytics agent and sent to a Log Analytics workspace. Retention for these metrics is 31 days and can be extended up to 2 years.
+- **Guest OS metrics collected by the Log Analytics agent**: These performance counters are collected by the Log Analytics agent and sent to a Log Analytics workspace. Retention for these metrics is 31 days and can be extended up to 2 years.
-- **Application Insights log-based metrics**. Behind the scenes, [log-based metrics](../app/pre-aggregated-metrics-log-metrics.md) translate into log queries. Their retention is variable and matches the retention of events in underlying logs (31 days to 2 years). For Application Insights resources, logs are stored for 90 days.
+- **Application Insights log-based metrics**: Behind the scenes, [log-based metrics](../app/pre-aggregated-metrics-log-metrics.md) translate into log queries. Their retention is variable and matches the retention of events in underlying logs, which is 31 days to 2 years. For Application Insights resources, logs are stored for 90 days.
> [!NOTE]
> You can [send platform metrics for Azure Monitor resources to a Log Analytics workspace](./resource-logs.md#send-to-azure-storage) for long-term trending.
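A hedged sketch of that routing with the older `Set-AzDiagnosticSetting` cmdlet (IDs are placeholders; `AllMetrics` is the metric category most resource types expose):

```powershell
# Sketch, assuming the Az.Monitor module: copy a resource's platform
# metrics into a Log Analytics workspace for retention beyond 93 days.
# 'AllMetrics' is the metric category most resource types expose.
Set-AzDiagnosticSetting -Name "metrics-to-workspace" `
    -ResourceId "<resource-id>" `
    -WorkspaceId "<log-analytics-workspace-resource-id>" `
    -MetricCategory "AllMetrics" -Enabled $true
```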
-> [!NOTE]
-> As mentioned earlier, for most resources in Azure, platform metrics are stored for 93 days. However, you can only query (in the **Metrics** tile) for a maximum of 30 days worth of data on any single chart. This limitation doesn't apply to log-based metrics.
->
-> If you see a blank chart or your chart displays only part of metric data, verify that the difference between start and end dates in the time picker doesn't exceed the 30-day interval. After you've selected a 30-day interval, you can [pan](./metrics-charts.md#pan) the chart to view the full retention window.
+As mentioned earlier, for most resources in Azure, platform metrics are stored for 93 days. However, you can only query (in the **Metrics** tile) for a maximum of 30 days' worth of data on any single chart. This limitation doesn't apply to log-based metrics.
+
+If you see a blank chart or your chart displays only part of metric data, verify that the difference between start and end dates in the time picker doesn't exceed the 30-day interval. After you've selected a 30-day interval, you can [pan](./metrics-charts.md#pan) the chart to view the full retention window.
## Next steps
azure-monitor Monitor Azure Resource https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/monitor-azure-resource.md
Title: Monitor Azure resources with Azure Monitor | Microsoft Docs
-description: Describes how to collect and analyze monitoring data from resources in Azure using Azure Monitor.
+description: This article describes how to collect and analyze monitoring data from resources in Azure by using Azure Monitor.
Last updated 09/15/2021
# Tutorial: Monitor Azure resources with Azure Monitor
-When you have critical applications and business processes relying on Azure resources, you want to monitor those resources for their availability, performance, and operation. This monitoring is provided by Azure Monitor, which is a full stack monitoring service in Azure that provides a complete set of features to monitor your Azure resources in addition to resources in other clouds and on-premises.
-In this tutorial, you learn:
+When you have critical applications and business processes that rely on Azure resources, you want to monitor those resources for their availability, performance, and operation. Azure Monitor is a full-stack monitoring service that provides a complete set of features to monitor your Azure resources. You can also use Azure Monitor to monitor resources in other clouds and on-premises.
+
+In this tutorial, you learn about:
> [!div class="checklist"]
-> * What Azure Monitor is and how it's integrated into the portal for other Azure services
-> * The types of data collected by Azure Monitor for Azure resources
-> * Azure Monitor tools used to collect and analyze data
+> * Azure Monitor and how it's integrated into the portal for other Azure services.
+> * The types of data collected by Azure Monitor for Azure resources.
+> * Azure Monitor tools that are used to collect and analyze data.
> [!NOTE]
> This tutorial describes Azure Monitor concepts and walks you through different menu items. To jump right into using Azure Monitor features, start with [Tutorial: Analyze metrics for an Azure resource](../essentials/tutorial-metrics.md).

## Monitoring data
+This section discusses collecting and monitoring data.
+ ### Azure Monitor data collection
-As soon as you create an Azure resource, Azure Monitor is enabled and starts collecting metrics and activity logs. With some configuration, you can gather additional monitoring data and enable additional features. The Azure Monitor data platform is made up of Metrics and Logs. Each collects different kinds of data and enables different Azure Monitor features.
-- [Azure Monitor Metrics](../essentials/data-platform-metrics.md) stores numeric data from monitored resources into a time series database. The metric database is automatically created for each Azure subscription. Use [metrics explorer](../essentials/tutorial-metrics.md) to analyze data from Azure Monitor Metrics.
-- [Azure Monitor Logs](../logs/data-platform-logs.md) collects logs and performance data where they can be retrieved and analyzed in different ways using log queries. You must create a Log Analytics workspace to collect log data. Use [Log Analytics](../logs/log-analytics-tutorial.md) to analyze data from Azure Monitor Logs.
+As soon as you create an Azure resource, Azure Monitor is enabled and starts collecting metrics and activity logs. With some configuration, you can gather more monitoring data and enable other features. The Azure Monitor data platform is made up of Metrics and Logs. Each feature collects different kinds of data and enables different Azure Monitor features.
+
+- [Azure Monitor Metrics](../essentials/data-platform-metrics.md) stores numeric data from monitored resources into a time-series database. The metric database is automatically created for each Azure subscription. Use [Metrics Explorer](../essentials/tutorial-metrics.md) to analyze data from Azure Monitor Metrics.
+- [Azure Monitor Logs](../logs/data-platform-logs.md) collects logs and performance data where they can be retrieved and analyzed in different ways by using log queries. You must create a Log Analytics workspace to collect log data. Use [Log Analytics](../logs/log-analytics-tutorial.md) to analyze data from Azure Monitor Logs.
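If you prefer scripts to the portal, here's a minimal sketch of querying that workspace data with the Az.OperationalInsights module (the workspace GUID is a placeholder):

```powershell
# Sketch, assuming the Az.OperationalInsights module: run a log query
# against a Log Analytics workspace and read the result rows.
$result = Invoke-AzOperationalInsightsQuery -WorkspaceId "<workspace-guid>" `
    -Query "AzureActivity | take 10"
$result.Results
```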
### <a id="monitoring-data-from-azure-resources"></a> Monitor data from Azure resources

While resources from different Azure services have different monitoring requirements, they generate monitoring data in the same formats so that you can use the same Azure Monitor tools to analyze all Azure resources. Diagnostic settings define where resource logs and metrics for a particular resource should be sent. Possible destinations are:

-- [Activity log](./platform-logs-overview.md) - Subscription level events that track operations for each Azure resource, for example creating a new resource or starting a virtual machine. Activity log events are automatically generated and collected for viewing in the Azure portal. You can create a diagnostic setting to send the Activity log to Azure Monitor Logs.
-- [Platform metrics](../essentials/data-platform-metrics.md) - Numerical values that are automatically collected at regular intervals and describe some aspect of a resource at a particular time. Platform metrics are automatically generated and collected in Azure Monitor Metrics.
-- [Resource logs](./platform-logs-overview.md) - Provide insight into operations that were performed by an Azure resource, for example getting a secret from a Key Vault or making a request to a database. Resource logs are generated automatically, but you must create a diagnostic setting to send them to Azure Monitor Logs.
-- [Virtual machine guest metrics and logs]() - Performance and log data from the guest operating system of Azure virtual machines. You must install an agent on the virtual machine to collect this data and send it to Azure Monitor Metrics and Azure Monitor Logs.
+- [Activity log](./platform-logs-overview.md): Subscription-level events that track operations for each Azure resource, for example, creating a new resource or starting a virtual machine. Activity log events are automatically generated and collected for viewing in the Azure portal. You can create a diagnostic setting to send the activity log to Azure Monitor Logs.
+- [Platform metrics](../essentials/data-platform-metrics.md): Numerical values that are automatically collected at regular intervals and describe some aspect of a resource at a particular time. Platform metrics are automatically generated and collected in Azure Monitor Metrics.
+- [Resource logs](./platform-logs-overview.md): Provide insight into operations that were performed by an Azure resource. Operation examples might be getting a secret from a key vault or making a request to a database. Resource logs are generated automatically, but you must create a diagnostic setting to send them to Azure Monitor Logs.
+- [Virtual machine guest metrics and logs](): Performance and log data from the guest operating system of Azure virtual machines. You must install an agent on the virtual machine to collect this data and send it to Azure Monitor Metrics and Azure Monitor Logs.
## Menu options
-While you can access Azure Monitor features from the **Monitor** menu in the Azure portal, Azure Monitor features can be accessed directly from the menu for different Azure services. While different Azure services may have slightly different experiences, they share a common set of monitoring options in the Azure portal. This includes **Overview** and **Activity log** and multiple options in the **Monitoring** section of the menu.
+You can access Azure Monitor features from the **Monitor** menu in the Azure portal. You can also access Azure Monitor features directly from the menu for different Azure services. Different Azure services might have slightly different experiences, but they share a common set of monitoring options in the Azure portal. These menu items include **Overview** and **Activity log** and multiple options in the **Monitoring** section of the menu.
## Overview page
-The **Overview** page includes details about the resource and often its current state. For example, a virtual machine will show its current running state. Many Azure services will have a **Monitoring** tab that includes charts for a set of key metrics. This is a quick way to view the operation of the resource, and you can click on any of the charts to open them in metrics explorer for more detailed analysis.
-See [Tutorial: Analyze metrics for an Azure resource](../essentials/tutorial-metrics.md) for a tutorial on using metrics explorer.
+The **Overview** page includes details about the resource and often its current state. For example, a virtual machine shows its current running state. Many Azure services have a **Monitoring** tab that includes charts for a set of key metrics. Charts are a quick way to view the operation of the resource. You can select any of the charts to open them in Metrics Explorer for more detailed analysis.
-![Overview page](media/monitor-azure-resource/overview-page.png)
-### Activity log
-The **Activity log** menu item lets you view entries in the [activity log](../essentials/activity-log.md) for the current resource.
+For a tutorial on using Metrics Explorer, see [Tutorial: Analyze metrics for an Azure resource](../essentials/tutorial-metrics.md).
+![Screenshot that shows the Overview page.](media/monitor-azure-resource/overview-page.png)
+
+### Activity log
+
+The **Activity log** menu item lets you view entries in the [activity log](../essentials/activity-log.md) for the current resource.
+ ## Alerts
-The **Alerts** page will show you any recent alerts that have been fired for the resource. Alerts proactively notify you when important conditions are found in your monitoring data and can use data from either Metrics or Logs.
-See [Tutorial: Create a metric alert for an Azure resource](../alerts/tutorial-metric-alert.md) or [Tutorial: Create a log query alert for an Azure resource](../alerts/tutorial-log-alert.md) for tutorials on create alert rules and viewing alerts.
+The **Alerts** page shows you any recent alerts that were fired for the resource. Alerts proactively notify you when important conditions are found in your monitoring data and can use data from either Metrics or Logs.
+For tutorials on how to create alert rules and view alerts, see [Tutorial: Create a metric alert for an Azure resource](../alerts/tutorial-metric-alert.md) or [Tutorial: Create a log query alert for an Azure resource](../alerts/tutorial-log-alert.md).
+ ## Metrics
-The **Metrics** menu item opens [metrics explorer](./metrics-getting-started.md) which allows you to work with individual metrics or combine multiple to identify correlations and trends. This is the same metrics explorer that's opened when you click on one of the charts in the **Overview** page.
-See [Tutorial: Analyze metrics for an Azure resource](../essentials/tutorial-metrics.md) for a tutorial on using metrics explorer.
+The **Metrics** menu item opens [Metrics Explorer](./metrics-getting-started.md). You can use it to work with individual metrics or combine multiple metrics to identify correlations and trends. This is the same Metrics Explorer that opens when you select one of the charts on the **Overview** page.
+For a tutorial on how to use Metrics Explorer, see [Tutorial: Analyze metrics for an Azure resource](../essentials/tutorial-metrics.md).
## Diagnostic settings
-The **Diagnostic settings** page lets you create a [diagnostic setting](../essentials/diagnostic-settings.md) to collect the resource logs for your resource. You can send them to multiple locations, but the most common is to send to a Log Analytics workspace so you can analyze them with Log Analytics.
-
-See [Tutorial: Collect and analyze resource logs from an Azure resource](../essentials/tutorial-resource-logs.md) for a tutorial on creating a diagnostic setting.
+The **Diagnostic settings** page lets you create a [diagnostic setting](../essentials/diagnostic-settings.md) to collect the resource logs for your resource. You can send them to multiple locations, but the most common use is to send them to a Log Analytics workspace so you can analyze them with Log Analytics.
+For a tutorial on how to create a diagnostic setting, see [Tutorial: Collect and analyze resource logs from an Azure resource](../essentials/tutorial-resource-logs.md).
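As a quick check after a diagnostic setting takes effect, a query along these lines in the workspace shows whether resource logs are arriving and which categories are collected. This sketch assumes the resource writes to the shared `AzureDiagnostics` table; resource-specific tables have their own names.

```
// A minimal sketch: confirm resource logs are flowing into the workspace and
// see which categories are being collected. Category names vary by resource type.
AzureDiagnostics
| where TimeGenerated > ago(1h)
| summarize count() by ResourceProvider, Category
```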
-## Insights
-The **Insights** menu item opens the insight for the resource if the Azure service has one. [Insights](../monitor-reference.md) provide a customized monitoring experience built on the Azure Monitor data platform and standard features.
+## Insights
+The **Insights** menu item opens the insight for the resource if the Azure service has one. [Insights](../monitor-reference.md) provide a customized monitoring experience built on the Azure Monitor data platform and standard features.
-See [Insights and Core solutions](../monitor-reference.md#insights-and-curated-visualizations) for a list of insights that are available and links to their documentation.
+For a list of insights that are available and links to their documentation, see [Insights and core solutions](../monitor-reference.md#insights-and-curated-visualizations).
## Next steps
-Now that you have a basic understanding of Azure Monitor, get start analyzing some metrics for an Azure resource.
+
+Now that you have a basic understanding of Azure Monitor, get started analyzing some metrics for an Azure resource.
> [!div class="nextstepaction"]
> [Analyze metrics for an Azure resource](../essentials/tutorial-metrics.md)
azure-monitor Platform Logs Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/platform-logs-overview.md
Title: Overview of Azure platform logs | Microsoft Docs
-description: Overview of logs in Azure Monitor which provide rich, frequent data about the operation of an Azure resource.
+description: Overview of logs in Azure Monitor, which provide rich, frequent data about the operation of an Azure resource.
Last updated 12/19/2019
# Overview of Azure platform logs
-Platform logs provide detailed diagnostic and auditing information for Azure resources and the Azure platform they depend on. They are automatically generated although you need to configure certain platform logs to be forwarded to one or more destinations to be retained. This article provides an overview of platform logs including what information they provide and how you can configure them for collection and analysis.
+
+Platform logs provide detailed diagnostic and auditing information for Azure resources and the Azure platform they depend on. Although they're automatically generated, you need to configure certain platform logs to be forwarded to one or more destinations to be retained. This article provides an overview of platform logs, including what information they provide and how you can configure them for collection and analysis.
## Types of platform logs
+
The following table lists the specific platform logs that are available at different layers of Azure.

| Log | Layer | Description |
|:|:|:|
-| [Resource logs](./resource-logs.md) | Azure Resources | Provide insight into operations that were performed within an Azure resource (the *data plane*), for example getting a secret from a Key Vault or making a request to a database. The content of resource logs varies by the Azure service and resource type.<br><br>*Resource logs were previously referred to as diagnostic logs.* |
-| [Activity log](../essentials/activity-log.md) | Azure Subscription | Provides insight into the operations on each Azure resource in the subscription from the outside (*the management plane*) in addition to updates on Service Health events. Use the Activity Log, to determine the _what_, _who_, and _when_ for any write operations (PUT, POST, DELETE) taken on the resources in your subscription. There is a single Activity log for each Azure subscription. |
-| [Azure Active Directory logs](../../active-directory/reports-monitoring/overview-reports.md) | Azure Tenant | Contains the history of sign-in activity and audit trail of changes made in the Azure Active Directory for a particular tenant. |
+| [Resource logs](./resource-logs.md) | Azure Resources | Provide insight into operations that were performed within an Azure resource (the *data plane*). Examples might be getting a secret from a key vault or making a request to a database. The content of resource logs varies by the Azure service and resource type.<br><br>*Resource logs were previously referred to as diagnostic logs.* |
+| [Activity log](../essentials/activity-log.md) | Azure Subscription | Provides insight into the operations on each Azure resource in the subscription from the outside (the *management plane*) in addition to updates on Service Health events. Use the Activity log to determine the _what_, _who_, and _when_ for any write operations (PUT, POST, DELETE) taken on the resources in your subscription. There's a single activity log for each Azure subscription. |
+| [Azure Active Directory (Azure AD) logs](../../active-directory/reports-monitoring/overview-reports.md) | Azure Tenant | Contain the history of sign-in activity and audit trail of changes made in Azure AD for a particular tenant. |
> [!NOTE]
-> The Azure Activity Log is primarily for activities that occur in Azure Resource Manager. It does not track resources using the Classic/RDFE model. Some Classic resource types have a proxy resource provider in Azure Resource Manager (for example, Microsoft.ClassicCompute). If you interact with a Classic resource type through Azure Resource Manager using these proxy resource providers, the operations appear in the Activity Log. If you interact with a Classic resource type outside of the Azure Resource Manager proxies, your actions are only recorded in the Operation Log. The Operation Log can be browsed in a separate section of the portal.
-
-![Platform logs overview](media/platform-logs-overview/logs-overview.png)
+> The Azure activity log is primarily for activities that occur in Azure Resource Manager. It doesn't track resources by using the classic/RDFE model. Some classic resource types have a proxy resource provider in Resource Manager (for example, Microsoft.ClassicCompute). If you interact with a classic resource type through Resource Manager by using these proxy resource providers, the operations appear in the activity log. If you interact with a classic resource type outside of the Resource Manager proxies, your actions are only recorded in the Operation log. The Operation log can be browsed in a separate section of the portal.
+![Diagram that shows a platform logs overview.](media/platform-logs-overview/logs-overview.png)
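If the activity log is routed to a Log Analytics workspace, a query along these lines surfaces the what, who, and when for recent write operations. The column names follow the current `AzureActivity` schema and are illustrative; older workspaces might use different column names.

```
// A sketch, assuming the activity log is sent to a Log Analytics workspace:
// list recent write and delete operations with who performed them.
AzureActivity
| where TimeGenerated > ago(7d)
| where OperationNameValue endswith "/write" or OperationNameValue endswith "/delete"
| project TimeGenerated, Caller, OperationNameValue, ResourceGroup
| order by TimeGenerated desc
```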
+## View platform logs
+There are different options for viewing and analyzing the different Azure platform logs:
-## Viewing platform logs
-There are different options for viewing and analyzing the different Azure platform logs.
--- View the Activity log in the Azure portal and access events from PowerShell and CLI. See [View the Activity log](../essentials/activity-log.md#view-the-activity-log) for details.
-- View Azure Active Directory Security and Activity reports in the Azure portal. See [What are Azure Active Directory reports?](../../active-directory/reports-monitoring/overview-reports.md) for details.
-- Resource logs are automatically generated by supported Azure resources, but they aren't available to be viewed unless you create a [diagnostic setting](#diagnostic-settings).
+- View the activity log in the Azure portal and access events from PowerShell and the Azure CLI. See [View the activity log](../essentials/activity-log.md#view-the-activity-log) for details.
+- View Azure AD security and activity reports in the Azure portal. See [What are Azure AD reports?](../../active-directory/reports-monitoring/overview-reports.md) for details.
+- Resource logs are automatically generated by supported Azure resources. They aren't available to be viewed unless you create a [diagnostic setting](#diagnostic-settings).
## Diagnostic settings
-Create a [diagnostic setting](../essentials/diagnostic-settings.md) to send platform logs to one of the following destinations for analysis or other purposes. Resource logs must have a diagnostic setting be used since they have no other way of being viewed.
+
+Create a [diagnostic setting](../essentials/diagnostic-settings.md) to send platform logs to one of the following destinations for analysis or other purposes. Resource logs require a diagnostic setting because they can't be viewed any other way.
| Destination | Description |
|:|:|
-| Log Analytics workspace | Analyze the logs of all your Azure resources together and take advantage of all the features available to [Azure Monitor Logs](../logs/data-platform-logs.md) including [log queries](../logs/log-query-overview.md) and [log alerts](../alerts/alerts-log.md). Pin the results of a log query to an Azure dashboard or include it in a workbook as part of an interactive report. |
-| Event hub | Send platform log data outside of Azure, for example to a third-party SIEM or custom telemetry platform. |
-| Azure storage | Archive the logs for audit or backup. |
-| [Azure Monitor partner integrations](../../partner-solutions/overview.md)| Specialized integrations between Azure Monitor and other non-Microsoft monitoring platforms. Useful when you are already using one of the partners. |
+| Log Analytics workspace | Analyze the logs of all your Azure resources together and take advantage of all the features available to [Azure Monitor Logs](../logs/data-platform-logs.md) including [log queries](../logs/log-query-overview.md) and [log alerts](../alerts/alerts-log.md). Pin the results of a log query to an Azure dashboard or include it in a workbook as part of an interactive report. |
+| Event hub | Send platform log data outside of Azure, for example, to a third-party SIEM or custom telemetry platform. |
+| Azure Storage | Archive the logs for audit or backup. |
+| [Azure Monitor partner integrations](../../partner-solutions/overview.md)| Specialized integrations between Azure Monitor and other non-Microsoft monitoring platforms. Useful when you're already using one of the partners. |
-- For details on creating a diagnostic setting for activity log or resource logs, see [Create diagnostic settings to send platform logs and metrics to different destinations](../essentials/diagnostic-settings.md).
-- For details on creating a diagnostic setting for Azure Active Directory logs, see the following articles.
+- For details on how to create a diagnostic setting for activity logs or resource logs, see [Create diagnostic settings to send platform logs and metrics to different destinations](../essentials/diagnostic-settings.md).
+- For details on how to create a diagnostic setting for Azure AD logs, see the following articles:
- [Integrate Azure AD logs with Azure Monitor logs](../../active-directory/reports-monitoring/howto-integrate-activity-logs-with-log-analytics.md)
- - [Tutorial: Stream Azure Active Directory logs to an Azure event hub](../../active-directory/reports-monitoring/tutorial-azure-monitor-stream-logs-to-event-hub.md)
- - [Tutorial: Archive Azure AD logs to an Azure storage account](../../active-directory/reports-monitoring/quickstart-azure-monitor-route-logs-to-storage-account.md)
+ - [Tutorial: Stream Azure AD logs to an Azure event hub](../../active-directory/reports-monitoring/tutorial-azure-monitor-stream-logs-to-event-hub.md)
+ - [Tutorial: Archive Azure AD logs to an Azure Storage account](../../active-directory/reports-monitoring/quickstart-azure-monitor-route-logs-to-storage-account.md)
## Pricing model
-Processing data to stream logs is charged for [certain services](resource-logs-categories.md#costs) when sent to destinations other than a Log Analytics workspace. There's no direct charge when this data is sent to a Log Analytics workspace, but there is a Log Analytics charge for ingesting the data into a workspace.
+Processing data to stream logs is charged for [certain services](resource-logs-categories.md#costs) when sent to destinations other than a Log Analytics workspace. There's no direct charge when this data is sent to a Log Analytics workspace. There is a Log Analytics charge for ingesting the data into a workspace.
-The charge is based on the number of bytes in the exported JSON formatted log data, measured in GB (10^9 bytes).
+The charge is based on the number of bytes in the exported JSON-formatted log data, measured in GB (10^9 bytes).
-Pricing is available on the [Azure Monitor pricing page](https://azure.microsoft.com/pricing/details/monitor/).
+Pricing is available on the [Azure Monitor pricing page](https://azure.microsoft.com/pricing/details/monitor/).
## Next steps
-* [Read more details about the Activity log](../essentials/activity-log.md)
+* [Read more details about activity logs](../essentials/activity-log.md)
* [Read more details about resource logs](./resource-logs.md)
azure-monitor Resource Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/resource-logs.md
# Azure resource logs
-Azure resource logs are [platform logs](../essentials/platform-logs-overview.md) that provide insight into operations that were performed within an Azure resource. The content of resource logs varies by the Azure service and resource type. Resource logs are not collected by default. This article describes the [diagnostic setting](diagnostic-settings.md) required for each Azure resource to send its resource logs to different destinations.
+
+Azure resource logs are [platform logs](../essentials/platform-logs-overview.md) that provide insight into operations that were performed within an Azure resource. The content of resource logs varies by the Azure service and resource type. Resource logs aren't collected by default. This article describes the [diagnostic setting](diagnostic-settings.md) required for each Azure resource to send its resource logs to different destinations.
## Send to Log Analytics workspace
- Send resource logs to a Log Analytics workspace to enable the features of [Azure Monitor Logs](../logs/data-platform-logs.md) which includes the following:
+
+ Send resource logs to a Log Analytics workspace to enable the features of [Azure Monitor Logs](../logs/data-platform-logs.md), where you can:
- Correlate resource log data with other monitoring data collected by Azure Monitor.
- Consolidate log entries from multiple Azure resources, subscriptions, and tenants into one location for analysis together.
Azure resource logs are [platform logs](../essentials/platform-logs-overview.md)
[Create a diagnostic setting](../essentials/diagnostic-settings.md) to send resource logs to a Log Analytics workspace. This data is stored in tables as described in [Structure of Azure Monitor Logs](../logs/data-platform-logs.md). The tables used by resource logs depend on what type of collection the resource is using:
-- Azure diagnostics - All data written is to the [AzureDiagnostics](/azure/azure-monitor/reference/tables/azurediagnostics) table.
-- Resource-specific - Data is written to individual table for each category of the resource.
+- **Azure diagnostics**: All data is written to the [AzureDiagnostics](/azure/azure-monitor/reference/tables/azurediagnostics) table.
+- **Resource-specific**: Data is written to individual tables for each category of the resource.
### Resource-specific
-In this mode, individual tables in the selected workspace are created for each category selected in the diagnostic setting. This method is recommended since it makes it much easier to work with the data in log queries, provides better discoverability of schemas and their structure, improves performance across both ingestion latency and query times, and the ability to grant Azure RBAC rights on a specific table. All Azure services will eventually migrate to the Resource-Specific mode.
-The example above would result in three tables being created:
-
-- Table *Service1AuditLogs* as follows:
+In this mode, individual tables in the selected workspace are created for each category selected in the diagnostic setting. We recommend this method because it:
+
+- Makes it easier to work with the data in log queries.
+- Provides better discoverability of schemas and their structure.
+- Improves performance across ingestion latency and query times.
+- Provides the ability to grant Azure role-based access control rights on a specific table.
+
+All Azure services will eventually migrate to the resource-specific mode.
- | Resource Provider | Category | A | B | C |
+The preceding example creates three tables:
+
+- Table `Service1AuditLogs`
+
+ | Resource provider | Category | A | B | C |
| -- | -- | -- | -- | -- |
| Service1 | AuditLogs | x1 | y1 | z1 |
| Service1 | AuditLogs | x5 | y5 | z5 |
| ... |
-- Table *Service1ErrorLogs* as follows:
+- Table `Service1ErrorLogs`
- | Resource Provider | Category | D | E | F |
+ | Resource provider | Category | D | E | F |
| -- | -- | -- | -- | -- |
| Service1 | ErrorLogs | q1 | w1 | e1 |
| Service1 | ErrorLogs | q2 | w2 | e2 |
| ... |
-- Table *Service2AuditLogs* as follows:
+- Table `Service2AuditLogs`
- | Resource Provider | Category | G | H | I |
+ | Resource provider | Category | G | H | I |
| -- | -- | -- | -- | -- |
| Service2 | AuditLogs | j1 | k1 | l1|
| Service2 | AuditLogs | j3 | k3 | l3|
| ... |
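Because each category lands in its own typed table in this mode, queries can target the table directly. A minimal sketch, using the hypothetical `Service1AuditLogs` table from the preceding example:

```
// Service1AuditLogs is the hypothetical dedicated table from the preceding
// example; substitute the actual table name for your service.
Service1AuditLogs
| where TimeGenerated > ago(1d)
| summarize count() by bin(TimeGenerated, 1h)
```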
-### Azure diagnostics mode
-In this mode, all data from any diagnostic setting will be collected in the [AzureDiagnostics](/azure/azure-monitor/reference/tables/azurediagnostics) table. This is the legacy method used today by most Azure services. Since multiple resource types send data to the same table, its schema is the superset of the schemas of all the different data types being collected. See [AzureDiagnostics reference](/azure/azure-monitor/reference/tables/azurediagnostics) for details on the structure of this table and how it works with this potentially large number of columns.
+### Azure diagnostics mode
-Consider the following example where diagnostic settings are being collected in the same workspace for the following data types:
+In this mode, all data from any diagnostic setting is collected in the [AzureDiagnostics](/azure/azure-monitor/reference/tables/azurediagnostics) table. This legacy method is used today by most Azure services. Because multiple resource types send data to the same table, its schema is the superset of the schemas of all the different data types being collected. For details on the structure of this table and how it works with this potentially large number of columns, see [AzureDiagnostics reference](/azure/azure-monitor/reference/tables/azurediagnostics).
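Queries against this shared table typically start by filtering on the `ResourceProvider` and `Category` columns. A minimal sketch follows; the provider and category values shown are placeholders.

```
// Filter the shared AzureDiagnostics table down to one service's category.
// The ResourceProvider and Category values shown are placeholders.
AzureDiagnostics
| where ResourceProvider == "MICROSOFT.NETWORK" and Category == "NetworkSecurityGroupRuleCounter"
| take 10
```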
-- Audit logs of service 1 (having a schema consisting of columns A, B, and C)
-- Error logs of service 1 (having a schema consisting of columns D, E, and F)
-- Audit logs of service 2 (having a schema consisting of columns G, H, and I)
+Consider an example where diagnostic settings are collected in the same workspace for the following data types:
-The AzureDiagnostics table will look as follows:
+- Audit logs of service 1 have a schema that consists of columns A, B, and C
+- Error logs of service 1 have a schema that consists of columns D, E, and F
+- Audit logs of service 2 have a schema that consists of columns G, H, and I
+
+The `AzureDiagnostics` table looks like this example:
| ResourceProvider | Category | A | B | C | D | E | F | G | H | I |
| -- | -- | -- | -- | -- | -- | -- | -- | -- | -- | -- |
The AzureDiagnostics table will look as follows:
| ... |

### Select the collection mode
-Most Azure resources will write data to the workspace in either **Azure Diagnostic** or **Resource-Specific mode** without giving you a choice. See the [documentation for each service](./resource-logs-schema.md) for details on which mode it uses. All Azure services will eventually use Resource-Specific mode. As part of this transition, some resources will allow you to select a mode in the diagnostic setting. Specify resource-specific mode for any new diagnostic settings since this makes the data easier to manage and may help you to avoid complex migrations at a later date.
-
- ![Diagnostic Settings mode selector](media/resource-logs/diagnostic-settings-mode-selector.png)
-> [!NOTE]
-> For an example setting the collection mode using a resource manager template, see [Resource Manager template samples for diagnostic settings in Azure Monitor](./resource-manager-diagnostic-settings.md#diagnostic-setting-for-recovery-services-vault).
+Most Azure resources write data to the workspace in either **Azure diagnostics** or **resource-specific** mode without giving you a choice. For more information, see [Common and service-specific schemas for Azure resource logs](./resource-logs-schema.md).
+All Azure services eventually use the resource-specific mode. As part of this transition, some resources allow you to select a mode in the diagnostic setting. Specify resource-specific mode for any new diagnostic settings because this mode makes the data easier to manage. It also might help you avoid complex migrations later.
+
+ ![Screenshot that shows the Diagnostics settings mode selector.](media/resource-logs/diagnostic-settings-mode-selector.png)
-You can modify an existing diagnostic setting to resource-specific mode. In this case, data that was already collected will remain in the _AzureDiagnostics_ table until it's removed according to your retention setting for the workspace. New data will be collected in the dedicated table. Use the [union](/azure/kusto/query/unionoperator) operator to query data across both tables.
+> [!NOTE]
+> For an example that sets the collection mode by using an Azure Resource Manager template, see [Resource Manager template samples for diagnostic settings in Azure Monitor](./resource-manager-diagnostic-settings.md#diagnostic-setting-for-recovery-services-vault).
-Continue to watch [Azure Updates](https://azure.microsoft.com/updates/) blog for announcements about Azure services supporting Resource-Specific mode.
+You can modify an existing diagnostic setting to resource-specific mode. In this case, data that was already collected remains in the `AzureDiagnostics` table until it's removed according to your retention setting for the workspace. New data is collected in the dedicated table. Use the [union](/azure/kusto/query/unionoperator) operator to query data across both tables.
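For example, a query along these lines spans the legacy and dedicated tables during the transition. `Service1AuditLogs` is a hypothetical dedicated table name; substitute your own.

```
// Query across the legacy shared table and a hypothetical dedicated table
// after switching an existing setting to resource-specific mode. The Type
// column identifies which table each row came from.
union AzureDiagnostics, Service1AuditLogs
| where TimeGenerated > ago(30d)
| summarize count() by Type
```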
+Continue to watch the [Azure Updates](https://azure.microsoft.com/updates/) blog for announcements about Azure services that support resource-specific mode.
## Send to Azure Event Hubs
-Send resource logs to an event hub to send them outside of Azure, for example to a third-party SIEM or other log analytics solutions. Resource logs from event hubs are consumed in JSON format with a `records` element containing the records in each payload. The schema depends on the resource type as described in [Common and service-specific schema for Azure Resource Logs](resource-logs-schema.md).
-Following is sample output data from Event Hubs for a resource log:
+Send resource logs to an event hub to stream them outside of Azure. For example, resource logs might be sent to a third-party SIEM or other log analytics solutions. Resource logs from event hubs are consumed in JSON format with a `records` element that contains the records in each payload. The schema depends on the resource type as described in [Common and service-specific schema for Azure resource logs](resource-logs-schema.md).
+
+The following sample output data is from Azure Event Hubs for a resource log:
```json
{
Following is sample output data from Event Hubs for a resource log:
```

## Send to Azure Storage
-Send resource logs to Azure storage to retain it for archiving. Once you have created the diagnostic setting, a storage container is created in the storage account as soon as an event occurs in one of the enabled log categories.
+
+Send resource logs to Azure Storage to retain them for archiving. After you've created the diagnostic setting, a storage container is created in the storage account as soon as an event occurs in one of the enabled log categories.
> [!NOTE]
> An alternate strategy for archiving is to send the resource log to a Log Analytics workspace with an [archive policy](../logs/data-retention-archive.md).
The blobs within the container use the following naming convention:
insights-logs-{log category name}/resourceId=/SUBSCRIPTIONS/{subscription ID}/RESOURCEGROUPS/{resource group name}/PROVIDERS/{resource provider name}/{resource type}/{resource name}/y={four-digit numeric year}/m={two-digit numeric month}/d={two-digit numeric day}/h={two-digit 24-hour clock hour}/m=00/PT1H.json
```
-For example, the blob for a network security group might have a name similar to the following:
+The blob for a network security group might have a name similar to this example:
```
insights-logs-networksecuritygrouprulecounter/resourceId=/SUBSCRIPTIONS/00000000-0000-0000-0000-000000000000/RESOURCEGROUPS/TESTRESOURCEGROUP/PROVIDERS/MICROSOFT.NETWORK/NETWORKSECURITYGROUP/TESTNSG/y=2016/m=08/d=22/h=18/m=00/PT1H.json
```
-Each PT1H.json blob contains a JSON blob of events that occurred within the hour specified in the blob URL (for example, h=12). During the present hour, events are appended to the PT1H.json file as they occur. The minute value (m=00) is always 00, since resource log events are broken into individual blobs per hour.
+Each PT1H.json blob contains a JSON blob of events that occurred within the hour specified in the blob URL, for example, h=12. During the present hour, events are appended to the PT1H.json file as they occur. The minute value (m=00) is always 00 because resource log events are broken into individual blobs per hour.
-Within the PT1H.json file, each event is stored with the following format. This will use a common top-level schema but be unique for each Azure service as described in [Resource logs schema](./resource-logs-schema.md).
+Within the PT1H.json file, each event is stored in the following format. It uses a common top-level schema but is unique for each Azure service, as described in [Resource logs schema](./resource-logs-schema.md).
> [!NOTE]
-> Logs are written to the blob relevant to time that the log was generated, not time that it was received. This means at the turn of the hour, both the previous hour and current hour blobs could be receiving new writes.
-
+> Logs are written to the blob relevant to the time that the log was generated, not the time that it was received. So, at the turn of the hour, both the previous hour and current hour blobs could be receiving new writes.
``` JSON {"time": "2016-07-01T00:00:37.2040000Z","systemId": "46cdbb41-cb9c-4f3d-a5b4-1d458d827ff1","category": "NetworkSecurityGroupRuleCounter","resourceId": "/SUBSCRIPTIONS/s1id1234-5679-0123-4567-890123456789/RESOURCEGROUPS/TESTRESOURCEGROUP/PROVIDERS/MICROSOFT.NETWORK/NETWORKSECURITYGROUPS/TESTNSG","operationName": "NetworkSecurityGroupCounters","properties": {"vnetResourceGuid": "{12345678-9012-3456-7890-123456789012}","subnetPrefix": "10.3.0.0/24","macAddress": "000123456789","ruleName": "/subscriptions/ s1id1234-5679-0123-4567-890123456789/resourceGroups/testresourcegroup/providers/Microsoft.Network/networkSecurityGroups/testnsg/securityRules/default-allow-rdp","direction": "In","type": "allow","matchedConnections": 1988}} ``` ## Azure Monitor partner integrations
-Resource logs can also be sent partner solutions that are fully integrated into Azure. See [Azure Monitor partner integrations](../../partner-solutions/overview.md) for a list of these solutions and details on configuring them.
+
+Resource logs can also be sent to partner solutions that are fully integrated into Azure. For a list of these solutions and details on how to configure them, see [Azure Monitor partner integrations](../../partner-solutions/overview.md).
## Next steps
azure-monitor Basic Logs Query https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/basic-logs-query.md
Queries with Basic Logs must use a workspace for the scope. You can't run querie
You can run two concurrent queries per user.

### Purge
-You can't [purge personal data](personal-data-mgmt.md#how-to-export-and-delete-private-data) from Basic Logs tables.
+You can't [purge personal data](personal-data-mgmt.md#exporting-and-deleting-personal-data) from Basic Logs tables.
## Run a query on a Basic Logs table
azure-monitor Data Retention Archive https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/data-retention-archive.md
If you set the data retention policy to 30 days, you can purge older data immedi
Note that workspaces with a 30-day retention policy might actually keep data for 31 days if you don't set the `immediatePurgeDataOn30Days` parameter.
-You can also purge data from a workspace using the [purge feature](personal-data-mgmt.md#how-to-export-and-delete-private-data), which removes personal data. You can't purge data from archived logs.
+You can also purge data from a workspace using the [purge feature](personal-data-mgmt.md#exporting-and-deleting-personal-data), which removes personal data. You can't purge data from archived logs.
The Log Analytics [Purge API](/rest/api/loganalytics/workspacepurge/purge) doesn't affect retention billing. **To lower retention costs, decrease the retention period for the workspace or for specific tables.**
azure-monitor Personal Data Mgmt https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/personal-data-mgmt.md
Title: Guidance for personal data stored in Azure Log Analytics| Microsoft Docs
-description: This article describes how to manage personal data stored in Azure Log Analytics and the methods to identify and remove it.
+ Title: Managing personal data in Azure Monitor Log Analytics and Application Insights
+description: This article describes how to manage personal data stored in Azure Monitor Log Analytics and the methods to identify and remove it.
-- Previously updated : 05/18/2018+++ Last updated : 06/28/2022
+# Customer intent: As an Azure Monitor admin user, I want to understand how to manage personal data in logs Azure Monitor collects.
-# Guidance for personal data stored in Log Analytics and Application Insights
+# Managing personal data in Log Analytics and Application Insights
-Log Analytics is a data store where personal data is likely to be found. Application Insights stores its data in a Log Analytics partition. This article will discuss where in Log Analytics and Application Insights such data is typically found, as well as the capabilities available to you to handle such data.
+Log Analytics is a data store where personal data is likely to be found. Application Insights stores its data in a Log Analytics partition. This article explains where Log Analytics and Application Insights store personal data and how to manage this data.
-> [!NOTE]
-> For the purposes of this article _log data_ refers to data sent to a Log Analytics workspace, while _application data_ refers to data collected by Application Insights. If you are using a workspace-based Application Insights resource, the information on log data will apply but if you are using the classic Application Insights resource then the application data applies.
+In this article, _log data_ refers to data sent to a Log Analytics workspace, while _application data_ refers to data collected by Application Insights. If you're using a workspace-based Application Insights resource, the information on log data applies. If you're using a classic Application Insights resource, the application data applies.
[!INCLUDE [gdpr-dsr-and-stp-note](../../../includes/gdpr-dsr-and-stp-note.md)] ## Strategy for personal data handling
-While it will be up to you and your company to ultimately determine the strategy with which you will handle your private data (if at all), the following are some possible approaches. They are listed in order of preference from a technical point of view from most to least preferable:
+While it's up to you and your company to define a strategy for handling personal data, here are a few approaches, listed from most to least preferable from a technical point of view:
+
+* Stop collecting personal data, or obfuscate, anonymize, or adjust collected data to exclude it from being considered "personal". This is _by far_ the preferred approach, which saves you the need to create a costly and impactful data handling strategy.
+* Normalize the data to reduce negative effects on the data platform and performance. For example, instead of logging an explicit User ID, create a lookup to correlate the username and their details to an internal ID that can then be logged elsewhere. That way, if a user asks you to delete their personal information, you can delete only the row in the lookup table that corresponds to the user.
+* If you need to collect personal data, build a process using the purge API path and the existing query API to meet any obligations to export and delete any personal data associated with a user.
-* Where possible, stop collection of, obfuscate, anonymize, or otherwise adjust the data being collected to exclude it from being considered "private". This is _by far_ the preferred approach, saving you the need to create a very costly and impactful data handling strategy.
-* Where not possible, attempt to normalize the data to reduce the impact on the data platform and performance. For example, instead of logging an explicit User ID, create a lookup data that will correlate the username and their details to an internal ID that can then be logged elsewhere. That way, should one of your users ask you to delete their personal information, it is possible that only deleting the row in the lookup table corresponding to the user will be sufficient.
-* Finally, if private data must be collected, build a process around the purge API path and the existing query API path to meet any obligations you may have around exporting and deleting any private data associated with a user.
+## Where to look for personal data in Log Analytics
-## Where to look for private data in Log Analytics?
+Log Analytics prescribes a schema to your data, but allows you to override every field with custom values. You can also ingest custom schemas. As such, it's impossible to say exactly where personal data will be found in your specific workspace. The following locations, however, are good starting points in your inventory.
-Log Analytics is a flexible store, which while prescribing a schema to your data, allows you to override every field with custom values. Additionally, any custom schema can be ingested. As such, it is impossible to say exactly where Private data will be found in your specific workspace. The following locations, however, are good starting points in your inventory:
+> [!NOTE]
+> Some of the queries below use `search *` to query all tables in a workspace. We highly recommend you avoid `search *` whenever possible because it creates a highly inefficient query. Instead, query a specific table.
### Log data
-* *IP addresses*: Log Analytics collects a variety of IP information across many different tables. For example, the following query shows all tables where IPv4 addresses have been collected over the last 24 hours:
+* **IP addresses**: Log Analytics collects various IP information in multiple tables. For example, the following query shows all tables that collected IPv4 addresses in the last 24 hours:
```
search *
| where * matches regex @'\b((25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)(\.|$)){4}\b' //RegEx originally provided on https://stackoverflow.com/questions/5284147/validating-ipv4-addresses-with-regexp
| summarize count() by $table
```
-* *User IDs*: User IDs are found in a large variety of solutions and tables. You can look for a particular username across your entire dataset using the search command:
+
+* **User IDs**: You'll find usernames and user IDs in various solutions and tables. You can look for a particular username or user ID across your entire dataset using the search command:
```
- search "[username goes here]"
+ search "<username or user ID>"
```
- Remember to look not only for human-readable user names but also GUIDs that can directly be traced back to a particular user!
-* *Device IDs*: Like user IDs, device IDs are sometimes considered "private". Use the same approach as listed above for user IDs to identify tables where this might be a concern.
-* *Custom data*: Log Analytics allows the collection in a variety of methods: custom logs and custom fields, the [HTTP Data Collector API](../logs/data-collector-api.md) , and custom data collected as part of system event logs. All of these are susceptible to containing private data, and should be examined to verify whether any such data exists.
-* *Solution-captured data*: Because the solution mechanism is an open-ended one, we recommend reviewing all tables generated by solutions to ensure compliance.
+
+ Remember to look not only for human-readable usernames but also for GUIDs that can be traced back to a particular user.
+* **Device IDs**: Like user IDs, device IDs are sometimes considered personal data. Use the approach listed above for user IDs to identify tables that hold personal data.
+* **Custom data**: Log Analytics lets you collect custom data through custom logs, custom fields, the [HTTP Data Collector API](../logs/data-collector-api.md), and as part of system event logs. Check all custom data for personal data.
+* **Solution-captured data**: Because the solution mechanism is open-ended, we recommend reviewing all tables generated by solutions to ensure compliance.
### Application data
-* *IP addresses*: While Application Insights will by default obfuscate all IP address fields to "0.0.0.0", it is a fairly common pattern to override this value with the actual user IP to maintain session information. The Analytics query below can be used to find any table that contains values in the IP address column other than "0.0.0.0" over the last 24 hours:
+* **IP addresses**: While Application Insights obfuscates all IP address fields to `0.0.0.0` by default, it's fairly common to override this value with the actual user IP to maintain session information. Use the query below to find any table that contains values in the *IP address* column other than `0.0.0.0` in the last 24 hours:
```
search client_IP != "0.0.0.0"
| where timestamp > ago(1d)
| summarize numNonObfuscatedIPs_24h = count() by $table
```
-* *User IDs*: By default, Application Insights will use randomly generated IDs for user and session tracking. However, it is common to see these fields overridden to store an ID more relevant to the application. For example: usernames, AAD GUIDs, etc. These IDs are often considered to be in-scope as personal data, and therefore, should be handled appropriately. Our recommendation is always to attempt to obfuscate or anonymize these IDs. Fields where these values are commonly found include session_Id, user_Id, user_AuthenticatedId, user_AccountId, as well as customDimensions.
-* *Custom data*: Application Insights allows you to append a set of custom dimensions to any data type. These dimensions can be *any* data. Use the following query to identify any custom dimensions collected over the last 24 hours:
+
+* **User IDs**: By default, Application Insights uses randomly generated IDs for user and session tracking in fields such as *session_Id*, *user_Id*, *user_AuthenticatedId*, *user_AccountId*, and *customDimensions*. However, it's common to override these fields with an ID that's more relevant to the application, such as usernames or Azure Active Directory GUIDs. These IDs are often considered to be personal data. We recommend obfuscating or anonymizing these IDs.
+* **Custom data**: Application Insights allows you to append a set of custom dimensions to any data type. Use the following query to identify custom dimensions collected in the last 24 hours:
```
search *
| where isnotempty(customDimensions)
| where timestamp > ago(1d)
| project $table, timestamp, name, customDimensions
```
-* *In-memory and in-transit data*: Application Insights will track exceptions, requests, dependency calls, and traces. Private data can often be collected at the code and HTTP call level. Review the exceptions, requests, dependencies, and traces tables to identify any such data. Use [telemetry initializers](../app/api-filtering-sampling.md) where possible to obfuscate this data.
-* *Snapshot Debugger captures*: The [Snapshot Debugger](../app/snapshot-debugger.md) feature in Application Insights allows you to collect debug snapshots whenever an exception is caught on the production instance of your application. Snapshots will expose the full stack trace leading to the exceptions as well as the values for local variables at every step in the stack. Unfortunately, this feature does not allow for selective deletion of snap points, or programmatic access to data within the snapshot. Therefore, if the default snapshot retention rate does not satisfy your compliance requirements, the recommendation is to turn off the feature.
-
-## How to export and delete private data
+
+* **In-memory and in-transit data**: Application Insights tracks exceptions, requests, dependency calls, and traces. You'll often find personal data at the code and HTTP call level. Review exceptions, requests, dependencies, and traces tables to identify any such data. Use [telemetry initializers](../app/api-filtering-sampling.md) where possible to obfuscate this data.
+* **Snapshot Debugger captures**: The [Snapshot Debugger](../app/snapshot-debugger.md) feature in Application Insights lets you collect debug snapshots when Application Insights detects an exception on the production instance of your application. Snapshots expose the full stack trace leading to the exceptions and the values for local variables at every step in the stack. Unfortunately, this feature doesn't allow selective deletion of snap points or programmatic access to data within the snapshot. Therefore, if the default snapshot retention rate doesn't satisfy your compliance requirements, we recommend you turn off the feature.
-As mentioned in the [strategy for personal data handling](#strategy-for-personal-data-handling) section earlier, it is __strongly__ recommended to if it all possible, to restructure your data collection policy to disable the collection of private data, obfuscating or anonymizing it, or otherwise modifying it to remove it from being considered "private". Handling the data will foremost result in costs to you and your team to define and automate a strategy, build an interface for your customers to interact with their data through, and ongoing maintenance costs. Further, it is computationally costly for Log Analytics and Application Insights, and a large volume of concurrent query or purge API calls have the potential to negatively impact all other interaction with Log Analytics functionality. That said, there are indeed some valid scenarios where private data must be collected. For these cases, data should be handled as described in this section.
+## Exporting and deleting personal data
+We __strongly__ recommend you restructure your data collection policy to stop collecting personal data, obfuscate or anonymize personal data, or otherwise modify such data until it's no longer considered personal. In handling personal data, you'll incur costs in defining and automating a strategy, building an interface through which your customers interact with their data, and ongoing maintenance. It's also computationally costly for Log Analytics and Application Insights, and a large volume of concurrent Query or Purge API calls can negatively affect all other interactions with Log Analytics functionality. However, if you have to collect personal data, follow the guidelines in this section.
+> [!IMPORTANT]
+> While most purge operations complete much quicker, **the formal SLA for the completion of purge operations is set at 30 days** due to their heavy impact on the data platform. This SLA meets GDPR requirements. It's an automated process, so there's no way to expedite the operation.
### View and export
-For both view and export data requests, the [Log Analytics query API](https://dev.loganalytics.io/) or the [Application Insights query API](https://dev.applicationinsights.io/quickstart) should be used. Logic to convert the shape of the data to an appropriate one to deliver to your users will be up to you to implement. [Azure Functions](https://azure.microsoft.com/services/functions/) makes a great place to host such logic.
+Use the [Log Analytics query API](/rest/api/loganalytics/dataaccess/query) or the [Application Insights query API](/rest/api/application-insights/query) for view and export data requests.
-> [!IMPORTANT]
-> While the vast majority of purge operations may complete much quicker than the SLA, **the formal SLA for the completion of purge operations is set at 30 days** due to their heavy impact on the data platform used. This SLA meets GDPR requirements. It's an automated process so there is no way to request that an operation be handled faster.
+You need to implement the logic for converting the data to an appropriate format for delivery to your users. [Azure Functions](https://azure.microsoft.com/services/functions/) is a great place to host such logic.
### Delete

> [!WARNING]
> Deletes in Log Analytics are destructive and non-reversible! Please use extreme caution in their execution.
-We have made available as part of a privacy handling a *purge* API path. This path should be used sparingly due to the risk associated with doing so, the potential performance impact, and the potential to skew all-up aggregations, measurements, and other aspects of your Log Analytics data. See the [Strategy for personal data handling](#strategy-for-personal-data-handling) section for alternative approaches to handle private data.
-
-> [!NOTE]
-> Once the purge operation has been performed, the data cannot be accessed while the [purge operation status](/rest/api/loganalytics/workspacepurge/getpurgestatus) is *pending*.
-
-Purge is a highly privileged operation that no app or user in Azure (including even the resource owner) will have permissions to execute without explicitly being granted a role in Azure Resource Manager. This role is _Data Purger_ and should be cautiously delegated due to the potential for data loss.
-
-> [!IMPORTANT]
-> In order to manage system resources, purge requests are throttled at 50 requests per hour. You should batch the execution of purge requests by sending a single command whose predicate includes all user identities that require purging. Use the [in operator](/azure/kusto/query/inoperator) to specify multiple identities. You should run the query before executing the purge request to verify that the results are expected.
+Azure Monitor's Purge API lets you delete personal data. Use the purge operation sparingly: it carries risk, can affect performance, and can skew all-up aggregations, measurements, and other aspects of your Log Analytics data. See the [Strategy for personal data handling](#strategy-for-personal-data-handling) section for alternative approaches to handling personal data.
+Purge is a highly privileged operation. Applications and Azure users, including the resource owner, can't execute a purge operation without explicitly being granted the _Data Purger_ role in Azure Resource Manager. Grant this role cautiously due to the potential for data loss.
+To manage system resources, we limit purge requests to 50 requests an hour. Batch the execution of purge requests by sending a single command whose predicate includes all user identities that require purging. Use the [in operator](/azure/kusto/query/inoperator) to specify multiple identities. Run the query before executing the purge request to verify the expected results.
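For example, a pre-purge verification query might look like the following sketch. The table and column names are placeholders for your own schema; the same identity list would then go into the purge request's predicate.

```
// Hypothetical pre-purge check: MyAppLogs_CL and UserId_s are placeholder
// names for a custom table and column. Verify that only the expected rows
// match before submitting the batched purge request for these identities.
MyAppLogs_CL
| where UserId_s in ("user1@contoso.com", "user2@contoso.com", "user3@contoso.com")
| summarize count() by UserId_s
```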
-Once the Azure Resource Manager role has been assigned, two new API paths are available:
+> [!NOTE]
+> After initiating a purge request, you cannot access the related data while the [purge operation status](/rest/api/loganalytics/workspacepurge/getpurgestatus) is *pending*.
#### Log data
-* [POST purge](/rest/api/loganalytics/workspacepurge/purge) - takes an object specifying parameters of data to delete and returns a reference GUID
-* GET purge status - the POST purge call will return an 'x-ms-status-location' header that will include a URL that you can call to determine the status of your purge API. For example:
+* The [Workspace Purge POST API](/rest/api/loganalytics/workspacepurge/purge) takes an object specifying parameters of data to delete and returns a reference GUID.
+* The [Get Purge Status POST API](/rest/api/loganalytics/workspace-purge/get-purge-status) returns an 'x-ms-status-location' header that includes a URL you can call to determine the status of your purge operation. For example:
```
x-ms-status-location: https://management.azure.com/subscriptions/[SubscriptionId]/resourceGroups/[ResourceGroupName]/providers/Microsoft.OperationalInsights/workspaces/[WorkspaceName]/operations/purge-[PurgeOperationId]?api-version=2015-03-20
```
-> [!IMPORTANT]
-> While we expect the vast majority of purge operations to complete much quicker than our SLA, due to their heavy impact on the data platform used by Log Analytics, **the formal SLA for the completion of purge operations is set at 30 days**.
- #### Application data
-* [POST purge](/rest/api/application-insights/components/purge) - takes an object specifying parameters of data to delete and returns a reference GUID
-* GET purge status - the POST purge call will return an 'x-ms-status-location' header that will include a URL that you can call to determine the status of your purge API. For example:
+* The [Components - Purge POST API](/rest/api/application-insights/components/purge) takes an object specifying parameters of data to delete and returns a reference GUID.
+* The [Components - Get Purge Status GET API](/rest/api/application-insights/components/get-purge-status) returns an 'x-ms-status-location' header that includes a URL you can call to determine the status of your purge operation. For example:
```
x-ms-status-location: https://management.azure.com/subscriptions/[SubscriptionId]/resourceGroups/[ResourceGroupName]/providers/microsoft.insights/components/[ComponentName]/operations/purge-[PurgeOperationId]?api-version=2015-05-01
```
-> [!IMPORTANT]
-> While the vast majority of purge operations may complete much quicker than the SLA, due to their heavy impact on the data platform used by Application Insights, **the formal SLA for the completion of purge operations is set at 30 days**.
- ## Next steps
-- To learn more about how Log Analytics data is collected, processed, and secured, see [Log Analytics data security](../logs/data-security.md).
-- To learn more about how Application Insights data is collected, processed, and secured, see [Application Insights data security](../app/data-retention-privacy.md).
+- Learn more about [how Log Analytics collects, processes, and secures data](../logs/data-security.md).
+- Learn more about [how Application Insights collects, processes, and secures data](../app/data-retention-privacy.md).
azure-netapp-files Azure Netapp Files Network Topologies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azure-netapp-files-network-topologies.md
na Previously updated : 06/17/2022 Last updated : 06/28/2022

# Guidelines for Azure NetApp Files network planning
Azure NetApp Files standard network features are supported for the following reg
* Australia Central 2 * Australia East * Australia Southeast
+* Canada Central
* East US * East US 2 * France Central
azure-netapp-files Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/whats-new.md
na Previously updated : 06/15/2022 Last updated : 06/28/2022
Azure NetApp Files is updated regularly. This article provides a summary about t
[Azure NetApp Files datastores for Azure VMware Solution](https://azure.microsoft.com/blog/power-your-file-storageintensive-workloads-with-azure-vmware-solution) is now in public preview. This new integration between Azure VMware Solution and Azure NetApp Files will enable you to [create datastores via the Azure VMware Solution resource provider with Azure NetApp Files NFS volumes](../azure-vmware/attach-azure-netapp-files-to-azure-vmware-solution-hosts.md) and mount the datastores on your private cloud clusters of choice. Along with the integration of Azure disk pools for Azure VMware Solution, this will provide more choice to scale storage needs independently of compute resources. For your storage-intensive workloads running on Azure VMware Solution, the integration with Azure NetApp Files helps to easily scale storage capacity beyond the limits of the local instance storage for AVS provided by vSAN and lower your overall total cost of ownership for storage-intensive workloads.
- Regional Coverage: Australia East, Australia Southeast, Brazil South, Canada Central, Canada East, Central US, East US, France Central, Germany West Central, Japan West, North Central US, North Europe, South Central US, Southeast Asia, Switzerland West, UK South, UK West, West US. Regional coverage will expand as the preview progresses.
+ Regional Coverage: Australia East, Australia Southeast, Brazil South, Canada Central, Canada East, Central US, East US, France Central, Germany West Central, Japan West, North Central US, North Europe, South Central US, Southeast Asia, Switzerland West, UK South, UK West, West Europe, West US. Regional coverage will expand as the preview progresses.
* [Azure Policy built-in definitions for Azure NetApp Files](azure-policy-definitions.md#built-in-policy-definitions)
azure-percept Azure Percept Devkit Software Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-percept/azure-percept-devkit-software-release-notes.md
This page provides information of changes and fixes for each Azure Percept DK OS
To download the update images, refer to [Azure Percept DK software releases for USB cable update](./software-releases-usb-cable-updates.md) or [Azure Percept DK software releases for OTA update](./software-releases-over-the-air-updates.md).
+## June (2206) Release
+
+- Operating System
+ - Latest security updates on OpenSSL, cifs-utils, zlib, cpio, Nginx, and Lua packages.
+
## May (2205) Release

- Operating System
azure-percept Software Releases Over The Air Updates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-percept/software-releases-over-the-air-updates.md
Microsoft would service each dev kit release with OTA packages. However, as ther
|Release|Applicable Version(s)|Download Links|Note|
|||||
-|March Service Release (2203)|2021.106.111.115,<br>2021.107.129.116,<br>2021.109.129.108, <br>2021.111.124.109, <br>2022.101.112.106, <br>2022.102.109.102|[2022.103.110.103 OTA update package](<https://download.microsoft.com/download/2/3/4/234bdbf8-8f08-4d7a-8b33-7d5afc921bf1/2022.103.110.103 OTA update package.zip>)|Make sure you are using the **old version** of the Device Update for IoT Hub. To do that, navigate to **Device management > Updates** in your IoT Hub, select the **switch to the older version** link in the banner. For more information, please refer to [Update Azure Percept DK over-the-air](./how-to-update-over-the-air.md).|
+|June Service Release (2206)|2021.106.111.115,<br>2021.107.129.116,<br>2021.109.129.108, <br>2021.111.124.109, <br>2022.101.112.106, <br>2022.102.109.102, <br>2022.103.110.103|[2022.106.120.102 OTA update package](<https://download.microsoft.com/download/b/7/1/b71877b8-4882-4447-b3f3-8359ee8341e2/2022.106.120.102 OTA update package.zip>)|Make sure you are using the **old version** of the Device Update for IoT Hub. To do that, navigate to **Device management > Updates** in your IoT Hub, select the **switch to the older version** link in the banner. For more information, please refer to [Update Azure Percept DK over-the-air](./how-to-update-over-the-air.md).|
**Hard-stop releases:**
azure-percept Software Releases Usb Cable Updates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-percept/software-releases-usb-cable-updates.md
This page provides information and download links for all the dev kit OS/firmwar
## Latest releases

- **Latest service release**
-May Service Release (2205): [Azure-Percept-DK-1.0.20220511.1756-public_preview_1.0.zip](https://download.microsoft.com/download/c/7/7/c7738a05-819c-48d9-8f30-e4bf64e19f11/Azure-Percept-DK-1.0.20220511.1756-public_preview_1.0.zip)
+June Service Release (2206): [Azure-Percept-DK-1.0.20220620.1126-public_preview_1.0.zip](https://download.microsoft.com/download/4/7/a/47af6fc2-d9a0-4e66-822b-ad36700fefff/Azure-Percept-DK-1.0.20220620.1126-public_preview_1.0.zip)
- **Latest major update or known stable version**
Feature Update (2104): [Azure-Percept-DK-1.0.20210409.2055.zip](https://download.microsoft.com/download/6/4/d/64d53e60-f702-432d-a446-007920a4612c/Azure-Percept-DK-1.0.20210409.2055.zip)
|Release|Download Links|Note|
|---|---|:-:|
+|June Service Release (2206)|[Azure-Percept-DK-1.0.20220620.1126-public_preview_1.0.zip](https://download.microsoft.com/download/4/7/a/47af6fc2-d9a0-4e66-822b-ad36700fefff/Azure-Percept-DK-1.0.20220620.1126-public_preview_1.0.zip)||
|May Service Release (2205)|[Azure-Percept-DK-1.0.20220511.1756-public_preview_1.0.zip](https://download.microsoft.com/download/c/7/7/c7738a05-819c-48d9-8f30-e4bf64e19f11/Azure-Percept-DK-1.0.20220511.1756-public_preview_1.0.zip)||
|March Service Release (2203)|[Azure-Percept-DK-1.0.20220310.1223-public_preview_1.0.zip](https://download.microsoft.com/download/c/6/f/c6f6b152-699e-4f60-85b7-17b3ea57c189/Azure-Percept-DK-1.0.20220310.1223-public_preview_1.0.zip)||
|February Service Release (2202)|[Azure-Percept-DK-1.0.20220209.1156-public_preview_1.0.zip](https://download.microsoft.com/download/f/8/6/f86ce7b3-8d76-494e-82d9-dcfb71fc2580/Azure-Percept-DK-1.0.20220209.1156-public_preview_1.0.zip)||
azure-resource-manager Deploy Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/deploy-powershell.md
You need a Bicep file to deploy. The file must be local.
You need Azure PowerShell, and you need to be connected to Azure:

- **Install Azure PowerShell cmdlets on your local computer.** To deploy Bicep files, you need [Azure PowerShell](/powershell/azure/install-az-ps) version **5.6.0 or later**. For more information, see [Get started with Azure PowerShell](/powershell/azure/get-started-azureps).
+- **Install Bicep CLI.** Azure PowerShell doesn't automatically install the Bicep CLI. Instead, you must [manually install the Bicep CLI](install.md#install-manually).
- **Connect to Azure by using [Connect-AzAccount](/powershell/module/az.accounts/connect-azaccount)**. If you have multiple Azure subscriptions, you might also need to run [Set-AzContext](/powershell/module/Az.Accounts/Set-AzContext). For more information, see [Use multiple Azure subscriptions](/powershell/azure/manage-subscriptions-azureps).

If you don't have PowerShell installed, you can use Azure Cloud Shell. For more information, see [Deploy Bicep files from Azure Cloud Shell](./deploy-cloud-shell.md).
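Once both prerequisites are in place, a minimal deployment sketch looks like the following (the cmdlets are standard Azure PowerShell; the resource group and file names are placeholders):

```console
Connect-AzAccount
New-AzResourceGroupDeployment -ResourceGroupName exampleRG -TemplateFile ./main.bicep
```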
azure-resource-manager Resource Name Rules https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/resource-name-rules.md
In the following tables, the term alphanumeric refers to:
> | Entity | Scope | Length | Valid Characters |
> | --- | --- | --- | --- |
> | deployments | resource group | 1-64 | Alphanumerics, underscores, parentheses, hyphens, and periods. |
-> | resourcegroups | subscription | 1-90 | Letters or digits as defined by the [Char.IsLetterOrDigit](/dotnet/api/system.char.isletterordigit) function.<br><br>Valid characters are members of the following categories in [UnicodeCategory](/dotnet/api/system.globalization.unicodecategory):<br>**UppercaseLetter**,<br>**LowercaseLetter**,<br>**TitlecaseLetter**,<br>**ModifierLetter**,<br>**OtherLetter**,<br>**DecimalDigitNumber**.<br><br>Can't end with period. |
+> | resourcegroups | subscription | 1-90 | Underscores, hyphens, periods, and letters or digits as defined by the [Char.IsLetterOrDigit](/dotnet/api/system.char.isletterordigit) function.<br><br>Valid characters are members of the following categories in [UnicodeCategory](/dotnet/api/system.globalization.unicodecategory):<br>**UppercaseLetter**,<br>**LowercaseLetter**,<br>**TitlecaseLetter**,<br>**ModifierLetter**,<br>**OtherLetter**,<br>**DecimalDigitNumber**.<br><br>Can't end with period. |
> | tagNames | resource | 1-512 | Can't use:<br>`<>%&\?/` or control characters |
> | tagNames / tagValues | tag name | 1-256 | All characters. |
> | templateSpecs | resource group | 1-90 | Alphanumerics, underscores, parentheses, hyphens, and periods. |
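To make the resource group rule concrete, here's a hedged Azure CLI illustration (`az group create` is a standard command; the names are made up):

```console
# Valid: letters, digits, underscores, hyphens, and periods
az group create --name demo_rg-1.0 --location eastus

# Invalid: the name can't end with a period
az group create --name demo_rg. --location eastus
```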
azure-resource-manager Tag Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/tag-support.md
To get the same data as a file of comma-separated values, download [tag-support.
> | workspaces / sqlDatabases | Yes | Yes |
> | workspaces / sqlPools | Yes | Yes |
+<a id="synapsenote"></a>
+
+> [!NOTE]
+> The Master database doesn't support tags, but other databases, including Azure Synapse Analytics databases, support tags. Azure Synapse Analytics databases must be in Active (not Paused) state.
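As a hedged illustration, tags on a Synapse SQL pool could be applied with the Azure CLI while the pool is Active (`az tag update` exists; the resource ID is a placeholder):

```console
az tag update \
  --resource-id "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.Synapse/workspaces/<workspace>/sqlPools/<pool>" \
  --operation Merge \
  --tags costCenter=1234
```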
+ ## Microsoft.TestBase
> [!div class="mx-tableFixed"]
azure-signalr Concept Connection String https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-signalr/concept-connection-string.md
The following table lists all the valid names for key/value pairs in the connect
| key | Description | Required | Default value | Example value |
| --- | --- | --- | --- | --- |
-| Endpoint | The URI of your ASRS instance. | Y | N/A | https://foo.service.signalr.net |
+| Endpoint | The URI of your ASRS instance. | Y | N/A | `https://foo.service.signalr.net` |
| Port | The port that your ASRS instance is listening on. | N | 80/443, depending on the endpoint URI scheme | 8080 |
| Version | The version of the given connection string. | N | 1.0 | 1.0 |
-| ClientEndpoint | The URI of your reverse proxy, like App Gateway or API Management | N | null | https://foo.bar |
+| ClientEndpoint | The URI of your reverse proxy, like App Gateway or API Management | N | null | `https://foo.bar` |
| AuthType | The auth type. By default, AccessKey is used to authorize requests. **Case insensitive** | N | null | azure, azure.msi, azure.app |

### Use AccessKey
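For example, a typical AccessKey-based connection string has this shape (the instance name and key are placeholders):

```
Endpoint=https://<instance-name>.service.signalr.net;AccessKey=<access-key>;Version=1.0;
```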
azure-web-pubsub Quickstart Serverless https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/quickstart-serverless.md
If you're not going to continue to use this app, delete all resources created by
In this quickstart, you learned how to run a serverless chat application. Now you can start to build your own application.

> [!div class="nextstepaction"]
-> [Azure Web PubSub bindings for Azure Functions](https://azure.github.io/azure-webpubsub/references/functions-bindings)
+> [Azure Web PubSub bindings for Azure Functions](/azure/azure-web-pubsub/reference-functions-bindings)
> [!div class="nextstepaction"]
-> [Quick start: Create a simple chatroom with Azure Web PubSub](https://azure.github.io/azure-webpubsub/getting-started/create-a-chat-app/js-handle-events)
+> [Quick start: Create a simple chatroom with Azure Web PubSub](/azure/azure-web-pubsub/tutorial-build-chat)
> [!div class="nextstepaction"]
> [Explore more Azure Web PubSub samples](https://github.com/Azure/azure-webpubsub/tree/main/samples)
azure-web-pubsub Tutorial Serverless Notification https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/tutorial-serverless-notification.md
If you're not going to continue to use this app, delete all resources created by
In this quickstart, you learned how to run a serverless chat application. Now you can start to build your own application.

> [!div class="nextstepaction"]
-> [Tutorial: Create a simple chatroom with Azure Web PubSub](https://azure.github.io/azure-webpubsub/getting-started/create-a-chat-app/js-handle-events)
+> [Tutorial: Create a simple chatroom with Azure Web PubSub](/azure/azure-web-pubsub/tutorial-build-chat)
> [!div class="nextstepaction"]
-> [Azure Web PubSub bindings for Azure Functions](https://azure.github.io/azure-webpubsub/references/functions-bindings)
+> [Azure Web PubSub bindings for Azure Functions](/azure/azure-web-pubsub/reference-functions-bindings)
> [!div class="nextstepaction"]
> [Explore more Azure Web PubSub samples](https://github.com/Azure/azure-webpubsub/tree/main/samples)
cdn Cdn App Dev Node https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cdn/cdn-app-dev-node.md
You will then be presented a series of questions to initialize your project. Fo
![NPM init output](./media/cdn-app-dev-node/cdn-npm-init.png)
-Our project is now initialized with a *packages.json* file. Our project is going to use some Azure libraries contained in NPM packages. We'll use the library for Azure Active Directory authentication in Node.js (@azure/ms-rest-nodeauth) and the Azure CDN Client Library for JavaScript (@azure/arm-cdn). Let's add those to the project as dependencies.
+Our project is now initialized with a *package.json* file. Our project is going to use some Azure libraries contained in NPM packages. We'll use the library for Azure Active Directory authentication in Node.js (@azure/identity) and the Azure CDN Client Library for JavaScript (@azure/arm-cdn). Let's add those to the project as dependencies.
```console
-npm install --save @azure/ms-rest-nodeauth
+npm install --save @azure/identity
npm install --save @azure/arm-cdn ```
After the packages are done installing, the *package.json* file should look simi
"author": "Cam Soper", "license": "MIT", "dependencies": {
- "@azure/arm-cdn": "^5.2.0",
- "@azure/ms-rest-nodeauth": "^3.0.0"
+ "@azure/arm-cdn": "^7.0.1",
+ "@azure/identity": "^2.0.4"
  }
}
```
With *app.js* open in our editor, let's get the basic structure of our program w
1. Add the "requires" for our NPM packages at the top with the following: ``` javascript
- var msRestAzure = require('@azure/ms-rest-nodeauth');
+ const { DefaultAzureCredential } = require("@azure/identity");
   const { CdnManagementClient } = require('@azure/arm-cdn');
   ```

2. We need to define some constants our methods will use. Add the following. Be sure to replace the placeholders, including the **&lt;angle brackets&gt;**, with your own values as needed.
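The constant definitions themselves are elided above; a minimal sketch of what step 2 might define (the names are assumptions inferred from the calls that follow):

   ``` javascript
   // Hypothetical placeholders: replace with your own values.
   const subscriptionId = '<subscription id>';
   const resourceGroupName = '<resource group name>';
   ```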
With *app.js* open in our editor, let's get the basic structure of our program w
3. Next, we'll instantiate the CDN management client and give it our credentials.

   ``` javascript
- var credentials = new msRestAzure.ApplicationTokenCredentials(clientId, tenantId, clientSecret);
+ var credentials = new DefaultAzureCredential();
   var cdnClient = new CdnManagementClient(credentials, subscriptionId);
   ```
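`DefaultAzureCredential` can pick up a service principal from environment variables (these variable names come from @azure/identity's `EnvironmentCredential`); one possible setup:

```console
export AZURE_TENANT_ID=<tenant id>
export AZURE_CLIENT_ID=<client id>
export AZURE_CLIENT_SECRET=<client secret>
```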
function cdnCreate() {
}

// create profile <profile name>
-function cdnCreateProfile() {
+async function cdnCreateProfile() {
    requireParms(3);
    console.log("Creating profile...");
    var standardCreateParameters = {
function cdnCreateProfile() {
} };
- cdnClient.profiles.create( resourceGroupName, parms[2], standardCreateParameters, callback);
+ await cdnClient.profiles.beginCreateAndWait(resourceGroupName, parms[2], standardCreateParameters);
}

// create endpoint <profile name> <endpoint name> <origin hostname>
-function cdnCreateEndpoint() {
+async function cdnCreateEndpoint() {
    requireParms(5);
    console.log("Creating endpoint...");
    var endpointProperties = {
function cdnCreateEndpoint() {
}] };
- cdnClient.endpoints.create(resourceGroupName, parms[2], parms[3], endpointProperties, callback);
+ await cdnClient.endpoints.beginCreateAndWait(resourceGroupName, parms[2], parms[3], endpointProperties);
}
```
Assuming the endpoint has been created, one common task that we might want to pe
```javascript
// purge <profile name> <endpoint name> <path>
-function cdnPurge() {
+async function cdnPurge() {
requireParms(4); console.log("Purging endpoint..."); var purgeContentPaths = [ parms[3] ];
- cdnClient.endpoints.purgeContent(resourceGroupName, parms[2], parms[3], purgeContentPaths, callback);
+ await cdnClient.endpoints.beginPurgeContentAndWait(resourceGroupName, parms[1], parms[2], { contentPaths: purgeContentPaths });
}
```
function cdnPurge() {
The last function we will include deletes endpoints and profiles.

```javascript
-function cdnDelete() {
+async function cdnDelete() {
    requireParms(2);
    switch(parms[1].toLowerCase()) {
function cdnDelete() {
case "profile": requireParms(3); console.log("Deleting profile...");
- cdnClient.profiles.deleteMethod(resourceGroupName, parms[2], callback);
+ await cdnClient.profiles.beginDeleteAndWait(resourceGroupName, parms[2]);
            break;
        // delete endpoint <profile name> <endpoint name>
        case "endpoint":
            requireParms(4);
            console.log("Deleting endpoint...");
- cdnClient.endpoints.deleteMethod(resourceGroupName, parms[2], parms[3], callback);
+ await cdnClient.endpoints.beginDeleteAndWait(resourceGroupName, parms[2], parms[3]);
            break;
        default:
cloud-services-extended-support Enable Key Vault Virtual Machine https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services-extended-support/enable-key-vault-virtual-machine.md
Title: Apply the Key Vault VM Extension in Azure Cloud Services (extended support)
-description: Enable KeyVault VM Extension for Cloud Services (extended support)
+ Title: Apply the Key Vault VM extension in Azure Cloud Services (extended support)
+description: Learn about the Key Vault VM extension for Windows and how to enable it in Azure Cloud Services.
# Apply the Key Vault VM extension to Azure Cloud Services (extended support)
-## What is the Key Vault VM Extension?
-The Key Vault VM extension provides automatic refresh of certificates stored in an Azure Key Vault. Specifically, the extension monitors a list of observed certificates stored in key vaults, and upon detecting a change, retrieves, and installs the corresponding certificates. For more details, see [Key Vault VM extension for Windows](../virtual-machines/extensions/key-vault-windows.md).
+This article provides basic information about the Azure Key Vault VM extension for Windows and shows you how to enable it in Azure Cloud Services.
-## What's new in the Key Vault VM Extension?
-The Key Vault VM extension is now supported on the Azure Cloud Services (extended support) platform to enable the management of certificates end to end. The extension can now pull certificates from a configured Key Vault at a pre-defined polling interval and install them for use by the service.
+## What is the Key Vault VM extension?
+The Key Vault VM extension provides automatic refresh of certificates stored in an Azure key vault. Specifically, the extension monitors a list of observed certificates stored in key vaults. When the extension detects a change, it retrieves and installs the corresponding certificates. For more information, see [Key Vault VM extension for Windows](../virtual-machines/extensions/key-vault-windows.md).
-## How can I leverage the Key Vault VM extension?
-The following tutorial will show you how to install the Key Vault VM extension on PaaSV1 services by first creating a bootstrap certificate in your vault to get a token from AAD that will help in the authentication of the extension with the vault. Once the authentication process is set up and the extension is installed all latest certificates will be pulled down automatically at regular polling intervals.
+## What's new in the Key Vault VM extension?
+The Key Vault VM extension is now supported on the Azure Cloud Services (extended support) platform to enable the management of certificates end to end. The extension can now pull certificates from a configured key vault at a predefined polling interval and install them for the service to use.
+
+## How can I use the Key Vault VM extension?
+The following procedure shows you how to install the Key Vault VM extension on Azure Cloud Services. You first create a bootstrap certificate in your vault to get a token from Azure Active Directory (Azure AD); that token authenticates the extension with the vault. After the authentication process is set up and the extension is installed, all the latest certificates are pulled down automatically at regular polling intervals.
> [!NOTE]
-> The Key Vault VM extension downloads all the certificates in the windows certificate store or to the location provided by "certificateStoreLocation" property in the VM extension settings. Currently, the KV VM extension grants access to the private key of the certificate only to the local system admin account.
+> The Key Vault VM extension downloads all the certificates to the Windows certificate store or to the location provided by the `certificateStoreLocation` property in the VM extension settings. Currently, the Key Vault VM extension grants access to the private key of the certificate only to the local system admin account.
-## Prerequisites
-To use the Azure Key Vault VM extension, you need to have an Azure Active Directory tenant. For more information on setting up a new Active Directory tenant, see [Setup your AAD tenant](../active-directory/develop/quickstart-create-new-tenant.md)
+### Prerequisites
+To use the Azure Key Vault VM extension, you need to have an Azure AD tenant. For more information, see [Quickstart: Set up a tenant](../active-directory/develop/quickstart-create-new-tenant.md).
-## Enable the Azure Key Vault VM extension
+### Enable the Azure Key Vault VM extension
-1. [Generate a certificate](../key-vault/certificates/create-certificate-signing-request.md) in your vault and download the .cer for that certificate.
+1. [Generate a certificate](../key-vault/certificates/create-certificate-signing-request.md) in your vault and download the .cer file for that certificate.
-2. In the [Azure portal](https://portal.azure.com) navigate to **App Registrations**.
+2. In the [Azure portal](https://portal.azure.com), go to **App registrations**.
- :::image type="content" source="media/app-registration-0.jpg" alt-text="Shows selecting app registration in the portal.":::
+ :::image type="content" source="media/app-registration-0.jpg" alt-text="Screenshot of resources available in the Azure portal, including app registrations.":::
-3. In the App Registrations page select **New Registration** on the top left corner of the page
+3. On the **App registrations** page, select **New registration**.
- :::image type="content" source="media/app-registration-1.png" alt-text="Shows the app registration sin the Azure portal.":::
+ :::image type="content" source="media/app-registration-1.png" alt-text="Screenshot that shows the page for app registrations in the Azure portal.":::
-4. On the next page you can fill the form and complete the app creation.
+4. On the next page, fill out the form and complete the app creation.
-5. Upload the .cer of the certificate to the Azure Active Directory app portal.
+5. Upload the .cer file of the certificate to the Azure AD app portal.
- - Optionally you can also leverage the [Key Vault Event Grid notification feature](https://azure.microsoft.com/updates/azure-key-vault-event-grid-integration-is-now-available/) to upload the certificate.
+ Optionally, you can use the [Azure Event Grid notification feature for Key Vault](https://azure.microsoft.com/updates/azure-key-vault-event-grid-integration-is-now-available/) to upload the certificate.
-6. Grant the Azure Active Directory app secret list/get permissions in Key Vault:
- - If you are using RBAC preview, search for the name of the AAD app you created and assign it to the Key Vault Secrets User (preview) role.
- - If you are using vault access policies, then assign **Secret-Get** permissions to the AAD app you created. For more information, see [Assign access policies](../key-vault/general/assign-access-policy-portal.md)
+6. Grant the Azure AD app permissions to get and list secrets in Key Vault:
+
+ - If you're using a role-based access control (RBAC) preview, search for the name of the Azure AD app that you created and assign it to the Key Vault Secrets User (preview) role.
+ - If you're using vault access policies, assign **Secret-Get** permissions to the Azure AD app that you created. For more information, see [Assign access policies](../key-vault/general/assign-access-policy-portal.md).
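   If you're using vault access policies, an equivalent hedged Azure CLI sketch looks like this (`az keyvault set-policy` is a standard command; the vault name and app ID are placeholders):

   ```console
   az keyvault set-policy \
     --name <vault-name> \
     --spn <app-id> \
     --secret-permissions get list
   ```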
-7. Install first
-step and the Key Vault VM extension using the ARM template snippet for `cloudService` resource as shown below:
+7. Install the Key Vault VM extension by using the Azure Resource Manager template snippet for the `cloudService` resource:
```json
{
step and the Key Vault VM extension using the ARM template snippet for `cloudSer
  }
}
```
- You might need to specify the certificate store for boot strap certificate in ServiceDefinition.csdef like below:
+ You might need to specify the certificate store for the bootstrap certificate in *ServiceDefinition.csdef*:
```xml
<Certificates>
step and the Key Vault VM extension using the ARM template snippet for `cloudSer
```

## Next steps
-Further improve your deployment by [enabling monitoring in Cloud Services (extended support)](enable-alerts.md)
+Further improve your deployment by [enabling monitoring in Azure Cloud Services (extended support)](enable-alerts.md).
cloud-services Cloud Services Dotnet Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/cloud-services-dotnet-get-started.md
For a video introduction to Azure Storage best practices and patterns, see Micro
For more information, see the following resources:
-* [Azure Cloud Services Part 1: Introduction](https://justazure.com/microsoft-azure-cloud-services-part-1-introduction/)
* [How to manage Cloud Services](cloud-services-how-to-manage-portal.md)
* [Azure Storage](../storage/index.yml)
* [How to choose a cloud service provider](https://azure.microsoft.com/overview/choosing-a-cloud-service-provider/)
cognitive-services Call Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/conversational-language-understanding/how-to/call-api.md
Previously updated : 06/03/2022 Last updated : 06/28/2022
You can also use the client libraries provided by the Azure SDK to send requests
|Language |Package version |
|---|---|
- |.NET | [1.0.0-beta.3 ](https://www.nuget.org/packages/Azure.AI.Language.Conversations/1.0.0-beta.3) |
- |Python | [1.1.0b1](https://pypi.org/project/azure-ai-language-conversations/) |
+ |.NET | [1.0.0](https://www.nuget.org/packages/Azure.AI.Language.Conversations/1.0.0) |
+ |Python | [1.0.0](https://pypi.org/project/azure-ai-language-conversations/1.0.0) |
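   These packages install with standard tooling; for example (the version pins match the table above):

   ```console
   dotnet add package Azure.AI.Language.Conversations --version 1.0.0
   pip install azure-ai-language-conversations==1.0.0
   ```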
4. After you've installed the client library, use the following samples on GitHub to start calling the API.
- * [C#](https://github.com/Azure/azure-sdk-for-net/tree/main/sdk/cognitivelanguage/Azure.AI.Language.Conversations/samples)
- * [Python](https://github.com/Azure/azure-sdk-for-python/tree/main/sdk/cognitivelanguage/azure-ai-language-conversations/samples)
+ * [C#](https://github.com/Azure/azure-sdk-for-net/tree/Azure.AI.Language.Conversations_1.0.0/sdk/cognitivelanguage/Azure.AI.Language.Conversations)
+ * [Python](https://github.com/Azure/azure-sdk-for-python/tree/azure-ai-language-conversations_1.0.0/sdk/cognitivelanguage/azure-ai-language-conversations)
5. See the following reference documentation for more information:
- * [C#](/dotnet/api/azure.ai.language.conversations?view=azure-dotnet-preview&preserve-view=true)
- * [Python](/python/api/azure-ai-language-conversations/azure.ai.language.conversations?view=azure-python-preview&preserve-view=true)
+ * [C#](/dotnet/api/azure.ai.language.conversations)
+ * [Python](/python/api/azure-ai-language-conversations/azure.ai.language.conversations.aio)
cognitive-services Call Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/orchestration-workflow/how-to/call-api.md
Previously updated : 05/20/2022 Last updated : 06/28/2022 ms.devlang: csharp, python
You can also use the client libraries provided by the Azure SDK to send requests
:::image type="content" source="../../custom-text-classification/media/get-endpoint-azure.png" alt-text="Screenshot showing how to get the Azure endpoint." lightbox="../../custom-text-classification/media/get-endpoint-azure.png"::: - 3. Download and install the client library package for your language of choice: |Language |Package version | |||
- |.NET | [1.0.0-beta.3 ](https://www.nuget.org/packages/Azure.AI.Language.Conversations/1.0.0-beta.3) |
- |Python | [1.1.0b1](https://pypi.org/project/azure-ai-language-conversations/) |
+ |.NET | [1.0.0](https://www.nuget.org/packages/Azure.AI.Language.Conversations/1.0.0) |
+ |Python | [1.0.0](https://pypi.org/project/azure-ai-language-conversations/1.0.0) |
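   As above, these packages install with standard tooling; for example:

   ```console
   dotnet add package Azure.AI.Language.Conversations --version 1.0.0
   pip install azure-ai-language-conversations==1.0.0
   ```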
4. After you've installed the client library, use the following samples on GitHub to start calling the API.
- * [C#](https://github.com/Azure/azure-sdk-for-net/tree/main/sdk/cognitivelanguage/Azure.AI.Language.Conversations/samples)
- * [Python](https://github.com/Azure/azure-sdk-for-python/tree/main/sdk/cognitivelanguage/azure-ai-language-conversations/samples)
+ * [C#](https://github.com/Azure/azure-sdk-for-net/tree/Azure.AI.Language.Conversations_1.0.0/sdk/cognitivelanguage/Azure.AI.Language.Conversations)
+ * [Python](https://github.com/Azure/azure-sdk-for-python/tree/azure-ai-language-conversations_1.0.0/sdk/cognitivelanguage/azure-ai-language-conversations)
5. See the following reference documentation for more information:
- * [C#](/dotnet/api/azure.ai.language.conversations?view=azure-dotnet-preview&preserve-view=true)
- * [Python](/python/api/azure-ai-language-conversations/azure.ai.language.conversations?view=azure-python-preview&preserve-view=true)
+ * [C#](/dotnet/api/azure.ai.language.conversations)
+ * [Python](/python/api/azure-ai-language-conversations/azure.ai.language.conversations.aio)
cognitive-services Authoring https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/question-answering/how-to/authoring.md
Last updated 11/23/2021
The question answering Authoring API is used to automate common tasks like adding new question answer pairs, as well as creating, publishing, and maintaining projects/knowledge bases.

> [!NOTE]
-> Currently authoring functionality is only available via the REST API. This article provides examples of using the REST API with cURL. For full documentation of all parameters and functionality available consult the [REST API reference content](/rest/api/cognitiveservices/questionanswering/question-answering-projects).
+> Authoring functionality is available via the REST API and [Authoring SDK (preview)](https://docs.microsoft.com/dotnet/api/overview/azure/ai.language.questionanswering-readme-pre). This article provides examples of using the REST API with cURL. For full documentation of all parameters and functionality available consult the [REST API reference content](/rest/api/cognitiveservices/questionanswering/question-answering-projects).
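As a flavor of those cURL examples, a hedged list-projects call might look like the following (the resource name, key, and API version are assumptions; check the REST reference for current values):

```console
curl -X GET "https://<your-resource>.cognitiveservices.azure.com/language/query-knowledgebases/projects?api-version=2021-10-01" \
  -H "Ocp-Apim-Subscription-Key: <your-key>"
```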
## Prerequisites
cognitive-services Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/whats-new.md
Previously updated : 06/22/2022 Last updated : 06/28/2022
Azure Cognitive Service for Language is updated on an ongoing basis. To stay up-to-date with recent developments, this article provides you with information about new releases and features.

## June 2022
-* Python client library for [conversation summarization](summarization/quickstart.md?tabs=conversation-summarization&pivots=programming-language-python).
+* v1.0 client libraries for [conversational language understanding](./conversational-language-understanding/how-to/call-api.md?tabs=azure-sdk#send-a-conversational-language-understanding-request) and [orchestration workflow](./orchestration-workflow/how-to/call-api.md?tabs=azure-sdk#send-an-orchestration-workflow-request) are Generally Available for the following languages:
+ * [C#](https://github.com/Azure/azure-sdk-for-net/tree/Azure.AI.Language.Conversations_1.0.0/sdk/cognitivelanguage/Azure.AI.Language.Conversations)
+ * [Python](https://github.com/Azure/azure-sdk-for-python/tree/azure-ai-language-conversations_1.0.0/sdk/cognitivelanguage/azure-ai-language-conversations)
+* v1.1.0b1 client library for [conversation summarization](summarization/quickstart.md?tabs=conversation-summarization&pivots=programming-language-python) is available as a preview for:
+ * [Python](https://github.com/Azure/azure-sdk-for-python/blob/azure-ai-language-conversations_1.1.0b1/sdk/cognitivelanguage/azure-ai-language-conversations/samples/README.md)
+ ## May 2022
container-apps Microservices Dapr Azure Resource Manager https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/microservices-dapr-azure-resource-manager.md
Previously updated : 01/31/2022 Last updated : 06/23/2022 zone_pivot_groups: container-apps
You learn how to:
> * Deploy two dapr-enabled container apps: one that produces orders and one that consumes orders and stores them.
> * Verify the interaction between the two microservices.
-With Azure Container Apps, you get a fully managed version of the Dapr APIs when building microservices. When you use Dapr in Azure Container Apps, you can enable sidecars to run next to your microservices that provide a rich set of capabilities. Available Dapr APIs include [Service to Service calls](https://docs.dapr.io/developing-applications/building-blocks/service-invocation/), [Pub/Sub](https://docs.dapr.io/developing-applications/building-blocks/pubsub/), [Event Bindings](https://docs.dapr.io/developing-applications/building-blocks/bindings/), [State Stores](https://docs.dapr.io/developing-applications/building-blocks/state-management/), and [Actors](https://docs.dapr.io/developing-applications/building-blocks/actors/).
+With Azure Container Apps, you get a [fully managed version of the Dapr APIs](./dapr-overview.md) when building microservices. When you use Dapr in Azure Container Apps, you can enable sidecars to run next to your microservices that provide a rich set of capabilities. Available Dapr APIs include [Service to Service calls](https://docs.dapr.io/developing-applications/building-blocks/service-invocation/), [Pub/Sub](https://docs.dapr.io/developing-applications/building-blocks/pubsub/), [Event Bindings](https://docs.dapr.io/developing-applications/building-blocks/bindings/), [State Stores](https://docs.dapr.io/developing-applications/building-blocks/state-management/), and [Actors](https://docs.dapr.io/developing-applications/building-blocks/actors/).
In this tutorial, you deploy the same applications from the Dapr [Hello World](https://github.com/dapr/quickstarts/tree/master/tutorials/hello-world) quickstart.
container-apps Microservices Dapr https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/microservices-dapr.md
Previously updated : 03/22/2022 Last updated : 06/23/2022 ms.devlang: azurecli
You learn how to:
> * Deploy two apps that produce and consume messages and persist them in the state store.
> * Verify the interaction between the two microservices.
-With Azure Container Apps, you get a fully managed version of the Dapr APIs when building microservices. When you use Dapr in Azure Container Apps, you can enable sidecars to run next to your microservices that provide a rich set of capabilities. Available Dapr APIs include [Service to Service calls](https://docs.dapr.io/developing-applications/building-blocks/service-invocation/), [Pub/Sub](https://docs.dapr.io/developing-applications/building-blocks/pubsub/), [Event Bindings](https://docs.dapr.io/developing-applications/building-blocks/bindings/), [State Stores](https://docs.dapr.io/developing-applications/building-blocks/state-management/), and [Actors](https://docs.dapr.io/developing-applications/building-blocks/actors/).
+With Azure Container Apps, you get a [fully managed version of the Dapr APIs](./dapr-overview.md) when building microservices. When you use Dapr in Azure Container Apps, you can enable sidecars to run next to your microservices that provide a rich set of capabilities. Available Dapr APIs include [Service to Service calls](https://docs.dapr.io/developing-applications/building-blocks/service-invocation/), [Pub/Sub](https://docs.dapr.io/developing-applications/building-blocks/pubsub/), [Event Bindings](https://docs.dapr.io/developing-applications/building-blocks/bindings/), [State Stores](https://docs.dapr.io/developing-applications/building-blocks/state-management/), and [Actors](https://docs.dapr.io/developing-applications/building-blocks/actors/).
In this tutorial, you deploy the same applications from the Dapr [Hello World](https://github.com/dapr/quickstarts/tree/master/tutorials/hello-kubernetes) quickstart.
container-apps Microservices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/microservices.md
Previously updated : 11/02/2021 Last updated : 06/23/2022
- Independent [scaling](scale-app.md), [versioning](application-lifecycle-management.md), and [upgrades](application-lifecycle-management.md)
- [Service discovery](connect-apps.md)
-- Native [Dapr integration](microservices-dapr.md)
+- Native [Dapr integration](./dapr-overview.md)
:::image type="content" source="media/microservices/azure-container-services-microservices.png" alt-text="Container apps are deployed as microservices.":::
container-apps Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/overview.md
Previously updated : 11/02/2021 Last updated : 06/23/2022
With Azure Container Apps, you can:
- [**Use internal ingress and service discovery**](connect-apps.md) for secure internal-only endpoints with built-in DNS-based service discovery.

-- [**Build microservices with Dapr**](microservices.md) and access its rich set of APIs.
+- [**Build microservices with Dapr**](microservices.md) and [access its rich set of APIs](./dapr-overview.md).
- [**Run containers from any registry**](containers.md), public or private, including Docker Hub and Azure Container Registry (ACR).
container-registry Container Registry Tasks Authentication Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-tasks-authentication-managed-identity.md
Last updated 01/14/2020-+ # Use an Azure-managed identity in ACR Tasks
az acr task credential add \
You can get the client ID of the identity by running the [az identity show][az-identity-show] command. The client ID is a GUID of the form `xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx`.
+The `--use-identity` parameter is required if the registry has public network access disabled and relies only on certain trusted services to run ACR tasks. See the [example of ACR Tasks](allow-access-trusted-services.md#example-acr-tasks) as a trusted service.
+ ### 5. Run the task

After configuring a task with a managed identity, run the task. For example, to test one of the tasks created in this article, manually trigger it using the [az acr task run][az-acr-task-run] command. If you configured additional, automated task triggers, the task runs when automatically triggered.
container-registry Container Registry Troubleshoot Login https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-troubleshoot-login.md
Run the [az acr check-health](/cli/azure/acr#az-acr-check-health) command to get
See [Check the health of an Azure container registry](container-registry-check-health.md) for command examples. If errors are reported, review the [error reference](container-registry-health-error-reference.md) and the following sections for recommended solutions.
-If you're experiencing problems using the registry with Azure Kubernetes Service, run the [az aks check-acr](/cli/azure/aks#az-aks-check-acr) command to validate that the registry is accessible from the AKS cluster.
+If you can't pull images from ACR to your AKS cluster, follow the instructions in the [AKS troubleshooting article](https://docs.microsoft.com/troubleshoot/azure/azure-kubernetes/cannot-pull-image-from-acr-to-aks-cluster).
> [!NOTE]
> Some authentication or authorization errors can also occur if there are firewall or network configurations that prevent registry access. See [Troubleshoot network issues with registry](container-registry-troubleshoot-access.md).
cosmos-db Analytical Store Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/analytical-store-introduction.md
To learn more, see [how to configure analytical TTL on a container](configure-sy
Data tiering refers to the separation of data between storage infrastructures optimized for different scenarios, thereby improving the overall performance and cost-effectiveness of the end-to-end data stack. With analytical store, Azure Cosmos DB now supports automatic tiering of data from the transactional store to analytical store with different data layouts. Because analytical store is optimized for storage cost compared to the transactional store, it allows you to retain much longer horizons of operational data for historical analysis.
-After the analytical store is enabled, based on the data retention needs of the transactional workloads, you can configure the transactional store Time-to-Live (TTTL) property to have records automatically deleted from the transactional store after a certain time period. Similarly, the analytical store Time-to-Live (ATTL) allows you to manage the lifecycle of data retained in the analytical store independent from the transactional store. By enabling analytical store and configuring TTL properties, you can seamlessly tier and define the data retention period for the two stores.
+After the analytical store is enabled, based on the data retention needs of the transactional workloads, you can configure the `transactional TTL` property to have records automatically deleted from the transactional store after a certain time period. Similarly, the `analytical TTL` property allows you to manage the lifecycle of data retained in the analytical store, independent from the transactional store. By enabling analytical store and configuring transactional and analytical `TTL` properties, you can seamlessly tier and define the data retention period for the two stores.
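As a hedged sketch, both TTLs for the `OnlineOrders` example later in this article could be set with the Azure CLI (`--ttl` and `--analytical-storage-ttl` are existing parameters; the account, group, and database names are placeholders):

```console
az cosmosdb sql container update \
  --account-name <account-name> \
  --resource-group <resource-group> \
  --database-name <database-name> \
  --name OnlineOrders \
  --ttl 2592000 \
  --analytical-storage-ttl 31536000
```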
+
+> [!NOTE]
+> When `analytical TTL` is greater than `transactional TTL`, your container will have data that only exists in analytical store. This data is read-only, and document-level `TTL` currently isn't supported in analytical store. If your container data might need an update or a delete at some point in the future, don't use an `analytical TTL` greater than the `transactional TTL`. This capability is recommended for data that won't need updates or deletes in the future.
+
+> [!NOTE]
+> If your scenario doesn't demand physical deletes, you can adopt a logical delete/update approach. Insert into the transactional store another version of the same document that only exists in analytical store but needs a logical delete/update, perhaps with a flag indicating that it's a delete or an update of an expired document. Both versions of the same document will co-exist in analytical store, and your application should only consider the last one.
++
+## Resilience
+
+Analytical store relies on Azure Storage and offers the following protection against physical failure:
+
+ * Single region Azure Cosmos DB database accounts allocate analytical store in Locally Redundant Storage (LRS) Azure Storage accounts.
+ * If any geo-region replication is configured for the Azure Cosmos DB database account, analytical store is allocated in Zone-Redundant Storage (ZRS) Azure storage accounts.
## Backup
-Currently analytical store doesn't support backup and restore, and your backup policy can't be planned relying on that. For more information, check the limitations section of [this](synapse-link.md#limitations) document. While continuous backup mode isn't supported in database accounts with Synapse Link enabled, periodic backup mode is.
+Although analytical store has built-in protection against physical failures, backup can be necessary to recover from accidental deletes or updates in the transactional store. In those cases, you can restore a container and use the restored container to backfill the data in the original container, or fully rebuild analytical store if necessary.
-With periodic backup mode and existing containers, you can:
+> [!NOTE]
+> Currently, analytical store isn't backed up and can't be restored, so your backup policy can't be planned relying on that.
+
+Synapse Link, and consequently analytical store, has different compatibility levels with Azure Cosmos DB backup modes:
+
+* Periodic backup mode is fully compatible with Synapse Link, and these two features can be used in the same database account without any restriction.
+* Continuous backup mode isn't fully supported yet:
+ * Currently continuous backup mode can't be used in database accounts with Synapse Link enabled.
+ * Currently database accounts with continuous backup mode enabled can enable Synapse Link through a support case.
+ * Currently new database accounts can be created with continuous backup mode and Synapse Link enabled, using Azure CLI or PowerShell. Those two features must be turned on at the same time, in the exact same command that creates the database account (see the sketch after this list).
+
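A minimal Azure CLI sketch of that combined creation, assuming the SQL API (`--backup-policy-type` and `--enable-analytical-storage` are existing parameters; the names are placeholders):

```console
az cosmosdb create \
  --name <account-name> \
  --resource-group <resource-group> \
  --backup-policy-type Continuous \
  --enable-analytical-storage true
```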
+### Backup policies
- ### Fully rebuild analytical store when TTTL >= ATTL
+There are two possible backup policies. To understand how to use them, two details about Cosmos DB backups are important:
+
+ * The original container is restored without analytical store in both backup modes.
+ * Cosmos DB doesn't support overwriting an existing container from a restore.
+
+Now let's see how to use backup and restore from the analytical store perspective.
+
+ #### Restoring a container with TTTL >= ATTL
- The original container is restored without analytical store. But you can enable it and it will be rebuild with all data that existing in the container.
+ When `transactional TTL` is equal to or greater than `analytical TTL`, all data in analytical store still exists in transactional store. In case of a restore, you have two possible situations:
+ * To use the restored container as a replacement for the original container. To rebuild analytical store, just enable Synapse Link at the account level and the container level.
+ * To use the restored container as a data source to backfill or update the data in the original container. In this case, analytical store will automatically reflect the data operations.
- ### Partially rebuild analytical store when TTTL < ATTL
+ #### Restoring a container with TTTL < ATTL
-The data that was only in analytical store isn't restored, but it will be kept available for queries as long as you keep the original container. Analytical store is only deleted when you delete the container. Your analytical queries in Azure Synapse Analytics can read data from both original and restored container's analytical stores. Example:
+When `transactional TTL` is smaller than `analytical TTL`, some data only exists in analytical store and won't be in the restored container. Again, you have two possible situations:
+ * To use the restored container as a replacement for the original container. In this case, when you enable Synapse Link at the container level, only the data that was in transactional store will be included in the new analytical store. Note that the analytical store of the original container remains available for queries as long as the original container exists. You may want to change your application to query both.
+ * To use the restored container as a data source to backfill or update the data in the original container:
+ * Analytical store will automatically reflect the data operations for the data that is in transactional store.
+ * If you re-insert data that was previously removed from transactional store due to `transactional TTL`, this data will be duplicated in analytical store.
+
+Example:
* Container `OnlineOrders` has TTTL set to one month and ATTL set for one year.
* When you restore it to `OnlineOrdersNew` and turn on analytical store to rebuild it, there will be only one month of data in both transactional and analytical store.
cosmos-db Continuous Backup Restore Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/continuous-backup-restore-introduction.md
description: Azure Cosmos DB's point-in-time restore feature helps to recover da
Previously updated : 04/06/2022 Last updated : 06/28/2022 # Continuous backup with point-in-time restore in Azure Cosmos DB+ [!INCLUDE[appliesto-all-apis-except-cassandra](includes/appliesto-all-apis-except-cassandra.md)]
-Azure Cosmos DB's point-in-time restore feature helps in multiple scenarios such as the following:
+Azure Cosmos DB's point-in-time restore feature helps in multiple scenarios including:
-* To recover from an accidental write or delete operation within a container.
-* To restore a deleted account, database, or a container.
-* To restore into any region (where backups existed) at the restore point in time.
+* Recovering from an accidental write or delete operation within a container.
+* Restoring a deleted account, database, or a container.
+* Restoring into any region (where backups existed) at the restore point in time.
>
-> [!VIDEO https://aka.ms/docs.continuous-backup-restore]
+> [!VIDEO <https://aka.ms/docs.continuous-backup-restore>]
+
+Azure Cosmos DB performs data backup in the background without consuming any extra provisioned throughput (RUs) or affecting the performance and availability of your database. Continuous backups are taken in every region where the account exists. For example, an account can have a write region in West US and read regions in East US and East US 2. These replica regions can then be backed up to a remote Azure Storage account in each respective region. By default, each region stores the backup in Locally Redundant storage accounts. If the region has [Availability zones](/azure/architecture/reliability/architect) enabled then the backup is stored in Zone-Redundant storage accounts.
-Azure Cosmos DB performs data backup in the background without consuming any extra provisioned throughput (RUs) or affecting the performance and availability of your database. Continuous backups are taken in every region where the account exists. The following image shows how a container with write region in West US, read regions in East and East US 2 is backed up to a remote Azure Blob Storage account in the respective regions. By default, each region stores the backup in Locally Redundant storage accounts. If the region has [Availability zones](/azure/architecture/reliability/architect) enabled then the backup is stored in Zone-Redundant storage accounts.
+Diagram illustrating how a container with a write region in West US and read regions in East and East US 2 is backed up. The container is backed up to a remote Azure Blob Storage account in each respective write and read region.
+The time window available for restore (also known as the retention period) is one of two options: 30 days or 7 days.
-The available time window for restore (also known as retention period) is the lower value of the following two: *30 days back in past from now* or *up to the resource creation time*. The point in time for restore can be any timestamp within the retention period. In strong consistency mode, backup taken in the write region is more up to date when compared to the read regions. Read regions can lag behind due to network or other transient issues. While doing restore, you can [get latest restorable timestamp](get-latest-restore-timestamp.md) for a given resource in that region to ensure that the resource has taken backups up to the given timestamp and can restore in that region.
+The selected option depends on the chosen tier of continuous backup. The point in time for restore can be any timestamp within the retention period, but no further back than the point when the resource was created. In strong consistency mode, backups taken in the write region are more up to date than those in the read regions. Read regions can lag behind due to network or other transient issues. While doing a restore, you can [get the latest restorable timestamp](get-latest-restore-timestamp.md) for a given resource in a specific region. Getting the latest timestamp ensures that the resource has taken backups up to the given timestamp, and can restore in that region.
-Currently, you can restore the Azure Cosmos DB account for SQL API or MongoDB contents point in time to another account via the [Azure portal](restore-account-continuous-backup.md#restore-account-portal), the [Azure CLI](restore-account-continuous-backup.md#restore-account-cli) (az CLI), [Azure PowerShell](restore-account-continuous-backup.md#restore-account-powershell), or [Azure Resource Manager templates](restore-account-continuous-backup.md#restore-arm-template). Table API or Gremlin APIs are in preview and supported through [Azure CLI](restore-account-continuous-backup.md#restore-account-cli) (az CLI) and [Azure PowerShell](restore-account-continuous-backup.md#restore-account-powershell).
+Currently, you can restore the contents of an Azure Cosmos DB account (SQL API or API for MongoDB) at a specific point in time to another account. You can perform this restore operation via the [Azure portal](restore-account-continuous-backup.md#restore-account-portal), the [Azure CLI](restore-account-continuous-backup.md#restore-account-cli), [Azure PowerShell](restore-account-continuous-backup.md#restore-account-powershell), or [Azure Resource Manager templates](restore-account-continuous-backup.md#restore-arm-template). Table API and Gremlin API are in preview and supported through the [Azure CLI](restore-account-continuous-backup.md#restore-account-cli) and [Azure PowerShell](restore-account-continuous-backup.md#restore-account-powershell).
## Backup storage redundancy
By default, Azure Cosmos DB stores continuous mode backup data in locally redund
## What is restored?
-In a steady state, all mutations performed on the source account (which includes databases, containers, and items) are backed up asynchronously within 100 seconds. If the backup media (that is Azure storage) is down or unavailable, the mutations are persisted locally until the media is available back and then they are flushed out to prevent any loss in fidelity of operations that can be restored.
+In a steady state, all mutations performed on the source account (which includes databases, containers, and items) are backed up asynchronously within 100 seconds. If the Azure Storage backup media is down or unavailable, the mutations are persisted locally until the media is available. Then the mutations are flushed out to prevent any loss in fidelity of operations that can be restored.
You can choose to restore any combination of provisioned throughput containers, shared throughput database, or the entire account. The restore action restores all data and its index properties into a new account. The restore process ensures that all the data restored in an account, database, or a container is guaranteed to be consistent up to the restore time specified. The duration of restore will depend on the amount of data that needs to be restored.

> [!NOTE]
> With the continuous backup mode, the backups are taken in every region where your Azure Cosmos DB account is available. Backups taken for each region account are Locally redundant by default and Zone redundant if your account has [availability zone](/azure/architecture/reliability/architect) feature enabled for that region. The restore action always restores data into a new account.
-## What is not restored?
+## What isn't restored?
The following configurations aren't restored after the point-in-time recovery:
You can add these configurations to the restored account after the restore is co
## Restorable timestamp for live accounts
-To restore Azure Cosmos DB live accounts that are not deleted, it is a best practice to always identify the [latest restorable timestamp](get-latest-restore-timestamp.md) for the container. You can then use this timestamp to restore the account to its latest version.
+To restore Azure Cosmos DB live accounts that aren't deleted, it's a best practice to always identify the [latest restorable timestamp](get-latest-restore-timestamp.md) for the container. You can then use this timestamp to restore the account to its latest version.
## Restore scenarios

The following are some of the key scenarios that are addressed by the point-in-time-restore feature. Scenarios [1] through [3] demonstrate how to trigger a restore if the restore timestamp is known beforehand.
-However, there could be scenarios where you don't know the exact time of accidental deletion or corruption. Scenarios [4] and [5] demonstrate how to _discover_ the restore timestamp using the new event feed APIs on the restorable database or containers.
+However, there could be scenarios where you don't know the exact time of accidental deletion or corruption. Scenarios [4] and [5] demonstrate how to *discover* the restore timestamp using the new event feed APIs on the restorable database or containers.
:::image type="content" source="./media/continuous-backup-restore-introduction/restorable-account-scenario.png" alt-text="Life-cycle events with timestamps for a restorable account." lightbox="./media/continuous-backup-restore-introduction/restorable-account-scenario.png" border="false":::
Azure Cosmos DB allows you to isolate and restrict the restore permissions for c
## <a id="continuous-backup-pricing"></a>Pricing
-Azure Cosmos DB accounts that have continuous backup enabled will incur an additional monthly charge to *store the backup* and to *restore your data*. The restore cost is added every time the restore operation is initiated. If you configure an account with continuous backup but don't restore the data, only backup storage cost is included in your bill.
+Azure Cosmos DB accounts that have continuous 30-day backup enabled will incur an extra monthly charge to *store the backup*. Both the 30-day and 7-day tiers of continuous backup incur charges to *restore your data*. The restore cost is added every time the restore operation is initiated. If you configure an account with continuous backup but don't restore the data, only backup storage cost is included in your bill.
-The following example is based on the price for an Azure Cosmos account deployed in West US. The pricing and calculation can vary depending on the region you are using, see the [Azure Cosmos DB pricing page](https://azure.microsoft.com/pricing/details/cosmos-db/) for latest pricing information.
+The following example is based on the price for an Azure Cosmos account deployed in West US. The pricing and calculation can vary depending on the region you're using; see the [Azure Cosmos DB pricing page](https://azure.microsoft.com/pricing/details/cosmos-db/) for the latest pricing information.
-* All accounts enabled with continuous backup policy incur an additional monthly charge for backup storage that is calculated as follows:
+* All accounts enabled with the continuous 30-day backup policy incur a monthly charge for backup storage that is calculated as follows:
- $0.20/GB * Data size in GB in account * Number of regions
+ $0.20/GB \* Data size in GB in account \* Number of regions
-* Every restore API invocation incurs a one time charge. The charge is a function of the amount of data restore and it is calculated as follows:
+* Every restore API invocation incurs a one-time charge. The charge is a function of the amount of data restored and it's calculated as follows:
- $0.15/GB * Data size in GB.
+ $0.15/GB \* Data size in GB.
For example, if you have 1 TB of data in two regions then:
-* Backup storage cost is calculated as (1000 * 0.20 * 2) = $400 per month
+* Backup storage cost is calculated as (1000 \* 0.20 \* 2) = $400 per month
-* Restore cost is calculated as (1000 * 0.15) = $150 per restore
+* Restore cost is calculated as (1000 \* 0.15) = $150 per restore
> [!TIP]
-> For more information about measuring the current data usage of your Azure Cosmos DB account, see [Explore Azure Monitor Cosmos DB insights](../azure-monitor/insights/cosmosdb-insights-overview.md#view-utilization-and-performance-metrics-for-azure-cosmos-db).
+> For more information about measuring the current data usage of your Azure Cosmos DB account, see [Explore Azure Monitor Cosmos DB insights](../azure-monitor/insights/cosmosdb-insights-overview.md#view-utilization-and-performance-metrics-for-azure-cosmos-db). The continuous 7-day tier does not incur charges for backup of the data.
+
+## Continuous 30-day tier vs Continuous 7-day tier
+
+* The retention period is 30 days for one tier and 7 days for the other.
+* The 30-day retention tier is charged for backup storage; the 7-day retention tier isn't.
+* Restore is always charged in either tier.
## Customer-managed keys

See [How do customer-managed keys affect continuous backups?](./how-to-setup-cmk.md#how-do-customer-managed-keys-affect-continuous-backups) to learn:

-- How to configure your Azure Cosmos DB account when using customer-managed keys in conjunction with continuous backups.
-- How do customer-managed keys affect restores?
+* How to configure your Azure Cosmos DB account when using customer-managed keys with continuous backups.
+* How do customer-managed keys affect restores?
## Current limitations

Currently the point in time restore functionality has the following limitations:
-* Azure Cosmos DB APIs for SQL and MongoDB are supported for continuous backup. Cassandra API is not supported at present
+* Azure Cosmos DB APIs for SQL and MongoDB are supported for continuous backup. Cassandra API isn't currently supported.
* Table API and Gremlin API are in preview and supported via PowerShell and Azure CLI.
-* Multi-regions write accounts are not supported.
+* Multi-region write accounts aren't supported.
-* Azure Synapse Link and periodic backup mode can coexist in the same database account. However, analytical store data isn't included in backups and restores. When Synapse Link is enabled, Azure Cosmos DB will continue to automatically take backups of your data in the transactional store at a scheduled backup interval.
+* Azure Synapse Link and periodic backup mode can coexist in the same database account. However, analytical store data isn't included in backups and restores. When Azure Synapse Link is enabled, Azure Cosmos DB will continue to automatically take backups of your data in the transactional store at a scheduled backup interval.
-* Azure Synapse Link and continuous backup mode can't coexist in the same database account. Currently database accounts with Synapse Link enabled can't use continuous backup mode and vice-versa.
+* Azure Synapse Link and continuous backup mode can't coexist in the same database account. Currently database accounts with Azure Synapse Link enabled can't use continuous backup mode and vice-versa.
-* The restored account is created in the same region where your source account exists. You can't restore an account into a region where the source account did not exist.
+* The restored account is created in the same region where your source account exists. You can't restore an account into a region where the source account didn't exist.
-* The restore window is only 30 days and it cannot be changed.
+* The restore window is 30 days for the continuous 30-day tier and 7 days for the continuous 7-day tier. Neither window can be changed.
-* The backups are not automatically geo-disaster resistant. You have to explicitly add another region to have resiliency for the account and the backup.
+* The backups aren't automatically geo-disaster resistant. You have to explicitly add another region to have resiliency for the account and the backup.
-* While a restore is in progress, don't modify or delete the Identity and Access Management (IAM) policies that grant the permissions for the account or change any VNET, firewall configuration.
+* While a restore is in progress, don't modify or delete the Identity and Access Management (IAM) policies that grant the permissions for the account, and don't change any virtual network or firewall configuration.
-* Azure Cosmos DB API for SQL or MongoDB accounts that create unique index after the container is created are not supported for continuous backup. Only containers that create unique index as a part of the initial container creation are supported. For MongoDB accounts, you create unique index using [extension commands](mongodb/custom-commands.md).
+* Azure Cosmos DB API for SQL or MongoDB accounts that create a unique index after the container is created aren't supported for continuous backup. Only containers that create a unique index as a part of the initial container creation are supported. For MongoDB accounts, you create unique indexes by using [extension commands](mongodb/custom-commands.md).
-* The point-in-time restore functionality always restores to a new Azure Cosmos account. Restoring to an existing account is currently not supported. If you are interested in providing feedback about in-place restore, contact the Azure Cosmos DB team via your account representative.
+* The point-in-time restore functionality always restores to a new Azure Cosmos account. Restoring to an existing account is currently not supported. If you're interested in providing feedback about in-place restore, contact the Azure Cosmos DB team via your account representative.
-* After restoring, it is possible that for certain collections the consistent index may be rebuilding. You can check the status of the rebuild operation via the [IndexTransformationProgress](how-to-manage-indexing-policy.md) property.
+* After restoring, it's possible that for certain collections the consistent index may be rebuilding. You can check the status of the rebuild operation via the [IndexTransformationProgress](how-to-manage-indexing-policy.md) property.
-* The restore process restores all the properties of a container including its TTL configuration. As a result, it is possible that the data restored is deleted immediately if you configured that way. In order to prevent this situation, the restore timestamp must be before the TTL properties were added into the container.
+* The restore process restores all the properties of a container, including its TTL configuration. As a result, it's possible that the restored data is deleted immediately if you configured it that way. To prevent this situation, the restore timestamp must be before the TTL properties were added to the container.
-* Unique indexes in API for MongoDB can't be added or updated when you create a continuous backup mode account or migrate an account from periodic to continuous mode.
+* Unique indexes in API for MongoDB can't be added or updated when you create a continuous backup mode account. They also can't be modified when you migrate an account from periodic to continuous mode.
-* Continuous mode restore may not restore throughput setting valid as of restore point.
+* A continuous mode restore may not restore the throughput setting that was valid as of the restore point.
## Next steps
-* Provision continuous backup using [Azure portal](provision-account-continuous-backup.md#provision-portal), [PowerShell](provision-account-continuous-backup.md#provision-powershell), [CLI](provision-account-continuous-backup.md#provision-cli), or [Azure Resource Manager](provision-account-continuous-backup.md#provision-arm-template).
+* Enable continuous backup using [Azure portal](provision-account-continuous-backup.md#provision-portal), [PowerShell](provision-account-continuous-backup.md#provision-powershell), [CLI](provision-account-continuous-backup.md#provision-cli), or [Azure Resource Manager](provision-account-continuous-backup.md#provision-arm-template).
* [Get the latest restorable timestamp](get-latest-restore-timestamp.md) for SQL and MongoDB accounts.
* Restore continuous backup account using [Azure portal](restore-account-continuous-backup.md#restore-account-portal), [PowerShell](restore-account-continuous-backup.md#restore-account-powershell), [CLI](restore-account-continuous-backup.md#restore-account-cli), or [Azure Resource Manager](restore-account-continuous-backup.md#restore-arm-template).
* [Migrate an account from periodic backup to continuous backup](migrate-continuous-backup.md).
* [Manage permissions](continuous-backup-restore-permissions.md) required to restore data with continuous backup mode.
-* [Resource model of continuous backup mode](continuous-backup-restore-resource-model.md)
+* [Resource model of continuous backup mode](continuous-backup-restore-resource-model.md)
cosmos-db Continuous Backup Restore Resource Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/continuous-backup-restore-resource-model.md
Previously updated : 03/02/2022
Last updated : 06/28/2022

# Resource model for the Azure Cosmos DB point-in-time restore feature

[!INCLUDE[appliesto-all-apis-except-cassandra](includes/appliesto-all-apis-except-cassandra.md)]

This article explains the resource model for the Azure Cosmos DB point-in-time restore feature. It explains the parameters that support the continuous backup and resources that can be restored. This feature is supported in Azure Cosmos DB API for SQL and the Cosmos DB API for MongoDB. Currently, this feature is in preview for Azure Cosmos DB Gremlin API and Table API accounts.
The database account's resource model is updated with a few extra properties to
### BackupPolicy
-A new property in the account level backup policy named `Type` under `backuppolicy` parameter enables continuous backup and point-in-time restore functionalities. This mode is called **continuous backup**. You can set this mode when creating the account or while [migrating an account from periodic to continuous mode](migrate-continuous-backup.md). After continuous mode is enabled, all the containers and databases created within this account will have continuous backup and point-in-time restore functionalities enabled by default.
+A new property in the account-level backup policy named ``Type`` under the ``backuppolicy`` parameter enables continuous backup and point-in-time restore. This mode is referred to as **continuous backup**. You can set this mode when creating the account or while [migrating an account from periodic to continuous mode](migrate-continuous-backup.md). After continuous mode is enabled, all the containers and databases created within this account will have point-in-time restore and continuous backup enabled by default. The continuous backup tier can be set to ``Continuous7Days`` or ``Continuous30Days``. By default, if no tier is provided, ``Continuous30Days`` is applied on the account.
> [!NOTE]
-> Currently the point-in-time restore feature is available for Azure Cosmos DB API for MongoDB and SQL accounts. After you create an account with continuous mode you can't switch it to a periodic mode.
+> Currently the point-in-time restore feature is available for Azure Cosmos DB API for MongoDB and SQL API accounts. It's also available for Table API and Gremlin API in preview. After you create an account with continuous mode, you can't switch it to periodic mode. The ``Continuous7Days`` tier is in preview.
### CreateMode
This property indicates how the account was created. The possible values are *De
The `RestoreParameters` resource contains the restore operation details, including the account ID, the time to restore, and the resources that need to be restored.
-|Property Name |Description |
-|||
-|restoreMode | The restore mode should be *PointInTime* |
-|restoreSource | The instanceId of the source account from which the restore will be initiated. |
-|restoreTimestampInUtc | Point in time in UTC to restore the account. |
-|databasesToRestore | List of `DatabaseRestoreResource` objects to specify which databases and containers should be restored. Each resource represents a single database and all the collections under that database, see the [restorable SQL resources](#restorable-sql-resources) section for more details. If this value is empty, then the entire account is restored. |
-|gremlinDatabasesToRestore | List of `GremlinDatabaseRestoreResource` objects to specify which databases and graphs should be restored. Each resource represents a single database and all the graphs under that database. See the [restorable Gremlin resources](#restorable-graph-resources) section for more details. If this value is empty, then the entire account is restored. |
-|tablesToRestore | List of `TableRestoreResource` objects to specify which tables should be restored. Each resource represents a table under that database, see the [restorable Table resources](#restorable-table-resources) section for more details. If this value is empty, then the entire account is restored. |
+| Property Name | Description |
+| | |
+| ``restoreMode`` | The restore mode should be ``PointInTime``. |
+| ``restoreSource`` | The instanceId of the source account from which the restore will be initiated. |
+| ``restoreTimestampInUtc`` | Point in time in UTC to restore the account. |
+| ``databasesToRestore`` | List of `DatabaseRestoreResource` objects to specify which databases and containers should be restored. Each resource represents a single database and all the collections under that database. For more information, see [restorable SQL resources](#restorable-sql-resources). If this value is empty, then the entire account is restored. |
+| ``gremlinDatabasesToRestore`` | List of `GremlinDatabaseRestoreResource` objects to specify which databases and graphs should be restored. Each resource represents a single database and all the graphs under that database. For more information, see [restorable Gremlin resources](#restorable-graph-resources). If this value is empty, then the entire account is restored. |
+| ``tablesToRestore`` | List of `TableRestoreResource` objects to specify which tables should be restored. Each resource represents a table under that database. For more information, see [restorable Table resources](#restorable-table-resources). If this value is empty, then the entire account is restored. |
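+
+For illustration, here's a minimal Azure CLI sketch of a point-in-time restore that maps to these parameters. The account names, resource group, timestamp, and location are placeholders; because no databases to restore are specified, the entire account is restored:
+
+```azurecli-interactive
+# Restore a source account into a new target account at a UTC point in time.
+# All names and the timestamp below are placeholders.
+az cosmosdb restore \
+    --resource-group "my-rg" \
+    --account-name "my-source-account" \
+    --target-database-account-name "my-restored-account" \
+    --restore-timestamp "2022-06-01T12:00:00Z" \
+    --location "westus"
+```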
### Sample resource
The following JSON is a sample database account resource with continuous backup
}, "backupPolicy": { "type": "Continuous"
+ ....
} } } ``` - ## Restorable resources
-A set of new resources and APIs is available to help you discover critical information about resources, which can be restored, locations where they can be restored from, and the timestamps when key operations were performed on these resources.
+A set of new resources and APIs is available to help you discover critical information about resources, which includes:
+
+* The resources that can be restored
+* Locations where the resources can be restored from
+* Timestamps when key operations were performed on these resources.
> [!NOTE]
> All the APIs used to enumerate these resources require the following permissions:
+>
> * `Microsoft.DocumentDB/locations/restorableDatabaseAccounts/*/read` > * `Microsoft.DocumentDB/locations/restorableDatabaseAccounts/read`
+>
### Restorable database account

This resource contains a database account instance that can be restored. The database account can either be a deleted or a live account. It contains information that allows you to find the source database account that you want to restore.
-|Property Name |Description |
-|||
-| ID | The unique identifier of the resource. |
-| accountName | The global database account name. |
-| creationTime | The time in UTC when the account was created or migrated. |
-| deletionTime | The time in UTC when the account was deleted. This value is empty if the account is live. |
-| apiType | The API type of the Azure Cosmos DB account. |
-| restorableLocations | The list of locations where the account existed. |
-| restorableLocations: locationName | The region name of the regional account. |
-| restorableLocations: regionalDatabaseAccountInstanceId | The GUID of the regional account. |
-| restorableLocations: creationTime | The time in UTC when the regional account was created r migrated.|
-| restorableLocations: deletionTime | The time in UTC when the regional account was deleted. This value is empty if the regional account is live.|
+| Property Name | Description |
+| | |
+| ``ID`` | The unique identifier of the resource. |
+| ``accountName`` | The global database account name. |
+| ``creationTime`` | The time in UTC when the account was created or migrated. |
+| ``deletionTime`` | The time in UTC when the account was deleted. This value is empty if the account is live. |
+| ``apiType`` | The API type of the Azure Cosmos DB account. |
+| ``restorableLocations`` | The list of locations where the account existed. |
+| ``restorableLocations: locationName`` | The region name of the regional account. |
+| ``restorableLocations: regionalDatabaseAccountInstanceId`` | The GUID of the regional account. |
+| ``restorableLocations: creationTime`` | The time in UTC when the regional account was created or migrated.|
+| ``restorableLocations: deletionTime`` | The time in UTC when the regional account was deleted. This value is empty if the regional account is live.|
+| ``OldestRestorableTimeStamp`` | The earliest time in UTC to which a restore can be performed. For the 30-day tier, this time can be up to 30 days in the past; for the 7-day tier, it can be up to 7 days in the past. |
To get a list of all restorable accounts, see the [Restorable Database Accounts - List](/rest/api/cosmos-db-resource-provider/2021-04-01-preview/restorable-database-accounts/list) or [Restorable Database Accounts - List By Location](/rest/api/cosmos-db-resource-provider/2021-04-01-preview/restorable-database-accounts/list-by-location) articles.
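
Alternatively, a minimal Azure CLI sketch for enumerating restorable accounts; the output fields mirror the properties in the table above, and the region name is a placeholder:

```azurecli-interactive
# List all restorable Azure Cosmos DB accounts in the subscription.
az cosmosdb restorable-database-account list

# Or scope the list to a single region.
az cosmosdb restorable-database-account list --location "westus"
```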
To get a list of all restorable accounts, see [Restorable Database Accounts - li
Each resource contains information about a mutation event, such as creation or deletion, that occurred on the SQL database. This information can help in scenarios where the database was accidentally deleted and you need to find out when that event happened.
-|Property Name |Description |
-|||
-| eventTimestamp | The time in UTC when the database is created or deleted. |
-| ownerId | The name of the SQL database. |
-| ownerResourceId | The resource ID of the SQL database|
-| operationType | The operation type of this database event. Here are the possible values:<br/><ul><li>Create: database creation event</li><li>Delete: database deletion event</li><li>Replace: database modification event</li><li>SystemOperation: database modification event triggered by the system. This event isn't initiated by the user</li></ul> |
-| database |The properties of the SQL database at the time of the event|
+| Property Name | Description |
+| | |
+| ``eventTimestamp`` | The time in UTC when the database is created or deleted. |
+| ``ownerId`` | The name of the SQL database. |
+| ``ownerResourceId`` | The resource ID of the SQL database. |
+| ``operationType`` | The operation type of this database event. |
+| ``database`` | The properties of the SQL database at the time of the event. |
+
+> [!NOTE]
+> Possible values for ``operationType`` include:
+>
+> * ``Create``: database creation event
+> * ``Delete``: database deletion event
+> * ``Replace``: database modification event
+> * ``SystemOperation``: database modification event triggered by the system. This event isn't initiated by the user.
+>
To get a list of all database mutations, see the [Restorable Sql Databases - List](/rest/api/cosmos-db-resource-provider/2021-04-01-preview/restorable-sql-databases/list) article.
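
The same event feed can also be enumerated with the Azure CLI; a minimal sketch, where the instance ID and location are placeholders for the restorable account's values:

```azurecli-interactive
# List database mutation events for a restorable account.
# The instance ID and location are placeholders.
az cosmosdb sql restorable-database list \
    --instance-id "00000000-0000-0000-0000-000000000000" \
    --location "westus"
```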
To get a list of all database mutations, see [Restorable Sql Databases - List](/
Each resource contains information about a mutation event, such as creation or deletion, that occurred on the SQL container. This information can help in scenarios where the container was modified or deleted and you need to find out when that event happened.
-|Property Name |Description |
-|||
-| eventTimestamp | The time in UTC when this container event happened.|
-| ownerId| The name of the SQL container.|
-| ownerResourceId | The resource ID of the SQL container.|
-| operationType | The operation type of this container event. Here are the possible values: <br/><ul><li>Create: container creation event</li><li>Delete: container deletion event</li><li>Replace: container modification event</li><li>SystemOperation: container modification event triggered by the system. This event isn't initiated by the user</li></ul> |
-| container | The properties of the SQL container at the time of the event.|
+| Property Name | Description |
+| | |
+| ``eventTimestamp`` | The time in UTC when this container event happened. |
+| ``ownerId`` | The name of the SQL container. |
+| ``ownerResourceId`` | The resource ID of the SQL container.|
+| ``operationType`` | The operation type of this container event. |
+| ``container`` | The properties of the SQL container at the time of the event. |
+
+> [!NOTE]
+> Possible values for ``operationType`` include:
+>
+> * ``Create``: container creation event
+> * ``Delete``: container deletion event
+> * ``Replace``: container modification event
+> * ``SystemOperation``: container modification event triggered by the system. This event isn't initiated by the user.
+>
To get a list of all container mutations under the same database, see the [Restorable Sql Containers - List](/rest/api/cosmos-db-resource-provider/2021-04-01-preview/restorable-sql-containers/list) article.
Each resource represents a single database and all the containers under that dat
|Property Name |Description |
|||
-| databaseName | The name of the SQL database.
-| collectionNames | The list of SQL containers under this database.|
+| ``databaseName`` | The name of the SQL database. |
+| ``collectionNames`` | The list of SQL containers under this database.|
To get a list of SQL database and container combinations that exist on the account at the given timestamp and location, see the [Restorable Sql Resources - List](/rest/api/cosmos-db-resource-provider/2021-04-01-preview/restorable-sql-resources/list) article.
To get a list of SQL database and container combo that exist on the account at t
Each resource contains information about a mutation event, such as creation or deletion, that occurred on the MongoDB database. This information can help in the scenario where the database was accidentally deleted and the user needs to find out when that event happened.
-|Property Name |Description |
-|||
-|eventTimestamp| The time in UTC when this database event happened.|
-| ownerId| The name of the MongoDB database. |
-| ownerResourceId | The resource ID of the MongoDB database. |
-| operationType | The operation type of this database event. Here are the possible values:<br/><ul><li> Create: database creation event</li><li> Delete: database deletion event</li><li> Replace: database modification event</li><li> SystemOperation: database modification event triggered by the system. This event isn't initiated by the user </li></ul> |
+| Property Name | Description |
+| | |
+| ``eventTimestamp`` | The time in UTC when this database event happened. |
+| ``ownerId`` | The name of the MongoDB database. |
+| ``ownerResourceId`` | The resource ID of the MongoDB database. |
+| ``operationType`` | The operation type of this database event. |
+
+> [!NOTE]
+> Possible values for ``operationType`` include:
+>
+> * ``Create``: database creation event
+> * ``Delete``: database deletion event
+> * ``Replace``: database modification event
+> * ``SystemOperation``: database modification event triggered by the system. This event isn't initiated by the user.
+>
To get a list of all database mutations, see the [Restorable Mongodb Databases - List](/rest/api/cosmos-db-resource-provider/2021-04-01-preview/restorable-mongodb-databases/list) article.
To get a list of all database mutation, see [Restorable Mongodb Databases - List
Each resource contains information about a mutation event, such as creation or deletion, that occurred on the MongoDB collection. This information can help in scenarios where the collection was modified or deleted and the user needs to find out when that event happened.
-|Property Name |Description |
-|||
-| eventTimestamp |The time in UTC when this collection event happened. |
-| ownerId| The name of the MongoDB collection. |
-| ownerResourceId | The resource ID of the MongoDB collection. |
-| operationType |The operation type of this collection event. Here are the possible values:<br/><ul><li>Create: collection creation event</li><li>Delete: collection deletion event</li><li>Replace: collection modification event</li><li>SystemOperation: collection modification event triggered by the system. This event isn't initiated by the user</li></ul> |
+| Property Name | Description |
+| | |
+| ``eventTimestamp`` | The time in UTC when this collection event happened. |
+| ``ownerId`` | The name of the MongoDB collection. |
+| ``ownerResourceId`` | The resource ID of the MongoDB collection. |
+| ``operationType`` | The operation type of this collection event. |
+
+> [!NOTE]
+> Possible values for ``operationType`` include:
+>
+> * ``Create``: collection creation event
+> * ``Delete``: collection deletion event
+> * ``Replace``: collection modification event
+> * ``SystemOperation``: collection modification event triggered by the system. This event isn't initiated by the user.
+>
-To get a list of all container mutations under the same database see [Restorable Mongodb Collections - List](/rest/api/cosmos-db-resource-provider/2021-04-01-preview/restorable-mongodb-collections/list) article.
+To get a list of all container mutations under the same database, see [restorable MongoDB resources - list](/rest/api/cosmos-db-resource-provider/2021-04-01-preview/restorable-mongodb-collections/list).
### Restorable MongoDB resources

Each resource represents a single database and all the collections under that database.
-|Property Name |Description |
-|||
-| databaseName |The name of the MongoDB database. |
-| collectionNames | The list of MongoDB collections under this database. |
+| Property Name | Description |
+| | |
+| ``databaseName`` |The name of the MongoDB database. |
+| ``collectionNames`` | The list of MongoDB collections under this database. |
-To get a list of all MongoDB database and collection combinations that exist on the account at the given timestamp and location, see [Restorable Mongodb Resources - List](/rest/api/cosmos-db-resource-provider/2021-04-01-preview/restorable-mongodb-resources/list) article.
+To get a list of all MongoDB database and collection combinations that exist on the account at the given timestamp and location, see [restorable MongoDB resources - list](/rest/api/cosmos-db-resource-provider/2021-04-01-preview/restorable-mongodb-resources/list).
### Restorable Graph resources
-Each resource represents a single database and all the graphs under that database.
+Each resource represents a single database and all the graphs under that database.
-|Property Name |Description |
-|||
-| gremlinDatabaseName | The name of the Graph database. |
-| graphNames | The list of Graphs under this database. |
+| Property Name | Description |
+| | |
+| ``gremlinDatabaseName`` | The name of the Graph database. |
+| ``graphNames`` | The list of Graphs under this database. |
To get a list of all Gremlin database and graph combinations that exist on the account at the given timestamp and location, see the [Restorable Graph Resources - List](/rest/api/cosmos-db-resource-provider/2021-11-15-preview/restorable-gremlin-resources/list) article.
-### Restorable Graph database
+### Restorable Graph database
-Each resource contains information about a mutation event, such as a creation and deletion, that occurred on the Graph database. This information can help in the scenario where the database was accidentally deleted and user needs to find out when that event happened.
+Each resource contains information about a mutation event, such as a creation or deletion, that occurred on the Graph database. This information can help in the scenario where the database was accidentally deleted and the user needs to find out when that event happened.
-|Property Name |Description |
-|||
-|eventTimestamp| The time in UTC when this database event happened.|
-| ownerId| The name of the Graph database. |
-| ownerResourceId | The resource ID of the Graph database. |
-| operationType | The operation type of this database event. Here are the possible values:<br/><ul><li> Create: database creation event</li><li> Delete: database deletion event</li><li> Replace: database modification event</li><li> SystemOperation: database modification event triggered by the system. This event isn't initiated by the user. </li></ul> |
+| Property Name | Description |
+| | |
+| ``eventTimestamp`` | The time in UTC when this database event happened. |
+| ``ownerId`` | The name of the Graph database. |
+| ``ownerResourceId`` | The resource ID of the Graph database. |
+| ``operationType`` | The operation type of this database event. |
-To get an event feed of all mutations on the Gremlin database for the account, see theΓÇ»[Restorable Graph Databases - List]( /rest/api/cosmos-db-resource-provider/2021-11-15-preview/restorable-gremlin-databases/list) article.
+> [!NOTE]
+> Possible values for ``operationType`` include:
+>
+> * ``Create``: database creation event
+> * ``Delete``: database deletion event
+> * ``Replace``: database modification event
+> * ``SystemOperation``: database modification event triggered by the system. This event isn't initiated by the user.
+>
-### Restorable Graphs
+To get an event feed of all mutations on the Gremlin database, see [restorable graph databases - list](/rest/api/cosmos-db-resource-provider/2021-11-15-preview/restorable-gremlin-databases/list).
-Each resource contains information of a mutation event such as creation and deletion that occurred on the Graph. This information can help in scenarios where the graph was modified or deleted, and if you need to find out when that event happened.
+### Restorable Graphs
-|Property Name |Description |
-|||
-| eventTimestamp |The time in UTC when this collection event happened. |
-| ownerId| The name of the Graph collection. |
-| ownerResourceId | The resource ID of the Graph collection. |
-| operationType |The operation type of this collection event. Here are the possible values:<br/><ul><li>Create: Graph creation event</li><li>Delete: Graph deletion event</li><li>Replace: Graph modification event</li><li>SystemOperation: collection modification event triggered by the system. This event isn't initiated by the user.</li></ul> |
+Each resource contains information about a mutation event, such as creation or deletion, that occurred on the Graph. This information can help in scenarios where the graph was modified or deleted and you need to find out when that event happened.
+
+| Property Name | Description |
+| | |
+| ``eventTimestamp`` | The time in UTC when this collection event happened. |
+| ``ownerId`` | The name of the Graph collection. |
+| ``ownerResourceId`` | The resource ID of the Graph collection. |
+| ``operationType`` | The operation type of this collection event. |
+
+> [!NOTE]
+> Possible values for ``operationType`` include:
+>
+> * ``Create``: Graph creation event
+> * ``Delete``: Graph deletion event
+> * ``Replace``: Graph modification event
+> * ``SystemOperation``: collection modification event triggered by the system. This event isn't initiated by the user.
+>
To get a list of all container mutations under the same database, see the [Restorable Graphs - List](/rest/api/cosmos-db-resource-provider/2021-11-15-preview/restorable-gremlin-graphs/list) article.
-### Restorable Table resources
+### Restorable Table resources
Lists all the restorable Azure Cosmos DB Tables available for a specific database account at a given time and location. Note the Table API doesn't specify an explicit database.
-|Property Name |Description |
-|||
-| TableNames | The list of Table containers under this account. |
+| Property Name | Description |
+| | |
+| ``TableNames`` | The list of Table containers under this account. |
-To get a list of tables that exist on the account at the given timestamp and location, see [Restorable Table Resources - List](/rest/api/cosmos-db-resource-provider/2021-11-15-preview/restorable-table-resources/list) article.
+To get a list of tables that exist on the account at the given timestamp and location, see the [Restorable Table Resources - List](/rest/api/cosmos-db-resource-provider/2021-11-15-preview/restorable-table-resources/list) article.
### Restorable Table
-Each resource contains information of a mutation event such as creation and deletion that occurred on the Table. This information can help in scenarios where the table was modified or deleted, and if you need to find out when that event happened.
+Each resource contains information about a mutation event, such as creation or deletion, that occurred on the Table. This information can help in scenarios where the table was modified or deleted and you need to find out when that event happened.
-|Property Name |Description |
-|||
-|eventTimestamp| The time in UTC when this database event happened.|
-| ownerId| The name of the Table database. |
-| ownerResourceId | The resource ID of the Table resource. |
-| operationType | The operation type of this Table event. Here are the possible values:<br/><ul><li> Create: Table creation event</li><li> Delete: Table deletion event</li><li> Replace: Table modification event</li><li> SystemOperation: database modification event triggered by the system. This event isn't initiated by the user </li></ul> |
-
-To get a list of all table mutations under the same database, see [Restorable Table - List](/rest/api/cosmos-db-resource-provider/2021-11-15-preview/restorable-tables/list) article.
+| Property Name | Description |
+| | |
+| ``eventTimestamp`` | The time in UTC when this database event happened. |
+| ``ownerId`` | The name of the Table database. |
+| ``ownerResourceId`` | The resource ID of the Table resource. |
+| ``operationType`` | The operation type of this Table event. |
+> [!NOTE]
+> Possible values for ``operationType`` include:
+>
+> * ``Create``: Table creation event
+> * ``Delete``: Table deletion event
+> * ``Replace``: Table modification event
+> * ``SystemOperation``: database modification event triggered by the system. This event isn't initiated by the user.
+>
+
+To get a list of all table mutations under the same database, see the [Restorable Table - List](/rest/api/cosmos-db-resource-provider/2021-11-15-preview/restorable-tables/list) article.
## Next steps
cosmos-db Migrate Continuous Backup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/migrate-continuous-backup.md
Previously updated : 04/08/2022
Last updated : 06/28/2022

# Migrate an Azure Cosmos DB account from periodic to continuous backup mode

[!INCLUDE[appliesto-all-apis-except-cassandra](includes/appliesto-all-apis-except-cassandra.md)]

Azure Cosmos DB accounts with periodic mode backup policy can be migrated to continuous mode using [Azure portal](#portal), [CLI](#cli), [PowerShell](#powershell), or [Resource Manager templates](#ARM-template). Migration from periodic to continuous mode is a one-way migration and it's not reversible. After migrating from periodic to continuous mode, you can apply the benefits of continuous mode.
Azure Cosmos DB accounts with periodic mode backup policy can be migrated to con
The following are the key reasons to migrate into continuous mode:

* The ability to do self-service restore using Azure portal, CLI, or PowerShell.
-* The ability to restore at time granularity of the second within the last 30-day window.
+* The ability to restore at time granularity of a second within the last 30-day or 7-day window.
* The ability to make sure that the backup is consistent across shards or partition key ranges within a period.
* The ability to restore container, database, or the full account when it's deleted or modified.
* The ability to choose the events on the container, database, or account and decide when to initiate the restore.
+> [!IMPORTANT]
+> Support for 7-day continuous backup in both provisioning and migration scenarios is still in preview. Use PowerShell or Azure CLI to migrate or provision an account with continuous backup configured at the 7-day tier.
+
> [!NOTE]
> The migration capability is one-way only and it's an irreversible action. This means that once you migrate from periodic mode to continuous mode, you can't switch back to periodic mode.
>
To perform the migration, you need `Microsoft.DocumentDB/databaseAccounts/write`
## Pricing after migration
-After you migrate your account to continuous backup mode, the cost with this mode is different when compared to the periodic backup mode. The continuous mode backup cost can vary from periodic mode. To learn more, see [continuous backup mode pricing](continuous-backup-restore-introduction.md#continuous-backup-pricing).
+After you migrate your account to continuous backup mode, the costs change when compared to the periodic backup mode. The tier choice of 30 days versus 7 days also influences the cost of the backup. To learn more, see [continuous backup mode pricing](continuous-backup-restore-introduction.md#continuous-backup-pricing).
## <a id="portal"></a> Migrate using portal
Use the following steps to migrate your account from periodic backup to continuo
## <a id="powershell"></a>Migrate using PowerShell
-Install the [latest version of Azure PowerShell](/powershell/azure/install-az-ps?view=azps-6.2.1&preserve-view=true) or version higher than 6.2.0. Next, run the following steps:
+1. Install the [latest version of Azure PowerShell](/powershell/azure/install-az-ps?view=azps-6.2.1&preserve-view=true) or any version higher than 6.2.0.
+1. To use the ``Continuous7Days`` tier for provisioning or migrating, you'll have to use a preview version of the ``Az.CosmosDB`` module. Use ``Install-Module -Name Az.CosmosDB -AllowPrerelease``.
+1. Next, run the following steps:
-1. Connect to your Azure account:
+ 1. Connect to your Azure account:
- ```azurepowershell-interactive
- Connect-AzAccount
- ```
+ ```azurepowershell-interactive
+ Connect-AzAccount
+ ```
-1. Migrate your account from periodic to continuous backup mode:
+ 1. Migrate your account from periodic to continuous backup mode with the ``Continuous30Days`` or ``Continuous7Days`` tier. If a tier value isn't provided, ``Continuous30Days`` is assumed:
- ```azurepowershell-interactive
- Update-AzCosmosDBAccount `
- -ResourceGroupName "myrg" `
- -Name "myAccount" `
- -BackupPolicyType Continuous
- ```
+ ```azurepowershell-interactive
+ Update-AzCosmosDBAccount `
+ -ResourceGroupName "myrg" `
+ -Name "myAccount" `
+ -BackupPolicyType "Continuous"
+ ```
+
+ ```azurepowershell-interactive
+ Update-AzCosmosDBAccount `
+ -ResourceGroupName "myrg" `
+ -Name "myAccount" `
+ -BackupPolicyType "Continuous" `
+ -ContinuousTier "Continuous7Days"
+ ```
## <a id="cli"></a>Migrate using CLI 1. Install the latest version of Azure CLI:
- * If you donΓÇÖt have CLI, [install](/cli/azure/) the latest version of Azure CLI or version higher than 2.26.0.
- * If you already have Azure CLI installed, use `az upgrade` command to upgrade to the latest version.
- * Alternatively, user can also use Cloud Shell from Azure portal.
+ * If you don't have the Azure CLI already installed, see [install Azure CLI](/cli/azure/). Install the latest version of Azure CLI or any version higher than 2.26.0.
+ * If you already have Azure CLI installed, use the ``az upgrade`` command to upgrade to the latest version. Alternatively, you can also use the Azure Cloud Shell from the Azure portal.
+ * To use the ``Continuous7Days`` tier for provisioning or migrating, you'll have to use a preview version of the ``cosmosdb-preview`` extension. Use ``az extension update --name cosmosdb-preview`` to manage the extension.
1. Sign in to your Azure account and run the following command to migrate your account to continuous mode:

   ```azurecli-interactive
   az login
+ ```
+
+1. Migrate the account to the ``Continuous30Days`` or ``Continuous7Days`` tier. If a tier value isn't provided, ``Continuous30Days`` is assumed:
+ ```azurecli-interactive
   az cosmosdb update -n <myaccount> -g <myresourcegroup> --backup-policy-type continuous
   ```
-1. After the migration completes successfully, the output shows the backupPolicy object has the type property set to Continuous.
+ ```azurecli-interactive
+ az cosmosdb update -g "my-rg" -n "my-continuous-backup-account" --backup-policy-type "Continuous" --continuous-tier "Continuous7Days"
+ ```
+
+1. After the migration completes successfully, the output shows the ``backupPolicy`` object, which includes the ``type`` property with a value of ``Continuous``.
```console
{
  "apiProperties": null,
  "backupPolicy": {
- "type": "Continuous"
- },
- "capabilities": [],
- "connectorOffer": null,
- "consistencyPolicy": {
- "defaultConsistencyLevel": "Session",
- "maxIntervalInSeconds": 5,
- "maxStalenessPrefix": 100
+ "continuousModeProperties": {
+ "tier": "Continuous7Days"
+ },
+ "migrationState": null,
+ "type": "Continuous"
},
- …
+ …
}
```

### Check the migration status
-Run the following command and check the **status**, **targetType** properties of the **backupPolicy** object. The status shows in-progress after the migration starts:
+Run the following command and check the **status** and **targetType** properties of the **backupPolicy** object. The status shows *in-progress* after the migration starts:
```azurecli-interactive
az cosmosdb show -n "myAccount" -g "myrg"
az cosmosdb show -n "myAccount" -g "myrg"
:::image type="content" source="./media/migrate-continuous-backup/migration-status-started-powershell.png" alt-text="Check the migration status using PowerShell command":::
-When the migration is complete, backup type changes to **Continuous**. Run the same command again to check the status:
+When the migration is complete, the backup type changes to **Continuous** and shows the chosen tier. If a tier wasn't provided, the tier is set to ``Continuous30Days``. Run the same command again to check the status:
```azurecli-interactive
az cosmosdb show -n "myAccount" -g "myrg"
az cosmosdb show -n "myAccount" -g "myrg"
:::image type="content" source="./media/migrate-continuous-backup/migration-status-complete-powershell.png" alt-text="Backup type changes to continuous after the migration is complete":::
-## <a id="ARM-template"></a> Migrate using Resource Manager template
+## <a id="ARM-template"></a> Migrate from periodic mode to Continuous mode using Resource Manager template
To migrate to continuous backup mode using an ARM template, find the backupPolicy section of your template and update the `type` property. For example, if your existing template has a backup policy like the following JSON object:
To migrate to continuous backup mode using ARM template, find the backupPolicy s
"backupIntervalInMinutes": 240, "backupRetentionIntervalInHours": 8 }
-},
+}
```

Replace it with the following JSON object:

```json
-"backupPolicy": {
- "type": "Continuous"
-},
+"backupPolicy":ΓÇ»{
+ΓÇ» "type":ΓÇ»"Continuous",
+   "continuousModeProperties": {
+    "tier": "Continuous7Days"
+    }
+}
```

Next, deploy the template by using Azure PowerShell or CLI. The following example shows how to deploy the template with a CLI command:
Next deploy the template by using Azure PowerShell or CLI. The following example
az deployment group create -g <ResourceGroup> --template-file <ProvisionTemplateFilePath>
```
+## Change Continuous Mode tiers
+
+You can switch between ``Continuous30Days`` and ``Continuous7Days`` in Azure PowerShell, Azure CLI, or the Azure portal.
+
+The following Azure CLI command illustrates switching an existing account to ``Continuous7Days``:
+
+```azurecli-interactive
+az cosmosdb update \
+    --resource-group "my-rg" \
+    --name "my-continuous-backup-account" \
+    --backup-policy-type "Continuous" \
+    --continuous-tier "Continuous7Days"
+```
+
+The following Azure PowerShell command illustrates switching an existing account to ``Continuous7Days``:
+
+```azurepowershell-interactive
+Update-AzCosmosDBAccount `
+ -ResourceGroupName "myrg" `
+ -Name "myAccount" `
+ -BackupPolicyType Continuous `
+ -ContinuousTier Continuous7Days
+```
+
+You can also use an ARM template in a manner similar to the Azure CLI and Azure PowerShell examples.
+
+> [!NOTE]
+> When changing from the 30-day tier to the 7-day tier, the ability to restore more than 7 days back in history immediately becomes unavailable. When changing from the 7-day tier to the 30-day tier, you won't immediately be able to restore more than 7 days back. The earliest time to restore can be extracted from the account metadata available via Azure PowerShell or Azure CLI. The price impact of switching between the 7-day and 30-day tiers is also immediately visible.
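+
+For reference, a minimal Azure CLI sketch for inspecting that metadata; the instance ID and location are placeholders, and the exact property names in the output can vary by API version:
+
+```azurecli-interactive
+# Show the restorable account metadata, which surfaces the earliest
+# restorable timestamp where supported. Values below are placeholders.
+az cosmosdb restorable-database-account show \
+    --instance-id "00000000-0000-0000-0000-000000000000" \
+    --location "westus"
+```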
+
## What to expect during and after migration?

When migrating from periodic mode to continuous mode, you can't run any control plane operations that perform account-level updates or deletes. For example, operations such as adding or removing regions, account failover, updating the backup policy, and so on, can't be run while the migration is in progress. The time for migration depends on the size of data and the number of regions in your account. Restore actions on the migrated accounts only succeed from the time when migration successfully completes.
You can restore your account after the migration completes. If the migration com
## Frequently asked questions
-#### Does the migration only happen at the account level?
+### Does the migration only happen at the account level?
+ Yes.
-#### Which accounts can be targeted for backup migration?
+### Which accounts can be targeted for backup migration?
+
Currently, SQL API and API for MongoDB accounts with a single write region that have shared, provisioned, or autoscale provisioned throughput support migration. Table API and Gremlin API are in preview. Accounts enabled with analytical storage and multiple-write regions aren't supported for migration.
-#### Does the migration take time? What is the typical time?
-Migration takes time and it depends on the size of data and the number of regions in your account. You can get the migration status using Azure CLI or PowerShell commands. For large accounts with tens of terabytes of data, the migration can take up to few days to complete.
+### Does the migration take time? What is the typical time?
+
+Migration takes a varying amount of time that largely depends on the size of data and the number of regions in your account. You can get the migration status using Azure CLI or PowerShell commands. For large accounts with tens of terabytes of data, the migration can take up to a few days to complete.
-#### Does the migration cause any availability impact/downtime?
-No, the migration operation takes place in the background, so the client requests aren't impacted. However, we need to perform some backend operations during the migration, and it might take extra time if the account is under heavy load.
+### Does the migration cause any availability impact/downtime?
-#### What happens if the migration fails? Will I still get the periodic backups or get the continuous backups?
-Once the migration process is started, the account will start to become a continuous mode. If the migration fails, you must initiate migration again until it succeeds.
+No, the migration operation takes place in the background. So, client requests aren't impacted. However, we need to perform some backend operations during the migration, and it may take extra time if the account is under heavy load.
-#### How do I perform a restore to a timestamp before/during/after the migration?
-Assume that you started migration at t1 and finished at t5, you canΓÇÖt use a restore timestamp between t1 and t5.
+### What happens if the migration fails? Will I still get the periodic backups or get the continuous backups?
-To restore to a time after t5 because your account is now in continuous mode, you can perform the restore using Azure portal, CLI, or PowerShell like you normally do with continuous account. This self-service restore request can only be done after the migration is complete.
+Once the migration process is started, the account will be enabled in continuous mode. If the migration fails, you must initiate migration again until it succeeds.
-To restore to a time before t1, you can open a support ticket like you normally do with the periodic backup account. After the migration, you have up to 30 days to perform the periodic restore. During these 30 days, you can restore based on the backup retention/interval of your account before the migration. For example, if the backup config was to retain 24 copies at 1 hour interval, then you can restore to anytime between [t1 ΓÇô 24 hours] and [t1].
+### How do I perform a restore to a timestamp before/during/after the migration?
-#### Which account level control plane operations are blocked during migration?
-Operations such as add/remove region, failover, changing backup policy, throughput changes resulting in data movement are blocked during migration.
+Assume that you started migration at ``t1`` and finished at ``t5``. You can't use a restore timestamp between ``t1`` and ``t5``.
+
+Also assume that your account is now in continuous mode. To restore to a time after ``t5``, perform the restore using Azure portal, CLI, or PowerShell as you normally would with a continuous account. This self-service restore request can only be done after the migration is complete.
+
+To restore to a time before ``t1``, you can open a support ticket like you normally would with a periodic backup account. After the migration, you have up to 30 days to perform the periodic restore. During these 30 days, you can restore based on the backup retention/interval of your account before the migration. For example, if the backup was configured to retain 24 copies at 1 hour intervals, then you can restore to anytime between ``(t1 - 24 hours)`` and ``t1``.
+
+### Which account level control plane operations are blocked during migration?
+
+Operations such as add/remove region, failover, changing backup policy, and any throughput changes resulting in data movement are blocked during migration.
+
+### If the migration fails for some underlying issue, would it still block the control plane operation until it's retried and completed successfully?
-#### If the migration fails for some underlying issue, would it still block the control plane operation until it's retried and completed successfully?
Failed migration won't block any control plane operations. If migration fails, it's recommended to retry until it succeeds before performing any other control plane operations.
-#### Is it possible to cancel the migration?
-It isn't possible to cancel the migration because it isn't a reversible operation.
+### Is it possible to cancel the migration?
-#### Is there a tool that can help estimate migration time based on the data usage and number of regions?
-There isn't a tool to estimate time. But our scale runs indicate single region with 1 TB of data takes roughly one and half hour.
+It isn't possible to cancel the migration because migration isn't a reversible operation.
-For multi-region accounts, calculate the total data size as `Number_of_regions * Data_in_single_region`.
+### Is there a tool that can help estimate migration time based on the data usage and number of regions?
-#### Since the continuous backup mode is now GA, would you still recommend restoring a copy of your account and try migration on the copy before deciding to migrate the production account?
-ItΓÇÖs recommended to test the continuous backup mode feature to see it works as expected before migrating production accounts. Because migration is a one-way operation and itΓÇÖs not reversible.
+There isn't a tool to estimate time. Our testing and scale runs indicate that a single-region account with 1 TB of data takes roughly 90 minutes.
+
+For multi-region accounts, calculate the total data size as ``Number_of_regions * Data_in_single_region``.
+
+### Since the continuous backup mode is now GA, do you still recommend restoring a copy of your account? Would you recommend trying migration on the copy before deciding to migrate the production account?
+
+It's recommended to test the continuous backup mode feature to see that it works as expected before migrating production accounts. Migration is a one-way operation and it's not reversible.
## Next steps
To learn more about continuous backup mode, see the following articles:
* Restore an account using [Azure portal](restore-account-continuous-backup.md#restore-account-portal), [PowerShell](restore-account-continuous-backup.md#restore-account-powershell), [CLI](restore-account-continuous-backup.md#restore-account-cli), or [Azure Resource Manager](restore-account-continuous-backup.md#restore-arm-template).

Trying to do capacity planning for a migration to Azure Cosmos DB?
- * If all you know is the number of vCores and servers in your existing database cluster, read about [estimating request units using vCores or vCPUs](convert-vcore-to-request-unit.md)
- * If you know typical request rates for your current database workload, read about [estimating request units using Azure Cosmos DB capacity planner](estimate-ru-with-capacity-planner.md)
+
+* If all you know is the number of vCores and servers in your existing database cluster, read about [estimating request units using vCores or vCPUs](convert-vcore-to-request-unit.md)
+* If you know typical request rates for your current database workload, read about [estimating request units using Azure Cosmos DB capacity planner](estimate-ru-with-capacity-planner.md)
cosmos-db Online Backup And Restore https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/online-backup-and-restore.md
description: This article describes how automatic backup, on-demand data restore
Previously updated : 11/15/2021
Last updated : 06/28/2022

# Online backup and on-demand data restore in Azure Cosmos DB

[!INCLUDE[appliesto-all-apis](includes/appliesto-all-apis.md)]

Azure Cosmos DB automatically takes backups of your data at regular intervals. The automatic backups are taken without affecting the performance or availability of the database operations. All the backups are stored separately in a storage service. The automatic backups are helpful in scenarios when you accidentally delete or update your Azure Cosmos account, database, or container and later require the data recovery. Azure Cosmos DB backups are encrypted with Microsoft managed service keys. These backups are transferred over a secure non-public network. This means that backup data remains encrypted while transferred over the wire and at rest. Backups of an account in a given region are uploaded to storage accounts in the same region.
Azure Cosmos DB automatically takes backups of your data at regular intervals. T
There are two backup modes:
-* **Continuous backup mode** ΓÇô This mode allows you to do restore to any point of time within the last 30 days. You can choose this mode while creating the Azure Cosmos DB account. To learn more, see the [Introduction to Continuous backup mode](continuous-backup-restore-introduction.md), provision continuous backup using [Azure portal](provision-account-continuous-backup.md#provision-portal), [PowerShell](provision-account-continuous-backup.md#provision-powershell), [CLI](provision-account-continuous-backup.md#provision-cli), or [Azure Resource Manager](provision-account-continuous-backup.md#provision-arm-template) articles. You can also [migrate the accounts from periodic to continuous mode](migrate-continuous-backup.md).
-* **Periodic backup mode** - This mode is the default backup mode for all existing accounts. In this mode, backup is taken at a periodic interval and the data is restored by creating a request with the support team. In this mode, you configure a backup interval and retention for your account. The maximum retention period extends to a month. The minimum backup interval can be one hour. To learn more, see the [Periodic backup mode](configure-periodic-backup-restore.md) article.
+* **Continuous backup mode** – This mode has two tiers. One tier includes 7-day retention and the second includes 30-day retention. Continuous backup allows you to restore to any point of time within either 7 or 30 days. You can choose the appropriate tier while creating an Azure Cosmos DB account. For more information about the tiers, see [introduction to continuous backup mode](continuous-backup-restore-introduction.md). To enable continuous backup, see the appropriate articles using [Azure portal](provision-account-continuous-backup.md#provision-portal), [PowerShell](provision-account-continuous-backup.md#provision-powershell), [CLI](provision-account-continuous-backup.md#provision-cli), or [Azure Resource Manager](provision-account-continuous-backup.md#provision-arm-template). You can also [migrate the accounts from periodic to continuous mode](migrate-continuous-backup.md).
+
+ > [!NOTE]
+ > The 7-day retention tier is currently in preview.
+
+* **Periodic backup mode** - This mode is the default backup mode for all existing accounts. In this mode, backup is taken at a periodic interval and the data is restored by creating a request with the support team. In this mode, you configure a backup interval and retention for your account. The maximum retention period extends to a month. The minimum backup interval can be one hour. To learn more, see [periodic backup mode](configure-periodic-backup-restore.md).
> [!NOTE]
> If you configure a new account with continuous backup, you can do self-service restore via Azure portal, PowerShell, or CLI. If your account is configured in continuous mode, you can't switch it back to periodic mode.
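
To check which backup mode and tier an account currently uses, one option is to query the account's ``backupPolicy`` object with the Azure CLI; a minimal sketch with placeholder names:

```azurecli-interactive
# Show only the backupPolicy object for an account.
# The account and resource group names are placeholders.
az cosmosdb show \
    --name "myAccount" \
    --resource-group "myrg" \
    --query "backupPolicy"
```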
-For Azure Synapse Link enabled accounts, analytical store data isn't included in the backups and restores. When Synapse Link is enabled, Azure Cosmos DB will continue to automatically take backups of your data in the transactional store at a scheduled backup interval. Automatic backup and restore of your data in the analytical store is not supported at this time.
+For Azure Synapse Link enabled accounts, analytical store data isn't included in the backups and restores. When Azure Synapse Link is enabled, Azure Cosmos DB will continue to automatically take backups of your data in the transactional store at a scheduled backup interval. Automatic backup and restore of data in the analytical store isn't supported at this time.
## Frequently asked questions
No. You can't restore into an account with lower RU/s or fewer partitions.
### Is periodic backup mode supported for Azure Synapse Link enabled accounts?
-Yes. However, analytical store data isn't included in backups and restores. When Synapse Link is enabled on a database account, Azure Cosmos DB will continue to automatically take backups of your data in the transactional store at scheduled backup interval, as always.
+Yes. However, analytical store data isn't included in backups and restores. When Azure Synapse Link is enabled on a database account, Azure Cosmos DB will automatically back up your data in the transactional store at the scheduled backup interval.
### Is periodic backup mode supported for analytical store enabled containers?
-Yes, but only for the regular transactional data. Backup and restore of your data in the analytical store is not supported at this time.
+Yes, but only for the regular transactional data. Backup and restore of data in the analytical store isn't supported at this time.
## Next steps
Next you can learn about how to configure and manage periodic and continuous bac
* [Configure and manage periodic backup](configure-periodic-backup-restore.md) policy.
* What is [continuous backup](continuous-backup-restore-introduction.md) mode?
-* Provision continuous backup using [Azure portal](provision-account-continuous-backup.md#provision-portal), [PowerShell](provision-account-continuous-backup.md#provision-powershell), [CLI](provision-account-continuous-backup.md#provision-cli), or [Azure Resource Manager](provision-account-continuous-backup.md#provision-arm-template).
+* Enable continuous backup using [Azure portal](provision-account-continuous-backup.md#provision-portal), [PowerShell](provision-account-continuous-backup.md#provision-powershell), [CLI](provision-account-continuous-backup.md#provision-cli), or [Azure Resource Manager](provision-account-continuous-backup.md#provision-arm-template).
* Restore continuous backup account using [Azure portal](restore-account-continuous-backup.md#restore-account-portal), [PowerShell](restore-account-continuous-backup.md#restore-account-powershell), [CLI](restore-account-continuous-backup.md#restore-account-cli), or [Azure Resource Manager](restore-account-continuous-backup.md#restore-arm-template).
* [Migrate to an account from periodic backup to continuous backup](migrate-continuous-backup.md).
* [Manage permissions](continuous-backup-restore-permissions.md) required to restore data with continuous backup mode.
cosmos-db Provision Account Continuous Backup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/provision-account-continuous-backup.md
description: Learn how to provision an account with continuous backup and point
Previously updated : 04/18/2022 Last updated : 06/28/2022
ms.devlang: azurecli
-# Provision an Azure Cosmos DB account with continuous backup and point in time restore
+# Provision an Azure Cosmos DB account with continuous backup and point in time restore
+ [!INCLUDE[appliesto-sql-mongodb-api](includes/appliesto-sql-mongodb-api.md)]
-Azure Cosmos DB's point-in-time restore feature helps you to recover from an accidental change within a container, to restore a deleted account, database, or a container or to restore into any region (where backups existed). The continuous backup mode allows you to do restore to any point of time within the last 30 days.
+Azure Cosmos DB's point-in-time restore feature helps you to recover from an accidental change within a container, restore a deleted resource, or restore into any region where backups existed. The continuous backup mode allows you to restore to any point in time within the last 7 or 30 days, depending on the continuous backup tier configured for the account.
This article explains how to provision an account with continuous backup and point in time restore using [Azure portal](#provision-portal), [PowerShell](#provision-powershell), [CLI](#provision-cli) and [Resource Manager templates](#provision-arm-template).
+> [!IMPORTANT]
+> Support for 7-day continuous backup in both provisioning and migration scenarios is still in preview. Please use PowerShell and Azure CLI to migrate or provision an account with continuous backup configured at the 7-day tier.
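For example, here's a minimal Azure CLI sketch of migrating an existing account to the 7-day tier; the account and resource group names are hypothetical placeholders, and passing ``--continuous-tier`` to ``az cosmosdb update`` assumes the ``cosmosdb-preview`` extension is installed:

```azurecli-interactive
# Hypothetical account and resource group; --continuous-tier assumes the
# cosmosdb-preview extension is installed.
az cosmosdb update \
  --name mycosmosaccount \
  --resource-group MyRG \
  --backup-policy-type Continuous \
  --continuous-tier Continuous7Days
```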
> [!NOTE]
> You can provision a continuous backup mode account only if the following conditions are true:
>
This article explains how to provision an account with continuous backup and poi
> * If the account is of type Table API or Gremlin API.
> * If the account has a single write region.

## <a id="provision-portal"></a>Provision using Azure portal

When creating a new Azure Cosmos DB account, in the **Backup policy** tab, choose **continuous** mode to enable the point in time restore functionality for the new account. With the point-in-time restore, data is restored to a new account; currently you can't restore to an existing account.
Table API and Gremlin API are in preview and can be provisioned with PowerShell
## <a id="provision-powershell"></a>Provision using Azure PowerShell
-Before provisioning the account, install the [latest version of Azure PowerShell](/powershell/azure/install-az-ps?view=azps-6.2.1&preserve-view=true) or version higher than 6.2.0. Next connect to your Azure account and select the required subscription with the following commands:
+For PowerShell and CLI commands, the tier value is optional. If it isn't provided, the account backup is retained for 30 days. The tiers are represented by the values ``Continuous7Days`` or ``Continuous30Days``.
+
+1. Install the latest version of Azure PowerShell
+
+ * Before provisioning the account, install any version of Azure PowerShell higher than 6.2.0. For more information about the latest version of Azure PowerShell, see [latest version of Azure PowerShell](/powershell/azure/install-az-ps?view=azps-6.2.1&preserve-view=true).
+ * For provisioning the ``Continuous7Days`` tier, you'll need to install the preview version of the module by running ``Install-Module -Name Az.CosmosDB -AllowPrerelease``.
+ * Next connect to your Azure account and select the required subscription with the following commands:
-1. Sign into Azure using the following command:
+ 1. Sign into Azure using the following command:
- ```azurepowershell
- Connect-AzAccount
- ```
+ ```azurepowershell
+ Connect-AzAccount
+ ```
-1. Select a specific subscription with the following command:
+ 1. Select a specific subscription with the following command:
- ```azurepowershell
- Select-AzSubscription -Subscription <SubscriptionName>
- ```
+ ```azurepowershell
+ Select-AzSubscription -Subscription <SubscriptionName>
+ ```
-#### <a id="provision-powershell-sql-api"></a>SQL API account
+### <a id="provision-powershell-sql-api"></a>SQL API account
To provision an account with continuous backup, add the argument `-BackupPolicyType Continuous` along with the regular provisioning command.
-The following cmdlet is an example of a single region write account *Pitracct* with continuous backup policy created in *West US* region under *MyRG* resource group:
+The following cmdlet assumes a single region write account, *Pitracct*, in the *West US* region in the *MyRG* resource group. The account has continuous backup policy enabled, configured at the ``Continuous7Days`` tier:
```azurepowershell
New-AzCosmosDBAccount `
    -ResourceGroupName "MyRG" `
    -Location "West US" `
    -BackupPolicyType Continuous `
    -ContinuousTier Continuous7Days `
    -Name "pitracct" `
    -ApiKind "Sql"
```
-#### <a id="provision-powershell-mongodb-api"></a>API for MongoDB
+### <a id="provision-powershell-mongodb-api"></a>API for MongoDB
-The following cmdlet is an example of continuous backup account *Pitracct* created in *West US* region under *MyRG* resource group:
+The following cmdlet is an example of a continuous backup account configured with the ``Continuous30Days`` tier:
```azurepowershell
New-AzCosmosDBAccount `
    -ResourceGroupName "MyRG" `
    -Location "West US" `
    -BackupPolicyType Continuous `
    -ContinuousTier Continuous30Days `
    -Name "Pitracct" `
    -ApiKind "MongoDB" `
    -ServerVersion "3.6"
```
-#### <a id="provision-powershell-table-api"></a>Table API account
+### <a id="provision-powershell-table-api"></a>Table API account
To provision an account with continuous backup, add an argument `-BackupPolicyType Continuous` along with the regular provisioning command.
-The following cmdlet is an example of a single region write account *Pitracct* with continuous backup policy created in *West US* region under *MyRG* resource group:
+The following cmdlet is an example of a continuous backup policy with the ``Continuous7Days`` tier:
```azurepowershell
New-AzCosmosDBAccount `
    -ResourceGroupName "MyRG" `
    -Location "West US" `
    -BackupPolicyType Continuous `
    -ContinuousTier Continuous7Days `
    -Name "pitracct" `
    -ApiKind "Table"
```
-#### <a id="provision-powershell-graph-api"></a>Gremlin API account
+### <a id="provision-powershell-graph-api"></a>Gremlin API account
To provision an account with continuous backup, add an argument `-BackupPolicyType Continuous` along with the regular provisioning command.
-The following cmdlet is an example of a single region write account *Pitracct* with continuous backup policy created in *West US* region under *MyRG* resource group:
+The following cmdlet is an example of an account with continuous backup policy configured with the ``Continuous30Days`` tier:
```azurepowershell
New-AzCosmosDBAccount `
    -ResourceGroupName "MyRG" `
    -Location "West US" `
    -BackupPolicyType Continuous `
    -ContinuousTier Continuous30Days `
    -Name "pitracct" `
    -ApiKind "Gremlin"
```

## <a id="provision-cli"></a>Provision using Azure CLI
+For PowerShell and CLI commands, the tier value is optional. If it isn't provided, the account backup is retained for 30 days. The tiers are represented by ``Continuous7Days`` or ``Continuous30Days``.
Before provisioning the account, install the Azure CLI with the following steps:

1. Install the latest version of Azure CLI
- * Install the latest version of [Azure CLI](/cli/azure/install-azure-cli) or version higher than 2.26.0
- * If you have already installed CLI, run `az upgrade` command to update to the latest version. This command will only work with CLI version higher than 2.11. If you have an earlier version, use the above link to install the latest version.
+ * Install a version of the Azure CLI higher than 2.26.0. For more information about the latest version of the Azure CLI, see [Azure CLI](/cli/azure/install-azure-cli).
+ * If you have already installed CLI, run ``az upgrade`` command to update to the latest version. This command will only work with CLI version higher than 2.11. If you have an earlier version, use the above link to install the latest version.
+ * For provisioning the ``Continuous7Days`` tier, you'll need to install the preview version of the extension by running ``az extension update --name cosmosdb-preview``.
1. Sign in and select your subscription
- * Sign into your Azure account with `az login` command.
- * Select the required subscription using `az account set -s <subscriptionguid>` command.
+ * Sign into your Azure account with the ``az login`` command.
+ * Select the required subscription using the ``az account set -s <subscriptionguid>`` command.
### <a id="provision-cli-sql-api"></a>SQL API account
-To provision a SQL API account with continuous backup, an extra argument `--backup-policy-type Continuous` should be passed along with the regular provisioning command. The following command is an example of a single region write account named *Pitracct* with continuous backup policy created in the *West US* region under *MyRG* resource group:
+To provision a SQL API account with continuous backup, an extra argument `--backup-policy-type Continuous` should be passed along with the regular provisioning command. The following command is an example of a single region write account named *Pitracct* with continuous backup policy and the ``Continuous7Days`` tier:
```azurecli-interactive
az cosmosdb create \
  --name Pitracct \
  --resource-group MyRG \
  --backup-policy-type Continuous \
  --continuous-tier "Continuous7Days" \
  --default-consistency-level Session \
  --locations regionName="West US"
```
az cosmosdb create \
### <a id="provision-cli-mongo-api"></a>API for MongoDB
-The following command shows an example of a single region write account named *Pitracct* with continuous backup policy created in the *West US* region under *MyRG* resource group:
+The following command shows an example of a single region write account named *Pitracct* with continuous backup policy and the ``Continuous30Days`` tier:
```azurecli-interactive
az cosmosdb create \
  --name Pitracct \
  --kind MongoDB \
  --resource-group MyRG \
  --server-version "3.6" \
  --backup-policy-type Continuous \
  --continuous-tier "Continuous30Days" \
  --default-consistency-level Session \
  --locations regionName="West US"
```

### <a id="provision-cli-table-api"></a>Table API account
The following command shows an example of a single region write account named *Pitracct* with continuous backup policy and the ``Continuous30Days`` tier:

```azurecli-interactive
az cosmosdb create \
  --name Pitracct \
  --kind GlobalDocumentDB \
  --resource-group MyRG \
  --capabilities EnableTable \
  --backup-policy-type Continuous \
  --continuous-tier "Continuous30Days" \
  --default-consistency-level Session \
  --locations regionName="West US"
```

### <a id="provision-cli-graph-api"></a>Gremlin API account
The following command shows an example of a single region write account named *Pitracct* with continuous backup policy and the ``Continuous7Days`` tier, created in the *West US* region under the *MyRG* resource group:

```azurecli-interactive
az cosmosdb create \
  --name Pitracct \
  --kind GlobalDocumentDB \
  --resource-group MyRG \
  --capabilities EnableGremlin \
  --backup-policy-type Continuous \
  --continuous-tier "Continuous7Days" \
  --default-consistency-level Session \
  --locations regionName="West US"
```

## <a id="provision-arm-template"></a>Provision using Resource Manager template
-You can use Azure Resource Manager templates to deploy an Azure Cosmos DB account with continuous mode. When defining the template to provision an account, include the `backupPolicy` parameter as shown in the following example:
+You can use Azure Resource Manager templates to deploy an Azure Cosmos DB account with continuous mode. When defining the template to provision an account, include the `backupPolicy` parameter with a `tier` property, which can be ``Continuous7Days`` or ``Continuous30Days``, as shown in the following example:
```json {
You can use Azure Resource Manager templates to deploy an Azure Cosmos DB accoun
"locationName": "West US" } ],
- "backupPolicy": {
- "type": "Continuous"
- },
+ "backupPolicy":{
+ "type":"Continuous",
+ "continuousModeProperties":{
+ "tier":"Continuous7Days"
+ }
+ }
"databaseAccountOfferType": "Standard"
- }
- }
- ]
-}
+ }
+ ]
+ }
+ ``` Next, deploy the template by using Azure PowerShell or CLI. The following example shows how to deploy the template with a CLI command:
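As a minimal sketch (the deployment name, resource group, and template file name are hypothetical placeholders):

```azurecli-interactive
# Deploy the template to an existing resource group; names are placeholders.
az deployment group create \
  --name cosmos-continuous-backup \
  --resource-group MyRG \
  --template-file azuredeploy.json
```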
cosmos-db Sql Api Sdk Java Spring V3 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/sql-api-sdk-java-spring-v3.md
Release history is maintained in the azure-sdk-for-java repo, for detailed list
## Recommended version
-It's strongly recommended to use version 3.10.0 and above.
+It's strongly recommended to use version 3.22.0 and above.
## Additional notes
cosmos-db Sql Api Sdk Java V4 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/sql-api-sdk-java-v4.md
Release history is maintained in the azure-sdk-for-java repo, for detailed list
## Recommended version
-It's strongly recommended to use version 4.18.0 and above.
+It's strongly recommended to use version 4.31.0 and above.
## FAQ [!INCLUDE [cosmos-db-sdk-faq](../includes/cosmos-db-sdk-faq.md)]
cost-management-billing Analyze Cost Data Azure Cost Management Power Bi Template App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/costs/analyze-cost-data-azure-cost-management-power-bi-template-app.md
Here's how values in the overview tiles are calculated.
- The value shown in the **New purchase amount** tile is calculated as the sum of `newPurchases`. - The value shown in the **Total charges** tile is calculated as the sum of (`adjustments` + `ServiceOverage` + `chargesBilledseparately` + `azureMarketplaceServiceCharges`).
-The EA portal doesn't the Total charges column. The Power BI template app includes Adjustments, Service Overage, Charges billed separately, and Azure marketplace service charges as Total charges.
+The EA portal doesn't show the Total charges column. The Power BI template app includes Adjustments, Service Overage, Charges billed separately, and Azure marketplace service charges as Total charges.
The Prepayment Usage shown in the EA portal isn't available in the Template app as part of the total charges.
For more information about configuring data, refresh, sharing reports, and addit
- [Subscribe yourself and others to reports and dashboards in the Power BI service](/power-bi/service-report-subscribe) - [Download a report from the Power BI service to Power BI Desktop](/power-bi/service-export-to-pbix) - [Save a report in Power BI service and Power BI Desktop](/power-bi/service-report-save)-- [Create a report in the Power BI service by importing a dataset](/power-bi/service-report-create-new)
+- [Create a report in the Power BI service by importing a dataset](/power-bi/service-report-create-new)
cost-management-billing Link Partner Id https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/link-partner-id.md
Title: Link an Azure account to a partner ID
+ Title: Link a partner ID to your account that's used to manage customers
description: Track engagements with Azure customers by linking a partner ID to the user account that you use to manage the customer's resources. Previously updated : 11/04/2021 Last updated : 06/28/2022
-# Link a partner ID to your Azure accounts
+# Link a partner ID to your account that's used to manage customers
Microsoft partners provide services that help customers achieve business and mission objectives using Microsoft products. When acting on behalf of the customer managing, configuring, and supporting Azure services, the partner users will need access to the customer's environment. Using Partner Admin Link (PAL), partners can associate their partner network ID with the credentials used for service delivery.
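As a minimal sketch of that association, a partner can link a Microsoft Partner Network ID to the signed-in credential with the Azure CLI; the partner ID below is a hypothetical placeholder, and the ``managementpartner`` extension is assumed to be available:

```azurecli-interactive
# Install the extension (one time), then link a hypothetical partner ID
# to the currently signed-in user account.
az extension add --name managementpartner
az managementpartner create --partner-id 123456
```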
cost-management-billing Subscription States https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/subscription-states.md
tags: billing
Previously updated : 09/15/2021 Last updated : 06/28/2022
This article describes the various states that an Azure subscription may have. Y
| **Disabled** | Your Azure subscription is disabled and can no longer be used to create or manage Azure resources. While in this state, your virtual machines are de-allocated, temporary IP addresses are freed, storage is read-only and other services are disabled. A subscription can get disabled because of the following reasons: Your credit may have expired. You may have reached your spending limit. You have a past due bill. Your credit card limit was exceeded. Or, it was explicitly disabled or canceled. Depending on the subscription type, a subscription may remain disabled between 1 - 90 days. After which, it's permanently deleted. For more information, see [Reactivate a disabled Azure subscription](subscription-disabled.md).<br><br>Operations to create or update resources (PUT, PATCH) are disabled. Operations that take an action (POST) are also disabled. You can retrieve or delete resources (GET, DELETE). Your resources are still available. | | **Expired** | Your Azure subscription is expired because it was canceled. You can reactivate an expired subscription. For more information, see [Reactivate a disabled Azure subscription](subscription-disabled.md).<br><br>Operations to create or update resources (PUT, PATCH) are disabled. Operations that take an action (POST) are also disabled. You can retrieve or delete resources (GET, DELETE).| | **Past Due** | Your Azure subscription has an outstanding payment pending. Your subscription is still active but failure to pay the dues may result in subscription being disabled. For more information, see [Resolve past due balance for your Azure subscription.](resolve-past-due-balance.md).<br><br>All operations are available. |
-| **Warned** | Your Azure subscription is in a warned state and will be disabled shortly if the warning reason isn't addressed. A subscription may be in warned state if its past due, canceled by user, or if the subscription has expired.<br><br>You can retrieve or delete resources (GET/DELETE), but you can't create any resources (PUT/PATCH/POST) |
+| **Warned** | Your Azure subscription is in a warned state and will be disabled shortly if the warning reason isn't addressed. A subscription may be in a warned state if it's past due, canceled by the user, or if the subscription has expired.<br><br>You can retrieve or delete resources (GET/DELETE), but you can't create any resources (PUT/PATCH/POST). <p> Resources in this state go offline but can be recovered when the subscription resumes an active/enabled state. A subscription in this state isn't charged. |
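As a quick way to review where your subscriptions stand, here's a minimal Azure CLI sketch (it assumes nothing beyond a signed-in account):

```azurecli-interactive
# List subscriptions visible to the signed-in account with their states,
# for example Enabled, Warned, PastDue, Disabled, or Expired.
az account list --query "[].{name:name, state:state}" --output table
```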
data-factory Connector Salesforce Service Cloud https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-salesforce-service-cloud.md
Previously updated : 09/09/2021 Last updated : 06/23/2022 # Copy data from and to Salesforce Service Cloud using Azure Data Factory or Synapse Analytics
To copy data from Salesforce Service Cloud, the following properties are support
] ```
+> [!Note]
+> Salesforce Service Cloud source doesn't support proxy settings in the self-hosted integration runtime, but the sink does.
+ ### Salesforce Service Cloud as a sink type To copy data to Salesforce Service Cloud, the following properties are supported in the copy activity **sink** section.
data-factory Connector Salesforce https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-salesforce.md
Previously updated : 06/10/2022 Last updated : 06/23/2022 # Copy data from and to Salesforce using Azure Data Factory or Azure Synapse Analytics
To copy data from Salesforce, set the source type in the copy activity to **Sale
>[!NOTE] >For backward compatibility: When you copy data from Salesforce, if you use the previous "RelationalSource" type copy, the source keeps working while you see a suggestion to switch to the new "SalesforceSource" type.
+> [!Note]
+> Salesforce source doesn't support proxy settings in the self-hosted integration runtime, but the sink does.
+ ### Salesforce as a sink type To copy data to Salesforce, set the sink type in the copy activity to **SalesforceSink**. The following properties are supported in the copy activity **sink** section.
When you copy data from Salesforce, the following mappings are used from Salesfo
| Text (Encrypted) |String | | URL |String |
+> [!Note]
+> The Salesforce Number type maps to the Decimal type in Azure Data Factory and Azure Synapse pipelines as a service interim data type. The Decimal type honors the defined precision and scale. For data whose decimal places exceed the defined scale, the value is rounded off in preview data and copy. To avoid this precision loss in Azure Data Factory and Azure Synapse pipelines, consider increasing the decimal places to a reasonably large value on the **Custom Field Definition Edit** page of Salesforce.
+ ## Lookup activity properties To learn details about the properties, check [Lookup activity](control-flow-lookup-activity.md).
data-factory How To Manage Studio Preview Exp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/how-to-manage-studio-preview-exp.md
Previously updated : 06/23/2022 Last updated : 06/28/2022 # Manage Azure Data Factory studio preview experience
There are two ways to enable preview experiences.
1. In the banner seen at the top of the screen, you can click **Open settings to learn more and opt in**.
- :::image type="content" source="media/how-to-manage-studio-preview-exp/data-factory-preview-exp-1.png" alt-text="Screenshot of Azure Data Factory home page with an Opt in option in a banner at the top of the screen.":::
+ :::image type="content" source="media/how-to-manage-studio-preview-exp/data-factory-preview-exp-1.png" alt-text="Screenshot of Azure Data Factory home page with an Opt-in option in a banner at the top of the screen.":::
2. Alternatively, you can click the **Settings** button.
There are two ways to enable preview experiences.
Similarly, you can disable preview features with the same steps. Click **Open settings to opt out** or click the **Settings** button and unselect **Azure Data Factory Studio preview update**.
- :::image type="content" source="media/how-to-manage-studio-preview-exp/data-factory-preview-exp-5.png" alt-text="Screenshot of Azure Data Factory home page with an Opt out option in a banner at the top of the screen and Settings gear in the top right corner of the screen.":::
+ :::image type="content" source="media/how-to-manage-studio-preview-exp/data-factory-preview-exp-5.png" alt-text="Screenshot of Azure Data Factory home page with an Opt-out option in a banner at the top of the screen and Settings gear in the top right corner of the screen.":::
> [!NOTE] > Enabling/disabling preview updates will discard any unsaved changes.
There are two ways to enable preview experiences.
[**Pipeline experimental view**](#pipeline-experimental-view) * [Adding activities](#adding-activities)
- * [ForEach activity container](#foreach-activity-container)
+ * [Iteration & conditionals container view](#iteration-and-conditionals-container-view)
### Dataflow data first experimental view
If no transformation is selected, the panel will show the pre-existing data flow
#### Transformation settings
-Settings specific to a transformation will now show in a pop up instead of the configuration panel. With each new transformation, a corresponding pop-up will automatically appear.
+Settings specific to a transformation will now show in a pop-up instead of the configuration panel. With each new transformation, a corresponding pop-up will automatically appear.
:::image type="content" source="media/how-to-manage-studio-preview-exp/data-factory-preview-exp-7.png" alt-text="Screenshot of a pop-up with settings specific to the data flow transformation.":::
Columns can be rearranged by dragging a column by its header. You can also sort
UI (user interface) changes have been made to activities in the pipeline editor canvas. These changes were made to simplify and streamline the pipeline creation process. - #### Adding activities
-You now have the option to add an activity using the add button in the bottom right corner of an activity in the pipeline editor canvas. Clicking the button will open a drop-down list of all activities that you can add.
+You now have the option to add an activity using the Add button in the bottom right corner of an activity in the pipeline editor canvas. Clicking the button will open a drop-down list of all activities that you can add.
Select an activity by using the search box or scrolling through the listed activities. The selected activity will be added to the canvas and automatically linked with the previous activity on success.
-#### ForEach activity container
+#### Iteration and conditionals container view
-You can now view the activities contained in your ForEach activity.
+You can now view the activities contained in iteration and conditional activities.
-You have two options to add activities to your ForEach loop.
+You have two options to add activities to your iteration and conditional activities.
-1. Use the + button in your ForEach container to add an activity.
+1. Use the + button in your container to add an activity.
- :::image type="content" source="media/how-to-manage-studio-preview-exp/data-factory-preview-exp-12.png" alt-text="Screenshot of new ForEach activity container with the add button highlighted on the left side of the center of the screen.":::
+ :::image type="content" source="media/how-to-manage-studio-preview-exp/data-factory-preview-exp-12.png" alt-text="Screenshot of new activity container with the add button highlighted on the left side of the center of the screen.":::
Clicking this button will bring up a drop-down list of all activities that you can add.
- :::image type="content" source="media/how-to-manage-studio-preview-exp/data-factory-preview-exp-13.png" alt-text="Screenshot of a drop-down list in the ForEach container with all the activities listed.":::
+ :::image type="content" source="media/how-to-manage-studio-preview-exp/data-factory-preview-exp-13.png" alt-text="Screenshot of a drop-down list in the activity container with all the activities listed.":::
- Select an activity by using the search box or scrolling through the listed activities. The selected activity will be added to the canvas inside of the ForEach container.
+ Select an activity by using the search box or scrolling through the listed activities. The selected activity will be added to the canvas inside of the container.
- :::image type="content" source="media/how-to-manage-studio-preview-exp/data-factory-preview-exp-14.png" alt-text="Screenshot of the ForEach container with three activities in the center of the container.":::
+ :::image type="content" source="media/how-to-manage-studio-preview-exp/data-factory-preview-exp-14.png" alt-text="Screenshot of the container with three activities in the center of the container.":::
> [!NOTE]
-> If your ForEach container includes more than 5 activities, only the first 4 will be shown in the container preview.
+> If your container includes more than 5 activities, only the first 4 will be shown in the container preview.
-2. Use the edit button in your ForEach container to see everything within the container. You can use the canvas to edit or add to your pipeline.
+2. Use the edit button in your container to see everything within the container. You can use the canvas to edit or add to your pipeline.
- :::image type="content" source="media/how-to-manage-studio-preview-exp/data-factory-preview-exp-15.png" alt-text="Screenshot of the ForEach container with the edit button highlighted on the right side of a box in the center of the screen.":::
+ :::image type="content" source="media/how-to-manage-studio-preview-exp/data-factory-preview-exp-15.png" alt-text="Screenshot of the container with the edit button highlighted on the right side of a box in the center of the screen.":::
- :::image type="content" source="media/how-to-manage-studio-preview-exp/data-factory-preview-exp-16.png" alt-text="Screenshot of the inside of the ForEach container with three activities linked together.":::
+ :::image type="content" source="media/how-to-manage-studio-preview-exp/data-factory-preview-exp-16.png" alt-text="Screenshot of the inside of the container with three activities linked together.":::
- Add additional activities by dragging new activities to the canvas or click the add button on the right most activity to bring up a drop-down list of activities.
+ Add additional activities by dragging new activities to the canvas or click the add button on the right-most activity to bring up a drop-down list of all activities.
- :::image type="content" source="media/how-to-manage-studio-preview-exp/data-factory-preview-exp-17.png" alt-text="Screenshot of the Add activity button in the bottom left corner of the right most activity.":::
+ :::image type="content" source="media/how-to-manage-studio-preview-exp/data-factory-preview-exp-17.png" alt-text="Screenshot of the Add activity button in the bottom left corner of the right-most activity.":::
- :::image type="content" source="media/how-to-manage-studio-preview-exp/data-factory-preview-exp-18.png" alt-text="Screenshot of the drop-down list of activities in the right most activity.":::
+ :::image type="content" source="media/how-to-manage-studio-preview-exp/data-factory-preview-exp-18.png" alt-text="Screenshot of the drop-down list of activities in the right-most activity.":::
- Select an activity by using the search box or scrolling through the listed activities. The selected activity will be added to the canvas inside of the ForEach container.
+ Select an activity by using the search box or scrolling through the listed activities. The selected activity will be added to the canvas inside of the container.
## Provide feedback
-We want to hear from you! If you see this pop-up, please provide feedback, and let us know your thoughts.
+We want to hear from you! If you see this pop-up, please let us know your thoughts by providing feedback on the updates you've tested.
:::image type="content" source="media/how-to-manage-studio-preview-exp/data-factory-preview-exp-19.png" alt-text="Screenshot of the feedback survey where user can select between one and five stars.":::
defender-for-cloud Alerts Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/alerts-reference.md
Microsoft Defender for Containers provides security alerts on the cluster level
| **Digital currency mining activity**<br>(K8S.NODE_CurrencyMining) <sup>[1](#footnote1)</sup> | Analysis of DNS transactions detected digital currency mining activity. Such activity, while possibly legitimate user behavior, is frequently performed by attackers following compromise of resources. Typical related attacker activity is likely to include the download and execution of common mining tools. | Exfiltration | Low | | **Access to kubelet kubeconfig file detected**<br>(K8S.NODE_KubeConfigAccess) <sup>[1](#footnote1)</sup> | Analysis of processes running on a Kubernetes cluster node detected access to kubeconfig file on the host. The kubeconfig file, normally used by the Kubelet process, contains credentials to the Kubernetes cluster API server. Access to this file is often associated with attackers attempting to access those credentials, or with security scanning tools which check if the file is accessible. | CredentialAccess | Medium | | **Access to cloud metadata service detected**<br>(K8S.NODE_ImdsCall) <sup>[1](#footnote1)</sup> | Analysis of processes running within a container detected access to the cloud metadata service for acquiring identity token. The container doesn't normally perform such operation. While this behavior might be legitimate, attackers might use this technique to access cloud resources after gaining initial access to a running container. | CredentialAccess | Medium |
+| **MITRE Caldera agent detected**<br>(K8S.NODE_MitreCalderaTools) <sup>[1](#footnote1)</sup> | Analysis of processes running within a container or directly on a Kubernetes node has detected a suspicious process. This is often associated with the MITRE 54ndc47 agent, which could be used maliciously to attack other machines. | Persistence, PrivilegeEscalation, DefenseEvasion, CredentialAccess, Discovery, LateralMovement, Execution, Collection, Exfiltration, Command And Control, Probing, Exploitation | Medium |
<sup><a name="footnote1"></a>1</sup>: **Preview for non-AKS clusters**: This alert is generally available for AKS clusters, but it is in preview for other environments, such as Azure Arc, EKS and GKE.
defender-for-cloud Defender For Servers Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-servers-introduction.md
The following table describes what's included in each plan at a high level.
| Microsoft threat and vulnerability management | :::image type="icon" source="./media/icons/yes-icon.png"::: | :::image type="icon" source="./media/icons/yes-icon.png"::: | | Flexibility to use Microsoft Defender for Cloud or Microsoft 365 Defender portal | :::image type="icon" source="./media/icons/yes-icon.png"::: | :::image type="icon" source="./media/icons/yes-icon.png"::: | | Integration of Microsoft Defender for Cloud and Microsoft Defender for Endpoint (alerts, software inventory, Vulnerability Assessment) | :::image type="icon" source="./media/icons/yes-icon.png"::: | :::image type="icon" source="./media/icons/yes-icon.png"::: |
+| Security Policy and Regulatory Compliance | | :::image type="icon" source="./media/icons/yes-icon.png"::: |
| Log-analytics (500 MB free) | | :::image type="icon" source="./media/icons/yes-icon.png"::: | | Vulnerability Assessment using Qualys | | :::image type="icon" source="./media/icons/yes-icon.png"::: | | Threat detections: OS level, network layer, control plane | | :::image type="icon" source="./media/icons/yes-icon.png"::: |
defender-for-cloud Supported Machines Endpoint Solutions Clouds Containers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/supported-machines-endpoint-solutions-clouds-containers.md
The **tabs** below show the features that are available, by environment, for Mic
| Domain | Feature | Supported Resources | Release state <sup>[1](#footnote1)</sup> | Windows support | Agentless/Agent-based | Pricing Tier | Azure clouds availability | |--|--|--|--|--|--|--|--| | Compliance | Docker CIS | VM, VMSS | GA | X | Log Analytics agent | Defender for Servers Plan 2 | Commercial clouds<br><br> National clouds: Azure Government, Azure China 21Vianet |
-| Vulnerability Assessment | Registry scan | ACR, Private ACR | GA | ✓ (Preview) | Agentless | Defender for Containers | Commercial clouds<br><br> National clouds: Azure Government, Azure China 21Vianet |
-| Vulnerability Assessment | View vulnerabilities for running images | AKS | Preview | ✓ (Preview) | Defender profile | Defender for Containers | Commercial clouds |
-| Hardening | Control plane recommendations | ACR, AKS | GA | ✓ | Agentless | Free | Commercial clouds<br><br> National clouds: Azure Government, Azure China 21Vianet |
+| Vulnerability Assessment | Registry scan | ACR, Private ACR | GA | Preview | Agentless | Defender for Containers | Commercial clouds<br><br> National clouds: Azure Government, Azure China 21Vianet |
+| Vulnerability Assessment | View vulnerabilities for running images | AKS | Preview | Preview | Defender profile | Defender for Containers | Commercial clouds |
+| Hardening | Control plane recommendations | ACR, AKS | GA | GA | Agentless | Free | Commercial clouds<br><br> National clouds: Azure Government, Azure China 21Vianet |
| Hardening | Kubernetes data plane recommendations | AKS | GA | X | Azure Policy | Free | Commercial clouds<br><br> National clouds: Azure Government, Azure China 21Vianet |
-| Runtime protection| Threat detection (control plane)| AKS | GA | ✓ | Agentless | Defender for Containers | Commercial clouds<br><br> National clouds: Azure Government, Azure China 21Vianet |
+| Runtime protection| Threat detection (control plane)| AKS | GA | GA | Agentless | Defender for Containers | Commercial clouds<br><br> National clouds: Azure Government, Azure China 21Vianet |
| Runtime protection| Threat detection (workload) | AKS | Preview | X | Defender profile | Defender for Containers | Commercial clouds |
-| Discovery and provisioning | Discovery of unprotected clusters | AKS | GA | ✓ | Agentless | Free | Commercial clouds<br><br> National clouds: Azure Government, Azure China 21Vianet |
-| Discovery and provisioning | Collection of control plane threat data | AKS | GA | ✓ | Agentless | Defender for Containers | Commercial clouds<br><br> National clouds: Azure Government, Azure China 21Vianet |
+| Discovery and provisioning | Discovery of unprotected clusters | AKS | GA | GA | Agentless | Free | Commercial clouds<br><br> National clouds: Azure Government, Azure China 21Vianet |
+| Discovery and provisioning | Collection of control plane threat data | AKS | GA | GA | Agentless | Defender for Containers | Commercial clouds<br><br> National clouds: Azure Government, Azure China 21Vianet |
| Discovery and provisioning | Auto provisioning of Defender profile | AKS | Preview | X | Agentless | Defender for Containers | Commercial clouds<br><br> National clouds: Azure Government, Azure China 21Vianet | | Discovery and provisioning | Auto provisioning of Azure policy add-on | AKS | GA | X | Agentless | Free | Commercial clouds<br><br> National clouds: Azure Government, Azure China 21Vianet |
The **tabs** below show the features that are available, by environment, for Mic
| Vulnerability Assessment | View vulnerabilities for running images | - | - | - | - | - | | Hardening | Control plane recommendations | - | - | - | - | - | | Hardening | Kubernetes data plane recommendations | EKS | Preview | X | Azure Policy extension | Defender for Containers |
-| Runtime protection| Threat detection (control plane)| EKS | Preview | ✓ | Agentless | Defender for Containers |
+| Runtime protection| Threat detection (control plane)| EKS | Preview | Preview | Agentless | Defender for Containers |
| Runtime protection| Threat detection (workload) | EKS | Preview | X | Defender extension | Defender for Containers | | Discovery and provisioning | Discovery of unprotected clusters | EKS | Preview | X | Agentless | Free |
-| Discovery and provisioning | Collection of control plane threat data | EKS | Preview | ✓ | Agentless | Defender for Containers |
+| Discovery and provisioning | Collection of control plane threat data | EKS | Preview | Preview | Agentless | Defender for Containers |
| Discovery and provisioning | Auto provisioning of Defender extension | - | - | - | - | - | | Discovery and provisioning | Auto provisioning of Azure policy extension | - | - | - | - | - |
The **tabs** below show the features that are available, by environment, for Mic
| Vulnerability Assessment | View vulnerabilities for running images | - | - | - | - | - | | Hardening | Control plane recommendations | - | - | - | - | - | | Hardening | Kubernetes data plane recommendations | GKE | Preview | X | Azure Policy extension | Defender for Containers |
-| Runtime protection| Threat detection (control plane)| GKE | Preview | ✓ | Agentless | Defender for Containers |
+| Runtime protection| Threat detection (control plane)| GKE | Preview | Preview | Agentless | Defender for Containers |
| Runtime protection| Threat detection (workload) | GKE | Preview | X | Defender extension | Defender for Containers | | Discovery and provisioning | Discovery of unprotected clusters | GKE | Preview | X | Agentless | Free |
-| Discovery and provisioning | Collection of control plane threat data | GKE | Preview | ✓ | Agentless | Defender for Containers |
+| Discovery and provisioning | Collection of control plane threat data | GKE | Preview | Preview | Agentless | Defender for Containers |
| Discovery and provisioning | Auto provisioning of Defender extension | GKE | Preview | X | Agentless | Defender for Containers | | Discovery and provisioning | Auto provisioning of Azure policy extension | GKE | Preview | X | Agentless | Defender for Containers |
The **tabs** below show the features that are available, by environment, for Mic
| Domain | Feature | Supported Resources | Release state <sup>[1](#footnote1)</sup> | Windows support | Agentless/Agent-based | Pricing tier | |--|--| -- | -- | -- | -- | --| | Compliance | Docker CIS | Arc enabled VMs | Preview | X | Log Analytics agent | Defender for Servers Plan 2 |
-| Vulnerability Assessment | Registry scan | ACR, Private ACR | GA | ✓ (Preview) | Agentless | Defender for Containers |
+| Vulnerability Assessment | Registry scan | ACR, Private ACR | GA | Preview | Agentless | Defender for Containers |
| Vulnerability Assessment | View vulnerabilities for running images | Arc enabled K8s clusters | Preview | X | Defender extension | Defender for Containers | | Hardening | Control plane recommendations | - | - | - | - | - | | Hardening | Kubernetes data plane recommendations | Arc enabled K8s clusters | Preview | X | Azure Policy extension | Defender for Containers |
-| Runtime protection| Threat detection (control plane)| Arc enabled K8s clusters | Preview | ✓ | Defender extension | Defender for Containers |
+| Runtime protection| Threat detection (control plane)| Arc enabled K8s clusters | Preview | Preview | Defender extension | Defender for Containers |
| Runtime protection| Threat detection (workload) | Arc enabled K8s clusters | Preview | X | Defender extension | Defender for Containers | | Discovery and provisioning | Discovery of unprotected clusters | Arc enabled K8s clusters | Preview | X | Agentless | Free |
-| Discovery and provisioning | Collection of control plane threat data | Arc enabled K8s clusters | Preview | ✓ | Defender extension | Defender for Containers |
-| Discovery and provisioning | Auto provisioning of Defender extension | Arc enabled K8s clusters | Preview | ✓ | Agentless | Defender for Containers |
+| Discovery and provisioning | Collection of control plane threat data | Arc enabled K8s clusters | Preview | Preview | Defender extension | Defender for Containers |
+| Discovery and provisioning | Auto provisioning of Defender extension | Arc enabled K8s clusters | Preview | Preview | Agentless | Defender for Containers |
| Discovery and provisioning | Auto provisioning of Azure policy extension | Arc enabled K8s clusters | Preview | X | Agentless | Defender for Containers | <sup><a name="footnote1"></a>1</sup> Specific features are in preview. The [Azure Preview Supplemental Terms](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
defender-for-cloud Upcoming Changes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/upcoming-changes.md
Title: Important changes coming to Microsoft Defender for Cloud description: Upcoming changes to Microsoft Defender for Cloud that you might need to be aware of and for which you might need to plan Previously updated : 05/31/2022 Last updated : 06/28/2022 # Important upcoming changes to Microsoft Defender for Cloud
If you're looking for the latest release notes, you'll find them in the [What's
| [GA support for Arc-enabled Kubernetes clusters](#ga-support-for-arc-enabled-kubernetes-clusters) | July 2022 | | [Changes to recommendations for managing endpoint protection solutions](#changes-to-recommendations-for-managing-endpoint-protection-solutions) | June 2022 | | [Key Vault recommendations changed to "audit"](#key-vault-recommendations-changed-to-audit) | June 2022 |
-| [Multiple changes to identity recommendations](#multiple-changes-to-identity-recommendations) | June 2022 |
+| [Multiple changes to identity recommendations](#multiple-changes-to-identity-recommendations) | July 2022 |
| [Deprecating three VM alerts](#deprecating-three-vm-alerts) | June 2022| | [Deprecate API App policies for App Service](#deprecate-api-app-policies-for-app-service) | July 2022 |
The Key Vault recommendations listed here are currently disabled so that they do
### Multiple changes to identity recommendations
-**Estimated date for change:** June 2022
+**Estimated date for change:** July 2022
Defender for Cloud includes multiple recommendations for improving the management of users and accounts. In July, we'll be making the changes outlined below.
event-grid Storage Upload Process Images https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/storage-upload-process-images.md
The *images* container's public access is set to `off`. The *thumbnails* contain
Get the storage account key by using the [Get-AzStorageAccountKey](/powershell/module/az.storage/get-azstorageaccountkey) command. Then, use this key to create two containers with the [New-AzStorageContainer](/powershell/module/az.storage/new-azstoragecontainer) command. ```powershell
-$blobStorageAccountKey = (Get-AzStorageAccountKey -ResourceGroupName myResourceGroup -Name $blobStorageAccount).Key1
+$blobStorageAccountKey = ((Get-AzStorageAccountKey -ResourceGroupName myResourceGroup -Name $blobStorageAccount) | Where-Object {$_.KeyName -eq "key1"}).Value
$blobStorageContext = New-AzStorageContext -StorageAccountName $blobStorageAccount -StorageAccountKey $blobStorageAccountKey New-AzStorageContainer -Name images -Context $blobStorageContext
expressroute Expressroute Howto Erdirect https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/expressroute-howto-erdirect.md
ExpressRoute Direct and ExpressRoute circuit(s) in different subscriptions or Az
Add-AzExpressRoutePortAuthorization -Name $Name -ExpressRoutePort $ERPort Set-AzExpressRoutePort -ExpressRoutePort $ERPort ```
+
+ Sample output:
+ ```powershell
+ Name : ERDirectAuthorization_1
+ Id : /subscriptions/72882272-d67e-4aec-af0b-4ab6e110ee46/resourceGroups/erdirect-rg/providers/Microsoft.Network/expressRoutePorts/erdirect/authorizations/ERDirectAuthorization_1
+ Etag : W/"24cac874-dfb4-4931-9447-28e67edd5155"
+ AuthorizationKey : 6e1fc16a-0777-4cdc-a206-108f2f0f67e8
+ AuthorizationUseStatus : Available
+ ProvisioningState : Succeeded
+ CircuitResourceUri :
+ ```
1. Verify the authorization was created successfully and store ExpressRoute Direct authorization into a variable: ```powershell
- $ERDirect = Get-AzExpressRoutePort -Name $Name -ResourceGroupName $ResourceGroupName
- $ERDirect
+ $ERDirectAuthorization = Get-AzExpressRoutePortAuthorization -ExpressRoutePortObject $ERDirect
+ $ERDirectAuthorization
+ ```
+
+ Sample output:
+ ```powershell
+ Name : ERDirectAuthorization_1
+ Id : /subscriptions/72882272-d67e-4aec-af0b-4ab6e110ee46/resourceGroups/erdirect-rg/providers/Microsoft.Network/expressRoutePorts/erdirect/authorizations/ERDirectAuthorization_1
+ Etag : W/"24cac874-dfb4-4931-9447-28e67edd5155"
+ AuthorizationKey : 6e1fc16a-0777-4cdc-a206-108f2f0f67e8
+ AuthorizationUseStatus : Available
+ ProvisioningState : Succeeded
+ CircuitResourceUri :
``` 1. Redeem the authorization to create the ExpressRoute Direct circuit with the following command:
expressroute Expressroute Monitoring Metrics Alerts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/expressroute-monitoring-metrics-alerts.md
You can view the Rx light level (the light level that the ExpressRoute Direct po
:::image type="content" source="./media/expressroute-monitoring-metrics-alerts/rxlight-level-per-link.jpg" alt-text="ER Direct line Rx Light Level":::
+>[!NOTE]
+> ExpressRoute Direct connectivity is hosted across different device platforms. Some ExpressRoute Direct connections will support a split view for Rx light levels by lane. However, this is not supported on all deployments.
+>
+ ### <a name = "txlight"></a>Tx Light Level - Split by link Aggregation type: *Avg*
You can view the Tx light level (the light level that the ExpressRoute Direct po
:::image type="content" source="./media/expressroute-monitoring-metrics-alerts/txlight-level-per-link.jpg" alt-text="ER Direct line Tx Light Level":::
+>[!NOTE]
+> ExpressRoute Direct connectivity is hosted across different device platforms. Some ExpressRoute Direct connections will support a split view for Tx light levels by lane. However, this is not supported on all deployments.
+>
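If you'd rather pull these light-level readings programmatically, here's a minimal Azure CLI sketch; the resource ID is a placeholder, and the metric name (``RxLightLevel``) is an assumption to verify against the metrics actually exposed by your ExpressRoute Direct port:

```azurecli-interactive
# Query the Rx light level for a hypothetical ExpressRoute Direct port
# over 5-minute intervals.
az monitor metrics list \
  --resource "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.Network/expressRoutePorts/<port-name>" \
  --metric "RxLightLevel" \
  --interval PT5M
```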
+ ## ExpressRoute Virtual Network Gateway Metrics Aggregation type: *Avg*
firewall-manager Secure Cloud Network Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall-manager/secure-cloud-network-powershell.md
In this tutorial, you learn how to:
> * Deploy Azure Firewall and configure custom routing > * Test connectivity
+> [!IMPORTANT]
+> A Virtual WAN is a collection of hubs and services made available inside the hub. You can deploy as many Virtual WANs as you need. In a Virtual WAN hub, there are multiple services such as VPN, ExpressRoute, and so on. If the region supports Availability Zones, each of these services is automatically deployed across **Availability Zones**, *except* Azure Firewall. To upgrade an existing Azure Virtual WAN Hub to a Secure Hub and have the Azure Firewall use Availability Zones, you must use Azure PowerShell, as described later in this article.
+ ## Prerequisites - If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
In this tutorial, you learn how to:
This tutorial requires that you run Azure PowerShell locally on PowerShell 7. To install PowerShell 7, see [Migrating from Windows PowerShell 5.1 to PowerShell 7](/powershell/scripting/install/migrating-from-windows-powershell-51-to-powershell-7?view=powershell-7&preserve-view=true).
+- "Az.Network" module version must be 4.17.0 or higher.
+ ## Sign in to Azure ```azurepowershell
$RG = "vwan-rg"
$Location = "westeurope" $VwanName = "vwan" $HubName = "hub1"
+$FirewallTier = "Standard" # or "Premium"
+ # Create Resource Group, Virtual WAN and Virtual Hub New-AzResourceGroup -Name $RG -Location $Location $Vwan = New-AzVirtualWan -Name $VwanName -ResourceGroupName $RG -Location $Location -AllowVnetToVnetTraffic -AllowBranchToBranchTraffic -VirtualWANType "Standard"
$AzFWHubIPs = New-AzFirewallHubIpAddress -PublicIP $AzFWPIPs
# New Firewall $AzFW = New-AzFirewall -Name "azfw1" -ResourceGroupName $RG -Location $Location ` -VirtualHubId $Hub.Id -FirewallPolicyId $FWPolicy.Id `
- -Sku AZFW_Hub -HubIPAddress $AzFWHubIPs
+ -SkuName "AZFW_Hub" -HubIPAddress $AzFWHubIPs `
+ -SkuTier $FirewallTier
```
+> [!NOTE]
+> The preceding Firewall creation command does **not** use Availability Zones. If you want to use this feature, an additional parameter **-Zone** is required. An example is provided in the upgrade section at the end of this article.
+ Enabling logging from the Azure Firewall to Azure Monitor is optional, but in this example you use the Firewall logs to prove that traffic is traversing the firewall: ```azurepowershell
Get-AzEffectiveRouteTable -ResourceGroupName $RG -NetworkInterfaceName $NIC2.Nam
Now generate traffic from one Virtual Machine to the other, and verify that it's dropped in the Azure Firewall. In the following SSH commands you need to accept the virtual machines fingerprints, and provide the password that you defined when you created the virtual machines. In this example, you're going to send five ICMP echo request packets from the virtual machine in spoke1 to spoke2, plus a TCP connection attempt on port 22 using the Linux utility `nc` (with the `-vz` flags it just sends a connection request and shows the result). You should see the ping failing, and the TCP connection attempt on port 22 succeeding, since it's allowed by the network rule you configured previously: ```azurepowershell
-# Connect to one VM and ping the other. It shouldnt work, because the firewall should drop the traffic, since no rule for ICMP is configured
+# Connect to one VM and ping the other. It should not work, because the firewall should drop the traffic, since no rule for ICMP is configured
ssh $AzFWPublicAddress -p 10001 -l $VMLocalAdminUser "ping $Spoke2VMPrivateIP -c 5" # Connect to one VM and send a TCP request on port 22 to the other. It should work, because the firewall is configured to allow SSH traffic (port 22) ssh $AzFWPublicAddress -p 10001 -l $VMLocalAdminUser "nc -vz $Spoke2VMPrivateIP 22"
To delete the test environment, you can remove the resource group with all conta
Remove-AzResourceGroup -Name $RG
```
+## Upgrade an existing Hub with Availability Zones
+
+The previous procedure uses Azure PowerShell to create a **new** Azure Virtual WAN Hub, and then immediately converts it to a Secured Hub using Azure Firewall.
+A similar approach can be applied to an **existing** Azure Virtual WAN Hub. Firewall Manager can also be used for the conversion, but it isn't possible to deploy Azure Firewall across Availability Zones without a script-based approach.
+You can use the following code snippet to convert an existing Azure Virtual WAN Hub to a Secured Hub, using an Azure Firewall deployed across all three Availability Zones.
+
+```azurepowershell
+# Variable definition
+$RG = "vwan-rg"
+$Location = "westeurope"
+$VwanName = "vwan"
+$HubName = "hub1"
+$FirewallName = "azfw1"
+$FirewallTier = "Standard" # or "Premium"
+$FirewallPolicyName = "VwanFwPolicy"
+
+# Get references to vWAN and vWAN Hub to convert #
+$Vwan = Get-AzVirtualWan -ResourceGroupName $RG -Name $VwanName
+$Hub = Get-AzVirtualHub -ResourceGroupName $RG -Name $HubName
+
+# Create a new Firewall Policy #
+$FWPolicy = New-AzFirewallPolicy -Name $FirewallPolicyName -ResourceGroupName $RG -Location $Location
+
+# Create a new Firewall Public IP #
+$AzFWPIPs = New-AzFirewallHubPublicIpAddress -Count 1
+$AzFWHubIPs = New-AzFirewallHubIpAddress -PublicIP $AzFWPIPs
+
+# Create Firewall instance #
+$AzFW = New-AzFirewall -Name $FirewallName -ResourceGroupName $RG -Location $Location `
+ -VirtualHubId $Hub.Id -FirewallPolicyId $FWPolicy.Id `
+ -SkuName "AZFW_Hub" -HubIPAddress $AzFWHubIPs `
+ -SkuTier $FirewallTier `
+ -Zone 1,2,3
+```
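+
+To confirm the zone placement without the portal, you can inspect the firewall object. This is a minimal check, assuming the variables from the preceding script are still set:
+
+```azurepowershell
+# List the zones the firewall instance was deployed to (expected: 1, 2, 3)
+(Get-AzFirewall -Name $FirewallName -ResourceGroupName $RG).Zones
+```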
+After you run this script, Availability Zones should appear in the secured hub properties as shown in the following screenshot:
++
+After the Azure Firewall is deployed, a configuration procedure must be completed as described in the previous *Deploy Azure Firewall and configure custom routing* section.
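+
+As a hedged, condensed sketch of that routing step, the following snippet sends both private and Internet traffic through the firewall. It assumes the default Virtual WAN route table name (`defaultRouteTable`) and the variables defined in the script above; treat it as illustrative, not a replacement for the full procedure:
+
+```azurepowershell
+# Point the hub's default route table at the firewall
+$VirtualHub = Get-AzVirtualHub -ResourceGroupName $RG -Name $HubName
+$AzFWId = $VirtualHub.AzureFirewall.Id
+$AzFWRoute = New-AzVHubRoute -Name "all_traffic" -Destination @("0.0.0.0/0", "10.0.0.0/8") -DestinationType "CIDR" -NextHop $AzFWId -NextHopType "ResourceId"
+Update-AzVHubRouteTable -ResourceGroupName $RG -VirtualHubName $HubName -Name "defaultRouteTable" -Route @($AzFWRoute)
+```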
+ ## Next steps > [!div class="nextstepaction"]
firewall-manager Secure Cloud Network https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall-manager/secure-cloud-network.md
In this tutorial, you learn how to:
> * Create a firewall policy and secure your hub > * Test the firewall
+> [!IMPORTANT]
+> The procedure in this tutorial uses Azure Firewall Manager to create a new Azure Virtual WAN secured hub.
+> You can use Firewall Manager to upgrade an existing hub, but you can't configure Azure **Availability Zones** for Azure Firewall.
+> It's also possible to convert an existing hub to a secured hub using the Azure portal, as described in [Configure Azure Firewall in a Virtual WAN hub](../virtual-wan/howto-firewall.md), but as with Azure Firewall Manager, you can't configure **Availability Zones**.
+> To upgrade an existing hub and specify **Availability Zones** for Azure Firewall (recommended), you must follow the upgrade procedure in [Tutorial: Secure your virtual hub using Azure PowerShell](secure-cloud-network-powershell.md).
+ ## Prerequisites If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
The two virtual networks will each have a workload server in them and will be pr
1. From the Azure portal home page, select **Create a resource**. 2. Search for **Virtual network**, and select **Create**.
-2. For **Subscription**, select your subscription.
-1. For **Resource group**, select **Create new**, and type **fw-manager-rg** for the name and select **OK**.
-2. For **Name**, type **Spoke-01**.
-3. For **Region**, select **(US) East US**.
-4. Select **Next: IP Addresses**.
-1. For **Address space**, type **10.0.0.0/16**.
-1. Select **Add subnet**.
-1. For **Subnet name**, type **Workload-01-SN**.
-1. For **Subnet address range**, type **10.0.1.0/24**.
-1. Select **Add**.
-1. Select **Review + create**.
-1. Select **Create**.
+3. For **Subscription**, select your subscription.
+4. For **Resource group**, select **Create new**, and type **fw-manager-rg** for the name and select **OK**.
+5. For **Name**, type **Spoke-01**.
+6. For **Region**, select **(US) East US**.
+7. Select **Next: IP Addresses**.
+8. For **Address space**, type **10.0.0.0/16**.
+9. Select **Add subnet**.
+10. For **Subnet name**, type **Workload-01-SN**.
+11. For **Subnet address range**, type **10.0.1.0/24**.
+12. Select **Add**.
+13. Select **Review + create**.
+14. Select **Create**.
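+
+If you prefer scripting this step, the following Azure PowerShell sketch creates an equivalent virtual network. Names and address ranges mirror the portal steps above; treat it as an illustrative alternative, not part of the tutorial:
+
+```azurepowershell
+New-AzResourceGroup -Name "fw-manager-rg" -Location "eastus"
+$Subnet = New-AzVirtualNetworkSubnetConfig -Name "Workload-01-SN" -AddressPrefix "10.0.1.0/24"
+New-AzVirtualNetwork -Name "Spoke-01" -ResourceGroupName "fw-manager-rg" -Location "eastus" -AddressPrefix "10.0.0.0/16" -Subnet $Subnet
+```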
Repeat this procedure to create another similar virtual network:
Create your secured virtual hub using Firewall Manager.
2. In the search box, type **Firewall Manager** and select **Firewall Manager**. 3. On the **Firewall Manager** page under **Deployments**, select **Virtual hubs**. 4. On the **Firewall Manager | Virtual hubs** page, select **Create new secured virtual hub**.+
+ :::image type="content" source="./media/secure-cloud-network/1-create-new-secured-virtual-hub.jpg" alt-text="Screenshot of creating a new secured virtual hub." lightbox="./media/secure-cloud-network/1-create-new-secured-virtual-hub.jpg":::
+ 5. For **Resource group**, select **fw-manager-rg**.
-7. For **Region**, select **East US**.
-1. For the **Secured virtual hub name**, type **Hub-01**.
-2. For **Hub address space**, type **10.2.0.0/16**.
-3. For the new virtual WAN name, type **Vwan-01**.
-4. Leave the **Include VPN gateway to enable Trusted Security Partners** check box cleared.
-5. Select **Next: Azure Firewall**.
-6. Accept the default **Azure Firewall** **Enabled** setting.
-1. For **Azure Firewall tier**, select **Standard**.
-1. Select **Next: Trusted Security Partner**.
-1. Accept the default **Trusted Security Partner** **Disabled** setting, and select **Next: Review + create**.
-1. Select **Create**.
-
- It takes about 30 minutes to deploy.
+6. For **Region**, select **East US**.
+7. For the **Secured virtual hub name**, type **Hub-01**.
+8. For **Hub address space**, type **10.2.0.0/16**.
+9. For the new virtual WAN name, type **Vwan-01**.
+10. Select **New vWAN**, and for **Type**, select **Standard**.
+11. Leave the **Include VPN gateway to enable Trusted Security Partners** check box cleared.
+
+ :::image type="content" source="./media/secure-cloud-network/2-create-new-secured-virtual-hub.png" alt-text="Screenshot of creating a new virtual hub with properties." lightbox="./media/secure-cloud-network/2-create-new-secured-virtual-hub.png":::
+
+12. Select **Next: Azure Firewall**.
+13. Accept the default **Azure Firewall** **Enabled** setting.
+14. For **Azure Firewall tier**, select **Standard**.
+15. Select the desired combination of **Availability Zones**.
+
+> [!IMPORTANT]
+> A Virtual WAN is a collection of hubs and services made available inside the hub. You can deploy as many Virtual WANs as you need. In a Virtual WAN hub, there are multiple services like VPN, ExpressRoute, and so on. If the region supports Availability Zones, each of these services is automatically deployed across Availability Zones, except Azure Firewall. To align with Azure Virtual WAN resiliency, you should select all available Availability Zones.
+
+ :::image type="content" source="./media/secure-cloud-network/3-azure-firewall-parameters-with-zones.png" alt-text="Screenshot of configuring Azure Firewall parameters." lightbox="./media/secure-cloud-network/3-azure-firewall-parameters-with-zones.png":::
+
+16. Select the **Firewall Policy** to apply to the new Azure Firewall instance. Select **Default Deny Policy**; you'll refine your settings later in this article.
+17. Select **Next: Trusted Security Partner**.
+
+ :::image type="content" source="./media/secure-cloud-network/4-trusted-security-partner.png" alt-text="Screenshot of configuring Trusted Partners parameters." lightbox="./media/secure-cloud-network/4-trusted-security-partner.png":::
+
+18. Accept the default **Trusted Security Partner** **Disabled** setting, and select **Next: Review + create**.
+19. Select **Create**.
+
+ :::image type="content" source="./media/secure-cloud-network/5-confirm-and-create.png" alt-text="Screenshot of creating the Firewall instance." lightbox="./media/secure-cloud-network/5-confirm-and-create.png":::
+
+> [!NOTE]
+> It may take up to 30 minutes to create a secured virtual hub.
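+
+Instead of refreshing the portal, you can poll the hub's provisioning state with Azure PowerShell. A minimal sketch, assuming the names used above:
+
+```azurepowershell
+# Returns "Succeeded" when the hub deployment is complete
+(Get-AzVirtualHub -ResourceGroupName "fw-manager-rg" -Name "Hub-01").ProvisioningState
+```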
You can get the firewall public IP address after the deployment completes. 1. Open **Firewall Manager**. 2. Select **Virtual hubs**. 3. Select **hub-01**.
-7. Select **Public IP configuration**.
-8. Note the public IP address to use later.
+4. Select **Public IP configuration**.
+5. Note the public IP address to use later.
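+
+You can also read the firewall's public IP with Azure PowerShell. A hedged sketch; the exact property path may vary between `Az.Network` versions:
+
+```azurepowershell
+# Get the hub firewall and list its public IP addresses
+$AzFW = Get-AzFirewall -ResourceGroupName "fw-manager-rg"
+$AzFW.HubIPAddresses.PublicIPs.Addresses
+```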
### Connect the hub and spoke virtual networks
Now you can peer the hub and spoke virtual networks.
1. Select the **fw-manager-rg** resource group, then select the **Vwan-01** virtual WAN. 2. Under **Connectivity**, select **Virtual network connections**.+
+ :::image type="content" source="./media/secure-cloud-network/7b-connect-the-hub-and-spoke.png" alt-text="Screenshot of adding Virtual Network connections." lightbox="./media/secure-cloud-network/7b-connect-the-hub-and-spoke.png":::
+ 3. Select **Add connection**. 4. For **Connection name**, type **hub-spoke-01**. 5. For **Hubs**, select **Hub-01**. 6. For **Resource group**, select **fw-manager-rg**. 7. For **Virtual network**, select **Spoke-01**. 8. Select **Create**.-
-Repeat to connect the **Spoke-02** virtual network: connection name - **hub-spoke-02**
+9. Repeat to connect the **Spoke-02** virtual network: connection name - **hub-spoke-02**
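+
+As a scripted alternative for this step, a hub virtual network connection can also be created with Azure PowerShell; a minimal sketch using the names from this tutorial:
+
+```azurepowershell
+# Peer Spoke-01 with the secured hub
+$Spoke1 = Get-AzVirtualNetwork -Name "Spoke-01" -ResourceGroupName "fw-manager-rg"
+New-AzVirtualHubVnetConnection -ResourceGroupName "fw-manager-rg" -VirtualHubName "Hub-01" -Name "hub-spoke-01" -RemoteVirtualNetwork $Spoke1
+```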
## Deploy the servers
Repeat to connect the **Spoke-02** virtual network: connection name - **hub-spok
|Password |type a password| 4. Under **Inbound port rules**, for **Public inbound ports**, select **None**.
-6. Accept the other defaults and select **Next: Disks**.
-7. Accept the disk defaults and select **Next: Networking**.
-8. Select **Spoke-01** for the virtual network and select **Workload-01-SN** for the subnet.
-9. For **Public IP**, select **None**.
-11. Accept the other defaults and select **Next: Management**.
-12. Select **Disable** to disable boot diagnostics. Accept the other defaults and select **Review + create**.
-13. Review the settings on the summary page, and then select **Create**.
+5. Accept the other defaults and select **Next: Disks**.
+6. Accept the disk defaults and select **Next: Networking**.
+7. Select **Spoke-01** for the virtual network and select **Workload-01-SN** for the subnet.
+8. For **Public IP**, select **None**.
+9. Accept the other defaults and select **Next: Management**.
+10. Select **Disable** to disable boot diagnostics. Accept the other defaults and select **Review + create**.
+11. Review the settings on the summary page, and then select **Create**.
Use the information in the following table to configure another virtual machine named **Srv-Workload-02**. The rest of the configuration is the same as the **Srv-workload-01** virtual machine.
After the servers are deployed, select a server resource, and in **Networking**
A firewall policy defines collections of rules to direct traffic on one or more Secured virtual hubs. You'll create your firewall policy and then secure your hub. 1. From Firewall Manager, select **Azure Firewall policies**.+
+ :::image type="content" source="./media/secure-cloud-network/6-create-azure-firewall-policy1.png" alt-text="Screenshot of creating an Azure Policy with first step." lightbox="./media/secure-cloud-network/6-create-azure-firewall-policy1.png":::
+ 2. Select **Create Azure Firewall Policy**.
-1. For **Resource group**, select **fw-manager-rg**.
-1. Under **Policy details**, for the **Name** type **Policy-01** and for **Region** select **East US**.
-1. For **Policy tier**, select **Standard**.
-1. Select **Next: DNS Settings**.
-1. Select **Next: TLS Inspection**.
-1. Select **Next : Rules**.
-1. On the **Rules** tab, select **Add a rule collection**.
-1. On the **Add a rule collection** page, type **App-RC-01** for the **Name**.
-1. For **Rule collection type**, select **Application**.
-1. For **Priority**, type **100**.
-1. Ensure **Rule collection action** is **Allow**.
-1. For the rule **Name** type **Allow-msft**.
-1. For the **Source type**, select **IP address**.
-1. For **Source**, type **\***.
-1. For **Protocol**, type **http,https**.
-1. Ensure **Destination type** is **FQDN**.
-1. For **Destination**, type **\*.microsoft.com**.
-1. Select **Add**.
-
-Add a DNAT rule so you can connect a remote desktop to the **Srv-Workload-01** virtual machine.
-
-1. Select **Add/Rule collection**.
-1. For **Name**, type **dnat-rdp**.
-1. For **Rule collection type**, select **DNAT**.
-1. For **Priority**, type **100**.
-1. For the rule **Name** type **Allow-rdp**.
-1. For the **Source type**, select **IP address**.
-1. For **Source**, type **\***.
-1. For **Protocol**, select **TCP**.
-1. For **Destination Ports**, type **3389**.
-1. For **Destination Type**, select **IP Address**.
-1. For **Destination**, type the firewall public IP address that you noted previously.
-1. For **Translated address**, type the private IP address for **Srv-Workload-01** that you noted previously.
-1. For **Translated port**, type **3389**.
-1. Select **Add**.
-
-Add a network rule so you can connect a remote desktop from **Srv-Workload-01** to **Srv-Workload-02**.
-
-1. Select **Add a rule collection**.
-2. For **Name**, type **vnet-rdp**.
-3. For **Rule collection type**, select **Network**.
-4. For **Priority**, type **100**.
-1. For **Rule collection action**, select **Allow**.
-1. For the rule **Name** type **Allow-vnet**.
-1. For the **Source type**, select **IP address**.
-1. For **Source**, type **\***.
-1. For **Protocol**, select **TCP**.
-1. For **Destination Ports**, type **3389**.
-1. For **Destination Type**, select **IP Address**.
-1. For **Destination**, type the **Srv-Workload-02** private IP address that you noted previously.
-1. Select **Add**.
-1. Select **Review + create**.
-1. Select **Create**.
+
+ :::image type="content" source="./media/secure-cloud-network/6-create-azure-firewall-policy-basics 2.png" alt-text="Screenshot of configuring Azure Policy settings in first step." lightbox="./media/secure-cloud-network/6-create-azure-firewall-policy-basics 2.png":::
+
+3. For **Resource group**, select **fw-manager-rg**.
+4. Under **Policy details**, for the **Name** type **Policy-01** and for **Region** select **East US**.
+5. For **Policy tier**, select **Standard**.
+6. Select **Next: DNS Settings**.
+
+ :::image type="content" source="./media/secure-cloud-network/6-create-azure-firewall-policy-dns3.png" alt-text="Screenshot of configuring DNS settings." lightbox="./media/secure-cloud-network/6-create-azure-firewall-policy-dns3.png":::
+
+7. Select **Next: TLS Inspection**.
+
+ :::image type="content" source="./media/secure-cloud-network/6-create-azure-firewall-policy-tls4.png" alt-text="Screenshot of configuring TLS settings." lightbox="./media/secure-cloud-network/6-create-azure-firewall-policy-tls4.png":::
+
+8. Select **Next : Rules**.
+9. On the **Rules** tab, select **Add a rule collection**.
+
+ :::image type="content" source="./media/secure-cloud-network/6-create-azure-firewall-policy-add-rule-collection6.png" alt-text="Screenshot of configuring Rule Collection." lightbox="./media/secure-cloud-network/6-create-azure-firewall-policy-add-rule-collection6.png":::
+
+10. On the **Add a rule collection** page, type **App-RC-01** for the **Name**.
+11. For **Rule collection type**, select **Application**.
+12. For **Priority**, type **100**.
+13. Ensure **Rule collection action** is **Allow**.
+14. For the rule **Name** type **Allow-msft**.
+15. For the **Source type**, select **IP address**.
+16. For **Source**, type **\***.
+17. For **Protocol**, type **http,https**.
+18. Ensure **Destination type** is **FQDN**.
+19. For **Destination**, type **\*.microsoft.com**.
+20. Select **Add**.
+
+21. Add a **DNAT rule** so you can connect a remote desktop to the **Srv-Workload-01** virtual machine.
+
+ 1. Select **Add/Rule collection**.
+ 2. For **Name**, type **dnat-rdp**.
+ 3. For **Rule collection type**, select **DNAT**.
+ 4. For **Priority**, type **100**.
+ 5. For the rule **Name** type **Allow-rdp**.
+ 6. For the **Source type**, select **IP address**.
+ 7. For **Source**, type **\***.
+ 8. For **Protocol**, select **TCP**.
+ 9. For **Destination Ports**, type **3389**.
+ 10. For **Destination Type**, select **IP Address**.
+ 11. For **Destination**, type the firewall public IP address that you noted previously.
+ 12. For **Translated address**, type the private IP address for **Srv-Workload-01** that you noted previously.
+ 13. For **Translated port**, type **3389**.
+ 14. Select **Add**.
+
+22. Add a **Network rule** so you can connect a remote desktop from **Srv-Workload-01** to **Srv-Workload-02**. (A scripted equivalent appears after this procedure.)
+
+ 1. Select **Add a rule collection**.
+ 2. For **Name**, type **vnet-rdp**.
+ 3. For **Rule collection type**, select **Network**.
+ 4. For **Priority**, type **100**.
+ 5. For **Rule collection action**, select **Allow**.
+ 6. For the rule **Name** type **Allow-vnet**.
+ 7. For the **Source type**, select **IP address**.
+ 8. For **Source**, type **\***.
+ 9. For **Protocol**, select **TCP**.
+ 10. For **Destination Ports**, type **3389**.
+ 11. For **Destination Type**, select **IP Address**.
+ 12. For **Destination**, type the **Srv-Workload-02** private IP address that you noted previously.
+ 13. Select **Add**.
+ 14. Select **Review + create**.
+ 15. Select **Create**.
+
+23. On the **IDPS** page, select **Next: Threat Intelligence**.
+
+ :::image type="content" source="./media/secure-cloud-network/6-create-azure-firewall-policy-idps7.png" alt-text="Screenshot of configuring IDPS settings." lightbox="./media/secure-cloud-network/6-create-azure-firewall-policy-idps7.png":::
+
+24. On the **Threat Intelligence** page, accept the defaults and select **Review + create**:
+
+ :::image type="content" source="./media/secure-cloud-network/7a-create-azure-firewall-policy-threat-intelligence7.png" alt-text="Screenshot of configuring Threat Intelligence settings." lightbox="./media/secure-cloud-network/7a-create-azure-firewall-policy-threat-intelligence7.png":::
+
+25. Review and confirm your settings, and then select **Create**.
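+
+As a scripted equivalent of the network rule collection in step 22, the following Azure PowerShell sketch builds the same rule. The `$Srv2Ip` placeholder is hypothetical; substitute the private IP address you noted previously:
+
+```azurepowershell
+$Srv2Ip = "<Srv-Workload-02-private-IP>"   # hypothetical placeholder; use your noted IP
+$FWPolicy = Get-AzFirewallPolicy -Name "Policy-01" -ResourceGroupName "fw-manager-rg"
+$Rule = New-AzFirewallPolicyNetworkRule -Name "Allow-vnet" -SourceAddress "*" -Protocol "TCP" -DestinationAddress $Srv2Ip -DestinationPort "3389"
+$Coll = New-AzFirewallPolicyFilterRuleCollection -Name "vnet-rdp" -Priority 100 -ActionType "Allow" -Rule $Rule
+New-AzFirewallPolicyRuleCollectionGroup -Name "NetworkRuleCollectionGroup" -Priority 200 -RuleCollection $Coll -FirewallPolicyObject $FWPolicy
+```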
## Associate policy Associate the firewall policy with the hub. 1. From Firewall Manager, select **Azure Firewall Policies**.
-1. Select the check box for **Policy-01**.
-1. Select **Manage associations**, **Associate hubs**.
-1. Select **hub-01**.
-1. Select **Add**.
+2. Select the check box for **Policy-01**.
+3. Select **Manage associations**, **Associate hubs**.
+
+ :::image type="content" source="./media/secure-cloud-network/8-associate-policy1.png" alt-text="Screenshot of configuring Policy association." lightbox="./media/secure-cloud-network/8-associate-policy1.png":::
+
+4. Select **hub-01**.
+5. Select **Add**.
+
+ :::image type="content" source="./media/secure-cloud-network/8-associate-policy2.png" alt-text="Screenshot of adding Policy and Hub settings." lightbox="./media/secure-cloud-network/8-associate-policy2.png":::
## Route traffic to your hub
Now you must ensure that network traffic gets routed through your firewall.
3. Under **Settings**, select **Security configuration**. 4. Under **Internet traffic**, select **Azure Firewall**. 5. Under **Private traffic**, select **Send via Azure Firewall**.
-1. Select **Save**.
-1. Select **OK** on the **Warning** dialog.
+6. Select **Save**.
+7. Select **OK** on the **Warning** dialog.
+
+ :::image type="content" source="./media/secure-cloud-network/9a-firewall-warning.png" alt-text="Screenshot of Secure Connections." lightbox="./media/secure-cloud-network/9a-firewall-warning.png":::
+
+ > [!NOTE]
+ > It takes a few minutes to update the route tables.
+8. Verify that both connections show that Azure Firewall secures both internet and private traffic.
- It takes a few minutes to update the route tables.
-1. Verify that the two connections show Azure Firewall secures both Internet and private traffic.
+ :::image type="content" source="./media/secure-cloud-network/9b-secured-connections.png" alt-text="Screenshot of Secure Connections final status." lightbox="./media/secure-cloud-network/9b-secured-connections.png":::
## Test the firewall
Now, test the firewall rules to confirm that it works as expected.
1. Connect a remote desktop to the firewall public IP address, and sign in.
-3. Open Internet Explorer and browse to `https://www.microsoft.com`.
-4. Select **OK** > **Close** on the Internet Explorer security alerts.
+2. Open Internet Explorer and browse to `https://www.microsoft.com`.
+3. Select **OK** > **Close** on the Internet Explorer security alerts.
You should see the Microsoft home page.
-5. Browse to `https://www.google.com`.
+4. Browse to `https://www.google.com`.
You should be blocked by the firewall.
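+
+To check the network rule without a full RDP session, you can run a quick port test from **Srv-Workload-01**; a minimal sketch, with the destination IP as a placeholder:
+
+```azurepowershell
+# Succeeds (TcpTestSucceeded : True) if the firewall allows TCP/3389 to Srv-Workload-02
+Test-NetConnection -ComputerName "<Srv-Workload-02-private-IP>" -Port 3389
+```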
firewall Deploy Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/deploy-bicep.md
+
+ Title: 'Quickstart: Create an Azure Firewall with Availability Zones - Bicep'
+description: In this quickstart, you deploy Azure Firewall using Bicep. The deployment creates one virtual network with three subnets and two Windows Server virtual machines: a jump box and a server.
+++++ Last updated : 06/28/2022+++
+# Quickstart: Deploy Azure Firewall with Availability Zones - Bicep
+
+In this quickstart, you use Bicep to deploy an Azure Firewall in three Availability Zones.
++
+The Bicep file creates a test network environment with a firewall. The network has one virtual network (VNet) with three subnets: *AzureFirewallSubnet*, *ServersSubnet*, and *JumpboxSubnet*. The *ServersSubnet* and *JumpboxSubnet* subnets each have a single, two-core Windows Server virtual machine.
+
+The firewall is in the *AzureFirewallSubnet* subnet, and has an application rule collection with a single rule that allows access to `www.microsoft.com`.
+
+A user-defined route points network traffic from the *ServersSubnet* subnet through the firewall, where the firewall rules are applied.
+
+For more information about Azure Firewall, see [Deploy and configure Azure Firewall using the Azure portal](tutorial-firewall-deploy-portal.md).
+
+## Prerequisites
+
+- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+
+## Review the Bicep file
+
+This Bicep file creates an Azure Firewall with Availability Zones, along with the necessary resources to support the Azure Firewall.
+
+The Bicep file used in this quickstart is from [Azure Quickstart Templates](https://azure.microsoft.com/resources/templates/azurefirewall-with-zones-sandbox).
++
+Multiple Azure resources are defined in the Bicep file:
+
+- [**Microsoft.Storage/storageAccounts**](/azure/templates/microsoft.storage/storageAccounts)
+- [**Microsoft.Network/routeTables**](/azure/templates/microsoft.network/routeTables)
+- [**Microsoft.Network/networkSecurityGroups**](/azure/templates/microsoft.network/networksecuritygroups)
+- [**Microsoft.Network/virtualNetworks**](/azure/templates/microsoft.network/virtualnetworks)
+- [**Microsoft.Network/publicIPAddresses**](/azure/templates/microsoft.network/publicipaddresses)
+- [**Microsoft.Network/networkInterfaces**](/azure/templates/microsoft.network/networkinterfaces)
+- [**Microsoft.Compute/virtualMachines**](/azure/templates/microsoft.compute/virtualmachines)
+- [**Microsoft.Network/azureFirewalls**](/azure/templates/microsoft.network/azureFirewalls)
+
+## Deploy the Bicep file
+
+1. Save the Bicep file as `main.bicep` to your local computer.
+1. Deploy the Bicep file using either Azure CLI or Azure PowerShell.
+
+ # [CLI](#tab/CLI)
+
+ ```azurecli
+ az group create --name exampleRG --location eastus
+ az deployment group create --resource-group exampleRG --template-file main.bicep --parameters adminUsername=<admin-user>
+ ```
+
+ # [PowerShell](#tab/PowerShell)
+
+ ```azurepowershell
+ New-AzResourceGroup -Name exampleRG -Location eastus
+ New-AzResourceGroupDeployment -ResourceGroupName exampleRG -TemplateFile ./main.bicep -adminUsername "<admin-user>"
+ ```
+
+
+
+ > [!NOTE]
+ > Replace **\<admin-user\>** with the administrator login username for the virtual machine. You'll be prompted to enter **adminPassword**.
+
+ When the deployment finishes, you should see a message indicating the deployment succeeded.
+
+## Review deployed resources
+
+Use the Azure portal, Azure CLI, or Azure PowerShell to validate the deployment and review the deployed resources.
+
+# [CLI](#tab/CLI)
+
+```azurecli-interactive
+az resource list --resource-group exampleRG
+```
+
+# [PowerShell](#tab/PowerShell)
+
+```azurepowershell-interactive
+Get-AzResource -ResourceGroupName exampleRG
+```
+++
+To learn about the syntax and properties for a firewall in a Bicep file, see [Microsoft.Network/azureFirewalls](/azure/templates/microsoft.network/azurefirewalls).
+
+## Clean up resources
+
+When you no longer need them, use the Azure portal, Azure CLI, or Azure PowerShell to remove the resource group, firewall, and all related resources.
+
+# [CLI](#tab/CLI)
+
+```azurecli-interactive
+az group delete --name exampleRG
+```
+
+# [PowerShell](#tab/PowerShell)
+
+```azurepowershell-interactive
+Remove-AzResourceGroup -Name exampleRG
+```
+++
+## Next steps
+
+Next, you can monitor the Azure Firewall logs.
+
+> [!div class="nextstepaction"]
+> [Tutorial: Monitor Azure Firewall logs](./firewall-diagnostics.md)
firewall Tutorial Firewall Deploy Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/tutorial-firewall-deploy-portal.md
Previously updated : 11/10/2021 Last updated : 05/25/2022 #Customer intent: As an administrator new to this service, I want to control outbound network access from resources located in an Azure subnet.
After deployment completes, select **Go to resource**.
Azure Firewall is actually a managed service, but virtual appliance works in this situation. 18. For **Next hop address**, type the private IP address for the firewall that you noted previously.
-19. Select **OK**.
+19. Select **Add**.
## Configure an application rule
You can keep your firewall resources to continue testing, or if no longer needed
## Next steps
-[Tutorial: Monitor Azure Firewall logs](./firewall-diagnostics.md)
+[Tutorial: Monitor Azure Firewall logs](./firewall-diagnostics.md)
governance Assign Policy Javascript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/assign-policy-javascript.md
wherever JavaScript can be used, including [bash on Windows 10](/windows/wsl/ins
1. Add a reference to the Azure authentication library. ```bash
- npm install @azure/ms-rest-nodeauth
+ npm install @azure/identity
``` > [!NOTE]
- > Verify in _package.json_ `@azure/arm-policy` is version **3.1.0** or higher,
- > `@azure/arm-policyinsights` is version **3.2.0** or higher, and `@azure/ms-rest-nodeauth` is
- > version **3.0.5** or higher.
+ > Verify in _package.json_ `@azure/arm-policy` is version **5.0.1** or higher,
+ > `@azure/arm-policyinsights` is version **5.0.0** or higher, and `@azure/identity` is
+ > version **2.0.4** or higher.
## Create a policy assignment
identifies resources that aren't compliant to the conditions set in the policy d
```javascript const argv = require("yargs").argv;
- const authenticator = require("@azure/ms-rest-nodeauth");
- const policyObjects = require("@azure/arm-policy");
+ const { DefaultAzureCredential } = require("@azure/identity");
+ const { PolicyClient } = require("@azure/arm-policy");
if (argv.subID && argv.name && argv.displayName && argv.policyDefID && argv.scope && argv.description) { const createAssignment = async () => {
- const credentials = await authenticator.interactiveLogin();
- const client = new policyObjects.PolicyClient(credentials, argv.subID);
- const assignments = new policyObjects.PolicyAssignments(client);
+ const credentials = new DefaultAzureCredential();
+ const client = new PolicyClient(credentials, argv.subID);
- const result = await assignments.create(
+ const result = await client.policyAssignments.create(
argv.scope, argv.name, {
Now that your policy assignment is created, you can identify resources that aren
```javascript const argv = require("yargs").argv;
- const authenticator = require("@azure/ms-rest-nodeauth");
- const policyInsights = require("@azure/arm-policyinsights");
+ const { DefaultAzureCredential } = require("@azure/identity");
+ const { PolicyInsightsClient } = require("@azure/arm-policyinsights");
if (argv.subID && argv.name) { const getStates = async () => {
- const credentials = await authenticator.interactiveLogin();
- const client = new policyInsights.PolicyInsightsClient(credentials);
- const policyStates = new policyInsights.PolicyStates(client);
- const result = await policyStates.listQueryResultsForSubscription(
+ const credentials = new DefaultAzureCredential();
+ const client = new PolicyInsightsClient(credentials);
+ const result = client.policyStates.listQueryResultsForSubscription(
"latest", argv.subID, {
Azure portal view.
- If you wish to remove the installed libraries from your application, run the following command. ```bash
- npm uninstall @azure/arm-policy @azure/arm-policyinsights @azure/ms-rest-nodeauth yargs
+ npm uninstall @azure/arm-policy @azure/arm-policyinsights @azure/identity yargs
``` ## Next steps
governance First Query Javascript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/resource-graph/first-query-javascript.md
works wherever JavaScript can be used, including [bash on Windows 10](/windows/w
1. Add a reference to the Azure authentication library. ```bash
- npm install @azure/ms-rest-nodeauth
+ npm install @azure/identity
``` > [!NOTE]
- > Verify in _package.json_ `@azure/arm-resourcegraph` is version **2.0.0** or higher and
- > `@azure/ms-rest-nodeauth` is version **3.0.3** or higher.
+ > Verify in _package.json_ `@azure/arm-resourcegraph` is version **4.2.1** or higher and
+ > `@azure/identity` is version **2.0.4** or higher.
## Query the Resource Graph
works wherever JavaScript can be used, including [bash on Windows 10](/windows/w
```javascript const argv = require("yargs").argv;
- const authenticator = require("@azure/ms-rest-nodeauth");
- const resourceGraph = require("@azure/arm-resourcegraph");
+ const { DefaultAzureCredential } = require("@azure/identity");
+ const { ResourceGraphClient } = require("@azure/arm-resourcegraph");
if (argv.query) { const query = async () => {
- const credentials = await authenticator.interactiveLogin();
- const client = new resourceGraph.ResourceGraphClient(credentials);
+ const credentials = new DefaultAzureCredential();
+ const client = new ResourceGraphClient(credentials);
const result = await client.resources( { query: argv.query
top five results.
If you wish to remove the installed libraries from your application, run the following command. ```bash
-npm uninstall @azure/arm-resourcegraph @azure/ms-rest-nodeauth yargs
+npm uninstall @azure/arm-resourcegraph @azure/identity yargs
``` ## Next steps
governance Get Resource Changes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/resource-graph/how-to/get-resource-changes.md
Monitor.
> [Guest Configuration for VMs](../../policy/concepts/guest-configuration.md). To view examples of how to query Guest Configuration resources in Resource Graph, view [Azure Resource Graph queries by category - Azure Policy Guest Configuration](../samples/samples-by-category.md#azure-policy-guest-configuration). > [!IMPORTANT]
-> Resource configuration changes only supports changes to resource types from the [Resources table](..//reference/supported-tables-resources.md#resources) in Resource Graph. This does not yet include changes to the resource container resources, such as Subscriptions and Resource groups. Changes are queryable for fourteen days. For longer retention, you can [integrate your Resource Graph query with Azure Logic Apps](../tutorials/logic-app-calling-arg.md) and export your query results to any of the Azure data stores (e.g., Log Analytics) for your desired retention.
+> Resource configuration changes only supports changes to resource types from the [Resources table](..//reference/supported-tables-resources.md#resources) in Resource Graph. This does not yet include changes to the resource container resources, such as Subscriptions and Resource groups. Changes are queryable for fourteen days. For longer retention, you can [integrate your Resource Graph query with Azure Logic Apps](../tutorials/logic-app-calling-arg.md) and export query results to any of the Azure data stores (e.g., Log Analytics) for your desired retention.
## Find detected change events and view change details
resourcechanges
| project changeTime, changeType, id, resourceGroup, type, properties ```
+### Changes in virtual machine size
+```kusto
+resourcechanges
+| extend vmSize = properties.changes["properties.hardwareProfile.vmSize"], changeTime = todatetime(properties.changeAttributes.timestamp), targetResourceId = tostring(properties.targetResourceId), changeType = tostring(properties.changeType)
+| where isnotempty(vmSize)
+| order by changeTime desc
+| project changeTime, targetResourceId, changeType, properties.changes, previousSize = vmSize.previousValue, newSize = vmSize.newValue
+```
+
+### Count of changes by change type and subscription name
+```kusto
+resourcechanges
+| extend changeType = tostring(properties.changeType), changeTime = todatetime(properties.changeAttributes.timestamp), targetResourceType=tostring(properties.targetResourceType)
+| summarize count() by changeType, subscriptionId
+| join (resourcecontainers | where type=='microsoft.resources/subscriptions' | project SubscriptionName=name, subscriptionId) on subscriptionId
+| project-away subscriptionId, subscriptionId1
+| order by count_ desc
+```
++
+### Query the latest resource configuration for resources created with a certain tag
+```kusto
+resourcechanges
+| extend targetResourceId = tostring(properties.targetResourceId), changeType = tostring(properties.changeType), createTime = todatetime(properties.changeAttributes.timestamp)
+| where createTime > ago(7d) and changeType == "Create"
+| project targetResourceId, changeType, createTime
+| join (resources | extend targetResourceId=id) on targetResourceId
+| where tags["Environment"] =~ "prod"
+| order by createTime desc
+| project createTime, id, resourceGroup, type
+```
+ ## Next steps - See the language in use in [Starter queries](../samples/starter.md).
iot-edge Deploy Confidential Applications https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/deploy-confidential-applications.md
The Open Enclave repository also includes samples to help developers get started
## Hardware
-Currently, [TrustBox by Scalys](https://scalys.com/trustbox-industrial/) is the only device supported with manufacturer service agreements for deploying confidential applications as IoT Edge modules. The TrustBox is built on The TrustBox Edge and TrustBox EdgeXL devices both come pre-loaded with the Open Enclave SDK and Azure IoT Edge.
+Currently, [TrustBox by Scalys](https://scalys.com/) is the only device supported with manufacturer service agreements for deploying confidential applications as IoT Edge modules. The TrustBox Edge and TrustBox EdgeXL devices both come pre-loaded with the Open Enclave SDK and Azure IoT Edge.
For more information, see [Getting started with Open Enclave for the Scalys TrustBox](https://aka.ms/scalys-trustbox-edge-get-started).
iot-edge Troubleshoot Common Errors https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/troubleshoot-common-errors.md
IoT Edge modules that connect directly to cloud services, including the runtime
**Root cause:**
-Containers rely on IP packet forwarding in order to connect to the internet so that they can communicate with cloud services. IP packet forwarding is enabled by default in Docker, but if it gets disabled then any modules that connect to cloud services will not work as expected. For more information, see [Understand container communication](https://apimirror.com/docker~1.12/engine/userguide/networking/default_network/container-communication/index) in the Docker documentation.
+Containers rely on IP packet forwarding in order to connect to the internet so that they can communicate with cloud services. IP packet forwarding is enabled by default in Docker, but if it gets disabled then any modules that connect to cloud services will not work as expected. For more information, see [Understand container communication](https://docs.docker.com/config/containers/container-networking/) in the Docker documentation.
**Resolution:**
key-vault Quick Create Go https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/secrets/quick-create-go.md
In the following sections, you create a client, set a secret, retrieve a secret,
### Authenticate and create a client ```go
+vaultURI := os.Getenv("AZURE_KEY_VAULT_URI")
+ cred, err := azidentity.NewDefaultAzureCredential(nil) if err != nil { log.Fatalf("failed to obtain a credential: %v", err) }
-client, err := azsecrets.NewClient("https://quickstart-kv.vault.azure.net/", cred, nil)
+client, err := azsecrets.NewClient(vaultURI, cred, nil)
if err != nil { log.Fatalf("failed to create a client: %v", err) } ```
-If you used a different key vault name, replace `quickstart-kv` with that name.
- ### Create a secret ```go
if err != nil {
fmt.Printf("secretValue: %s\n", *getResp.Value) ```
-### List secrets
+### List properties of secrets
```go
-pager := client.ListSecrets(nil)
-for pager.NextPage(context.TODO()) {
- resp := pager.PageResponse()
- for _, secret := range resp.Secrets {
- fmt.Printf("Secret ID: %s\n", *secret.ID)
- }
-}
-
-if pager.Err() != nil {
- log.Fatalf("failed to get list secrets: %v", err)
+pager := client.ListPropertiesOfSecrets(nil)
+for pager.More() {
+ page, err := pager.NextPage(context.TODO())
+ if err != nil {
+ panic(err)
+ }
+ for _, v := range page.Secrets {
+ fmt.Printf("Secret Name: %s\tSecret Tags: %v\n", *v.ID, v.Tags)
+ }
} ```
import (
func main() {
- mySecretName := "quickstart-secret"
- mySecretValue := "createdWithGO"
- keyVaultName := os.Getenv("KEY_VAULT_NAME")
- keyVaultUrl := fmt.Sprintf("https://%s.vault.azure.net/", keyVaultName)
+ mySecretName := "secretName01"
+ mySecretValue := "secretValue"
+ vaultURI := os.Getenv("AZURE_KEY_VAULT_URI")
//Create a credential using the NewDefaultAzureCredential type. cred, err := azidentity.NewDefaultAzureCredential(nil)
func main() {
} //Establish a connection to the Key Vault client
- client, err := azsecrets.NewClient(keyVaultURL, cred, nil)
+ client, err := azsecrets.NewClient(vaultURI, cred, nil)
if err != nil { log.Fatalf("failed to connect to client: %v", err) }
load-balancer Monitor Load Balancer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/monitor-load-balancer.md
The [Activity log](../azure-monitor/essentials/activity-log.md) is a type of pla
For a list of the tables used by Azure Monitor Logs and queryable by Log Analytics, see [Monitoring Load Balancer data reference](monitor-load-balancer-reference.md#azure-monitor-logs-tables)
-### Sample Kusto queries
-
-> [!NOTE]
-> There is currently an issue with Kusto queries that prevents data from being retrieved from load balancer logs.
-- ## Alerts Azure Monitor alerts proactively notify you when important conditions are found in your monitoring data. They allow you to identify and address issues in your system before your customers notice them. You can set alerts on [metrics](../azure-monitor/alerts/alerts-metric-overview.md), [logs](../azure-monitor/alerts/alerts-unified-log.md), and the [activity log](../azure-monitor/alerts/activity-log-alerts.md). Different types of alerts have benefits and drawbacks
machine-learning Concept Mlflow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-mlflow.md
[!INCLUDE [dev v2](../../includes/machine-learning-dev-v2.md)] - > [!div class="op_single_selector" title1="Select the version of Azure Machine Learning developer platform you are using:"] > * [v1](v1/concept-mlflow-v1.md) > * [v2 (current version)](concept-mlflow.md)
-Azure Machine Learning only uses MLflow Tracking for metric logging and artifact storage for your experiments, whether you created the experiment via the Azure Machine Learning Python SDK, Azure Machine Learning CLI or the Azure Machine Learning studio.
+[MLflow](https://www.mlflow.org) is an open-source framework designed to manage the complete machine learning lifecycle. Its ability to train and serve models on different platforms allows you to use a consistent set of tools regardless of where your experiments are running: locally on your computer, on a remote compute target, a virtual machine, or an Azure Machine Learning compute instance.
+
+MLflow can manage the complete machine learning lifecycle using four core capabilities:
+
+* [Tracking](https://mlflow.org/docs/latest/quickstart.html#using-the-tracking-api) is a component of MLflow that logs and tracks your training job metrics, parameters, and model artifacts, regardless of your experiment's environment: locally on your computer, on a remote compute target, a virtual machine, or an Azure Machine Learning compute instance.
+* [Model Registries](https://mlflow.org/docs/latest/model-registry.html) is a component of MLflow that manages model versions in a centralized repository.
+* [Model Deployments](https://mlflow.org/docs/latest/models.html#deploy-a-python-function-model-on-microsoft-azure-ml) is a component of MLflow that deploys models registered using the MLflow format to different compute targets. Because of how MLflow models are stored, there's no need to provide scoring scripts for models in that format.
+* [Projects](https://mlflow.org/docs/latest/projects.html) is a format for packaging data science code in a reusable and reproducible way, based primarily on conventions. It's supported in preview on Azure Machine Learning.
> [!NOTE]
-> Unlike the Azure Machine Learning SDK v1, there is no logging functionality in the SDK v2 (preview), and it is recommended to use MLflow for logging and tracking.
+> Unlike the Azure Machine Learning SDK v1, there's no logging functionality in the SDK v2 (preview), and it is recommended to use MLflow for logging and tracking.
+
+## Tracking with MLflow
-[MLflow](https://www.mlflow.org) is an open-source library for managing the lifecycle of your machine learning experiments. MLflow's tracking URI and logging API, collectively known as [MLflow Tracking](https://mlflow.org/docs/latest/quickstart.html#using-the-tracking-api) is a component of MLflow that logs and tracks your training job metrics and model artifacts, no matter your experiment's environment--locally on your computer, on a remote compute target, a virtual machine or an Azure Machine Learning compute instance.
+With MLflow Tracking you can connect Azure Machine Learning as the backend of your MLflow experiments. By doing so, you can:
-## Track experiments
++ Track and log experiment metrics and artifacts in your [Azure Machine Learning workspace](./concept-azure-machine-learning-v2.md#workspace).
+ + If you're using Azure Machine Learning computes, they're already configured to work with MLflow for tracking. You don't need to configure the MLflow tracking URI to start working with them. Just import `mlflow` in your training routine and start using it.
+ + Azure Machine Learning also supports remote tracking of experiments by configuring MLflow to point to the Azure Machine Learning workspace. By doing so, you can leverage the capabilities of Azure Machine Learning while keeping your experiments where they are.
++ Lift and shift existing MLflow experiments to Azure Machine Learning. The workspace provides a centralized, secure, and scalable location to store training metrics and models.
-With MLflow Tracking you can connect Azure Machine Learning as the backend of your MLflow experiments. By doing so, you can
+Azure Machine Learning uses MLflow Tracking for metric logging and artifact storage for your experiments, whether you created the experiment via the Azure Machine Learning Python SDK, Azure Machine Learning CLI or the Azure Machine Learning studio. Learn more at [Track experiments with MLflow](how-to-use-mlflow-cli-runs.md).
-+ Track and log experiment metrics and artifacts in your [Azure Machine Learning workspace](./concept-azure-machine-learning-v2.md#workspace). If you already use MLflow Tracking for your experiments, the workspace provides a centralized, secure, and scalable location to store training metrics and models. Learn more at [Track ML models with MLflow and Azure Machine Learning CLI v2](how-to-use-mlflow-cli-runs.md).
+## Model Registries with MLflow
-+ Model management in MLflow or Azure Machine Learning model registry.
+Azure Machine Learning supports MLflow for model management. This is a convenient way to support the entire model lifecycle for users familiar with the MLflow client. The following article describes the different capabilities and how this approach compares with other options.
-## Deploy MLflow experiments
+To learn more about how you can manage models using the MLflow API in Azure Machine Learning, view [Manage models registries in Azure Machine Learning with MLflow](how-to-manage-models-mlflow.md).
-You can [Deploy MLflow models to an online endpoint](how-to-deploy-mlflow-models-online-endpoints.md), so you can leverage and apply Azure Machine Learning's model management capabilities and no-code deployment offering.
+## Model Deployments of MLflow models
+
+You can [deploy MLflow models to Azure Machine Learning](how-to-deploy-mlflow-models.md), so you can leverage and apply Azure Machine Learning's model management capabilities and no-code deployment offering. We support deploying MLflow models to both real-time and batch endpoints. You can deploy with the `azureml-mlflow` MLflow plugin, the Azure ML CLI v2, or the user interface in Azure Machine Learning studio.
+
+Learn more at [Deploy MLflow models to Azure Machine Learning](how-to-deploy-mlflow-models.md).
## Train MLflow projects (preview) [!INCLUDE [preview disclaimer](../../includes/machine-learning-preview-generic-disclaimer.md)]
-You can use MLflow's tracking URI and logging API, collectively known as MLflow Tracking, to submit training jobs with [MLflow Projects](https://www.mlflow.org/docs/latest/projects.html) and Azure Machine Learning backend support (preview). You can submit jobs locally with Azure Machine Learning tracking or migrate your jobs to the cloud like via an [Azure Machine Learning Compute](./how-to-create-attach-compute-cluster.md).
+You can submit training jobs to Azure Machine Learning using [MLflow Projects](https://www.mlflow.org/docs/latest/projects.html) (preview). You can submit jobs locally with Azure Machine Learning tracking or migrate your jobs to the cloud like via an [Azure Machine Learning Compute](./how-to-create-attach-compute-cluster.md).
Learn more at [Train ML models with MLflow projects and Azure Machine Learning (preview)](how-to-train-mlflow-projects.md). +
+## MLflow SDK, Azure ML v2 and Azure ML Studio capabilities
+
+The following table shows which operations are supported by each of the tools available in the ML lifecycle.
+
+| Feature | MLflow SDK | Azure ML v2 (CLI/SDK) | Azure ML Studio |
+| :- | :-: | :-: | :-: |
+| Track and log metrics, parameters and models | **&check;** | | |
+| Retrieve metrics, parameters and models | **&check;**<sup>1</sup> | <sup>2</sup> | **&check;** |
+| Submit training jobs with MLflow projects | **&check;** | | |
+| Submit training jobs with inputs and outputs | | **&check;** | |
+| Submit training pipelines | | **&check;** | |
+| Manage experiments and runs | **&check;**<sup>1</sup> | **&check;** | **&check;** |
+| Manage MLflow models | **&check;**<sup>3</sup> | **&check;** | **&check;** |
+| Manage non-MLflow models | **&check;**<sup>4</sup> | **&check;** | **&check;** |
+| Deploy MLflow models to Azure Machine Learning | **&check;**<sup>5</sup> | **&check;** | **&check;** |
+| Deploy non-MLflow models to Azure Machine Learning | | **&check;** | **&check;** |
+
+> [!NOTE]
+> - <sup>1</sup> View [Manage experiments and runs with MLflow](how-to-track-experiments-mlflow.md) for details.
+> - <sup>2</sup> Only artifacts and models can be downloaded.
+> - <sup>3</sup> View [Manage models registries in Azure Machine Learning with MLflow](how-to-manage-models-mlflow.md) for details.
+> - <sup>4</sup> Loading models using the syntax `models:/model-name/version` is not supported for non-MLflow models.
+> - <sup>5</sup> View [Deploy MLflow models to Azure Machine Learning](how-to-deploy-mlflow-models.md) for details. Deploying MLflow models to batch endpoints with the MLflow SDK is not currently possible.
++ ## Next steps * [Track ML models with MLflow and Azure Machine Learning CLI v2](how-to-use-mlflow-cli-runs.md) * [Convert your custom model to MLflow model format for no code deployments](how-to-convert-custom-model-to-mlflow.md)
-* [Deploy MLflow models to an online endpoint](how-to-deploy-mlflow-models-online-endpoints.md)
+* [Deploy MLflow models](how-to-deploy-mlflow-models.md)
machine-learning Concept Model Management And Deployment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-model-management-and-deployment.md
Registered models are identified by name and version. Each time you register a m
> [!TIP] > You can also register models trained outside Machine Learning.
-You can't delete a registered model that's being used in an active deployment.
-For more information, see the "Register model" section of [Deploy models](how-to-deploy-and-where.md#registermodel).
- > [!IMPORTANT]
-> When you use the **Filter by** `Tags` option on the **Models** page of Azure Machine Learning Studio, instead of using `TagName : TagValue`, use `TagName=TagValue` without spaces.
+> * When you use the **Filter by** `Tags` option on the **Models** page of Azure Machine Learning Studio, instead of using `TagName : TagValue`, use `TagName=TagValue` without spaces.
+> * You can't delete a registered model that's being used in an active deployment.
+For more information, see [Work with models in Azure Machine Learning](how-to-manage-models.md).
### Package and debug models
machine-learning Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/data-science-virtual-machine/release-notes.md
Azure portal users will always find the latest image available for provisioning
See the [list of known issues](reference-known-issues.md) to learn about known bugs and workarounds. +
+## June 28, 2022
+[Data Science Virtual Machine - Windows 2019](https://azuremarketplace.microsoft.com/marketplace/apps/microsoft-dsvm.dsvm-win-2019?tab=Overview) and [Data Science VM - Ubuntu 20.04](https://azuremarketplace.microsoft.com/marketplace/apps/microsoft-dsvm.ubuntu-2004?tab=Overview)
+
+Version `22.06.10`
+
+[Data Science VM - Ubuntu 18.04](https://azuremarketplace.microsoft.com/marketplace/apps/microsoft-dsvm.ubuntu-1804?tab=overview)
+
+Version `22.06.13`
+
+Main changes:
+
+- Removed the `RStudio` software tool from the DSVM images.
+ ## May 17, 2022 [Data Science VM - Ubuntu 20.04](https://azuremarketplace.microsoft.com/marketplace/apps/microsoft-dsvm.ubuntu-2004?tab=Overview)
machine-learning How To Configure Private Link https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-configure-private-link.md
Previously updated : 01/12/2022 Last updated : 06/28/2022 # Configure a private endpoint for an Azure Machine Learning workspace
In some situations, you may want to allow someone to connect to your secured wor
To enable public access, use the following steps:
+> [!TIP]
+> There are two possible properties that you can configure:
+> * `allow_public_access_when_behind_vnet` - used by the Python SDK v1
+> * `public_network_access` - used by the CLI and Python SDK v2 (preview)
+> Each property overrides the other. For example, setting `public_network_access` will override any previous setting of `allow_public_access_when_behind_vnet`.
+>
+> Microsoft recommends using `public_network_access` to enable or disable public access to a workspace.
+ # [Python](#tab/python) To enable public access, use [Workspace.update](/python/api/azureml-core/azureml.core.workspace(class)#update-friendly-name-none--description-none--tags-none--image-build-compute-none--service-managed-resources-settings-none--primary-user-assigned-identity-none--allow-public-access-when-behind-vnet-none-) and set `allow_public_access_when_behind_vnet=True`.
machine-learning How To Deploy Mlflow Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-deploy-mlflow-models.md
ms.devlang: azurecli
[!INCLUDE [cli v2](../../includes/machine-learning-cli-v2.md)] - > [!div class="op_single_selector" title1="Select the version of Azure Machine Learning CLI extension you are using:"] > * [v1](./v1/how-to-deploy-mlflow-models.md)
-> * [v2 (current version)](how-to-deploy-mlflow-models-online-endpoints.md)
+> * [v2 (current version)](how-to-deploy-mlflow-models.md)
+In this article, learn how to deploy your [MLflow](https://www.mlflow.org) model to Azure ML for both real-time and batch inference. Azure ML supports no-code deployment of models created and logged with MLflow. This means that you don't have to provide a scoring script or an environment. Those models can be deployed to ACI (Azure Container Instances), AKS (Azure Kubernetes Services), or our managed inference services (referred to as MIR).
machine-learning How To Track Experiments Mlflow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-track-experiments-mlflow.md
Use MLflow to query and manage all the experiments in Azure Machine Learning. Th
### Prerequisites * Install `azureml-mlflow` plug-in.
-* If you're running in a compute not hosted in Azure ML, configure MLflow to point to the Azure ML MLtracking URL. You can follow the instruction at [Track runs from your local machine](how-to-use-mlflow-cli-runs.md#track-runs-from-your-local-machine)
+* If you're running in a compute not hosted in Azure ML, configure MLflow to point to the Azure ML MLflow tracking URI. You can follow the instructions at [Track runs from your local machine](how-to-use-mlflow-cli-runs.md)
### Support matrix for querying runs and experiments
By experiment name:
```python mlflow.search_runs(experiment_names=[ "my_experiment" ]) ```
-By experiment id:
+By experiment ID:
```python mlflow.search_runs(experiment_ids=[ "1234-5678-90AB-CDEFG" ])
The following example shows all the runs that have been completed:
runs[runs.status == "FINISHED"] ```
-## Accessing runs details
+## Getting metrics, parameters, artifacts and models
-By default, MLflow returns runs as a Pandas `Dataframe`. You can get Python objects if needed, which may be useful to get details about them by specifying the `output_format` parameter:
+By default, MLflow returns runs as a Pandas `DataFrame` containing a limited amount of information. You can get Python objects if needed, which may be useful for inspecting run details. Use the `output_format` parameter to control how output is returned:
```python runs = mlflow.search_runs(
MLflow also allows you to both operations at once and download and load the mode
model = mlflow.xgboost.load_model(f"runs:/{last_run.info.run_id}/{artifact_path}") ```
-## Getting child (nested) runs information
+## Getting child (nested) runs
MLflow supports the concept of child (nested) runs. They're useful when you need to spin off training routines that are tracked independently from the main training process, as is typical in hyperparameter tuning. You can query all the child runs of a specific run using the property tag `mlflow.parentRunId`, which contains the run ID of the parent run.
machine-learning How To Use Mlflow Azure Databricks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-use-mlflow-azure-databricks.md
# Track Azure Databricks ML experiments with MLflow and Azure Machine Learning [!INCLUDE [sdk v1](../../includes/machine-learning-sdk-v1.md)]
-In this article, learn how to enable MLflow's tracking URI and logging API, collectively known as [MLflow Tracking](https://mlflow.org/docs/latest/quickstart.html#using-the-tracking-api), to connect your Azure Databricks (ADB) experiments, MLflow, and Azure Machine Learning.
+In this article, learn how to enable MLflow to connect to Azure Machine Learning while working in an Azure Databricks workspace. You can use this configuration for tracking, model management, and model deployment.
[MLflow](https://www.mlflow.org) is an open-source library for managing the life cycle of your machine learning experiments. MLflow Tracking is a component of MLflow that logs and tracks your training run metrics and model artifacts. Learn more about [Azure Databricks and MLflow](/azure/databricks/applications/mlflow/).
See [MLflow and Azure Machine Learning](concept-mlflow.md) for additional MLflow
If you have an MLflow Project to train with Azure Machine Learning, see [Train ML models with MLflow Projects and Azure Machine Learning (preview)](how-to-train-mlflow-projects.md).
-> [!TIP]
-> The information in this document is primarily for data scientists and developers who want to monitor the model training process. If you are an administrator interested in monitoring resource usage and events from Azure Machine Learning, such as quotas, completed training runs, or completed model deployments, see [Monitoring Azure Machine Learning](monitor-azure-machine-learning.md).
- ## Prerequisites
-* Install the `azureml-mlflow` package.
- * This package automatically brings in `azureml-core` of the [The Azure Machine Learning Python SDK](/python/api/overview/azure/ml/install), which provides the connectivity for MLflow to access your workspace.
+* Install the `azureml-mlflow` package, which handles the connectivity with Azure Machine Learning, including authentication.
* An [Azure Databricks workspace and cluster](/azure/databricks/scenarios/quickstart-create-databricks-workspace-portal). * [Create an Azure Machine Learning Workspace](how-to-manage-workspace.md). * See which [access permissions you need to perform your MLflow operations with your workspace](how-to-assign-roles.md#mlflow-operations).
-## Track Azure Databricks runs
-
-MLflow Tracking with Azure Machine Learning lets you store the logged metrics and artifacts from your Azure Databricks runs into both your:
-
-* Azure Databricks workspace.
-* Azure Machine Learning workspace
-
-After you create your Azure Databricks workspace and cluster,
-
-1. Install the *azureml-mlflow* library from PyPi, to ensure that your cluster has access to the necessary functions and classes.
-
-1. Set up your experiment notebook.
-
-1. Connect your Azure Databricks workspace and Azure Machine Learning workspace.
-
-Additional details for these steps are in the following sections so you can successfully run your MLflow experiments with Azure Databricks.
- ## Install libraries To install libraries on your cluster, navigate to the **Libraries** tab and select **Install New**
In the **Package** field, type azureml-mlflow and then select install. Repeat th
![Azure DB install mlflow library](./media/how-to-use-mlflow-azure-databricks/install-libraries.png)
-## Set up your notebook
-
-Once your ADB cluster is set up,
-1. Select **Workspaces** on the left navigation pane.
-1. Expand the workspaces drop down menu and select **Import**
-1. Drag and drop, or browse to find, your experiment notebook to import your ADB workspace.
-1. Select **Import**. Your experiment notebook opens automatically.
-1. Under the notebook title on the top left, select the cluster want to attach to your experiment notebook.
## Connect your Azure Databricks and Azure Machine Learning workspaces
To link your ADB workspace to a new or existing Azure Machine Learning workspace
> [!NOTE] > MLflow Tracking in a [private link enabled Azure Machine Learning workspace](how-to-configure-private-link.md) is not supported.
-## MLflow Tracking in your workspaces
+## Track Azure Databricks runs with MLflow
+
+Azure Databricks can be configured to track experiments using MLflow in both your Azure Databricks workspace and your Azure Machine Learning workspace (dual-tracking), or exclusively in Azure Machine Learning. By default, dual-tracking is configured for you when you link your Azure Databricks workspace.
+
+### Dual-tracking on Azure Databricks and Azure Machine Learning
After you link your Azure Databricks workspace with your Azure Machine Learning workspace, MLflow Tracking is automatically set to be tracked in all of the following places:
mlflow.log_metric('epoch_loss', loss.item())
> [!NOTE] > Unlike tracking, model registries can't register models in both Azure Machine Learning and Azure Databricks at the same time; you must use one or the other. See the section [Registering models in the registry with MLflow](#registering-models-in-the-registry-with-mlflow) for more details.
-### Set MLflow Tracking to only track in your Azure Machine Learning workspace
+### Tracking exclusively on Azure Machine Learning workspace
If you prefer to manage your tracked experiments in a centralized location, you can set MLflow tracking to **only** track in your Azure Machine Learning workspace. This configuration has the advantage of enabling an easier path to deployment using Azure Machine Learning deployment options; a minimal sketch follows.
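A minimal sketch of pointing MLflow exclusively at the Azure ML workspace. The URI below is a placeholder built from the format shown later in this article; obtain the real value from `MLClient` or the Azure portal.

```python
import mlflow

# Placeholder tracking URI; replace the bracketed values with your own
azureml_mlflow_uri = (
    "azureml://<REGION>.api.azureml.ms/mlflow/v1.0"
    "/subscriptions/<SUBSCRIPTION_ID>/resourceGroups/<RESOURCE_GROUP>"
    "/providers/Microsoft.MachineLearningServices/workspaces/<WORKSPACE>"
)

# From this point on, all runs are tracked only in Azure Machine Learning
mlflow.set_tracking_uri(azureml_mlflow_uri)
```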
Models are logged inside of the run being tracked. That means that models are av
## Registering models in the registry with MLflow
-As opposite to tracking, **model registries can't operate** at the same time in Azure Databricks and Azure Machine Learning. Either one or the other has to be used. By default, the Azure Databricks workspace is used for model registries; unless you chose to [set MLflow Tracking to only track in your Azure Machine Learning workspace](#set-mlflow-tracking-to-only-track-in-your-azure-machine-learning-workspace), then the model registry is the Azure Machine Learning workspace.
+Unlike tracking, **model registries can't operate** at the same time in Azure Databricks and Azure Machine Learning; you must use one or the other. By default, the Azure Databricks workspace is used for model registries, unless you chose to [set MLflow Tracking to only track in your Azure Machine Learning workspace](#tracking-exclusively-on-azure-machine-learning-workspace), in which case the model registry is the Azure Machine Learning workspace.
Assuming you're using the default configuration, the following line logs a model inside the corresponding runs of both Azure Databricks and Azure Machine Learning, but registers it only in Azure Databricks:
mlflow.spark.log_model(model, artifact_path = "model",
### Registering models in the Azure Machine Learning Registry with MLflow
-At some point you may want to start registering models in Azure Machine Learning. Such configuration has the advantage of enabling all the deployment capabilities of Azure Machine Learning automatically, including no-code-deployment and model management capabilities. In that case, we recommend you to [set MLflow Tracking to only track in your Azure Machine Learning workspace](#set-mlflow-tracking-to-only-track-in-your-azure-machine-learning-workspace). This will remove the ambiguity of where models are being registered.
+At some point you may want to start registering models in Azure Machine Learning. This configuration has the advantage of automatically enabling all the deployment capabilities of Azure Machine Learning, including no-code deployment and model management capabilities. In that case, we recommend that you [set MLflow Tracking to only track in your Azure Machine Learning workspace](#tracking-exclusively-on-azure-machine-learning-workspace). This removes the ambiguity of where models are being registered.
If you want to continue using the dual-tracking capabilities but register models in Azure Machine Learning, you can instruct MLflow to use Azure ML for model registries by configuring the MLflow model registry URI. This URI has the exact same format and value as the MLflow tracking URI.
mlflow.set_registry_uri(azureml_mlflow_uri)
``` > [!NOTE]
-> The value of `azureml_mlflow_uri` was obtained in the same way it was demostrated in [Set MLflow Tracking to only track in your Azure Machine Learning workspace](#set-mlflow-tracking-to-only-track-in-your-azure-machine-learning-workspace)
+> The value of `azureml_mlflow_uri` was obtained in the same way it was demonstrated in [Tracking exclusively on Azure Machine Learning workspace](#tracking-exclusively-on-azure-machine-learning-workspace).
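To make the flow concrete, a short sketch: with the registry URI pointing at Azure ML, runs keep dual-tracking while models register only in Azure ML. The run ID and model name below are hypothetical.

```python
import mlflow

# Direct only the model registry at Azure Machine Learning
mlflow.set_registry_uri(azureml_mlflow_uri)

# Register a model logged in a previous run (placeholders shown)
mlflow.register_model("runs:/<RUN_ID>/model", "my-model")
```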
For a complete example of this scenario, see [Training models in Azure Databricks and deploying them on Azure ML](https://github.com/Azure/azureml-examples/blob/main/notebooks/using-mlflow/no-code-deployment/track_with_databricks_deploy_aml.ipynb).
from pyspark.sql.types import ArrayType, FloatType
model_uri = "runs:/"+last_run_id+ {model_path} #Create a Spark UDF for the MLFlow model - pyfunc_udf = mlflow.pyfunc.spark_udf(spark, model_uri) #Load Scoring Data into Spark Dataframe - scoreDf = spark.table({table_name}).where({required_conditions}) - #Make Prediction - preds = (scoreDf - .withColumn('target_column_name', pyfunc_udf('Input_column1', 'Input_column2', ' Input_column3', …)) - ) display(preds)
machine-learning How To Use Mlflow Cli Runs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-use-mlflow-cli-runs.md
Title: Tracking for ML experiments with MLflow and CLI (v2)
+ Title: Track ML experiments and models with MLflow
-description: Set up MLflow Tracking with Azure Machine Learning to log metrics and artifacts from ML models with MLflow or the Azure Machine Learning CLI (v2)
+description: Set up MLflow Tracking with Azure Machine Learning to log metrics and artifacts from ML models with MLflow
ms.devlang: azurecli
-# Track ML experiments and models with MLflow or the Azure Machine Learning CLI (v2)
+# Track ML experiments and models with MLflow
[!INCLUDE [cli v2](../../includes/machine-learning-cli-v2.md)]
ms.devlang: azurecli
> * [v1](./v1/how-to-use-mlflow.md) > * [v2 (current version)](how-to-use-mlflow-cli-runs.md)
-In this article, learn how to enable MLflow's tracking URI and logging API, collectively known as [MLflow Tracking](https://mlflow.org/docs/latest/quickstart.html#using-the-tracking-api), to connect Azure Machine Learning as the backend of your MLflow experiments. You can accomplish this connection with either the MLflow Python API or the [Azure Machine Learning CLI v2](how-to-train-cli.md) in your terminal. You also learn how to use [MLflow's Model Registry](https://mlflow.org/docs/latest/model-registry.html) capabilities with Azure Machine Learning.
+In this article, learn how to enable [MLflow Tracking](https://mlflow.org/docs/latest/quickstart.html#using-the-tracking-api) to connect Azure Machine Learning as the backend of your MLflow experiments.
[MLflow](https://www.mlflow.org) is an open-source library for managing the lifecycle of your machine learning experiments. MLflow Tracking is a component of MLflow that logs and tracks your training run metrics and model artifacts, no matter your experiment's environment--locally on your computer, on a remote compute target, a virtual machine, or an [Azure Databricks cluster](how-to-use-mlflow-azure-databricks.md).
See [MLflow and Azure Machine Learning](concept-mlflow.md) for all supported MLf
## Prerequisites * Install the `azureml-mlflow` package.
- * This package automatically brings in `azureml-core` of the [The Azure Machine Learning Python SDK](/python/api/overview/azure/ml/install), which provides the connectivity for MLflow to access your workspace.
- * [Create an Azure Machine Learning Workspace](how-to-manage-workspace.md). * See which [access permissions you need to perform your MLflow operations with your workspace](how-to-assign-roles.md#mlflow-operations). * Install and [set up CLI (v2)](how-to-configure-cli.md#prerequisites) and make sure you install the ml extension. * Install and set up SDK(v2) for Python
-## Track runs from your local machine
+## Track runs from your local machine or remote compute
-MLflow Tracking with Azure Machine Learning lets you store the logged metrics and artifacts runs that were executed on your local machine into your Azure Machine Learning workspace.
+Tracking with MLflow in Azure Machine Learning lets you store the logged metrics and artifacts of runs that were executed on your local machine in your Azure Machine Learning workspace.
### Set up tracking environment
-To track a local run, you need to point your local machine to the Azure Machine Learning MLflow Tracking URI.
-
->[!IMPORTANT]
-> Make sure you are logged in to your Azure account on your local machine, otherwise the tracking URI returns an empty string. If you are using any Azure ML compute the tracking environment and experiment name is already configured..
-
-# [MLflow SDK](#tab/mlflow)
+To track a run that is not running on Azure Machine Learning compute (from now on referred to as *"local compute"*), you need to point your local compute to the Azure Machine Learning MLflow Tracking URI.
+> [!NOTE]
+> When running on Azure compute (Azure Notebooks, or Jupyter Notebooks hosted on Azure compute instances or compute clusters), you don't have to configure the tracking URI. It's automatically configured for you.
+# [Using the Azure ML SDK v2](#tab/amlsdk)
-The following code uses `mlflow` and your Azure Machine Learning workspace details to construct the unique MLFLow tracking URI associated with your workspace. Then the method [`set_tracking_uri()`](https://mlflow.org/docs/latest/python_api/mlflow.html#mlflow.set_tracking_uri) points the MLflow tracking URI to that URI.
+You can get the Azure ML MLflow tracking URI using the [Azure Machine Learning SDK v2 for Python](concept-v2.md). Ensure you have the `azure-ai-ml` library installed on the compute you're using. The following sample gets the unique MLflow tracking URI associated with your workspace. Then the method [`set_tracking_uri()`](https://mlflow.org/docs/latest/python_api/mlflow.html#mlflow.set_tracking_uri) points the MLflow tracking URI to that URI.
```Python
from azure.ai.ml import MLClient
from azure.identity import DefaultAzureCredential
import mlflow

subscription_id = '<SUBSCRIPTION_ID>'
resource_group = '<RESOURCE_GROUP>'
workspace = '<AML_WORKSPACE_NAME>'
-#get a handle to the workspace
-ml_client = MLClient(DefaultAzureCredential(), subscription_id, resource_group, workspace)
-
-tracking_uri = ml_client.workspaces.get(name=workspace).mlflow_tracking_uri
-
-mlflow.set_tracking_uri(tracking_uri)
-
-print(tracking_uri)
+ml_client = MLClient(credential=DefaultAzureCredential(),
+ subscription_id=subscription_id,
+ resource_group_name=resource_group)
+
+azureml_mlflow_uri = ml_client.workspaces.get(workspace).mlflow_tracking_uri
+mlflow.set_tracking_uri(azureml_mlflow_uri)
```
-# [Terminal](#tab/terminal)
+>[!IMPORTANT]
+> `DefaultAzureCredential` will try to pull the credentials from the available context. If you want to specify credentials in a different way, for instance interactively through a web browser, you can use `InteractiveBrowserCredential` or any other method available in the `azure.identity` package, as in the sketch below.
+
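A minimal sketch of the interactive alternative; the subscription and resource group values are the same placeholders used above.

```python
from azure.ai.ml import MLClient
from azure.identity import InteractiveBrowserCredential

# Authenticate interactively through a web browser instead of
# DefaultAzureCredential
credential = InteractiveBrowserCredential()
ml_client = MLClient(credential=credential,
                     subscription_id=subscription_id,
                     resource_group_name=resource_group)
```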
+# [Using an environment variable](#tab/environ)
Another option is to set one of the MLflow environment variables [MLFLOW_TRACKING_URI](https://mlflow.org/docs/latest/tracking.html#logging-to-a-tracking-server) directly in your terminal. ```Azure CLI
-# Configure MLflow to communicate with a Azure Machine Learning-hosted tracking server
- export MLFLOW_TRACKING_URI=$(az ml workspace show --query mlflow_tracking_uri | sed 's/"//g') ```-
+>[!IMPORTANT]
+> Make sure you are logged in to your Azure account on your local machine; otherwise, the tracking URI returns an empty string. If you are using any Azure ML compute, the tracking environment and experiment name are already configured.
-### Set experiment name
+# [Building the MLflow tracking URI](#tab/build)
-All MLflow runs are logged to the active experiment, which can be set with the MLflow SDK or Azure CLI.
+The Azure Machine Learning tracking URI can be constructed using the subscription ID, the region where the resource is deployed, the resource group name, and the workspace name. The following code sample shows how:
-# [MLflow SDK](#tab/mlflow)
+```python
+import mlflow
+aml_region = ""
+subscription_id = ""
+resource_group = ""
+workspace = ""
+azureml_mlflow_uri = f"azureml://{aml_region}.api.azureml.ms/mlflow/v1.0/subscriptions/{subscription_id}/resourceGroups/{resource_group}/providers/Microsoft.MachineLearningServices/workspaces/{workspace}"
+mlflow.set_tracking_uri(azureml_mlflow_uri)
+```
-With MLflow you can use the [`mlflow.set_experiment()`](https://mlflow.org/docs/latest/python_api/mlflow.html#mlflow.set_experiment) command.
+> [!NOTE]
+> You can also get this URL by navigating to [Azure ML studio](https://ml.azure.com), selecting the upper-right corner of the page, selecting **View all properties in Azure Portal**, and then copying the **MLflow tracking URI**.
+++
+### Set experiment name
+
+All MLflow runs are logged to the active experiment. By default, runs are logged to an experiment named `Default` that is automatically created for you. To configure the experiment you want to work on, use the MLflow command [`mlflow.set_experiment()`](https://mlflow.org/docs/latest/python_api/mlflow.html#mlflow.set_experiment).
```Python experiment_name = 'experiment_with_mlflow' mlflow.set_experiment(experiment_name) ```
-# [Terminal](#tab/terminal)
-
-You can set one of the MLflow environment variables [MLFLOW_EXPERIMENT_NAME or MLFLOW_EXPERIMENT_ID](https://mlflow.org/docs/latest/cli.html#cmdoption-mlflow-run-arg-uri) with the experiment name.
+You can also set one of the MLflow environment variables [MLFLOW_EXPERIMENT_NAME or MLFLOW_EXPERIMENT_ID](https://mlflow.org/docs/latest/cli.html#cmdoption-mlflow-run-arg-uri) with the experiment name.
-```Azure CLI
-# Configure MLflow to communicate with a Azure Machine Learning-hosted tracking server
+```bash
export MLFLOW_EXPERIMENT_NAME="experiment_with_mlflow"
```

### Start training run
with mlflow.start_run() as mlflow_run:
mlflow.log_artifact("helloworld.txt") ```
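Pulling the fragments above together, a minimal sketch of a tracked run; the parameter, metric, and artifact values are hypothetical.

```python
import mlflow

with mlflow.start_run() as mlflow_run:
    mlflow.log_param("learning_rate", 0.02)  # hypothetical parameter
    mlflow.log_metric("sample_metric", 1.0)  # hypothetical metric
    mlflow.log_artifact("helloworld.txt")    # file must exist locally
```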
-## Track remote runs with Azure Machine Learning CLI (v2)
+For details about how to log metrics, parameters, and artifacts in a run using MLflow, see [How to log and view metrics](how-to-log-view-metrics.md).
+
+## Track jobs running on Azure Machine Learning
[!INCLUDE [cli v2](../../includes/machine-learning-cli-v2.md)]
-Remote runs (jobs) let you train your models on more powerful computes, such as GPU enabled virtual machines, or Machine Learning Compute clusters. See [Use compute targets for model training](how-to-set-up-training-targets.md) to learn about different compute options.
+Remote runs (jobs) let you train your models in a more robust and repeatable way. They can also leverage more powerful computes, such as Machine Learning compute clusters. See [Use compute targets for model training](how-to-set-up-training-targets.md) to learn about different compute options.
+
+When you submit runs using jobs, Azure Machine Learning automatically configures MLflow to work with the workspace the job is running in, so there's no need to configure the MLflow tracking URI. In addition, experiments are automatically named based on the details of the job.
+
+> [!IMPORTANT]
+> When submitting training jobs to Azure Machine Learning, you don't have to configure the MLflow tracking URI on your training logic as it is already configured for you.
-MLflow Tracking with Azure Machine Learning lets you store the logged metrics and artifacts from your remote runs into your Azure Machine Learning workspace. Any run with MLflow Tracking code in it logs metrics automatically to the workspace.
+### Creating a training routine
First, create a `src` subdirectory, and create a file named `hello_world.py` with your training code inside it. All your training code goes into the `src` subdirectory.
Copy this code into the file:
:::code language="python" source="~/azureml-examples-main/cli/jobs/basics/src/hello-mlflow.py":::
+> [!NOTE]
+> Note how this sample doesn't contain the instructions `mlflow.start_run` or `mlflow.set_experiment`. These are done automatically by Azure Machine Learning.
+
+### Submitting the job
Use the [Azure Machine Learning CLI (v2)](how-to-train-cli.md) to submit a remote run. When you use the Azure Machine Learning CLI (v2), the MLflow tracking URI and experiment name are set automatically and direct the logging from MLflow to your workspace. Learn more about [logging Azure Machine Learning CLI (v2) experiments with MLflow](how-to-train-cli.md#model-tracking-with-mlflow). Create a YAML file with your job definition in a `job.yml` file. This file should be created outside the `src` directory. Copy this code into the file:
Open your terminal and use the following to submit the job.
az ml job create -f job.yml --web ```
-## View metrics and artifacts in your workspace
+## Automatic logging
+With Azure Machine Learning and MLflow, users can log metrics, model parameters, and model artifacts automatically when training a model. A [variety of popular machine learning libraries](https://mlflow.org/docs/latest/tracking.html#automatic-logging) is supported.
+
+To enable [automatic logging](https://mlflow.org/docs/latest/tracking.html#automatic-logging), insert the following code before your training code:
+
+```Python
+mlflow.autolog()
+```
+
+[Learn more about Automatic logging with MLflow](https://mlflow.org/docs/latest/python_api/mlflow.html#mlflow.autolog).
+## View metrics and artifacts in your workspace
The metrics and artifacts from MLflow logging are tracked in your workspace. To view them anytime, navigate to your workspace and find the experiment by name in [Azure Machine Learning studio](https://ml.azure.com), or run the following code.
params = finished_mlflow_run.data.params
print(metrics,tags,params) ```
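The truncated snippet above relies on a run object; a self-contained sketch of fetching one and reading its data, with a placeholder run ID:

```python
from mlflow.tracking import MlflowClient

client = MlflowClient()
finished_mlflow_run = client.get_run("<RUN_ID>")  # hypothetical run ID

# Read metrics, tags, and parameters recorded on the run
metrics = finished_mlflow_run.data.metrics
tags = finished_mlflow_run.data.tags
params = finished_mlflow_run.data.params
print(metrics, tags, params)
```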
-### Retrieve artifacts with MLFLow
- To view the artifacts of a run, you can use [MlFlowClient.list_artifacts()](https://mlflow.org/docs/latest/python_api/mlflow.tracking.html#mlflow.tracking.MlflowClient.list_artifacts) ```Python
To download an artifact to the current directory, you can use [MLFlowClient.down
client.download_artifacts(run_id, "helloworld.txt", ".") ```
-### Compare and query
-
-Compare and query all MLflow runs in your Azure Machine Learning workspace with the following code.
-[Learn more about how to query runs with MLflow](https://mlflow.org/docs/latest/search-syntax.html#programmatically-searching-runs).
-
-```Python
-from mlflow.entities import ViewType
-
-all_experiments = [exp.experiment_id for exp in MlflowClient().list_experiments()]
-query = "metrics.hello_metric > 0"
-runs = mlflow.search_runs(experiment_ids=all_experiments, filter_string=query, run_view_type=ViewType.ALL)
-
-runs.head(10)
-```
-
-## Automatic logging
-With Azure Machine Learning and MLFlow, users can log metrics, model parameters and model artifacts automatically when training a model. A [variety of popular machine learning libraries](https://mlflow.org/docs/latest/tracking.html#automatic-logging) are supported.
-
-To enable [automatic logging](https://mlflow.org/docs/latest/tracking.html#automatic-logging) insert the following code before your training code:
+For more details about how to retrieve information from experiments and runs in Azure Machine Learning using MLflow, see [Manage experiments and runs with MLflow](how-to-track-experiments-mlflow.md).
-```Python
-mlflow.autolog()
-```
-
-[Learn more about Automatic logging with MLflow](https://mlflow.org/docs/latest/python_api/mlflow.html#mlflow.autolog).
## Manage models
The following MLflow methods are not fully supported with Azure Machine Learning
## Next steps
-* [Deploy MLflow models to managed online endpoint (preview)](how-to-deploy-mlflow-models-online-endpoints.md).
-* [Manage your models](concept-model-management-and-deployment.md).
+* [Deploy MLflow models](how-to-deploy-mlflow-models.md).
+* [Manage models with MLflow](how-to-manage-models-mlflow.md).
marketplace Azure App Managed https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/azure-app-managed.md
Indicate who should have management access to this managed application in each s
Complete the following steps for Global Azure and Azure Government Cloud, as applicable. 1. In the **Azure Active Directory Tenant ID** box, enter the Azure AD Tenant ID (also known as directory ID) containing the identities of the users, groups, or applications you want to grant permissions to.
-1. In the **Principal ID** box, provide the Azure AD object ID of the user, group, or application that you want to be granted permission to the managed resource group. Identify the user by their Principal ID, which can be found at the [Azure Active Directory users blade](https://portal.azure.com/#view/Microsoft_AAD_UsersAndTenants/UserManagementMenuBlade/~/AllUsers) on the Azure portal.
+1. In the **Principal ID** box (also known as the object ID), provide the Azure AD object ID of the user, group, or application that you want to be granted permission to the managed resource group. Identify the user by their principal ID, which can be found on the [Azure Active Directory users blade](https://portal.azure.com/#view/Microsoft_AAD_UsersAndTenants/UserManagementMenuBlade/~/AllUsers) in the Azure portal.
1. From the **Role definition** list, select an Azure AD built-in role. The role you select describes the permissions the principal will have on the resources in the customer subscription. 1. To add another authorization, select the **Add authorization (max 100)** link, and repeat steps 1 through 3.
marketplace Company Work Accounts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/company-work-accounts.md
Title: Company work accounts and Partner Center
-description: How to check whether your company has a work account set up with Microsoft, create a new work account, or set up multiple work accounts to use with Partner Center (Azure Marketplace).
+description: Find out how to link a work email account domain to Partner Center. Learn how to create a work account and use multiple accounts. See troubleshooting tips.
Previously updated : 06/08/2021 Last updated : 06/10/2022+ # Company work accounts and Partner Center
-Partner Center uses company work accounts, also known as Azure Active Directory (AD) tenants, to manage account access for multiple users, control permissions, host groups and applications, and maintain profile data. By linking your company's work email account domain to your Partner Center account, employees of your company can sign in to Partner Center to manage marketplace offers using their own work account usernames and passwords.
+Partner Center uses company work accounts, also known as Azure Active Directory (Azure AD) tenants, for many purposes:
+
+- To manage account access for multiple users
+- To control permissions
+- To host groups and applications
+- To maintain profile data
+
+If you link your company's work email account domain to your Partner Center account, your employees can sign in to Partner Center to manage marketplace offers by using their own work account usernames and passwords.
## Check whether your company already has a work account
-If your company has subscribed to a Microsoft cloud service such as Azure, Microsoft Intune, or Microsoft 365, then you already have a work email account domain (also referred to as an Azure Active Directory tenant) that can be used with Partner Center.
+If your company subscribes to a Microsoft cloud service such as Azure, Microsoft Intune, or Microsoft 365, you already have a work email account domain. You can use that work account with Partner Center.
-Follow these steps to check:
+Follow these steps to check for a work account:
-1. Sign in to the Azure admin portal at https://portal.azure.com.
-2. Select **Azure Active Directory** from the left-navigation menu and then select **Custom Domain Names**.
-3. If you already have a work account, your domain name will be listed.
+1. Sign in to the [Azure portal](https://portal.azure.com).
+1. Search for and select **Azure Active Directory**, and then select **Custom domain names**.
+1. Search for your domain name in the list. If you already have a work account, the list will contain your domain name.
If your company doesn't already have a work account, one will be created for you during the Partner Center enrollment process. ## Set up multiple work accounts
-Before deciding to use an existing work account, consider how many users in the work account will need to access Partner Center. If you have users in the work account who won't need to access Partner Center, you may want to consider creating multiple work accounts, so that only those users who will need to access Partner Center are represented on a particular account.
+Before you decide to use an existing work account, consider how many users in the work account need to access Partner Center. If you have users in the work account who don't need to access Partner Center, you might want to consider creating multiple work accounts. That way, only users who need to access Partner Center are represented on a particular account.
## Create a new work account
-To create a new work account for your company, follow the steps below. You may need to request assistance from whoever has administrative permissions on your company's Microsoft Azure account.
+To create a new work account for your company, take the following steps. You might need to request assistance from the person who has administrative permissions for your company's Microsoft Azure account.
+
+1. Sign in to the [Azure portal](https://portal.azure.com).
+
+1. Search for and select **Azure Active Directory**, and then select **Users**.
+
+1. Select **New user**, and then follow these steps to configure a new Azure work account:
+
+ 1. Enter a name and work email address.
+ 1. For **Directory role**, ensure the value meets the user requirement.
+ 1. At the bottom, select **Show password**.
+ 1. Make a note of the autogenerated password.
+ 1. Complete the other required fields.
-1. Sign in to the [Microsoft Azure portal](https://portal.azure.com).
-2. From the left navigation menu, select the **Azure Active Directory** > **Users**.
-3. Select **New user** and create a new Azure work account by entering a name and work email address. Ensure the **Directory role** is set as per the User requirement and select the **Show Password** checkbox at the bottom to view and make a note of the auto-generated password.
-4. Complete the other required fields and select **Create** to save the new user.
+1. Select **Create** to save the new user.
-The email address for the user account must be a verified domain name in your directory. You can list all the verified domains in your directory by selecting **Azure Active Directory** > **Custom domain names** in the left-navigation menu.
+The email address for the user account must be a verified domain name in your directory. To list all the verified domains in your directory, select **Azure Active Directory** > **Custom domain names**.
-To learn more about adding custom domains in Azure Active Directory, see [Add or associate a domain in Azure AD](../active-directory/fundamentals/add-custom-domain.md).
+To learn more about adding custom domains in Azure AD, see [Add or associate a domain in Azure AD](../active-directory/fundamentals/add-custom-domain.md).
## Troubleshoot work email sign-in
-If you're having trouble signing in to your work account (also known as your Azure AD tenant), find the scenario on the diagram below that best matches your situation and follow the recommended steps.
+If you're having trouble signing in to your work account, find the scenario on the following diagram that best matches your situation, and take the recommended steps.
-[![Diagram for troubleshooting work account sign-in](media/manage-accounts/onboarding-aad-flow.png)](media/manage-accounts/onboarding-aad-flow.png#lightbox)
## Next steps
migrate Common Questions Server Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/common-questions-server-migration.md
While you can create assessments for multiple regions in an Azure Migrate projec
Yes, you can migrate to multiple subscriptions (same Azure tenant) in the same target region for an Azure Migrate project. You can select the target subscription while enabling replication for a machine or a set of machines. The target region is locked after the first replication for agentless VMware migrations, and during the replication appliance and Hyper-V provider installation for agent-based migrations and agentless Hyper-V migrations, respectively.
-### How is the data transmitted from on-prem environment to Azure? Is it encrypted before transmission?
+### How is the data transmitted from on-premises environment to Azure? Is it encrypted before transmission?
In the agentless replication case, the Azure Migrate appliance compresses and encrypts data before uploading. Data is transmitted over a secure communication channel using HTTPS with TLS 1.2 or later. Additionally, Azure Storage automatically encrypts your data when it is persisted to the cloud (encryption at rest).
The applications can continue to run at the source while letting you perform tes
### Is there a Rollback option for Azure Migrate? You can use the Test Migration option to validate your application functionality and performance in Azure. You can perform any number of test migrations and can execute the final migration after establishing confidence through the test migration operation.
-A test migration doesn't impact the on-premises machine, which remains operational and continues replicating until you perform the actual migration. If there were any errors during the test migration UAT, you can choose to postpone the final migration and keep your source VM/server running and replicating to Azure. You can reattempt the final migration once you resolve the errors.
+A test migration doesn't affect the on-premises machine, which remains operational and continues replicating until you perform the actual migration. If there were any errors during the test migration UAT, you can choose to postpone the final migration and keep your source VM/server running and replicating to Azure. You can reattempt the final migration once you resolve the errors.
Note: Once you have performed a final migration to Azure and the on-premises source machine was shut down, you cannot perform a rollback from Azure to your on-premises environment. ### Can I select the Virtual Network and subnet to use for test migrations?
Azure Migrate: Server Migration tool migrates all the UEFI-based machines to Azu
### Can I migrate Active Directory domain-controllers using Azure Migrate?
-The Server Migration tool is application agnostic and works for most applications. When you migrate a server using the Server Migration tool, all the applications installed on the server are migrated along with it. However, for some applications, alternate migration methods other than server migration may be better suited for the migration. For Active Directory, in the case of hybrid environments where the on-premises site is connected to your Azure environment, you can extend your Directory into Azure by adding extra domain controllers in Azure and setting up Active Directory replication. If you are migrating into an isolated environment in Azure requiring its own domain controllers (or testing applications in a sandbox environment), you can migrate servers using the server migration tool.
+The Server Migration tool is application agnostic and works for most applications. When you migrate a server using the Server Migration tool, all the applications installed on the server are migrated along with it. However, for some applications, alternate migration methods other than server migration may be better suited. For Active Directory, in hybrid environments where the on-premises site is connected to your Azure environment, you can extend your directory into Azure by adding extra domain controllers in Azure and setting up Active Directory replication. If you are migrating into an isolated environment in Azure requiring its own domain controllers (or testing applications in a sandbox environment), you can migrate servers using the server migration tool.
### Can I upgrade my OS while migrating?
Azure Migrate: Server Migration provides agentless replication options for the m
The agentless replication option works by using mechanisms provided by the virtualization provider (VMware, Hyper-V). In the case of VMware virtual machines, the agentless replication mechanism uses VMware snapshots and VMware changed block tracking technology to replicate data from virtual machine disks. This mechanism is similar to the one used by many backup products. In the case of Hyper-V virtual machines, the agentless replication mechanism uses VM snapshots and the change tracking capability of the Hyper-V replica to replicate data from virtual machine disks. When replication is configured for a virtual machine, it first goes through an initial replication phase. During initial replication, a VM snapshot is taken, and a full copy of data from the snapshot disks are replicated to managed disks in your subscription. After initial replication for the VM is complete, the replication process transitions to an incremental replication (delta replication) phase. In the incremental replication phase, data changes that have occurred since the last completed replication cycle are periodically replicated and applied to the replica managed disks, thus keeping replication in sync with changes happening on the VM. In the case of VMware virtual machines, VMware changed block tracking technology is used to keep track of changes between replication cycles. At the start of the replication cycle, a VM snapshot is taken and changed block tracking is used to get the changes between the current snapshot and the last successfully replicated snapshot. That way only data that has changed since the last completed replication cycle needs to be replicated to keep replication for the VM in sync. At the end of each replication cycle, the snapshot is released, and snapshot consolidation is performed for the virtual machine. Similarly, in the case of Hyper-V virtual machines, the Hyper-V replica change tracking engine is used to keep track of changes between consecutive replication cycles.
-When you perform the migrate operation on a replicating virtual machine, you have the option to shutdown the on-premise virtual machine and perform one final incremental replication to ensure zero data loss. On performing the migrate option, the replica managed disks corresponding to the virtual machine are used to create the virtual machine in Azure.
+When you perform the migrate operation on a replicating virtual machine, you have the option to shut down the on-premises virtual machine and perform one final incremental replication to ensure zero data loss. On performing the migration, the replica managed disks corresponding to the virtual machine are used to create the virtual machine in Azure.
To get started, refer the [VMware agentless migration](./tutorial-migrate-vmware.md) and [Hyper-V agentless migration](./tutorial-migrate-hyper-v.md) tutorials.
If there are multiple appliances set up, it is required there is no overlap amon
Agentless replication results in some performance impact on VMware vCenter Server and VMware ESXi hosts. Because agentless replication uses snapshots, it consumes IOPS on storage, so some IOPS storage bandwidth is required. We don't recommend using agentless replication if you have constraints on storage or IOPs in your environment.
+### Can I use Azure Migrate to migrate my web apps to Azure App Service?
+
+You can perform at-scale agentless migration of ASP.NET web apps running on IIS web servers hosted on a Windows OS in a VMware environment. [Learn more](./tutorial-migrate-webapps.md).
+ ## Agent-based Migration
The agentless replication option works by using mechanisms provided by the virtu
When replication is configured for a virtual machine, it first goes through an initial replication phase. During initial replication, a VM snapshot is taken, and a full copy of data from the snapshot disks are replicated to managed disks in your subscription. After initial replication for the VM is complete, the replication process transitions to an incremental replication (delta replication) phase. In the incremental replication phase, data changes that have occurred since the last completed replication cycle are periodically replicated and applied to the replica managed disks, thus keeping replication in sync with changes happening on the VM. In the case of VMware virtual machines, VMware changed block tracking technology is used to keep track of changes between replication cycles. At the start of the replication cycle, a VM snapshot is taken and changed block tracking is used to get the changes between the current snapshot and the last successfully replicated snapshot. That way only data that has changed since the last completed replication cycle needs to be replicated to keep replication for the VM in sync. At the end of each replication cycle, the snapshot is released, and snapshot consolidation is performed for the virtual machine. Similarly, in the case of Hyper-V virtual machines, the Hyper-V replica change tracking engine is used to keep track of changes between consecutive replication cycles.
-When you perform the migrate operation on a replicating virtual machine, you have the option to shutdown the on-premise virtual machine and perform one final incremental replication to ensure zero data loss. On performing the migrate option, the replica managed disks corresponding to the virtual machine are used to create the virtual machine in Azure.
+When you perform the migrate operation on a replicating virtual machine, you have the option to shut down the on-premises virtual machine and perform one final incremental replication to ensure zero data loss. On performing the migration, the replica managed disks corresponding to the virtual machine are used to create the virtual machine in Azure.
To get started, refer the [Hyper-V agentless migration](./tutorial-migrate-hyper-v.md) tutorial.
migrate Concepts Migration Webapps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/concepts-migration-webapps.md
+
+ Title: Support matrix for web apps migration
+description: Support matrix for web apps migration
++++ Last updated : 06/22/2022+++
+# Support matrix for web apps migration
+
+This article summarizes support settings and limitations for agentless migration of web apps to Azure App Service with [Azure Migrate: Migration and modernization](migrate-services-overview.md#azure-migrate-server-migration-tool). If you're looking for information about assessing web apps for migration to Azure App Service, review the [assessment support matrix](concepts-azure-webapps-assessment-calculation.md).
+
+## Migration options
+
+You can perform agentless migration of ASP.NET web apps at scale to [Azure App Service](https://azure.microsoft.com/services/app-service/) using Azure Migrate. However, agent-based migration is not supported.
+
+## Limitations
+
+- Currently, at-scale discovery, assessment, and migration are supported for ASP.NET web apps deployed to on-premises IIS servers hosted in a VMware environment.
+- You can select up to five App Service plans as part of a single migration.
+- Currently, we don't support selecting existing App Service plans during the migration flow.
+- You can migrate web apps up to 2 GB in size, including content stored in a mapped virtual directory.
+- Currently, we don't support migrating UNC directory content.
+- You need Windows PowerShell 4.0 installed on the VMs hosting the IIS web servers from which you plan to migrate ASP.NET web apps to Azure App Service.
+- Currently, the migration flow doesn't support VNet-integrated scenarios.
+
+## ASP.NET web apps migration requirements
+
+Azure Migrate now supports agentless at-scale migration of ASP.NET web apps to [Azure App Service](https://azure.microsoft.com/services/app-service/). Performing a [web apps assessment](./tutorial-assess-webapps.md) is mandatory for migrating web apps using the integrated flow in Azure Migrate.
+
+Support | Details
+--- | ---
+**Supported servers** | Currently supported only for Windows servers running IIS in your VMware environment.
+**Windows servers** | Windows Server 2008 R2 and later are supported.
+**Linux servers** | Currently not supported.
+**IIS access** | Web apps discovery requires a local admin user account.
+**IIS versions** | IIS 7.5 and later are supported.
+**PowerShell version** | PowerShell 4.0
+
+## Next steps
+
+- Learn how to [perform at-scale agentless migration of ASP.NET web apps to Azure App Service](./tutorial-migrate-webapps.md).
+- Once you have successfully completed migration, you may explore the following steps based on web app specific requirement(s):
+ - [Map existing custom DNS name](/azure/app-service/app-service-web-tutorial-custom-domain).
+ - [Secure a custom DNS with a TLS/SSL binding](/azure/app-service/configure-ssl-bindings).
+ - [Securely connect to Azure resources](/azure/app-service/tutorial-connect-overview).
+ - [Deployment best practices](/azure/app-service/deploy-best-practices).
+ - [Security recommendations](/azure/app-service/security-recommendations).
+ - [Networking features](/azure/app-service/networking-features).
+ - [Monitor App Service with Azure Monitor](/azure/app-service/monitor-app-service).
+ - [Configure Azure AD authentication](/azure/app-service/configure-authentication-provider-aad).
+- [Review best practices](/azure/app-service/deploy-best-practices) for deploying to Azure App Service.
migrate How To Create Azure Sql Assessment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/how-to-create-azure-sql-assessment.md
description: Learn how to assess SQL instances for migration to Azure SQL Manage
Previously updated : 02/07/2021 Last updated : 06/27/2022 - # Create an Azure SQL assessment As part of your migration journey to Azure, you assess your on-premises workloads to measure cloud readiness, identify risks, and estimate costs and complexity.
This article shows you how to assess discovered SQL instances in preparation for
## Before you start - Make sure you've [created](./create-manage-projects.md) an Azure Migrate project and have the Azure Migrate: Discovery and assessment tool added.-- To create an assessment, you need to set up an Azure Migrate appliance for [VMware](how-to-set-up-appliance-vmware.md). The appliance discovers on-premises servers, and sends metadata and performance data to Azure Migrate. [Learn more](migrate-appliance.md)
+- To create an assessment, you need to set up an Azure Migrate appliance for [VMware](how-to-set-up-appliance-vmware.md). The appliance discovers on-premises servers, and sends metadata and performance data to Azure Migrate. [Learn more](migrate-appliance.md).
## Azure SQL assessment overview You can create an Azure SQL assessment with sizing criteria as **Performance-based**.
You can create an Azure SQL assessment with sizing criteria as **Performance-bas
## Run an assessment Run an assessment as follows:
-1. On the **Overview** page > **Windows, Linux and SQL Server**, click **Assess and migrate servers**.
-
+1. On the **Overview** page > **Servers, databases and web apps**, select **Assess and migrate servers**.
+
:::image type="content" source="./media/tutorial-assess-sql/assess-migrate-inline.png" alt-text="Screenshot of Overview page for Azure Migrate." lightbox="./media/tutorial-assess-sql/assess-migrate-expanded.png":::
-2. On **Azure Migrate: Discovery and assessment**, click **Assess** and choose the assessment type as **Azure SQL**.
+1. In **Azure Migrate: Discovery and assessment**, select **Assess** and choose the assessment type as **Azure SQL**.
:::image type="content" source="./media/tutorial-assess-sql/assess-inline.png" alt-text="Screenshot of Dropdown to choose assessment type as Azure SQL." lightbox="./media/tutorial-assess-sql/assess-expanded.png":::
+
+1. In **Assess servers**, the assessment type is pre-selected as **Azure SQL** and the discovery source is defaulted to **Servers discovered from Azure Migrate appliance**.
-3. In **Assess servers**, you will be able to see the assessment type pre-selected as **Azure SQL** and the discovery source defaulted to **Servers discovered from Azure Migrate appliance**.
-
-4. Click **Edit** to review the assessment properties.
-
+1. Select **Edit** to review the assessment settings.
:::image type="content" source="./media/tutorial-assess-sql/assess-servers-sql-inline.png" alt-text="Screenshot of Edit button from where assessment settings can be customized." lightbox="./media/tutorial-assess-sql/assess-servers-sql-expanded.png":::-
-5. In Assessment properties > **Target Properties**:
+1. In **Assessment settings** > **Target and pricing settings**, do the following:
- In **Target location**, specify the Azure region to which you want to migrate. - Azure SQL configuration and cost recommendations are based on the location that you specify.
- - In **Target deployment type**,
- - Select **Recommended**, if you want Azure Migrate to assess the readiness of your SQL instances for migrating to Azure SQL MI and Azure SQL DB, and recommend the best suited target deployment option, target tier, Azure SQL configuration and monthly estimates. [Learn More](concepts-azure-sql-assessment-calculation.md)
- - Select **Azure SQL DB**, if you want to assess the readiness of your SQL instances for migrating to Azure SQL Databases only and review the target tier, Azure SQL configuration and monthly estimates.
- - Select **Azure SQL MI**, if you want to assess the readiness of your SQL instances for migrating to Azure SQL Managed Instance only and review the target tier, Azure SQL configuration and monthly estimates.
+ - In **Environment type**, specify the environment for the SQL deployments to apply pricing applicable to Production or Dev/Test.
+ - In **Offer/Licensing program**, specify the Azure offer if you're enrolled. Currently the field is defaulted to Pay-as-you-go, which will give you retail Azure prices.
+ - You can get an additional discount by applying reserved capacity and Azure Hybrid Benefit on top of the pay-as-you-go offer.
+ - You can apply Azure Hybrid Benefit on top of the Pay-as-you-go offer and Dev/Test environment. The assessment does not support applying Reserved Capacity on top of the Pay-as-you-go offer and Dev/Test environment.
+ - If the offer is set to *Pay-as-you-go* and reserved capacity is set to *No reserved instances*, the monthly cost estimates are calculated by multiplying the number of hours chosen in the VM uptime field by the hourly price of the recommended SKU.
- In **Reserved Capacity**, specify whether you want to use reserved capacity for the SQL server after migration.
- - If you select a reserved capacity option, you can't specify ΓÇ£Discount (%)ΓÇ¥.
-
-6. In Assessment properties > **Assessment criteria**:
- - The Sizing criteria is defaulted to **Performance-based** which means Azure migrate will collect performance metrics pertaining to SQL instances and the databases managed by it to recommend an optimal-sized Azure SQL Database and/or SQL Managed Instance configuration. You can specify:
- - **Performance history** to indicate the data duration on which you want to base the assessment. (Default is one day)
- - **Percentile utilization**, to indicate the percentile value you want to use for the performance sample. (Default is 95th percentile)
- - In **Comfort factor**, indicate the buffer you want to use during assessment. This accounts for issues like seasonal usage, short performance history, and likely increases in future usage. For example, if you use a comfort factor of two:
+ - If you select a reserved capacity option, you can't specify "Discount (%)" or "VM uptime".
+ - If the reserved capacity is set to *1 year reserved* or *3 years reserved*, the monthly cost estimates are calculated by multiplying 744 hours in the VM uptime field by the hourly price of the recommended SKU.
+ - In **Currency**, select the billing currency for your account.
+ - In **Discount (%)**, add any subscription-specific discounts you receive on top of the Azure offer. The default setting is 0%.
+ - In **VM uptime**, specify the duration (days per month/hours per day) that servers/VMs will run. This is useful for computing cost estimates for SQL Server on Azure VM where you know that Azure VMs might not run continuously (see the worked example after this list).
+ - Cost estimates for servers where recommended target is *SQL Server on Azure VM* are based on the duration specified.
+ - Default is 31 days per month/24 hours per day.
+ - In **Azure Hybrid Benefit**, specify whether you already have a Windows Server and/or a SQL Server license. Azure Hybrid Benefit is a licensing benefit that helps you significantly reduce the costs of running your workloads in the cloud. It works by letting you use your on-premises Software Assurance-enabled Windows Server and SQL Server licenses on Azure. For example, if you have SQL Server licenses covered by active Software Assurance or SQL Server subscriptions, you can apply for the Azure Hybrid Benefit when you bring those licenses to Azure.
+
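To make the uptime-based calculation concrete, here's a small worked example with hypothetical numbers; the hourly price is illustrative, not a real Azure rate.

```python
# Worked example: pay-as-you-go, no reserved instances,
# VM uptime of 20 days per month x 8 hours per day
hourly_price = 0.75        # USD per hour for the recommended SKU (hypothetical)
uptime_hours = 20 * 8      # days per month x hours per day = 160
monthly_estimate = hourly_price * uptime_hours
print(monthly_estimate)    # 120.0 USD per month
```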
+1. In **Assessment settings** > **Assessment criteria**,
+ - The **Sizing criteria** is defaulted to *Performance-based*, which means Azure migrate will collect performance metrics pertaining to SQL instances and the databases managed by it to recommend an optimal-sized SQL Server on Azure VM and/or Azure SQL Database and/or Azure SQL Managed Instance configuration. You can specify:
+ - **Performance history** to indicate the data duration on which you want to base the assessment. (Default is one day.)
+ - **Percentile utilization**, to indicate the percentile value you want to use for the performance sample. (Default is 95th percentile.)
+ - In **Comfort factor**, indicate the buffer you want to use during assessment. This accounts for issues such as seasonal usage, short performance history, and likely increases in future usage. For example, the following table displays values if you use a comfort factor of two:
**Component** | **Effective utilization** | **Add comfort factor (2.0)**
--- | --- | ---
Cores | 2 | 4
Memory | 8 GB | 16 GB
-
-7. In **Pricing**:
- - In **Offer/Licensing program**, specify the Azure offer if you're enrolled. Currently you can only choose from Pay-as-you-go and Pay-as-you-go Dev/Test.
- - You can avail additional discount by applying reserved capacity and Azure Hybrid Benefit on top of Pay-as-you-go offer.
- - You can apply Azure Hybrid Benefit on top of Pay-as-you-go Dev/Test. The assessment currently does not support applying Reserved Capacity on top of Pay-as-you-go Dev/Test offer.
- - In **Service Tier**, choose the most appropriate service tier option to accommodate your business needs for migration to Azure SQL Database and/or SQL Managed Instance:
- - Select **Recommended** if you want Azure Migrate to recommend the best suited service tier for your servers. This can be General purpose or Business critical. Learn More
+
+1. In **Assessment settings** > **Azure SQL Managed Instance sizing**,
+ - In **Service Tier**, choose the most appropriate service tier option to accommodate your business needs for migration to Azure SQL Managed Instance:
+ - Select *Recommended* if you want Azure Migrate to recommend the best suited service tier for your servers. This can be General purpose or Business critical.
+ - Select *General Purpose* if you want an Azure SQL configuration designed for budget-oriented workloads.
+ - Select *Business Critical* if you want an Azure SQL configuration designed for low-latency workloads with high resiliency to failures and fast failovers.
+ - **Instance type** - Default value is *Single instance*.
+1. In **Assessment settings** > **SQL Server on Azure VM sizing**:
+ - **Pricing Tier** - Default value is *Standard*.
+ - In **VM series**, specify the Azure VM series you want to consider for *SQL Server on Azure VM* sizing. Based on the configuration and performance requirements of your SQL Server or SQL Server instance, the assessment will recommend a VM size from the selected list of VM series.
+ - You can edit settings as needed. For example, if you don't want to include D-series VM, you can exclude D-series from this list.
+ > [!NOTE]
+ > As Azure SQL assessments are intended to give the best performance for your SQL workloads, the VM series list only has VMs that are optimized for running your SQL Server on Azure Virtual Machines (VMs). [Learn more](https://docs.microsoft.com/azure/azure-sql/virtual-machines/windows/performance-guidelines-best-practices-checklist?view=azuresql&preserve-view=true#vm-size).
+ - **Storage Type** is defaulted to *Recommended*, which means the assessment will recommend the best suited Azure Managed Disk based on the chosen environment type, on-premises disk size, IOPS, and throughput.
+
+1. In **Assessment settings** > **Azure SQL Database sizing**:
+ - In **Service Tier**, choose the most appropriate service tier option to accommodate your business needs for migration to Azure SQL Database.
+ - Select **Recommended** if you want Azure Migrate to recommend the best suited service tier for your servers. This can be General purpose or Business critical.
- Select **General Purpose** if you want an Azure SQL configuration designed for budget-oriented workloads. - Select **Business Critical** if you want an Azure SQL configuration designed for low-latency workloads with high resiliency to failures and fast failovers.
- - In **Discount (%)**, add any subscription-specific discounts you receive on top of the Azure offer. The default setting is 0%.
- - In **Currency**, select the billing currency for your account.
- - In **Azure Hybrid Benefit**, specify whether you already have a SQL Server license. If you do and they're covered with active Software Assurance of SQL Server Subscriptions, you can apply for the Azure Hybrid Benefit when you bring licenses to Azure.
- - Click Save if you make changes.
-
+ - **Instance type** - Default value is *Single database*.
+ - **Purchase model** - Default value is *vCore*.
+ - **Compute tier** - Default value is *Provisioned*.
+
+ - Select **Save** if you made changes.
+ :::image type="content" source="./media/tutorial-assess-sql/view-all-inline.png" alt-text="Screenshot to save the assessment properties." lightbox="./media/tutorial-assess-sql/view-all-expanded.png":::
-8. In **Assess Servers** > click Next.
-9. In **Select servers to assess** > **Assessment name** > specify a name for the assessment.
-10. In **Select or create a group** > select **Create New** and specify a group name.
-
+8. In **Assess Servers**, select **Next**.
+9. In **Select servers to assess** > **Assessment name** > specify a name for the assessment.
+10. In **Select or create a group** > select **Create New** and specify a group name.
+ :::image type="content" source="./media/tutorial-assess-sql/assessment-add-servers-inline.png" alt-text="Screenshot of Location of New group button." lightbox="./media/tutorial-assess-sql/assessment-add-servers-expanded.png":::
-11. Select the appliance, and select the servers you want to add to the group. Then click Next.
-12. In **Review + create assessment**, review the assessment details, and click Create Assessment to create the group and run the assessment.
- :::image type="content" source="./media/tutorial-assess-sql/assessment-create.png" alt-text="Location of Review and create assessment button.":::
-13. After the assessment is created, go to **Windows, Linux and SQL Server** > **Azure Migrate: Discovery and assessment** tile > Click on the number next to Azure SQL assessment.
- :::image type="content" source="./media/tutorial-assess-sql/assessment-sql-navigation.png" alt-text="Navigation to created assessment":::
-15. Click on the assessment name which you wish to view.
+11. Select the appliance, and select the servers you want to add to the group. Then select **Next**.
+12. In **Review + create assessment**, review the assessment details, and select **Create Assessment** to create the group and run the assessment.
+13. After the assessment is created, go to **Servers, databases and web apps** > **Azure Migrate: Discovery and assessment**, select the number next to Azure SQL assessment. If you do not see the number populated, select **Refresh** to get the latest updates.
+
+ :::image type="content" source="./media/tutorial-assess-sql/assessment-sql-navigation.png" alt-text="Screenshot of Navigation to created assessment.":::
+
+14. Select the assessment name that you wish to view.
> [!NOTE]
-> As Azure SQL assessments are performance-based assessments, we recommend that you wait at least a day after starting discovery before you create an assessment. This provides time to collect performance data with higher confidence. If your discovery is still in progress, the readiness of your SQL instances will be marked as **Unknown**. Ideally, after you start discovery, **wait for the performance duration you specify (day/week/month)** to create or recalculate the assessment for a high-confidence rating.
+> As Azure SQL assessments are performance-based assessments, we recommend that you wait at least a day after starting discovery before you create an assessment. This provides time to collect performance data with higher confidence. If your discovery is still in progress, the readiness of your SQL instances will be marked as **Unknown**. Ideally, after you start discovery, **wait for the performance duration you specify (day/week/month)** to create or recalculate the assessment for a high-confidence rating.
## Review an assessment

**To view an assessment**:
-1. **Windows, Linux and SQL Server** > **Azure Migrate: Discovery and assessment** > Click on the number next to Azure SQL assessment.
-2. Click on the assessment name which you wish to view. As an example(estimations and costs for example only):
+1. In **Servers, databases and web apps** > **Azure Migrate: Discovery and assessment**, select the number next to Azure SQL assessment.
+2. Select the assessment name that you wish to view. The following example shows estimations and costs for illustration only:
:::image type="content" source="./media/tutorial-assess-sql/assessment-sql-summary-inline.png" alt-text="Screenshot of Overview of SQL assessment." lightbox="./media/tutorial-assess-sql/assessment-sql-summary-expanded.png":::
-3. Review the assessment summary. You can also edit the assessment properties or recalculate the assessment.
+3. Review the assessment summary. You can also edit the assessment settings or recalculate the assessment.
-#### Discovered items
+### Discovered entities
-This indicates the number of SQL servers, instances and databases that were assessed in this assessment.
+This indicates the number of SQL servers, instances, and databases that were assessed in this assessment.
-#### Azure readiness
+### SQL Server migration scenarios
-This indicates the distribution of assessed SQL instances:
-
-**Target deployment type (in assessment properties)** | **Readiness**
- | |
-**Recommended** | Ready for Azure SQL Database, Ready for SQL Managed Instance, Potentially ready for Azure VM, Readiness unknown (In case the discovery is in progress or there are some discovery issues to be fixed)
-**Azure SQL DB** or **Azure SQL MI** | Ready for Azure SQL Database or SQL Managed Instance, Not ready for Azure SQL Database or SQL Managed Instance, Readiness unknown (In case the discovery is in progress or there are some discovery issues to be fixed)
-
-You can drill down to understand details around migration issues/warnings that you can remediate before migration to Azure SQL. [Learn More](concepts-azure-sql-assessment-calculation.md)
-You can also review the recommended Azure SQL configurations for migrating to Azure SQL databases and/or Managed Instances.
-
-#### Azure SQL Database and Managed Instance cost details
+This indicates the different migration strategies that you can consider for your SQL deployments. You can review the readiness for target deployment types and the cost estimates for SQL Servers/Instances/Databases that are marked ready or ready with conditions:
+
+1. **Recommended deployment**:
+This strategy recommends the Azure SQL deployment type that is the most compatible with your SQL instance and the most cost-effective. Migrating to a Microsoft-recommended target reduces your overall migration effort. If your instance is ready for SQL Server on Azure VM, Azure SQL Managed Instance, and Azure SQL Database, the target deployment type with the fewest migration readiness issues and the lowest cost is recommended.
+You can see the SQL Server instance readiness for different recommended deployment targets and monthly cost estimates for SQL instances marked *Ready* and *Ready with conditions*.
+
+ - You can go to the Readiness report to:
+ - Review the recommended Azure SQL configurations for migrating to SQL Server on Azure VM and/or Azure SQL databases and/or Azure SQL Managed Instances.
+ - Understand details around migration issues/warnings that you can remediate before migration to the different Azure SQL targets. [Learn More](concepts-azure-sql-assessment-calculation.md).
+ - You can go to the cost estimates report to review the cost of each SQL instance after migrating to the recommended deployment target.
+
+ > [!NOTE]
+ > In the recommended deployment strategy, migrating instances to *SQL Server on Azure VM* is recommended as per the Azure best practices. When the SQL Server credentials are not available, the Azure SQL assessment provides right-sized lift-and-shift, that is, *Server to SQL Server on Azure VM* recommendations.
-The monthly cost estimate includes compute and storage costs for Azure SQL configurations corresponding to the recommended Azure SQL Database and/or SQL Managed Instance deployment type. [Learn More](concepts-azure-sql-assessment-calculation.md#calculate-monthly-costs)
+1. **Migrate all instances to Azure SQL MI**:
+In this strategy, you can see the readiness and cost estimates for migrating all SQL Server instances to Azure SQL Managed Instance.
+1. **Migrate all instances to SQL Server on Azure VM**:
+In this strategy, you can see the readiness and cost estimates for migrating all SQL Server instances to SQL Server on Azure VM.
+
+1. **Migrate all servers to SQL Server on Azure VM**:
+In this strategy, you can see how you can rehost the servers running SQL Server to SQL Server on Azure VM and review the readiness and cost estimates.
+Even when SQL Server credentials are not available, this report provides right-sized lift-and-shift, that is, *Server to SQL Server on Azure VM* recommendations. The readiness and sizing logic is similar to the Azure VM assessment type.
+
+1. **Migrate all SQL databases to Azure SQL Database**:
+In this strategy, you can see how you can migrate individual databases to Azure SQL Database and review the readiness and cost estimates.
### Review readiness
+You can review readiness reports for different migration strategies:
+
+1. Select the **Readiness** report for any of the migration strategies.
-1. Click **Azure SQL readiness**.
-
:::image type="content" source="./media/tutorial-assess-sql/assessment-sql-readiness-inline.png" alt-text="Screenshot with Details of Azure SQL readiness" lightbox="./media/tutorial-assess-sql/assessment-sql-readiness-expanded.png":::
-1. In Azure SQL readiness, review the **Azure SQL DB readiness** and **Azure SQL MI readiness** for the assessed SQL instances:
- - **Ready**: The instance is ready to be migrated to Azure SQL DB/MI without any migration issues or warnings.
- - Ready(hyperlinked and blue information icon): The instance is ready to be migrated to Azure SQL DB/MI without any migration issues but has some migration warnings that you need to review. You can click on the hyperlink to review the migration warnings and the recommended remediation guidance:
- :::image type="content" source="./media/tutorial-assess-sql/assess-ready.png" alt-text="Assessment with ready status":::
- - **Not ready**: The instance has one or more migration issues for migrating to Azure SQL DB/MI. You can click on the hyperlink and review the migration issues and the recommended remediation guidance.
- - **Unknown**: Azure Migrate can't assess readiness, because the discovery is in progress or there are issues during discovery that need to be fixed from the notifications blade. If the issue persists, please contact Microsoft support.
-1. Review the recommended deployment type for the SQL instance which is determined as per the matrix below:
-
- - **Target deployment type** (as selected in assessment properties): **Recommended**
-
- **Azure SQL DB readiness** | **Azure SQL MI readiness** | **Recommended deployment type** | **Azure SQL configuration and cost estimates calculated?**
- | | | |
- Ready | Ready | Azure SQL DB or Azure SQL MI [Learn more](concepts-azure-sql-assessment-calculation.md#recommended-deployment-type) | Yes
- Ready | Not ready or Unknown | Azure SQL DB | Yes
- Not ready or Unknown | Ready | Azure SQL MI | Yes
- Not ready | Not ready | Potentially ready for Azure VM [Learn more](concepts-azure-sql-assessment-calculation.md#calculate-readiness) | No
- Not ready or Unknown | Not ready or Unknown | Unknown | No
-
- - **Target deployment type** (as selected in assessment properties): **Azure SQL DB**
-
- **Azure SQL DB readiness** | **Azure SQL configuration and cost estimates calculated?**
- | |
- Ready | Yes
- Not ready | No
- Unknown | No
-
- - **Target deployment type** (as selected in assessment properties): **Azure SQL MI**
+1. Review the readiness columns in the respective reports:
- **Azure SQL MI readiness** | **Azure SQL configuration and cost estimates calculated?**
- | |
- Ready | Yes
- Not ready | No
- Unknown | No
-
-4. Click on the instance name and drill down to see the number of user databases, instance details including instance properties, compute (scoped to instance) and source database storage details.
-5. Click on the number of user databases to review the list of databases and their details. As an example(estimations and costs for example only):
- :::image type="content" source="./media/tutorial-assess-sql/assessment-db.png" alt-text="SQL instance detail":::
-5. Click on review details in the Migration issues column to review the migration issues and warnings for a particular target deployment type.
- :::image type="content" source="./media/tutorial-assess-sql/assessment-db-issues.png" alt-text="DB migration issues and warnings":::
+ **Migration strategy** | **Readiness Columns (Respective deployment target)**
+ |
+ Recommended | MI readiness (Azure SQL MI), VM readiness (SQL Server on Azure VM), DB readiness (Azure SQL DB).
+ Instances to Azure SQL MI | MI readiness (Azure SQL Managed Instance)
+ Instances to SQL Server on Azure VM | VM readiness (SQL Server on Azure VM).
+ Servers to SQL Server on Azure VM | Azure VM readiness (SQL Server on Azure VM).
+ Databases to Azure SQL DB | DB readiness (Azure SQL Database)
+
+1. Review the readiness for the assessed SQL instances/SQL Servers/Databases:
+ - **Ready**: The instance/server is ready to be migrated to SQL Server on Azure VM/Azure SQL MI/Azure SQL DB without any migration issues or warnings.
+ - Ready: The instance is ready to be migrated to Azure VM/Azure SQL MI/Azure SQL DB without any migration issues but has some migration warnings that you need to review. You can select the hyperlink to review the migration warnings and the recommended remediation guidance.
+ - **Ready with conditions**: The instance/server has one or more migration issues for migrating to Azure VM/Azure SQL MI/Azure SQL DB. You can select the hyperlink and review the migration issues and the recommended remediation guidance.
+ - **Not ready**: The assessment could not find a SQL Server on Azure VM/Azure SQL MI/Azure SQL DB configuration meeting the desired configuration and performance characteristics. Select the hyperlink to review the recommendation to make the instance/server ready for the desired target deployment type.
+ - **Unknown**: Azure Migrate can't assess readiness, because the discovery is in progress or there are issues during discovery that need to be fixed from the notifications blade. If the issue persists, contact [Microsoft support](https://support.microsoft.com).
+
+1. Select the instance name and drill down to see the number of user databases, instance details including instance properties, compute (scoped to instance), and source database storage details.
+1. Select the number of user databases to review the list of databases and their details.
+1. Select review details in the **Migration issues** column to review the migration issues and warnings for a particular target deployment type.
### Review cost estimates
-The assessment summary shows the estimated monthly compute and storage costs for Azure SQL configurations corresponding to the recommended Azure SQL databases and/or Managed Instances deployment type.
+The assessment summary shows the estimated monthly compute and storage costs for Azure SQL configurations corresponding to the recommended SQL Server on Azure VM and/or Azure SQL Managed Instances and/or Azure SQL Database deployment type.
1. Review the monthly total costs. Costs are aggregated for all SQL instances in the assessed group.
- - Cost estimates are based on the recommended Azure SQL configuration for an instance.
- - Estimated monthly costs for compute and storage are shown. As an example(estimations and costs for example only):
+ - Cost estimates are based on the recommended Azure SQL configuration for an instance/server/database.
+ - Estimated total (compute and storage) monthly costs are displayed. As an example:
- :::image type="content" source="./media/tutorial-assess-sql/assessment-sql-cost-inline.png" alt-text="Screenshot of cost details." lightbox="./media/tutorial-assess-sql/assessment-sql-cost-expanded.png":::
+ :::image type="content" source="./media/tutorial-assess-sql/assessment-sql-cost-inline.png" alt-text="Screenshot of cost details." lightbox="./media/tutorial-assess-sql/assessment-sql-cost-expanded.png":::
+ - The compute and storage costs are split out in the individual cost estimate reports and at the instance/server/database level.
1. You can drill down at the instance level to see the Azure SQL configuration and cost estimates for each instance.
1. You can also drill down to the database list to review the Azure SQL configuration and cost estimates per database when an Azure SQL Database configuration is recommended.

### Review confidence rating

Azure Migrate assigns a confidence rating to all Azure SQL assessments based on the availability of the performance/utilization data points needed to compute the assessment for all the assessed SQL instances and databases. Rating is from one star (lowest) to five stars (highest).
-
-The confidence rating helps you estimate the reliability of size recommendations in the assessment. Confidence ratings are as follows.
+The confidence rating helps you estimate the reliability of size recommendations in the assessment. Confidence ratings are as follows:
**Data point availability** | **Confidence rating**
 |
The confidence rating helps you estimate the reliability of size recommendations
## Next steps
- [Learn more](concepts-azure-sql-assessment-calculation.md) about how Azure SQL assessments are calculated.
-- Start migrating SQL instances and databases using [Azure Database Migration Service](../dms/dms-overview.md).
+- Start migrating SQL instances and databases using [Azure Database Migration Service](../dms/dms-overview.md).
migrate Migrate Services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/migrate-services-overview.md
The Azure Migrate: Server Migration tool helps in migrating servers to Azure:
On-premises VMware VMs | Migrate VMs to Azure using agentless or agent-based migration.<br/><br/> For agentless migration, Server Migration uses the same appliance that is used by Discovery and assessment tool for discovery and assessment of servers.<br/><br/> For agent-based migration, Server Migration uses a replication appliance.
On-premises Hyper-V VMs | Migrate VMs to Azure.<br/><br/> Server Migration uses provider agents installed on Hyper-V host for the migration.
On-premises physical servers or servers hosted on other clouds | You can migrate physical servers to Azure. You can also migrate other virtualized servers, and VMs from other public clouds, by treating them as physical servers for the purpose of migration. Server Migration uses a replication appliance for the migration.
+Web apps hosted on Windows OS in a VMware environment | You can perform agentless migration of ASP.NET web apps at-scale to [Azure App Service](https://azure.microsoft.com/services/app-service/) using Azure Migrate.
## Selecting assessment and migration tools
migrate Troubleshoot Webapps Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/troubleshoot-webapps-migration.md
+
+ Title: Troubleshoot web apps migration issues
+description: Troubleshoot web apps migration issues
+++ Last updated : 6/22/2022+++
+# Troubleshooting web apps migration issues
+
+This article describes some common issues and specific errors you might encounter when trying to migrate web apps using Azure Migrate.
+
+## Web Apps migration issues
+
+The following table lists common web apps migration errors and the steps to fix them:
+
+**Error code** | **Error message** | **Troubleshooting steps**
+-- | - | --
+AccessDenied | Access denied. | Check error details. This may be due to a change since last web app discovery. Confirm that the web app discovery is still successful and/or troubleshoot web app discovery access issues first.
+AppContentAlreadyExists | Application content appContent.zip already present on storage before content copy. | Retry the migration using a new storage account. Contact support if this occurs persistently.
+AppZipUploadFailed | Error uploading application content to storage account. | Retry if it is a transient issue and confirm connectivity between the appliance and the Azure storage account specified for the migration.
+CopyAppContentToApplianceFailure | Error occurred copying content from IIS web server to appliance. | Check error details for more information. Confirm connectivity between appliance and web server such as by looking for recently successful web app discovery.
+IISWebAppExceededMaxContentSize | Content size exceeded max content size (2 GB) for migration using this tool. | The deployment method used only supports content up to 2 GB in size. If the uncompressed content is larger than 2 GB, migration is not attempted and this error is returned. This limit should be flagged in the web app assessment and may indicate that the file content size has changed since the last web app discovery was completed.
+IISWebAppFailureCompressingSiteContent | Exception occurred compressing site content. | Check error details for more information. This could be related to physical file permissions, including, if access has been blocked for the Administrator account used for the web app discovery and migration of the site content.
+IISWebAppMigrationError | Error occurred during app content copy operation. | Check the error message for additional details.
+IISWebAppNotFoundOnServer | Web application matching site name not found on web server. | This may be due to changes on the web server since the last web app discovery was completed, such as site delete or rename operations. Confirm that web app discovery was completed recently and that the site still exists on the web server.
+IISWebAppUNCContentDirectory | Web app contains only UNC directory content. UNC directories are not currently supported for migration. | Currently, migration is not supported for content on UNC shares. This error occurs if all site content is on UNC shares; any non-UNC content directories will still be migrated.
+IISWebServerAccessFailedError | Unable to access the IIS configuration. | This can be caused by insufficient access to IIS configuration and management API locations. Web app migration uses the same identity and connection mechanism as web app discovery. Check if settings have changed since the last successful web app discovery and if that discovery is still successful for this web server.
+IISWebServerIISNotFoundError | IIS Management Console feature is not enabled. | This error indicates that the IIS Management Console feature is not enabled on the web server, and is likely a change to the web server since the last successful web app discovery was completed. Ensure that the Web Server (IIS) role including the IIS Management Console feature (part of Management Tools) is enabled and that web app discovery can discover web apps for the target web server.
+IISWebServerInvalidSiteConfig | Invalid IIS configuration encountered, the site has no root application defined. | This indicates an invalid site configuration for one or more sites on the IIS server. Add a root "/" application for all web sites on the IIS server or remove the associated (non-functional) sites.
+IISWebServerPowerShellError | Error occurred during PowerShell operation. | Check the error message for more details. Remote PowerShell is used to package the site content from the web server without requiring the installation of any products or machine changes on the web server.
+IISWebServerPowerShellVersionLessThan4 | PowerShell version on IIS web server was less than minimum required PowerShell version 4. | Migration is only supported for IIS web servers with PowerShell V4 or later versions. Update the web server with PowerShell v4 to enable this migration.
+IISWebServerUnableToConnect | Unable to connect to the server. | Check error details. This may be due to a change since last successful web app discovery. Confirm that web app discovery is still successful and/or troubleshoot web app discovery access issues first.
+IISWebServerZeroWebAppsFound | No web apps were found on the target IIS server. | This may indicate that the web server was modified after the last web app discovery was completed. Confirm that web app discovery was recently completed and that web apps were not removed from the web server.
+NullResult | PowerShell script returned no results. | Remote PowerShell is used for packaging the site content from the web server without requiring the install of any products or persistent files on the server. This error may indicate that the MaxMemoryPerShell value on the IIS server is too low, or has been changed since web app discovery was completed. Try increasing the MaxMemoryPerShell value on the IIS server using a command like: Set-Item WSMan:\localhost\Shell\MaxMemoryPerShellMB 4096 (see the sketch after this table).
+ResultFileContentJSONParseError | Results in unexpected format. | Contact support if you are seeing this error.
+ScriptExecutionTimedOutOnVm | Operation timed out. | This error may indicate a change on the server since last web app discovery. Check if web app discovery is still running and successful.
+StorageAuthenticationFailed | Failed to authenticate with Azure Storage container. | Check the error details for more information.
+StorageBlobAlreadyExists | App content blob already present before upload of app content. | Retry the migration using a new storage account.
+StorageGenericError | Azure Storage related error. | The Azure Resource Manager deployment step completes only when the content (appContent.zip) or an error file (error.json) appears in the site's storage container. If the NuGet is unable to upload the error.json file in error cases, the Azure Resource Manager deployment continues until it times out, waiting for the content. This may indicate an issue with connectivity between the appliance and the specified storage account being used by migration.
+UnableToConnectToPhysicalServer | Connecting to the remote server failed. | Check error details. This may be due to a change since last web app discovery. Check for web app discovery errors and troubleshoot web app discovery connection issues first.
+UnableToConnectToServer | Connecting to the remote server failed. | Check error details. This may be due to a change since last web app discovery. Check for web app discovery errors and troubleshoot web app discovery connection issues first.
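+
+For the *NullResult* case, the following is a minimal sketch of how you might inspect and raise the WinRM per-shell memory quota on the IIS server; the 4096-MB value mirrors the command in the table above and may need tuning for your environment:
+
+```powershell
+# Check the current per-shell memory quota (value is in MB).
+Get-Item WSMan:\localhost\Shell\MaxMemoryPerShellMB
+
+# Raise the quota so remote PowerShell can package larger site content.
+Set-Item WSMan:\localhost\Shell\MaxMemoryPerShellMB 4096
+
+# Restart WinRM so the new quota applies to new remote sessions.
+Restart-Service WinRM
+```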
+
+## Next steps
+
+- Continue to [perform at-scale agentless migration of ASP.NET web apps to Azure App Service](./tutorial-migrate-webapps.md).
+- Once you have successfully completed migration, you may explore the following steps based on web app-specific requirements:
+ - [Map existing custom DNS name](/azure/app-service/app-service-web-tutorial-custom-domain).
+ - [Secure a custom DNS with a TLS/SSL binding](/azure/app-service/configure-ssl-bindings).
+ - [Securely connect to Azure resources](/azure/app-service/tutorial-connect-overview).
+ - [Deployment best practices](/azure/app-service/deploy-best-practices).
+ - [Security recommendations](/azure/app-service/security-recommendations).
+ - [Networking features](/azure/app-service/networking-features).
+ - [Monitor App Service with Azure Monitor](/azure/app-service/monitor-app-service).
+ - [Configure Azure AD authentication](/azure/app-service/configure-authentication-provider-aad).
+- [Review best practices](/azure/app-service/deploy-best-practices) for deploying to Azure App Service.
migrate Tutorial Assess Webapps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/tutorial-assess-webapps.md
In this tutorial, you learn how to:
- If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/pricing/free-trial/) before you begin.
- Before you follow this tutorial to assess your web apps for migration to Azure App Service, make sure you've discovered the web apps you want to assess using the Azure Migrate appliance by [following this tutorial](tutorial-discover-vmware.md).
-- If you want to try out this feature in an existing project, please ensure that you have completed the [prerequisites](how-to-discover-sql-existing-project.md) in this article.
+- If you want to try out this feature in an existing project, ensure that you have completed the [prerequisites](how-to-discover-sql-existing-project.md) in this article.
## Run an assessment

Run an assessment as follows:
-1. On the **Overview** page > **Servers, databases and web apps**, click **Discover, assess and migrate**.
+1. On the **Overview** page > **Servers, databases and web apps**, select **Discover, assess and migrate**.
+ :::image type="content" source="./media/tutorial-assess-webapps/discover-assess-migrate.png" alt-text="Overview page for Azure Migrate":::
-2. On **Azure Migrate: Discovery and assessment**, click **Assess** and choose the assessment type as **Azure App Service**.
+
+2. On **Azure Migrate: Discovery and assessment**, select **Assess** and choose the assessment type as **Azure App Service**.
+ :::image type="content" source="./media/tutorial-assess-webapps/assess.png" alt-text="Dropdown to choose assessment type as Azure App Service":::
-3. In **Create assessment** > you will be able to see the assessment type pre-selected as **Azure App Service** and the discovery source defaulted to **Servers discovered from Azure Migrate appliance**.
-4. Click **Edit** to review the assessment properties.
+
+3. In **Create assessment**, the assessment type is pre-selected as **Azure App Service**, and the discovery source defaults to **Servers discovered from Azure Migrate appliance**.
+
+4. Select **Edit** to review the assessment properties.
+ :::image type="content" source="./media/tutorial-assess-webapps/assess-webapps.png" alt-text="Edit button from where assessment properties can be customized":::+ 5. Here's what's included in Azure App Service assessment properties: **Property** | **Details** | **Target location** | The Azure region to which you want to migrate. Azure App Service configuration and cost recommendations are based on the location that you specify. **Isolation required** | Select yes if you want your web apps to run in a private and dedicated environment in an Azure datacenter using Dv2-series VMs with faster processors, SSD storage, and double the memory to core ratio compared to Standard plans.
- **Reserved instances** | Specifies reserved instances so that cost estimations in the assessment take them into account.<br/><br/> If you select a reserved instance option, you can't specify "Discount (%)".
+ **Reserved instances** | Specifies reserved instances so that cost estimations in the assessment take them into account.<br/><br/> If you select a reserved instance option, you can't specify *Discount (%)*.
 **Offer** | The [Azure offer](https://azure.microsoft.com/support/legal/offer-details/) in which you're enrolled. The assessment estimates the cost for that offer.
 **Currency** | The billing currency for your account.
 **Discount (%)** | Any subscription-specific discounts you receive on top of the Azure offer. The default setting is 0%.
 **EA subscription** | Specifies that an Enterprise Agreement (EA) subscription is used for cost estimation. Takes into account the discount applicable to the subscription. <br/><br/> Leave the settings for reserved instances, and discount (%) properties with their default settings.
- :::image type="content" source="./media/tutorial-assess-webapps/webapps-assessment-properties.png" alt-text="App Service assessment properties":::
+ :::image type="content" source="./media/tutorial-assess-webapps/webapps-assessment-properties.png" alt-text="Screenshot of App Service assessment properties.":::
-1. In **Create assessment** > click Next.
+1. In **Create assessment**, select **Next**.
1. In **Select servers to assess** > **Assessment name** > specify a name for the assessment.
-1. In **Select or create a group** > select **Create New** and specify a group name.
-1. Select the appliance, and select the servers you want to add to the group. Then click Next.
-1. In **Review + create assessment**, review the assessment details, and click Create Assessment to create the group and run the assessment.
-1. After the assessment is created, go to **Servers, databases and web apps** > **Azure Migrate: Discovery and assessment** tile > Refresh the tile data by clicking on the Refresh option on top of the tile. Wait for data to get refreshed.
- :::image type="content" source="./media/tutorial-assess-webapps/tile-refresh.png" alt-text="Refresh discovery and assessment tool data":::
-1. Click on the number next to Azure App Service assessment.
- :::image type="content" source="./media/tutorial-assess-webapps/assessment-webapps-navigation.png" alt-text="Navigation to created assessment":::
-1. Click on the assessment name which you wish to view.
+1. In **Select or create a group**, select **Create New** and specify a group name.
+1. Select the appliance, and select the servers that you want to add to the group. Select **Next**.
+1. In **Review + create assessment**, review the assessment details, and select **Create Assessment** to create the group and run the assessment.
+1. After the assessment is created, go to **Servers, databases and web apps** > **Azure Migrate: Discovery and assessment**. Refresh the tile data by selecting the **Refresh** option on top of the tile. Wait for the data to refresh.
+
+ :::image type="content" source="./media/tutorial-assess-webapps/tile-refresh.png" alt-text="Refresh discovery and assessment tool data.":::
+
+1. Select the number next to Azure App Service assessment.
+
+ :::image type="content" source="./media/tutorial-assess-webapps/assessment-webapps-navigation.png" alt-text="Navigation to created assessment.":::
+
+1. Select the assessment name that you wish to view.
## Review an assessment

**To view an assessment**:
-1. **Servers, databases and web apps** > **Azure Migrate: Discovery and assessment** > Click on the number next to Azure App Service assessment.
-2. Click on the assessment name which you wish to view.
+1. In **Servers, databases and web apps** > **Azure Migrate: Discovery and assessment**, select the number next to the Azure App Service assessment.
+2. Select the assessment name that you wish to view.
+
+ :::image type="content" source="./media/tutorial-assess-webapps/assessment-webapps-summary.png" alt-text="App Service assessment overview.":::
+
3. Review the assessment summary. You can also edit the assessment properties or recalculate the assessment.

#### Azure App Service readiness
-This indicates the distribution of assessed web apps. You can drill-down to understand details around migration issues/warnings that you can remediate before migration to Azure App Service. [Learn More](concepts-azure-webapps-assessment-calculation.md)
+This indicates the distribution of the assessed web apps. You can drill down to understand the details around migration issues/warnings that you can remediate before migration to Azure App Service. [Learn More](concepts-azure-webapps-assessment-calculation.md).
You can also view the recommended App Service SKU and plan for migrating to Azure App Service.

#### Azure App Service cost details
An [App Service plan](../app-service/overview-hosting-plans.md) carries a [charg
### Review readiness
-1. Click **Azure App Service readiness**.
- :::image type="content" source="./media/tutorial-assess-webapps/assessment-webapps-readiness.png" alt-text="Azure App Service readiness details":::
+1. Select **Azure App Service readiness**.
+
+ :::image type="content" source="./media/tutorial-assess-webapps/assessment-webapps-readiness.png" alt-text="Azure App Service readiness details.":::
+
1. Review the Azure App Service readiness column in the table for the assessed web apps:
    1. If there are no compatibility issues found, the readiness is marked as **Ready** for the target deployment type.
    1. If there are non-critical compatibility issues, such as degraded or unsupported features that do not block the migration to a specific target deployment type, the readiness is marked as **Ready with conditions** (hyperlinked) with **warning** details and recommended remediation guidance.
    1. If there are any compatibility issues that may block the migration to a specific target deployment type, the readiness is marked as **Not ready** with **issue** details and recommended remediation guidance.
    1. If the discovery is still in progress or there are any discovery issues for a web app, the readiness is marked as **Unknown** as the assessment could not compute the readiness for that web app.
-1. Review the recommended SKU for the web apps which is determined as per the matrix below:
+1. Review the recommended SKU for the web apps, which is determined as per the matrix below:
 **Isolation required** | **Reserved instance** | **App Service plan/ SKU**
 | |
An [App Service plan](../app-service/overview-hosting-plans.md) carries a [charg
 Not ready | No | No
 Unknown | No | No
-1. Click on the App Service plan hyperlink in table to see the App Service plan details such as compute resources, and other web apps that are part of the same plan.
+1. Select the App Service plan link in the Azure App Service readiness table to see the App Service plan details such as compute resources and other web apps that are part of the same plan.
### Review cost estimates
-The assessment summary shows the estimated monthly costs for hosting you web apps in App Service. In App Service, you pay charges per App Service plan and not per web app. One or more apps can be configured to run on the same computing resources (or in the same App Service plan). Whatever apps you put into this App Service plan run on these compute resources as defined by your App Service plan.
-To optimize cost, Azure Migrate assessment allocates multiple web apps to each recommended App Service plan. Number of web apps allocated to each plan instance is as per below table.
+The assessment summary shows the estimated monthly costs for hosting your web apps in App Service. In App Service, you pay charges per App Service plan and not per web app. One or more apps can be configured to run on the same computing resources (or in the same App Service plan). The apps that you add into this App Service plan run on the compute resources defined by your App Service plan.
+To optimize cost, the Azure Migrate assessment allocates multiple web apps to each recommended App Service plan. The number of web apps allocated to each plan instance is shown in the table below.
**App Service plan** | **Web apps per App Service plan**
 |
P1v3 | 16
## Next steps
-[Learn more](concepts-azure-webapps-assessment-calculation.md) about how Azure App Service assessments are calculated.
+- Learn how to [perform at-scale agentless migration of ASP.NET web apps to Azure App Service](./tutorial-migrate-webapps.md).
+- [Learn more](concepts-azure-webapps-assessment-calculation.md) about how Azure App Service assessments are calculated.
migrate Tutorial Migrate Webapps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/tutorial-migrate-webapps.md
+
+ Title: Migrate ASP.NET web apps to Azure App Service using Azure Migrate
+description: At-scale migration of ASP.NET web apps to Azure App Service using Azure Migrate
++++ Last updated : 06/21/2022+++
+# Migrate ASP.NET web apps to Azure App Service with Azure Migrate
+
+This article shows you how to migrate ASP.NET web apps at-scale to [Azure App Service](https://azure.microsoft.com/services/app-service/) using Azure Migrate.
+
+> [!NOTE]
+> Tutorials show you the simplest deployment path for a scenario so that you can quickly set up a proof-of-concept. Tutorials use default options where possible and don't show all possible settings and paths.
+
+In this tutorial, you learn how to:
+
+> [!div class="checklist"]
+> * Migrate ASP.NET web apps at-scale to [Azure App Service](https://azure.microsoft.com/services/app-service/) using integrated flow in Azure Migrate.
+> * Change migration plans for web apps.
+> * Change App Service plan for web apps.
+
+If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/pricing/free-trial/) before you begin.
+
+## Prerequisites
+
+Before you begin this tutorial, you should:
+
+1. [Complete the first tutorial](./tutorial-discover-vmware.md) to discover web apps running in your VMware environment.
+2. [Complete the second tutorial](./tutorial-assess-webapps.md) to assess web apps to determine their readiness status for migration to [Azure App Service](https://azure.microsoft.com/services/app-service/). It's necessary to assess web apps in order to migrate them using the integrated flow.
+3. Go to the existing project or [create a new project](./create-manage-projects.md).
+
+## Migrate web apps
+
+Once the web apps are assessed, you can migrate them using the integrated migration flow in Azure Migrate.
+
+ - You can select up to five App Service Plans as part of a single migration.
+ - Currently, we don't support selecting existing App Service Plans during the migration flow.
+ - You can migrate web apps up to a maximum size of 2 GB, including content stored in the mapped virtual directory.
+ - Currently, we don't support migrating UNC directory content.
+ - You need Windows PowerShell 4.0 installed on the servers hosting the IIS web servers from which you plan to migrate ASP.NET web apps to Azure App Service (a quick prerequisite check is sketched after this list).
+ - Currently, the migration flow doesn't support VNet integrated scenarios.
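+
+Before you start, you can verify the PowerShell version and content-size prerequisites directly on the web server. The following is a minimal sketch; the site path is a hypothetical example:
+
+```powershell
+# Confirm PowerShell 4.0 or later is available on the IIS web server.
+$PSVersionTable.PSVersion
+
+# Estimate the uncompressed content size of a site (hypothetical path)
+# to confirm it is under the 2-GB migration limit.
+$sitePath = 'C:\inetpub\wwwroot\MyApp'
+$bytes = (Get-ChildItem -Path $sitePath -Recurse -File |
+    Measure-Object -Property Length -Sum).Sum
+'{0:N2} GB' -f ($bytes / 1GB)
+```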
+
+To migrate the web apps, perform these steps:
+1. In the Azure Migrate project > **Servers, databases and web apps** > **Migration tools** > **Migration and modernization**, select **Replicate**.
+
+ :::image type="content" source="./media/tutorial-migrate-webapps/select-replicate.png" alt-text="Screenshot of the Replicate option selected.":::
+
+1. In **Specify intent** > **What do you want to migrate?**, select **ASP.NET web apps**.
+1. In **Where do you want to migrate to?**, select **Azure App Service native**.
+1. In **Virtualization type**, select **VMware vSphere**.
+1. In **Select assessment**, select the assessment you want to use to migrate web apps and then select the **Continue** button. Specify the Azure App Service details where the apps will be hosted.
+
+ :::image type="content" source="./media/tutorial-migrate-webapps/specify-intent.png" alt-text="Screenshot of selected intent.":::
+
+1. In **Basics**, under **Project details**, select the **Subscription**, **Resource Group**, and **Region** where the web apps will be hosted, from the drop-down. Under **Storage**, select the **Storage account** for an intermediate storage location during the migration process. Select **Next: Web Apps >**.
+
+ :::image type="content" source="./media/tutorial-migrate-webapps/web-apps-basics.png" alt-text="Screenshot of Azure Migrate Web Apps Basics screen.":::
+
+1. In the **Web Apps** section, review the web apps you'd like to migrate.
+
+ :::image type="content" source="./media/tutorial-migrate-webapps/select-web-apps.png" alt-text="Screenshot of Azure Migrate Web Apps screen.":::
+
+ > [!NOTE]
+ > Apps with the Ready status are tagged for migration by default. Apps tagged as *Ready with conditions* can be migrated by selecting **Yes** in **Will migrate?**.
+
+ 1. Select the web apps to migrate and select **Edit**.
+
+ :::image type="content" source="./media/tutorial-migrate-webapps/web-apps-edit-multiple.png" alt-text="Screenshot of Azure Migrate selected web apps.":::
+
+ 1. In **Edit apps**, under **Will migrate?**, select **Yes**, and select the **App Service Plan** and **Pricing tier** where the apps will be hosted. Next, select the **Ok** button.
+
+ > [!NOTE]
+ > Up to five App Service plans can be migrated at a time.
+
+ :::image type="content" source="./media/tutorial-migrate-webapps/edit-multiple-details.png" alt-text="Screenshot of Azure Migrate Edit apps.":::
+
+ Select the **Next: App Service Plans >** button.
+1. In the **App Service Plans** section, verify the App Service Plan details.
+
+ > [!NOTE]
+ > Depending on your web app requirements, you can edit the number of apps in an App Service plan or update the pricing tier. Follow these steps to update these details:
+ > 1. Select the **Edit** button.
+ > 1. In **Edit plan**, select the **Target name** and **Pricing tier**, then select **Ok**.
+ > :::image type="content" source="./media/tutorial-migrate-webapps/app-service-plan-edit-details.png" alt-text="Screenshot of App Service Plan Edit details.":::
+
+1. Once the App Service Plans are verified, select **Next: Review + create**.
+1. Azure Migrate will now validate the migration settings. Validation may take a few minutes to run. Once complete, review the details and select **Migrate**.
+
+ > [!NOTE]
+ > To download the migration summary, select the **Download CSV** button.
+
+Once the migration is initiated, you can track the status using the Azure Resource Manager Deployment Experience as shown below:
+
+ :::image type="content" source="./media/tutorial-migrate-webapps/web-apps-deployments.png" alt-text="Screenshot of Azure Migrate deployment.":::
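+
+If you prefer scripting, you can also check the same deployment status with Azure PowerShell. This is a minimal sketch; the resource group name is hypothetical, and it assumes an authenticated Az session (Connect-AzAccount):
+
+```powershell
+# List recent ARM deployments in the target resource group and their states.
+Get-AzResourceGroupDeployment -ResourceGroupName 'migrated-webapps-rg' |
+    Select-Object DeploymentName, ProvisioningState, Timestamp
+```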
+
+## Post-migration steps
+
+Once you have successfully completed migration, you may explore the following steps based on web app-specific requirements:
+
+- [Map existing custom DNS name](/azure/app-service/app-service-web-tutorial-custom-domain).
+- [Secure a custom DNS with a TLS/SSL binding](/azure/app-service/configure-ssl-bindings).
+- [Securely connect to Azure resources](/azure/app-service/tutorial-connect-overview).
+- [Deployment best practices](/azure/app-service/deploy-best-practices).
+- [Security recommendations](/azure/app-service/security-recommendations).
+- [Networking features](/azure/app-service/networking-features).
+- [Monitor App Service with Azure Monitor](/azure/app-service/monitor-app-service).
+- [Configure Azure AD authentication](/azure/app-service/configure-authentication-provider-aad).
++
+## Next steps
+
+- Investigate the [cloud migration journey](/azure/architecture/cloud-adoption/getting-started/migrate) in the Azure Cloud Adoption Framework.
+- [Review best practices](/azure/app-service/deploy-best-practices) for deploying to Azure App Service.
migrate Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/whats-new.md
[Azure Migrate](migrate-services-overview.md) helps you to discover, assess, and migrate on-premises servers, apps, and data to the Microsoft Azure cloud. This article summarizes new releases and features in Azure Migrate.
+## Update (June 2022)
+
+- Perform at-scale agentless migration of ASP.NET web apps running on IIS web servers hosted on a Windows OS in a VMware environment. [Learn more.](tutorial-migrate-webapps.md)
## Update (May 2022)

- Upgraded the Azure SQL assessment experience to identify the ideal migration target for your SQL deployments across Azure SQL MI, SQL Server on Azure VM, and Azure SQL DB:
    - We recommended migrating instances to *SQL Server on Azure VM* as per the Azure best practices.
    - *Right sized Lift and Shift* - Server to *SQL Server on Azure VM*. We recommend this when SQL Server credentials are not available.
    - Enhanced user-experience that covers readiness and cost estimates for multiple migration targets for SQL deployments in one assessment.
-
## Update (March 2022)

- Perform agentless VMware VM discovery, assessments, and migrations over a private network using Azure Private Link. [Learn more.](how-to-use-azure-migrate-with-private-endpoints.md)
For more information, see [ASP.NET app containerization and migration to Azure K
- Assessments for migrating on-premises VMware VMs to [Azure VMware Solution (AVS)](./concepts-azure-vmware-solution-assessment-calculation.md) are now supported. [Learn more](how-to-create-azure-vmware-solution-assessment.md)
- Support for multiple credentials on appliance for physical server discovery.
-- Support to allow Azure login from appliance for tenant where tenant restriction has been configured.
+- Support to allow Azure sign in from appliance for tenant where tenant restriction has been configured.
## Update (April 2020)
The current version of Azure Migrate (released in July 2019) provides many new f
## Azure Migrate previous version
-If you're using the previous version of Azure Migrate (only assessment of on-premises VMware VMs was supported), you should now use the current version. In the previous version, you can no longer create new Azure Migrate projects, or perform new discoveries. You can still access existing projects. To do this in the Azure portal > **All services**, search for **Azure Migrate**. In the Azure Migrate notifications, there's a link to access old Azure Migrate projects.
+If you're using the previous version of Azure Migrate (only assessment of on-premises VMware VMs was supported), you should now use the current version. In the previous version, you can no longer create new Azure Migrate projects or perform new discoveries. You can still access existing projects. To do this in the Azure portal, go to **All services** and search for **Azure Migrate**. In the Azure Migrate notifications, there's a link to access old Azure Migrate projects.
## Next steps
mysql Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/overview.md
One advantage of running your workload in Azure is its global reach. The flexibl
| East US | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
| East US 2 | :heavy_check_mark: | :heavy_check_mark: | :x: | :heavy_check_mark: |
| France Central | :heavy_check_mark: | :heavy_check_mark: | :x: | :x: |
+| France South | :heavy_check_mark: | :heavy_check_mark: | :x: | :x: |
| Germany West Central | :heavy_check_mark: | :heavy_check_mark: | :x: | :x: |
| Japan East | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
| Japan West | :heavy_check_mark: | :heavy_check_mark: | :x: | :heavy_check_mark: |
One advantage of running your workload in Azure is its global reach. The flexibl
| South India | :heavy_check_mark: | :heavy_check_mark: | :x: | :x: |
| Southeast Asia | :heavy_check_mark: | :heavy_check_mark: | :x: | :heavy_check_mark: |
| Switzerland North | :heavy_check_mark: | :heavy_check_mark: | :x: | :heavy_check_mark: |
+| Switzerland West | :heavy_check_mark: | :heavy_check_mark: | :x: | :x: |
| UAE North | :heavy_check_mark: | :heavy_check_mark: | :x: | :x: |
| UK South | :heavy_check_mark: | :heavy_check_mark: | :x: | :heavy_check_mark: |
| UK West | :heavy_check_mark: | :heavy_check_mark: | :x: | :heavy_check_mark: |
mysql Concepts Data In Replication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/concepts-data-in-replication.md
To learn more about this parameter, review the [MySQL documentation](https://dev
Data-in Replication is only supported in General Purpose and Memory Optimized pricing tiers.
+## Private Link support
+
+Private Link for Azure Database for MySQL supports only inbound connections. Because Data-in Replication requires an outbound connection from the service, Private Link is not supported for Data-in Replication traffic.
>[!Note]
>GTID is supported on versions 5.7 and 8.0 and only on servers that support storage up to 16 TB (General purpose storage v2).
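
For orientation, Data-in Replication is driven from the replica side by stored procedures. The following is a minimal sketch using the documented `mysql.az_replication_*` procedures; the host name, user, password, and binary log coordinates are hypothetical placeholders:

```sql
-- Point the Azure Database for MySQL replica at the source server
-- (hypothetical values; the last argument is the SSL CA, empty here).
CALL mysql.az_replication_change_master('source.example.com', 'syncuser', '<password>', 3306, 'mysql-bin.000002', 120, '');

-- Start replication, then inspect its status on the replica.
CALL mysql.az_replication_start;
SHOW SLAVE STATUS;
```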
mysql How To Manage Vnet Using Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/how-to-manage-vnet-using-portal.md
Virtual Network (VNet) services endpoints and rules extend the private address s
1. On the MySQL server page, under the Settings heading, click **Connection Security** to open the Connection Security pane for Azure Database for MySQL.
-2. Ensure that the Allow access to Azure services control is set to **OFF**.
+2. Ensure that the Allow access to Azure services control is set to **No**.
> [!Important]
-> If you leave the control set to ON, your Azure MySQL Database server accepts communication from any subnet. Leaving the control set to ON might be excessive access from a security point of view. The Microsoft Azure Virtual Network service endpoint feature, in coordination with the virtual network rule feature of Azure Database for MySQL, together can reduce your security surface area.
+> If you leave the control set to **Yes**, your Azure MySQL Database server accepts communication from any subnet. Leaving the control set to **Yes** might be excessive access from a security point of view. The Microsoft Azure Virtual Network service endpoint feature, in coordination with the virtual network rule feature of Azure Database for MySQL, together can reduce your security surface area.
3. Next, click on **+ Adding existing virtual network**. If you do not have an existing VNet, you can click **+ Create new virtual network** to create one. See [Quickstart: Create a virtual network using the Azure portal](../../virtual-network/quick-create-portal.md)
Virtual Network (VNet) services endpoints and rules extend the private address s
- For help in connecting to an Azure Database for MySQL server, see [Connection libraries for Azure Database for MySQL](./concepts-connection-libraries.md) <!-- Link references, to text, Within this same GitHub repo. -->
-[resource-manager-portal]: ../../azure-resource-manager/management/resource-providers-and-types.md
+[resource-manager-portal]: ../../azure-resource-manager/management/resource-providers-and-types.md
orbital Howto Downlink Aqua https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/orbital/howto-downlink-aqua.md
Sign in to the [Azure portal - Orbital Preview](https://aka.ms/orbital/portal).
## Prepare a virtual machine (VM) to receive the downlinked AQUA data

1. [Create a virtual network](../virtual-network/quick-create-portal.md) to host your data endpoint virtual machine (VM)
-2. [Create a virtual machine (VM)](../virtual-network/quick-create-portal.md) within the virtual network above. Ensure that this VM has the following specifications:
+2. [Create a virtual machine (VM)](../virtual-network/quick-create-portal.md#create-virtual-machines) within the virtual network above. Ensure that this VM has the following specifications (one way to create such a VM is sketched after this list):
   - Operating system: Linux (Ubuntu 18.04 or higher)
   - Size: at least 32 GiB of RAM
   - Ensure that the VM has at least one standard public IP
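
The following is a minimal, hedged sketch of creating a matching VM with Azure PowerShell; all resource names are hypothetical, and Standard_D8s_v3 is just one size that meets the 32-GiB RAM requirement:

```powershell
# Create an Ubuntu VM with a public IP in the existing virtual network.
# Prompts for the admin credentials of the new VM.
New-AzVM `
  -ResourceGroupName 'aqua-rg' `
  -Name 'aqua-receiver-vm' `
  -Location 'eastus' `
  -Image 'UbuntuLTS' `
  -Size 'Standard_D8s_v3' `
  -VirtualNetworkName 'aqua-vnet' `
  -SubnetName 'default' `
  -PublicIpAddressName 'aqua-receiver-ip' `
  -Credential (Get-Credential)
```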
socat -u tcp-listen:56001,fork create:/media/aqua/out.bin
## Next steps
- [Quickstart: Configure a contact profile](contact-profile.md)
-- [Quickstart: Schedule a contact](schedule-contact.md)
+- [Quickstart: Schedule a contact](schedule-contact.md)
peering-service About https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/peering-service/about.md
Title: Azure Peering Service overview
-description: Learn about Azure Peering Service overview
+description: Learn about Azure Peering Service
na Previously updated : 05/18/2020 Last updated : 06/28/2022
-# Azure Peering Service Overview
+# Azure Peering Service overview
Azure Peering Service is a networking service that enhances customer connectivity to Microsoft cloud services such as Microsoft 365, Dynamics 365, software as a service (SaaS) services, Azure, or any Microsoft services accessible via the public internet. Microsoft has partnered with internet service providers (ISPs), internet exchange partners (IXPs), and software-defined cloud interconnect (SDCI) providers worldwide to provide reliable and high-performing public connectivity with optimal routing from the customer to the Microsoft network.
For instructions on how to register Peering Service, see [Register Peering Servi
> [!NOTE] > This article is intended for network architects in charge of enterprise connectivity to the cloud and to the internet. - ## What is Peering Service? Peering Service is: - An IP service that uses the public internet. -- A collaboration platform with service providers and a value-added service that's intended to offer optimal and reliable routing to the customer via service provider partners to the Microsoft cloud over the public network.
+- A collaboration platform with service providers and a value-added service that's intended to offer optimal and reliable routing via service provider partners to the Microsoft cloud over the public network.
Peering Service is not a private connectivity product like Azure ExpressRoute or a VPN product. > [!NOTE]
-> For more information about ExpressRoute, see [ExpressRoute documentation](../expressroute/index.yml).
->
+> For more information about ExpressRoute, see [ExpressRoute documentation](../expressroute/expressroute-introduction.md).
## Background
postgresql Concepts Logical https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/concepts-logical.md
Last updated 11/30/2021
# Logical replication and logical decoding in Azure Database for PostgreSQL - Flexible Server
+[!INCLUDE [applies-to-postgresql-flexible-server](../includes/applies-to-postgresql-flexible-server.md)]
Azure Database for PostgreSQL - Flexible Server supports the following logical data extraction and replication methodologies: 1. **Logical replication**
Logical replication and logical decoding have several similarities. They both:
* Use the [write-ahead log (WAL)](https://www.postgresql.org/docs/current/wal.html) as the source of changes. * Use [logical replication slots](https://www.postgresql.org/docs/current/logicaldecoding-explanation.html#LOGICALDECODING-REPLICATION-SLOTS) to send out data. A slot represents a stream of changes. * Use a table's [REPLICA IDENTITY property](https://www.postgresql.org/docs/current/sql-altertable.html#SQL-CREATETABLE-REPLICA-IDENTITY) to determine what changes can be sent out.
-* Do not replicate DDL changes.
+* Don't replicate DDL changes.
The two technologies have their differences:
Logical replication:
Logical decoding: * Extracts changes across all tables in a database.
-* Cannot directly send data between PostgreSQL instances.
+* Can't directly send data between PostgreSQL instances.
>[!NOTE] > At this time, Flexible Server doesn't support cross-region read replicas. Depending on the type of workload, you may choose to use the logical replication feature for cross-region disaster recovery (DR) purposes.
Logical decoding:
1. Go to the server parameters page in the portal.
2. Set the server parameter `wal_level` to `logical`.
-3. If you want to use pglogical extension, search for the `shared_preload_libraries` and `azure.extensions` parameters, and select `pglogical` from the drop-down box.
+3. If you want to use the `pglogical` extension, search for the `shared_preload_libraries` and `azure.extensions` parameters, and select `pglogical` from the drop-down box.
4. Update the `max_worker_processes` parameter value to at least 16. Otherwise, you may run into issues like `WARNING: out of background worker slots`.
5. Save the changes and restart the server to apply the `wal_level` change.
6. Confirm that your PostgreSQL instance allows network traffic from your connecting resource.
Logical decoding:
```SQL
ALTER ROLE <adminname> WITH REPLICATION;
```
-8. You may want to make sure the role you are using has [privileges](https://www.postgresql.org/docs/current/sql-grant.html) on the schema that you are replicating. Otherwise, you may run into errors such as `Permission denied for schema`.
+8. Make sure the role you're using has [privileges](https://www.postgresql.org/docs/current/sql-grant.html) on the schema that you're replicating. Otherwise, you may run into errors such as `Permission denied for schema`.
++
+>[!NOTE]
+> It's a good practice to separate your replication user from your regular admin account.
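As an illustrative sketch only (not part of the article), the checks in the steps above can be scripted with `psycopg2`; the server name, admin name, and password below are placeholders:

```python
# Illustrative sketch with psycopg2; all connection values are placeholders.
import psycopg2

conn = psycopg2.connect(
    host="<server>.postgres.database.azure.com",
    dbname="postgres",
    user="<admin>",
    password="<password>",
    sslmode="require",
)
conn.autocommit = True
with conn.cursor() as cur:
    cur.execute("SHOW wal_level;")   # expect 'logical' after the server restart
    print(cur.fetchone()[0])
    # Grant REPLICATION to the role the subscriber/consumer will use (step 7);
    # <adminname> is a placeholder, as in the SQL snippet above.
    cur.execute("ALTER ROLE <adminname> WITH REPLICATION;")
```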
## Using logical replication and logical decoding

### Native logical replication

Logical replication uses the terms 'publisher' and 'subscriber'.
-* The publisher is the PostgreSQL database you are sending data **from**.
-* The subscriber is the PostgreSQL database you are sending data **to**.
+* The publisher is the PostgreSQL database you're sending data **from**.
+* The subscriber is the PostgreSQL database you're sending data **to**.
Here's some sample code you can use to try out logical replication.
```SQL
CREATE SUBSCRIPTION sub CONNECTION 'host=<server>.postgres.database.azure.com user=<admin> dbname=<dbname> password=<password>' PUBLICATION pub;
```
-5. You can now query the table on the subscriber. You will see that it has received data from the publisher.
+5. You can now query the table on the subscriber. You'll see that it has received data from the publisher.
```SQL
SELECT * FROM basic;
```
Visit the PostgreSQL documentation to understand more about [logical decoding](h
## Monitoring
-You must monitor logical decoding. Any unused replication slot must be dropped. Slots hold on to Postgres WAL logs and relevant system catalogs until changes have been read. If your subscriber or consumer fails or has not been properly configured, the unconsumed logs will pile up and fill your storage. Also, unconsumed logs increase the risk of transaction ID wraparound. Both situations can cause the server to become unavailable. Therefore, it is critical that logical replication slots are consumed continuously. If a logical replication slot is no longer used, drop it immediately.
+You must monitor logical decoding. Any unused replication slot must be dropped. Slots hold on to Postgres WAL logs and relevant system catalogs until changes have been read. If your subscriber or consumer fails or if it is improperly configured, the unconsumed logs will pile up and fill your storage. Also, unconsumed logs increase the risk of transaction ID wraparound. Both situations can cause the server to become unavailable. Therefore, it is critical that logical replication slots are consumed continuously. If a logical replication slot is no longer used, drop it immediately.
The `active` column in the `pg_replication_slots` view indicates whether there's a consumer connected to a slot.

```SQL
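-- Sketch: list each slot and whether a consumer is currently attached.
-- (pg_replication_slots is a standard PostgreSQL system view.)
SELECT slot_name, active, restart_lsn
FROM pg_replication_slots;
```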
postgresql Concepts Columnar https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/hyperscale/concepts-columnar.md
Title: Columnar table storage - Hyperscale (Citus) - Azure Database for PostgreSQL
-description: Compressing data using columnar storage
+ Title: Columnar table storage - Azure PostgreSQL Hyperscale (Citus)
+description: Learn how to compress data using columnar storage.
Previously updated : 08/03/2021 Last updated : 05/23/2022+
-# Columnar table storage
+# Compress data with columnar tables in Hyperscale (Citus)
[!INCLUDE[applies-to-postgresql-hyperscale](../includes/applies-to-postgresql-hyperscale.md)]
columnar table storage for analytic and data warehousing workloads. When
columns (rather than rows) are stored contiguously on disk, data becomes more compressible, and queries can request a subset of columns more quickly.
-## Usage
+## Create a table
To use columnar storage, specify `USING columnar` when creating a table.
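For example, here's a hypothetical sketch via `psycopg2` (the table and column names are illustrative, not from the article):

```python
# Hypothetical sketch with psycopg2; table and column names are illustrative.
import psycopg2

with psycopg2.connect("<connection-string>") as conn:
    with conn.cursor() as cur:
        cur.execute("""
            CREATE TABLE events_columnar (
                event_id   bigint,
                event_time timestamptz,
                payload    jsonb
            ) USING columnar;  -- columnar access method instead of row (heap) storage
        """)
```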
This feature still has significant limitations. See [Hyperscale
## Next steps * See an example of columnar storage in a Citus [time series
- tutorial](https://docs.citusdata.com/en/stable/use_cases/timeseries.html#archiving-with-columnar-storage)
+ tutorial](https://docs.citusdata.com/en/stable/use_cases/timeseries.html)
(external link).
postgresql Reference Versions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/hyperscale/reference-versions.md
Previously updated : 10/01/2021 Last updated : 06/28/2022 # Supported database versions in Azure Database for PostgreSQL – Hyperscale (Citus)
versions](https://www.postgresql.org/docs/release/):
### PostgreSQL version 14
-The current minor release is 14.1. Refer to the [PostgreSQL
+The current minor release is 14.4. Refer to the [PostgreSQL
documentation](https://www.postgresql.org/docs/14/release-14-4.html) to learn more about improvements and fixes in this minor release. ### PostgreSQL version 13
-The current minor release is 13.5. Refer to the [PostgreSQL
+The current minor release is 13.7. Refer to the [PostgreSQL
documentation](https://www.postgresql.org/docs/13/release-13-7.html) to learn more about improvements and fixes in this minor release. ### PostgreSQL version 12
-The current minor release is 12.9. Refer to the [PostgreSQL
+The current minor release is 12.11. Refer to the [PostgreSQL
documentation](https://www.postgresql.org/docs/12/release-12-11.html) to learn more about improvements and fixes in this minor release. ### PostgreSQL version 11
-The current minor release is 11.14. Refer to the [PostgreSQL
+The current minor release is 11.16. Refer to the [PostgreSQL
documentation](https://www.postgresql.org/docs/11/release-11-16.html) to learn more about improvements and fixes in this minor release.
postgresql Resources Regions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/hyperscale/resources-regions.md
Previously updated : 02/23/2022 Last updated : 06/21/2022 # Regional availability for Azure Database for PostgreSQL – Hyperscale (Citus)
Hyperscale (Citus) server groups are available in the following Azure regions:
* West Central US * West US * West US 2
+ * West US 3
* Asia Pacific: * Australia East * Central India
postgresql Tutorial Design Database Multi Tenant https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/hyperscale/tutorial-design-database-multi-tenant.md
Title: 'Tutorial: Design a multi-tenant database - Hyperscale (Citus) - Azure Database for PostgreSQL'
-description: This tutorial shows how to power a scalable multi-tenant application with Azure Database for PostgreSQL Hyperscale (Citus).
+ Title: Multi-tenant database - Azure PostgreSQL Hyperscale (Citus)
+description: Learn how to design a scalable multi-tenant application with Azure Database for PostgreSQL Hyperscale (Citus).
-+ ms.devlang: azurecli Previously updated : 05/14/2019 Last updated : 05/23/2022 #Customer intent: As a developer, I want to design a hyperscale database so that my multi-tenant application runs efficiently for all tenants.
-# Tutorial: design a multi-tenant database by using Azure Database for PostgreSQL – Hyperscale (Citus)
+# Design a multi-tenant database using Azure Database for PostgreSQL – Hyperscale (Citus)
[!INCLUDE[applies-to-postgresql-hyperscale](../includes/applies-to-postgresql-hyperscale.md)]
postgresql Concepts Azure Ad Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/concepts-azure-ad-authentication.md
When using Azure AD authentication, there are two Administrator accounts for the
![admin structure][2]
+ >[!NOTE]
+ > A service principal or managed identity can't act as a fully functional Azure AD administrator in Single Server. This limitation is fixed in Flexible Server.
+ ## Permissions To create new users that can authenticate with Azure AD, you must have the `azure_ad_admin` role in the database. This role is assigned by configuring the Azure AD Administrator account for a specific Azure Database for PostgreSQL server.
purview Azure Purview Connector Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/azure-purview-connector-overview.md
The table below shows the supported capabilities for each data source. Select th
> [!NOTE] > Currently, the Microsoft Purview Data Map can't scan an asset that has `/`, `\`, or `#` in its name. To scope your scan and avoid scanning assets that have those characters in the asset name, use the example in [Register and scan an Azure SQL Database](register-scan-azure-sql-database.md#creating-the-scan).
+> [!IMPORTANT]
+> If you plan on using a self-hosted integration runtime, scanning some data sources requires additional setup on the self-hosted integration runtime machine. For example, the JDK, Visual C++ Redistributable, or a specific driver.
+> For your source, **[refer to each source article for prerequisite details](azure-purview-connector-overview.md)**.
+> Any requirements will be listed in the **Prerequisites** section.
+ ## Scan regions The following is a list of all the Azure data source (data center) regions where the Microsoft Purview Data Map scanner runs. If your Azure data source is in a region outside of this list, the scanner will run in the region of your Microsoft Purview instance.
The following file types are supported for scanning, for schema extraction, and
> * The scanner supports scanning snappy compressed PARQUET types for schema extraction and classification. > * For GZIP file types, the GZIP must be mapped to a single csv file within. > Gzip files are subject to System and Custom Classification rules. We currently don't support scanning a gzip file mapped to multiple files within, or any file type other than csv.
- > * For delimited file types (CSV, PSV, SSV, TSV, TXT), we do not support data type detection. The data type will be listed as "string" for all columns.
+ > * For delimited file types (CSV, PSV, SSV, TSV, TXT), we do not support data type detection. The data type will be listed as "string" for all columns.
+ > * For Parquet files, if you are using a self-hosted integration runtime, you need to install the **64-bit JRE 8 (Java Runtime Environment) or OpenJDK** on your IR machine. Check our [Java Runtime Environment section at the bottom of the page](manage-integration-runtimes.md#java-runtime-environment-installation) for an installation guide.
- Document file formats supported by extension: DOC, DOCM, DOCX, DOT, ODP, ODS, ODT, PDF, POT, PPS, PPSX, PPT, PPTM, PPTX, XLC, XLS, XLSB, XLSM, XLSX, XLT - The Microsoft Purview Data Map also supports custom file extensions and custom parsers.
purview Concept Business Glossary https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/concept-business-glossary.md
The same term can also imply multiple business objects. It is important that eac
Microsoft Purview supports eight out-of-the-box attributes for any business glossary term: - Name (mandatory)
+- Nickname
+- Status
- Definition-- Data stewards-- Data experts
+- Stewards
+- Experts
- Acronym - Synonyms - Related terms - Resources
+- Parent term
These attributes cannot be edited or deleted. However, these attributes are not sufficient to completely define a term in an organization. To solve this problem, Microsoft Purview provides a feature where you can define custom attributes for your glossary.
purview Register Scan Cassandra Source https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-cassandra-source.md
To register a new Cassandra server in your data catalog:
## Scan
-Follow the steps below to scan Cassandra to automatically identify assets and classify your data. For more information about scanning in general, see our [introduction to scans and ingestion](concept-scans-and-ingestion.md)
+Follow the steps below to scan Cassandra to automatically identify assets. For more information about scanning in general, see our [introduction to scans and ingestion](concept-scans-and-ingestion.md)
### Create and run scan
purview Register Scan Db2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-db2.md
On the **Register sources (Db2)** screen, do the following:
## Scan
-Follow the steps below to scan Db2 to automatically identify assets and classify your data. For more information about scanning in general, see our [introduction to scans and ingestion](concept-scans-and-ingestion.md).
+Follow the steps below to scan Db2 to automatically identify assets. For more information about scanning in general, see our [introduction to scans and ingestion](concept-scans-and-ingestion.md).
### Authentication for a scan
purview Register Scan Erwin Source https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-erwin-source.md
On the Register sources (erwin) screen, do the following:
## Scan
-Follow the steps below to scan erwin Mart servers to automatically identify assets and classify your data. For more information about scanning in general, see our [introduction to scans and ingestion](concept-scans-and-ingestion.md)
+Follow the steps below to scan erwin Mart servers to automatically identify assets. For more information about scanning in general, see our [introduction to scans and ingestion](concept-scans-and-ingestion.md)
### Create and run scan
purview Register Scan Google Bigquery Source https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-google-bigquery-source.md
On the Register sources (Google BigQuery) screen, do the following:
## Scan
-Follow the steps below to scan a Google BigQuery project to automatically identify assets and classify your data. For more information about scanning in general, see our [introduction to scans and ingestion](concept-scans-and-ingestion.md).
+Follow the steps below to scan a Google BigQuery project to automatically identify assets. For more information about scanning in general, see our [introduction to scans and ingestion](concept-scans-and-ingestion.md).
### Create and run scan
purview Register Scan Hive Metastore Source https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-hive-metastore-source.md
The only supported authentication for a Hive Metastore database is Basic Authent
> 1. Confirm you have followed all [**prerequisites**](#prerequisites). > 1. Review our [**scan troubleshooting documentation**](troubleshoot-connections.md).
-Use the following steps to scan Hive Metastore databases to automatically identify assets and classify your data. For more information about scanning in general, see [Scans and ingestion in Microsoft Purview](concept-scans-and-ingestion.md).
+Use the following steps to scan Hive Metastore databases to automatically identify assets. For more information about scanning in general, see [Scans and ingestion in Microsoft Purview](concept-scans-and-ingestion.md).
1. In the Management Center, select integration runtimes. Make sure that a self-hosted integration runtime is set up. If it isn't set up, use the steps in [Create and manage a self-hosted integration runtime](./manage-integration-runtimes.md).
purview Register Scan Looker Source https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-looker-source.md
On the Register sources (Looker) screen, follow these steps:
## Scan
-Follow the steps below to scan Looker to automatically identify assets and classify your data. For more information about scanning in general, see our [introduction to scans and ingestion](concept-scans-and-ingestion.md)
+Follow the steps below to scan Looker to automatically identify assets. For more information about scanning in general, see our [introduction to scans and ingestion](concept-scans-and-ingestion.md)
### Create and run scan
purview Register Scan Mongodb https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-mongodb.md
On the **Register sources (MongoDB)** screen, do the following:
## Scan
-Follow the steps below to scan MongoDB to automatically identify assets and classify your data. For more information about scanning in general, see our [introduction to scans and ingestion](concept-scans-and-ingestion.md).
+Follow the steps below to scan MongoDB to automatically identify assets. For more information about scanning in general, see our [introduction to scans and ingestion](concept-scans-and-ingestion.md).
### Authentication for a scan
purview Register Scan Mysql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-mysql.md
On the **Register sources (MySQL)** screen, follow these steps:
## Scan
-Follow the steps below to scan MySQL to automatically identify assets and classify your data. For more information about scanning in general, see our [introduction to scans and ingestion](concept-scans-and-ingestion.md).
+Follow the steps below to scan MySQL to automatically identify assets. For more information about scanning in general, see our [introduction to scans and ingestion](concept-scans-and-ingestion.md).
### Authentication for a scan
purview Register Scan Oracle Source https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-oracle-source.md
On the **Register sources (Oracle)** screen, do the following:
## Scan
-Follow the steps below to scan Oracle to automatically identify assets and classify your data. For more information about scanning in general, see our [introduction to scans and ingestion](concept-scans-and-ingestion.md).
+Follow the steps below to scan Oracle to automatically identify assets. For more information about scanning in general, see our [introduction to scans and ingestion](concept-scans-and-ingestion.md).
> [!TIP] > To troubleshoot any issues with scanning:
purview Register Scan Postgresql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-postgresql.md
On the **Register sources (PostgreSQL)** screen, follow these steps:
## Scan
-Follow the steps below to scan PostgreSQL to automatically identify assets and classify your data. For more information about scanning in general, see our [introduction to scans and ingestion](concept-scans-and-ingestion.md).
+Follow the steps below to scan PostgreSQL to automatically identify assets. For more information about scanning in general, see our [introduction to scans and ingestion](concept-scans-and-ingestion.md).
### Authentication for a scan
purview Register Scan Salesforce https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-salesforce.md
On the **Register sources (Salesforce)** screen, follow these steps:
## Scan
-Follow the steps below to scan Salesforce to automatically identify assets and classify your data. For more information about scanning in general, see our [introduction to scans and ingestion](concept-scans-and-ingestion.md).
+Follow the steps below to scan Salesforce to automatically identify assets. For more information about scanning in general, see our [introduction to scans and ingestion](concept-scans-and-ingestion.md).
Microsoft Purview uses Salesforce REST API version 41.0 to extract metadata, including REST requests like the 'Describe Global' URI (/v41.0/sobjects/), 'sObject Basic Information' URI (/v41.0/sobjects/sObject/), and 'SOQL Query' URI (/v41.0/query?).
purview Register Scan Sap Bw https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-sap-bw.md
On the **Register sources (SAP BW)** screen, do the following:
## Scan
-Follow the steps below to scan SAP BW to automatically identify assets and classify your data. For more information about scanning in general, see our [introduction to scans and ingestion](concept-scans-and-ingestion.md).
+Follow the steps below to scan SAP BW to automatically identify assets. For more information about scanning in general, see our [introduction to scans and ingestion](concept-scans-and-ingestion.md).
### Create and run scan
purview Register Scan Sap Hana https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-sap-hana.md
This section describes how to register a SAP HANA in Microsoft Purview by using
## Scan
-Use the following steps to scan SAP HANA databases to automatically identify assets and classify your data. For more information about scanning in general, see [Scans and ingestion in Microsoft Purview](concept-scans-and-ingestion.md).
+Use the following steps to scan SAP HANA databases to automatically identify assets. For more information about scanning in general, see [Scans and ingestion in Microsoft Purview](concept-scans-and-ingestion.md).
### Authentication for a scan
purview Register Scan Sapecc Source https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-sapecc-source.md
On the **Register sources (SAP ECC)** screen, do the following:
## Scan
-Follow the steps below to scan SAP ECC to automatically identify assets and classify your data. For more information about scanning in general, see our [introduction to scans and ingestion](concept-scans-and-ingestion.md).
+Follow the steps below to scan SAP ECC to automatically identify assets. For more information about scanning in general, see our [introduction to scans and ingestion](concept-scans-and-ingestion.md).
### Create and run scan
purview Register Scan Saps4hana Source https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-saps4hana-source.md
On the **Register sources (SAP S/4HANA)** screen, do the following:
## Scan
-Follow the steps below to scan SAP S/4HANA to automatically identify assets and classify your data. For more information about scanning in general, see our [introduction to scans and ingestion](concept-scans-and-ingestion.md).
+Follow the steps below to scan SAP S/4HANA to automatically identify assets. For more information about scanning in general, see our [introduction to scans and ingestion](concept-scans-and-ingestion.md).
### Create and run scan
purview Register Scan Snowflake https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-snowflake.md
On the **Register sources (Snowflake)** screen, follow these steps:
## Scan
-Follow the steps below to scan Snowflake to automatically identify assets and classify your data. For more information about scanning in general, see our [introduction to scans and ingestion](concept-scans-and-ingestion.md).
+Follow the steps below to scan Snowflake to automatically identify assets. For more information about scanning in general, see our [introduction to scans and ingestion](concept-scans-and-ingestion.md).
### Authentication for a scan
purview Register Scan Teradata Source https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-teradata-source.md
On the **Register sources (Teradata)** screen, do the following:
## Scan
-Follow the steps below to scan Teradata to automatically identify assets and classify your data. For more information about scanning in general, see our [introduction to scans and ingestion](concept-scans-and-ingestion.md).
+Follow the steps below to scan Teradata to automatically identify assets. For more information about scanning in general, see our [introduction to scans and ingestion](concept-scans-and-ingestion.md).
### Create and run scan
purview Tutorial Atlas 2 2 Apis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/tutorial-atlas-2-2-apis.md
Title: "How to use new APIs available with Atlas 2.2"
-description: This tutorial describes the new APIs available with Atlas 2.2 upgrade.
+ Title: "Use new APIs available with Atlas 2.2."
+description: This tutorial describes the new APIs available with the Atlas 2.2 upgrade.
Last updated 04/18/2022
-# Customer intent: I can use the new APIs available with Atlas 2.2
+# Customer intent: As a developer, I want to use the new APIs available with Atlas 2.2 to interact programmatically with the data map in Microsoft Purview.
# Tutorial: Atlas 2.2 new functionality
-In this tutorial, you learn how to programmatically interact with new Atlas 2.2 APIs with Microsoft Purview's data map.
-
-If you don't have an Azure subscription, [create a free account](https://azure.microsoft.com/free/?ref=microsoft.com&utm_source=microsoft.com&utm_medium=docs&utm_campaign=visualstudio) before you begin.
+In this tutorial, you'll learn how to programmatically interact with the new Atlas 2.2 APIs in the Microsoft Purview data map.
## Prerequisites
-* To get started, you must have an existing Microsoft Purview account. If you don't have a catalog, see the [quickstart for creating a Microsoft Purview account](create-catalog-portal.md).
+* If you don't have an Azure subscription, [create a free account](https://azure.microsoft.com/free/?ref=microsoft.com&utm_source=microsoft.com&utm_medium=docs&utm_campaign=visualstudio) before you begin.
+
+* You must have an existing Microsoft Purview account. If you don't have a catalog, see the [quickstart for creating a Microsoft Purview account](create-catalog-portal.md).
-* To establish bearer token and to call any Data Plane APIs see [the documentation about how to call REST APIs for Purview Data planes](tutorial-using-rest-apis.md).
+* To establish a bearer token and to call any data plane APIs, see [the documentation about how to call REST APIs for Microsoft Purview data planes](tutorial-using-rest-apis.md).
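As a minimal sketch (assuming the `azure-identity` package and a service principal with access to the account), a bearer token for the Microsoft Purview data plane can be acquired like this:

```python
# Minimal sketch: acquire a bearer token for the Microsoft Purview data plane.
# Assumes azure-identity is installed and a service principal is configured.
from azure.identity import ClientSecretCredential

credential = ClientSecretCredential(
    tenant_id="<tenant-ID>",
    client_id="<client-ID>",
    client_secret="<client-secret>",
)
token = credential.get_token("https://purview.azure.net/.default").token
headers = {"Authorization": f"Bearer {token}"}  # reuse on the Atlas API calls below
```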
-## Business Metadata APIs
+## Business metadata APIs
-Business Metadata is a template containing multiple custom attributes (key values) which can be created globally and then applied across multiple typedefs.
+Business metadata is a template that contains custom attributes (key-value pairs). You can create these attributes globally and then apply them across multiple typedefs.
-### Create a Business metadata with attributes
+### Create business metadata with attributes
-You can send POST request to the following endpoint
+You can send a `POST` request to the following endpoint:
```
POST {{endpoint}}/api/atlas/v2/types/typedefs
```
-Sample JSON
+Sample JSON:
```json {
Sample JSON
} ```
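Putting it together, here's a hedged sketch of the call. The payload shape is an assumption based on Apache Atlas business-metadata conventions, not the article's sample; adjust it to the sample JSON above:

```python
# Hedged sketch: create a business metadata typedef via the Atlas 2.2 endpoint.
# The endpoint and payload shape are assumptions for illustration only.
import requests

endpoint = "https://<account-name>.purview.azure.com/catalog"  # assumed data plane endpoint
headers = {"Authorization": "Bearer <token>"}                  # token from your credential
payload = {
    "businessMetadataDefs": [{
        "category": "BUSINESS_METADATA",
        "name": "SampleBusinessMetadata",       # hypothetical name
        "attributeDefs": [{
            "name": "expiryDate",               # hypothetical attribute
            "typeName": "string",
            "options": {"applicableEntityTypes": '["DataSet"]', "maxStrLength": "50"},
        }],
    }]
}
resp = requests.post(f"{endpoint}/api/atlas/v2/types/typedefs",
                     json=payload, headers=headers)
resp.raise_for_status()
```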
-### Add/Update an attribute to an existing business metadata
+### Add or update an attribute to existing business metadata
-You can send PUT request to the following endpoint:
+You can send a `PUT` request to the following endpoint:
```
PUT {{endpoint}}/api/atlas/v2/types/typedefs
```
-Sample JSON
+Sample JSON:
```json {
Sample JSON
} ```
-### Get Business metadata definition
+### Get a business metadata definition
-You can send GET request to the following endpoint
+You can send a `GET` request to the following endpoint:
```
GET {{endpoint}}/api/atlas/v2/types/typedef/name/{{Business Metadata Name}}
```
-### Set Business metadata attribute to an entity
+### Set a business metadata attribute to an entity
-You can send POST request to the following endpoint
+You can send a `POST` request to the following endpoint:
```
POST {{endpoint}}/api/atlas/v2/entity/guid/{{GUID}}/businessmetadata?isOverwrite=true
```
-Sample JSON
+Sample JSON:
```json {
Sample JSON
} ```
-### Delete Business metadata attribute from an entity
+### Delete a business metadata attribute from an entity
-You can send DELETE request to the following endpoint
+You can send a `DELETE` request to the following endpoint:
```
-DELETE {{endpoint}}/api/atlas/v2/entity/guid/{{GUID}}/businessmetadata?isOverwrite=true
+DELETE {{endpoint}}/api/atlas/v2/entity/guid/{{GUID}}/businessmetadata?isOverwrite=true
```
-Sample JSON
+Sample JSON:
```json {
Sample JSON
} ```
-### Delete Business metadata type definition
+### Delete a business metadata type definition
-You can send DELETE request to the following endpoint
+You can send a `DELETE` request to the following endpoint:
```
DELETE {{endpoint}}/api/atlas/v2/types/typedef/name/{{Business Metadata Name}}
```
-## Custom Attribute APIs
+## Custom attribute APIs
-Custom Attributes are key value pairs which can be directly added to an atlas entity.
+Custom attributes are key/value pairs that can be directly added to an Atlas entity.
-### Set Custom Attribute to an entity
+### Set a custom attribute to an entity
-You can send POST request to the following endpoint
+You can send a `POST` request to the following endpoint:
```
POST {{endpoint}}/api/atlas/v2/entity
```
-Sample JSON
+Sample JSON:
```json {
Sample JSON
``` ## Label APIs
-Labels are free text tags which can be applied to any atlas entity.
+Labels are free text tags that can be applied to any Atlas entity.
### Set labels to an entity
-You can send POST request to the following endpoint
+You can send a `POST` request to the following endpoint:
```
POST {{endpoint}}/api/atlas/v2/entity/guid/{{GUID}}/labels
```
-Sample JSON
+Sample JSON:
```json [
Sample JSON
### Delete labels from an entity
-You can send DELETE request to the following endpoint:
+You can send a `DELETE` request to the following endpoint:
```
DELETE {{endpoint}}/api/atlas/v2/entity/guid/{{GUID}}/labels
```
-Sample JSON
+Sample JSON:
```json [
Sample JSON
> [!div class="nextstepaction"] > [Manage data sources](manage-data-sources.md)
-> [Purview Data Plane REST APIs](/rest/api/purview/)
+> [Microsoft Purview data plane REST APIs](/rest/api/purview/)
role-based-access-control Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/best-practices.md
Previously updated : 11/15/2021 Last updated : 06/28/2022 #Customer intent: As a dev, devops, or IT admin, I want to learn how to best use Azure RBAC.
Even if a role is renamed, the role ID does not change. If you are using scripts
For more information, see [Assign a role using the unique role ID and Azure PowerShell](role-assignments-powershell.md#assign-a-role-for-a-user-using-the-unique-role-id-at-a-resource-group-scope) and [Assign a role using the unique role ID and Azure CLI](role-assignments-cli.md#assign-a-role-for-a-user-using-the-unique-role-id-at-a-resource-group-scope).
+## Avoid using a wildcard when creating custom roles
+
+When creating custom roles, you can use the wildcard (`*`) character to define permissions. It's recommended that you specify `Actions` and `DataActions` explicitly instead of using the wildcard (`*`) character. With a wildcard, any `Actions` or `DataActions` added in the future are granted automatically, which might be unwanted. For more information, see [Azure custom roles](custom-roles.md#wildcard-permissions).
+ ## Next steps - [Troubleshoot Azure RBAC](troubleshooting.md)
role-based-access-control Custom Roles https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/custom-roles.md
Previously updated : 06/14/2022 Last updated : 06/28/2022
Instead of adding all of these strings, you could just add a wildcard string. Fo
Microsoft.CostManagement/exports/* ```
+It's recommended that you specify `Actions` and `DataActions` explicitly instead of using the wildcard (`*`) character. With a wildcard, any `Actions` or `DataActions` added in the future are granted automatically, which might be unwanted.
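For illustration only, here's a sketch of a role definition with explicit actions instead of `Microsoft.CostManagement/exports/*`. The role name and action names below are examples, not a complete or verified list:

```python
# Illustrative only: explicit Actions instead of "Microsoft.CostManagement/exports/*".
# Action names are examples; verify against the provider's published operations.
role_definition = {
    "Name": "Cost exports operator (sample)",   # hypothetical role name
    "IsCustom": True,
    "Description": "Explicitly scoped permissions; no wildcard.",
    "Actions": [
        "Microsoft.CostManagement/exports/read",
        "Microsoft.CostManagement/exports/write",
    ],
    "NotActions": [],
    "AssignableScopes": ["/subscriptions/<subscription-ID>"],
}
```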
+ ## Who can create, delete, update, or view a custom role Just like built-in roles, the `AssignableScopes` property specifies the scopes that the role is available for assignment. The `AssignableScopes` property for a custom role also controls who can create, delete, update, or view the custom role.
sentinel Anomalies Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/anomalies-reference.md
Microsoft Sentinel uses two different models to create baselines and detect anom
Sentinel UEBA detects anomalies based on dynamic baselines created for each entity across various data inputs. Each entity's baseline behavior is set according to its own historical activities, those of its peers, and those of the organization as a whole. Anomalies can be triggered by the correlation of different attributes such as action type, geo-location, device, resource, ISP, and more.
+You must [enable the UEBA feature](enable-entity-behavior-analytics.md) for UEBA anomalies to be detected.
+ - [Anomalous Account Access Removal](#anomalous-account-access-removal) - [Anomalous Account Creation](#anomalous-account-creation) - [Anomalous Account Deletion](#anomalous-account-deletion)
sentinel Enable Entity Behavior Analytics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/enable-entity-behavior-analytics.md
If you haven't yet enabled UEBA, you will be taken to the **Settings** page. Sel
## Next steps
-In this document, you learned how to enable and configure User and Entity Behavior Analytics (UEBA) in Microsoft Sentinel. To learn more about Microsoft Sentinel, see the following articles:
+In this document, you learned how to enable and configure User and Entity Behavior Analytics (UEBA) in Microsoft Sentinel. For more information about UEBA:
+- See the [list of anomalies](anomalies-reference.md#ueba-anomalies) detected using UEBA.
+- Learn more about [how UEBA works](identify-threats-with-entity-behavior-analytics.md) and how to use it.
+
+To learn more about Microsoft Sentinel, see the following articles:
- Learn how to [get visibility into your data, and potential threats](get-visibility.md). - Get started [detecting threats with Microsoft Sentinel](detect-threats-built-in.md).
sentinel Extend Sentinel Across Workspaces Tenants https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/extend-sentinel-across-workspaces-tenants.md
You can get the full benefit of the Microsoft Sentinel experience when using a s
| Data ownership | The boundaries of data ownership, for example by subsidiaries or affiliated companies, are better delineated using separate workspaces. | | | Multiple Azure tenants | Microsoft Sentinel supports data collection from Microsoft and Azure SaaS resources only within its own Azure Active Directory (Azure AD) tenant boundary. Therefore, each Azure AD tenant requires a separate workspace. | | | Granular data access control | An organization may need to allow different groups, within or outside the organization, to access some of the data collected by Microsoft Sentinel. For example:<br><ul><li>Resource owners' access to data pertaining to their resources</li><li>Regional or subsidiary SOCs' access to data relevant to their parts of the organization</li></ul> | Use [resource Azure RBAC](resource-context-rbac.md) or [table level Azure RBAC](https://techcommunity.microsoft.com/t5/azure-sentinel/table-level-rbac-in-azure-sentinel/ba-p/965043) |
-| Granular retention settings | Historically, multiple workspaces were the only way to set different retention periods for different data types. This is no longer needed in many cases, thanks to the introduction of table level retention settings. | Use [table level retention settings](https://techcommunity.microsoft.com/t5/azure-sentinel/new-per-data-type-retention-is-now-available-for-azure-sentinel/ba-p/917316) or automate [data deletion](../azure-monitor/logs/personal-data-mgmt.md#how-to-export-and-delete-private-data) |
+| Granular retention settings | Historically, multiple workspaces were the only way to set different retention periods for different data types. This is no longer needed in many cases, thanks to the introduction of table level retention settings. | Use [table level retention settings](https://techcommunity.microsoft.com/t5/azure-sentinel/new-per-data-type-retention-is-now-available-for-azure-sentinel/ba-p/917316) or automate [data deletion](../azure-monitor/logs/personal-data-mgmt.md#exporting-and-deleting-personal-data) |
| Split billing | By placing workspaces in separate subscriptions, they can be billed to different parties. | Usage reporting and cross-charging | | Legacy architecture | The use of multiple workspaces may stem from a historical design that took into consideration limitations or best practices which do not hold true anymore. It might also be an arbitrary design choice that can be modified to better accommodate Microsoft Sentinel.<br><br>Examples include:<br><ul><li>Using a per-subscription default workspace when deploying Microsoft Defender for Cloud</li><li>The need for granular access control or retention settings, the solutions for which are relatively new</li></ul> | Re-architect workspaces |
sentinel Identify Threats With Entity Behavior Analytics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/identify-threats-with-entity-behavior-analytics.md
As legacy defense tools become obsolete, organizations may have such a vast and
In this document, you learned about Microsoft Sentinel's entity behavior analytics capabilities. For practical guidance on implementation, and to use the insights you've gained, see the following articles: - [Enable entity behavior analytics](./enable-entity-behavior-analytics.md) in Microsoft Sentinel.
+- See the [list of anomalies](anomalies-reference.md#ueba-anomalies) detected by the UEBA engine.
- [Investigate incidents with UEBA data](investigate-with-ueba.md). - [Hunt for security threats](./hunting.md).
service-connector How To Integrate App Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-connector/how-to-integrate-app-configuration.md
This page shows the supported authentication types and client types of Azure App
## Default environment variable names or application properties
+Use the connection details below to connect compute services to Azure App Configuration store instances. For each example below, replace the placeholder texts
+`<App-Configuration-name>`, `<ID>`, `<secret>`, `<client-ID>`, `<client-secret>`, and `<tenant-ID>` with your App Configuration store name, ID, secret, client ID, client secret, and tenant ID.
+ ### .NET, Java, Node.JS, Python #### Secret / connection string
This page shows the supported authentication types and client types of Azure App
> [!div class="mx-tdBreakAll"] > | Default environment variable name | Description | Sample value | > | | | |
-> | AZURE_APPCONFIGURATION_CONNECTIONSTRING | Your App Configuration Connection String | `Endpoint=https://{AppConfigurationName}.azconfig.io;Id={ID};Secret={secret}` |
+> | AZURE_APPCONFIGURATION_CONNECTIONSTRING | Your App Configuration Connection String | `Endpoint=https://<App-Configuration-name>.azconfig.io;Id=<ID>;Secret=<secret>` |
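As a minimal sketch (assuming the `azure-appconfiguration` package; the key name is hypothetical), an app can consume this variable like so:

```python
# Minimal sketch: read the Service Connector variable and fetch one setting.
# Assumes the azure-appconfiguration package; the key name is hypothetical.
import os
from azure.appconfiguration import AzureAppConfigurationClient

client = AzureAppConfigurationClient.from_connection_string(
    os.environ["AZURE_APPCONFIGURATION_CONNECTIONSTRING"]
)
setting = client.get_configuration_setting(key="app:greeting")
print(setting.value)
```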
#### System-assigned managed identity
-| Default environment variable name | Description | Sample value |
-|--||-|
-| AZURE_APPCONFIGURATION_ENDPOINT | App Configuration endpoint | `https://{AppConfigurationName}.azconfig.io` |
+| Default environment variable name | Description | Sample value |
+|--|||
+| AZURE_APPCONFIGURATION_ENDPOINT | App Configuration endpoint | `https://<App-Configuration-name>.azconfig.io` |
#### User-assigned managed identity | Default environment variable name | Description | Sample value | |--|-|--|
-| AZURE_APPCONFIGURATION_ENDPOINT | App Configuration Endpoint | `https://{AppConfigurationName}.azconfig.io` |
-| AZURE_APPCONFIGURATION_CLIENTID | Your client ID | `UserAssignedMiClientId` |
+| AZURE_APPCONFIGURATION_ENDPOINT | App Configuration endpoint | `https://<App-Configuration-name>.azconfig.io` |
+| AZURE_APPCONFIGURATION_CLIENTID | Your client ID | `<client-ID>` |
#### Service principal | Default environment variable name | Description | Sample value | |-|-|-|
-| AZURE_APPCONFIGURATION_ENDPOINT | App Configuration Endpoint | `https://{AppConfigurationName}.azconfig.io` |
-| AZURE_APPCONFIGURATION_CLIENTID | Your client ID | `{yourClientID}` |
-| AZURE_APPCONFIGURATION_CLIENTSECRET | Your client secret | `{yourClientSecret}` |
-| AZURE_APPCONFIGURATION_TENANTID | Your tenant ID | `{yourTenantID}` |
+| AZURE_APPCONFIGURATION_ENDPOINT | App Configuration endpoint | `https://<App-Configuration-name>.azconfig.io` |
+| AZURE_APPCONFIGURATION_CLIENTID | Your client ID | `<client-ID>` |
+| AZURE_APPCONFIGURATION_CLIENTSECRET | Your client secret | `<client-secret>` |
+| AZURE_APPCONFIGURATION_TENANTID | Your tenant ID | `<tenant-ID>` |
## Next steps
service-connector How To Integrate Confluent Kafka https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-connector/how-to-integrate-confluent-kafka.md
This page shows the supported authentication types and client types of Apache ka
## Default environment variable names or application properties
+Use the connection details below to connect compute services to Apache Kafka on Confluent Cloud. For each example below, replace the placeholder texts `<server-name>`, `<Bootstrap-server-key>`, `<Bootstrap-server-secret>`, `<schema-registry-key>`, and `<schema-registry-secret>` with your server name, bootstrap server key, bootstrap server secret, schema registry key, and schema registry secret.
+ ### .NET, Java, Node.JS and Python
-| Default environment variable name | Description | Example value |
-| | | |
-| AZURE_CONFLUENTCLOUDKAFKA_BOOTSTRAPSERVER | Your Kafka bootstrap server | `pkc-{serverName}.eastus.azure.confluent.cloud:9092` |
-| AZURE_CONFLUENTCLOUDKAFKA_KAFKASASLCONFIG | Your Kafka SASL configuration | `org.apache.kafka.common.security.plain.PlainLoginModule required username='{bootstrapServerKey}' password='{bootstrapServerSecret}';` |
-| AZURE_CONFLUENTCLOUDSCHEMAREGISTRY_URL | Your Confluent registry URL | `https://psrc-{serverName}.westus2.azure.confluent.cloud` |
-| AZURE_CONFLUENTCLOUDSCHEMAREGISTRY_USERINFO | Your Confluent registry user information | `{schemaRegistryKey} + ":" + {schemaRegistrySecret}` |
+| Default environment variable name | Description | Example value |
+|||--|
+| AZURE_CONFLUENTCLOUDKAFKA_BOOTSTRAPSERVER | Your Kafka bootstrap server | `pkc-<server-name>.eastus.azure.confluent.cloud:9092` |
+| AZURE_CONFLUENTCLOUDKAFKA_KAFKASASLCONFIG | Your Kafka SASL configuration | `org.apache.kafka.common.security.plain.PlainLoginModule required username='<Bootstrap-server-key>' password='<Bootstrap-server-secret>';` |
+| AZURE_CONFLUENTCLOUDSCHEMAREGISTRY_URL | Your Confluent registry URL | `https://psrc-<server-name>.westus2.azure.confluent.cloud` |
+| AZURE_CONFLUENTCLOUDSCHEMAREGISTRY_USERINFO | Your Confluent registry user information | `<schema-registry-key>:<schema-registry-secret>` |
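As a hedged sketch (assuming the `confluent-kafka` package): note that `AZURE_CONFLUENTCLOUDKAFKA_KAFKASASLCONFIG` holds a Java JAAS string, so this sketch assumes the key and secret are also available separately:

```python
# Hedged sketch with confluent-kafka; the topic name is hypothetical.
import os
from confluent_kafka import Producer

producer = Producer({
    "bootstrap.servers": os.environ["AZURE_CONFLUENTCLOUDKAFKA_BOOTSTRAPSERVER"],
    "security.protocol": "SASL_SSL",
    "sasl.mechanisms": "PLAIN",
    "sasl.username": "<Bootstrap-server-key>",      # assumed available separately
    "sasl.password": "<Bootstrap-server-secret>",   # assumed available separately
})
producer.produce("test-topic", value=b"hello")
producer.flush()
```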
### Spring Boot
-| Default environment variable name | Description | Example value |
-| | | |
-| spring.kafka.properties.bootstrap.servers | Your Kafka bootstrap server | `pkc-{serverName}.eastus.azure.confluent.cloud:9092` |
-| spring.kafka.properties.sasl.jaas.config | Your Kafka SASL configuration | `org.apache.kafka.common.security.plain.PlainLoginModule required username='{bootstrapServerKey}' password='{bootstrapServerSecret}';` |
-| spring.kafka.properties.schema.registry.url | Your Confluent registry URL | `https://psrc-{serverName}.westus2.azure.confluent.cloud` |
-| spring.kafka.properties.schema.registry.basic.auth.user.info | Your Confluent registry user information | `{schemaRegistryKey} + ":" + {schemaRegistrySecret}` |
+| Default environment variable name | Description | Example value |
+|--||--|
+| spring.kafka.properties.bootstrap.servers | Your Kafka bootstrap server | `pkc-<server-name>.eastus.azure.confluent.cloud:9092` |
+| spring.kafka.properties.sasl.jaas.config | Your Kafka SASL configuration | `org.apache.kafka.common.security.plain.PlainLoginModule required username='<Bootstrap-server-key>' password='<Bootstrap-server-secret>';` |
+| spring.kafka.properties.schema.registry.url | Your Confluent registry URL | `https://psrc-<server-name>.westus2.azure.confluent.cloud` |
+| spring.kafka.properties.schema.registry.basic.auth.user.info | Your Confluent registry user information | `<schema-registry-key>:<schema-registry-secret>` |
## Next steps
service-connector How To Integrate Cosmos Db https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-connector/how-to-integrate-cosmos-db.md
This page shows the supported authentication types and client types of Azure Cos
## Default environment variable names or application properties
+Use the connection details below to connect compute services to Cosmos DB. For each example below, replace the placeholder texts `<mongo-db-admin-user>`, `<password>`, `<mongo-db-server>`, `<subscription-ID>`, `<resource-group-name>`, `<database-server>`, `<client-ID>`, `<client-secret>`, and `<tenant-ID>` with your MongoDB admin username, password, MongoDB server name, subscription ID, resource group name, database server name, client ID, client secret, and tenant ID.
+ ### Secret / Connection string
-| Default environment variable name | Description | Example value |
-| | | |
-| AZURE_COSMOS_CONNECTIONSTRING | Mango DB in Cosmos DB connection string | `mongodb://{mango-db-admin-user}:{********}@{mango-db-server}.mongo.cosmos.azure.com:10255/?ssl=true&replicaSet=globaldb&retrywrites=false&maxIdleTimeMS=120000&appName=@{mango-db-server}@` |
+| Default environment variable name | Description | Example value |
+|--|--|-|
+| AZURE_COSMOS_CONNECTIONSTRING | Mongo DB in Cosmos DB connection string | `mongodb://<mongo-db-admin-user>:<password>@<mongo-db-server>.mongo.cosmos.azure.com:10255/?ssl=true&replicaSet=globaldb&retrywrites=false&maxIdleTimeMS=120000&appName=@<mongo-db-server>@` |
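A minimal sketch (assuming the `pymongo` package; the database name is a placeholder):

```python
# Minimal sketch: connect with pymongo using the Service Connector variable.
import os
from pymongo import MongoClient

client = MongoClient(os.environ["AZURE_COSMOS_CONNECTIONSTRING"])
db = client["<dbname>"]                 # database name is a placeholder
print(db.list_collection_names())
```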
### System-assigned managed identity
-| Default environment variable name | Description | Example value |
-| | | |
-| AZURE_COSMOS_LISTCONNECTIONSTRINGURL | The URL to get the connection string | `https://management.azure.com/subscriptions/{your-subscription-id}/resourceGroups/{your-resource-group-name}/providers/Microsoft.DocumentDB/databaseAccounts/{your-database-server}/listConnectionStrings?api-version=2021-04-15` |
-| AZURE_COSMOS_SCOPE | Your managed identity scope | `https://management.azure.com/.default` |
-| AZURE_COSMOS_RESOURCEENDPOINT | Your resource endpoint| `https://{your-database-server}.documents.azure.com:443/` |
+| Default environment variable name | Description | Example value |
+|--|--|--|
+| AZURE_COSMOS_LISTCONNECTIONSTRINGURL | The URL to get the connection string | `https://management.azure.com/subscriptions/<subscription-ID>/resourceGroups/<resource-group-name>/providers/Microsoft.DocumentDB/databaseAccounts/<database-server>/listConnectionStrings?api-version=2021-04-15` |
+| AZURE_COSMOS_SCOPE | Your managed identity scope | `https://management.azure.com/.default` |
+| AZURE_COSMOS_RESOURCEENDPOINT | Your resource endpoint | `https://<database-server>.documents.azure.com:443/` |
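As a hedged sketch of how these variables fit together (assuming `azure-identity` and `requests` are available on the compute service), the managed identity obtains a token for the scope and calls the list-connection-strings URL:

```python
# Hedged sketch: use the managed identity to fetch Cosmos DB connection strings.
import os
import requests
from azure.identity import ManagedIdentityCredential

credential = ManagedIdentityCredential()
token = credential.get_token(os.environ["AZURE_COSMOS_SCOPE"]).token
resp = requests.post(
    os.environ["AZURE_COSMOS_LISTCONNECTIONSTRINGURL"],
    headers={"Authorization": f"Bearer {token}"},
)
resp.raise_for_status()
connection_strings = resp.json()["connectionStrings"]  # per the ARM response shape
```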
### User-assigned managed identity
-| Default environment variable name | Description | Example value |
-| | | |
-| AZURE_COSMOS_LISTCONNECTIONSTRINGURL | The URL to get the connection string | `https://management.azure.com/subscriptions/{your-subscription-id}/resourceGroups/{your-resource-group-name}/providers/Microsoft.DocumentDB/databaseAccounts/{your-database-server}/listConnectionStrings?api-version=2021-04-15` |
-| AZURE_COSMOS_SCOPE | Your managed identity scope | `https://management.azure.com/.default` |
-| AZURE_COSMOS_CLIENTID | Your client secret ID | `{client-id}` |
-| AZURE_COSMOS_SUBSCRIPTIONID | Your subscription ID | `{your-subscription-id}` |
-| AZURE_COSMOS_RESOURCEENDPOINT | Your resource endpoint| `https://{your-database-server}.documents.azure.com:443/` |
+| Default environment variable name | Description | Example value |
+|--|--|--|
+| AZURE_COSMOS_LISTCONNECTIONSTRINGURL | The URL to get the connection string | `https://management.azure.com/subscriptions/<subscription-ID>/resourceGroups/<resource-group-name>/providers/Microsoft.DocumentDB/databaseAccounts/<database-server>/listConnectionStrings?api-version=2021-04-15` |
+| AZURE_COSMOS_SCOPE | Your managed identity scope | `https://management.azure.com/.default` |
+| AZURE_COSMOS_CLIENTID | Your client ID | `<client-ID>` |
+| AZURE_COSMOS_SUBSCRIPTIONID | Your subscription ID | `<subscription-ID>` |
+| AZURE_COSMOS_RESOURCEENDPOINT | Your resource endpoint | `https://<database-server>.documents.azure.com:443/` |
### Service principal
-| Default environment variable name | Description | Example value |
-| | | |
-| AZURE_COSMOS_LISTCONNECTIONSTRINGURL | The URL to get the connection string | `https://management.azure.com/subscriptions/{your-subscription-id}/resourceGroups/{your-resource-group-name}/providers/Microsoft.DocumentDB/databaseAccounts/{your-database-server}/listConnectionStrings?api-version=2021-04-15` |
-| AZURE_COSMOS_SCOPE | Your managed identity scope | `https://management.azure.com/.default` |
-| AZURE_COSMOS_CLIENTID | Your client secret ID | `{client-id}` |
-| AZURE_COSMOS_CLIENTSECRET | Your client secret secret | `{client-secret}` |
-| AZURE_COSMOS_TENANTID | Your tenant ID | `{tenant-id}` |
-| AZURE_COSMOS_SUBSCRIPTIONID | Your subscription ID | `{your-subscription-id}` |
-| AZURE_COSMOS_RESOURCEENDPOINT | Your resource endpoint| `https://{your-database-server}.documents.azure.com:443/` |
+| Default environment variable name | Description | Example value |
+|--|--|--|
+| AZURE_COSMOS_LISTCONNECTIONSTRINGURL | The URL to get the connection string | `https://management.azure.com/subscriptions/<subscription-ID>/resourceGroups/<resource-group-name>/providers/Microsoft.DocumentDB/databaseAccounts/<database-server>/listConnectionStrings?api-version=2021-04-15` |
+| AZURE_COSMOS_SCOPE | Your managed identity scope | `https://management.azure.com/.default` |
+| AZURE_COSMOS_CLIENTID | Your client ID | `<client-ID>` |
+| AZURE_COSMOS_CLIENTSECRET | Your client secret | `<client-secret>` |
+| AZURE_COSMOS_TENANTID | Your tenant ID | `<tenant-ID>` |
+| AZURE_COSMOS_SUBSCRIPTIONID | Your subscription ID | `<subscription-ID>` |
+| AZURE_COSMOS_RESOURCEENDPOINT | Your resource endpoint | `https://<database-server>.documents.azure.com:443/` |
## Next steps
service-connector How To Integrate Event Hubs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-connector/how-to-integrate-event-hubs.md
This page shows the supported authentication types and client types of Azure Eve
## Supported authentication types and client types
-| Client type | System-assigned managed identity | User-assigned managed identity | Secret/connection string | Service principal |
+| Client type | System-assigned managed identity | User-assigned managed identity | Secret / connection string | Service principal |
| | :-: | :--:| :--:| :--:| | .NET | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | | Go | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
This page shows the supported authentication types and client types of Azure Eve
## Default environment variable names or application properties
+Use the connection details below to connect compute services to Event Hubs. For each example below, replace the placeholder texts `<Event-Hubs-namespace>`, `<access-key-name>`, `<access-key-value>`, `<client-ID>`, `<client-secret>`, and `<tenant-ID>` with your Event Hubs namespace, shared access key name, shared access key value, client ID, client secret, and tenant ID.
+ ### .NET, Java, Node.JS, Python #### Secret / connection string
This page shows the supported authentication types and client types of Azure Eve
> [!div class="mx-tdBreakAll"] > |Default environment variable name | Description | Sample value | > | -- | -- | |
-> | AZURE_EVENTHUB_CONNECTIONSTRING | Event Hubs connection string | `Endpoint=sb://{EventHubNamespace}.servicebus.windows.net/;SharedAccessKeyName=RootManageSharedAccessKey;SharedAccessKey={****}` |
+> | AZURE_EVENTHUB_CONNECTIONSTRING | Event Hubs connection string | `Endpoint=sb://<Event-Hubs-namespace>.servicebus.windows.net/;SharedAccessKeyName=<access-key-name>;SharedAccessKey=<access-key-value>` |
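A minimal sketch (assuming the `azure-eventhub` package; the event hub entity name is a placeholder):

```python
# Minimal sketch: send one event using the Service Connector variable.
import os
from azure.eventhub import EventHubProducerClient, EventData

producer = EventHubProducerClient.from_connection_string(
    os.environ["AZURE_EVENTHUB_CONNECTIONSTRING"], eventhub_name="<event-hub-name>"
)
with producer:
    batch = producer.create_batch()
    batch.add(EventData("hello"))
    producer.send_batch(batch)
```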
#### System-assigned managed identity
-| Default environment variable name | Description | Sample value |
-| -- | -- | -- |
-| AZURE_EVENTHUB_FULLYQUALIFIEDNAMESPACE | Event Hubs namespace | `{EventHubNamespace}.servicebus.windows.net` |
+| Default environment variable name | Description | Sample value |
+|-|-||
+| AZURE_EVENTHUB_FULLYQUALIFIEDNAMESPACE | Event Hubs namespace | `<Event-Hubs-namespace>.servicebus.windows.net` |
#### User-assigned managed identity
-| Default environment variable name | Description | Sample value |
-| -- | -- | -- |
-| AZURE_EVENTHUB_FULLYQUALIFIEDNAMESPACE | Event Hubs namespace | `{EventHubNamespace}.servicebus.windows.net` |
-| AZURE_EVENTHUB_CLIENTID | Your client ID | `{yourClientID}` |
+| Default environment variable name | Description | Sample value |
+|-|-||
+| AZURE_EVENTHUB_FULLYQUALIFIEDNAMESPACE | Event Hubs namespace | `<Event-Hubs-namespace>.servicebus.windows.net` |
+| AZURE_EVENTHUB_CLIENTID | Your client ID | `<client-ID>` |
#### Service principal
-| Default environment variable name | Description | Sample value |
-| | -- | -- |
-| AZURE_EVENTHUB_FULLYQUALIFIEDNAMESPACE | Event Hubs namespace | `{EventHubNamespace}.servicebus.windows.net` |
-| AZURE_EVENTHUB_CLIENTID | Your client ID | `{yourClientID}` |
-| AZURE_EVENTHUB_CLIENTSECRET | Your client secret | `{yourClientSecret}` |
-| AZURE_EVENTHUB_TENANTID | Your tenant ID | `{yourTenantID}` |
+| Default environment variable name | Description | Sample value |
+|-|-||
+| AZURE_EVENTHUB_FULLYQUALIFIEDNAMESPACE | Event Hubs namespace | `<Event-Hubs-namespace>.servicebus.windows.net` |
+| AZURE_EVENTHUB_CLIENTID | Your client ID | `<client-ID>` |
+| AZURE_EVENTHUB_CLIENTSECRET | Your client secret | `<client-secret>` |
+| AZURE_EVENTHUB_TENANTID | Your tenant ID | `<tenant-id>` |
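A hedged sketch of the service principal flow in Python, assuming the `azure-identity` and `azure-eventhub` packages; `<event-hub-name>` is again a placeholder for your event hub name:

```python
import os
from azure.eventhub import EventHubProducerClient
from azure.identity import ClientSecretCredential

# Build a credential from the three service principal variables.
credential = ClientSecretCredential(
    tenant_id=os.environ["AZURE_EVENTHUB_TENANTID"],
    client_id=os.environ["AZURE_EVENTHUB_CLIENTID"],
    client_secret=os.environ["AZURE_EVENTHUB_CLIENTSECRET"],
)

producer = EventHubProducerClient(
    fully_qualified_namespace=os.environ["AZURE_EVENTHUB_FULLYQUALIFIEDNAMESPACE"],
    eventhub_name="<event-hub-name>",  # placeholder: your event hub name
    credential=credential,
)
```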
### Java - Spring Boot
> [!div class="mx-tdBreakAll"]
> | Default environment variable name | Description | Sample value |
> | -- | -- | -- |
-> | spring.cloud.azure.storage.connection-string | Event Hubs connection string | `Endpoint=sb://servicelinkertesteventhub.servicebus.windows.net/;SharedAccessKeyName=RootManageSharedAccessKey;SharedAccessKey=****` |
+> | spring.cloud.azure.storage.connection-string | Event Hubs connection string | `Endpoint=sb://servicelinkertesteventhub.servicebus.windows.net/;SharedAccessKeyName=<access-key-name>;SharedAccessKey=<access-key-value>` |
#### Spring Boot system-assigned managed identity
-| Default environment variable name | Description | Sample value |
-| - | -- | -- |
-| spring.cloud.azure.eventhub.namespace | Event Hubs namespace | `{EventHubNamespace}.servicebus.windows.net` |
+| Default environment variable name | Description | Sample value |
+||-||
+| spring.cloud.azure.eventhub.namespace | Event Hubs namespace | `<Event-Hub-namespace>.servicebus.windows.net` |
#### Spring Boot user-assigned managed identity
-| Default environment variable name | Description | Sample value |
-| - | -- | -- |
-| spring.cloud.azure.eventhub.namespace | Event Hubs namespace | `{EventHubNamespace}.servicebus.windows.net` |
-| spring.cloud.azure.client-id | Your client ID | `{yourClientID}` |
+| Default environment variable name | Description | Sample value |
+||-||
+| spring.cloud.azure.eventhub.namespace | Event Hubs namespace | `<Event-Hub-namespace>.servicebus.windows.net` |
+| spring.cloud.azure.client-id | Your client ID | `<client-ID>` |
#### Spring Boot service principal
-| Default environment variable name | Description | Sample value |
-| - | -- | -- |
-| spring.cloud.azure.eventhub.namespace | Event Hubs namespace | `{EventHubNamespace}.servicebus.windows.net` |
-| spring.cloud.azure.client-id | Your client ID | `{yourClientID}` |
-| spring.cloud.azure.tenant-id | Your client secret | `******` |
-| spring.cloud.azure.client-secret | Your tenant ID | `{yourTenantID}` |
+| Default environment variable name | Description | Sample value |
+||-||
+| spring.cloud.azure.eventhub.namespace | Event Hubs namespace | `<Event-Hub-namespace>.servicebus.windows.net` |
+| spring.cloud.azure.client-id | Your client ID | `<client-ID>` |
+| spring.cloud.azure.tenant-id | Your tenant ID | `<tenant-id>` |
+| spring.cloud.azure.client-secret | Your client secret | `<client-secret>` |
## Next steps
service-connector How To Integrate Key Vault https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-connector/how-to-integrate-key-vault.md
Last updated 06/13/2022
# Integrate Azure Key Vault with Service Connector > [!NOTE]
-> When you use Service Connector to connect your key vault or manage key vault connections, Service Connector will be using your token to perform the corresponding operations.
+> When you use Service Connector to connect your key vault or manage key vault connections, Service Connector uses your token to perform the corresponding operations.
This page shows the supported authentication types and client types of Azure Key Vault using Service Connector. You might still be able to connect to Azure Key Vault in other programming languages without using Service Connector. This page also shows default environment variable names and values (or Spring Boot configuration) you get when you create the service connection. You can learn more about [Service Connector environment variable naming convention](concept-service-connector-internals.md).
## Supported authentication types and client types
-| Client type | System-assigned managed identity | User-assigned managed identity | Secret / connection string | Service principal |
-| | | | | |
-| .NET | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | | ![yes icon](./media/green-check.png) |
-| Java | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | | ![yes icon](./media/green-check.png) |
-| Java - Spring Boot | | ![yes icon](./media/green-check.png) | | ![yes icon](./media/green-check.png) |
-| Node.js | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | | ![yes icon](./media/green-check.png) |
-| Python | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | | ![yes icon](./media/green-check.png) |
+| Client type | System-assigned managed identity | User-assigned managed identity | Secret / connection string | Service principal |
+|--|--|--|-|--|
+| .NET | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | | ![yes icon](./media/green-check.png) |
+| Java | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | | ![yes icon](./media/green-check.png) |
+| Java - Spring Boot | | ![yes icon](./media/green-check.png) | | ![yes icon](./media/green-check.png) |
+| Node.js | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | | ![yes icon](./media/green-check.png) |
+| Python | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | | ![yes icon](./media/green-check.png) |
## Default environment variable names or application properties
+Use the connection details below to connect compute services to Azure Key Vault. For each example below, replace the placeholder texts `<vault-name>`, `<client-ID>`, `<client-secret>`, and `<tenant-id>` with your key vault name, client ID, client secret, and tenant ID.
+ ### .NET, Java, Node.js, Python
+ #### System-assigned managed identity
-| Default environment variable name | Description | Example value |
-| | | |
-| AZURE_KEYVAULT_SCOPE | Your Azure RBAC scope | `https://management.azure.com/.default` |
-| AZURE_KEYVAULT_RESOURCEENDPOINT | Your Key Vault endpoint | `https://{yourKeyVault}.vault.azure.net/` |
+| Default environment variable name | Description | Example value |
+|--|-|--|
+| AZURE_KEYVAULT_SCOPE | Your Azure RBAC scope | `https://management.azure.com/.default` |
+| AZURE_KEYVAULT_RESOURCEENDPOINT | Your Key Vault endpoint | `https://<vault-name>.vault.azure.net/` |
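For example, a minimal Python sketch (assuming the `azure-identity` and `azure-keyvault-secrets` packages) that reads the endpoint variable and fetches a secret with the managed identity; `<secret-name>` is a placeholder:

```python
import os
from azure.identity import DefaultAzureCredential
from azure.keyvault.secrets import SecretClient

# DefaultAzureCredential picks up the system-assigned managed identity
# when this code runs on the connected compute service.
client = SecretClient(
    vault_url=os.environ["AZURE_KEYVAULT_RESOURCEENDPOINT"],
    credential=DefaultAzureCredential(),
)
secret = client.get_secret("<secret-name>")  # placeholder secret name
print(secret.value)
```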
#### User-assigned managed identity
-| Default environment variable name | Description | Example value |
-| | | |
-| AZURE_KEYVAULT_SCOPE | Your Azure RBAC scope | `https://management.azure.com/.default` |
-| AZURE_KEYVAULT_RESOURCEENDPOINT | Your Key Vault endpoint | `https://{yourKeyVault}.vault.azure.net/` |
-| AZURE_KEYVAULT_CLIENTID | Your Client ID | `{yourClientID}` |
+| Default environment variable name | Description | Example value |
+|--|-|--|
+| AZURE_KEYVAULT_SCOPE | Your Azure RBAC scope | `https://management.azure.com/.default` |
+| AZURE_KEYVAULT_RESOURCEENDPOINT | Your Key Vault endpoint | `https://<vault-name>.vault.azure.net/` |
+| AZURE_KEYVAULT_CLIENTID | Your Client ID | `<client-ID>` |
#### Service principal
-| Default environment variable name | Description | Example value |
-| | | |
-| AZURE_KEYVAULT_SCOPE | Your Azure RBAC scope | `https://management.azure.com/.default` |
-| AZURE_KEYVAULT_RESOURCEENDPOINT | Your Key Vault endpoint | `https://{yourKeyVault}.vault.azure.net/` |
-| AZURE_KEYVAULT_CLIENTID | Your Client ID | `{yourClientID}` |
-| AZURE_KEYVAULT_CLIENTSECRET | Your Client secret | `{yourClientSecret}` |
-| AZURE_KEYVAULT_TENANTID | Your Tenant ID | `{yourTenantID}` |
+| Default environment variable name | Description | Example value |
+|--|-|--|
+| AZURE_KEYVAULT_SCOPE | Your Azure RBAC scope | `https://management.azure.com/.default` |
+| AZURE_KEYVAULT_RESOURCEENDPOINT | Your Key Vault endpoint | `https://<vault-name>.vault.azure.net/` |
+| AZURE_KEYVAULT_CLIENTID | Your Client ID | `<client-ID>` |
+| AZURE_KEYVAULT_CLIENTSECRET | Your Client secret | `<client-secret>` |
+| AZURE_KEYVAULT_TENANTID | Your Tenant ID | `<tenant-id>` |
### Java - Spring Boot
#### Java - Spring Boot service principal
-| Default environment variable name | Description | Example value |
-| | | |
-| azure.keyvault.uri | Your Key Vault endpoint URL | `"https://{yourKeyVaultName}.vault.azure.net/"` |
-| azure.keyvault.client-id | Your Client ID | `{yourClientID}` |
-| azure.keyvault.client-key | Your Client secret | `{yourClientSecret}` |
-| azure.keyvault.tenant-id | Your Tenant ID | `{yourTenantID}` |
-| azure.keyvault.scope | Your Azure RBAC scope | `https://management.azure.com/.default` |
+| Default environment variable name | Description | Example value |
+|--|--|-|
+| azure.keyvault.uri | Your Key Vault endpoint URL | `"https://<vault-name>.vault.azure.net/"` |
+| azure.keyvault.client-id | Your Client ID | `<client-ID>` |
+| azure.keyvault.client-key | Your Client secret | `<client-secret>` |
+| azure.keyvault.tenant-id | Your Tenant ID | `<tenant-id>` |
+| azure.keyvault.scope | Your Azure RBAC scope | `https://management.azure.com/.default` |
## Next steps
service-connector How To Integrate Mysql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-connector/how-to-integrate-mysql.md
This page shows the supported authentication types and client types of Azure Database for MySQL using Service Connector.
## Default environment variable names or application properties
+Use the connection details below to connect compute services to Azure Database for MySQL. For each example below, replace the placeholder texts `<MySQL-DB-name>`, `<MySQL-DB-username>`, `<MySQL-DB-password>`, `<server-host>`, and `<port>` with your Azure Database for MySQL name, Azure Database for MySQL username, Azure Database for MySQL password, server host, and port.
+ ### .NET (MySqlConnector) secret / connection string
-| Default environment variable name | Description | Example value |
-| | | |
-| AZURE_MYSQL_CONNECTIONSTRING | ADO.NET MySQL connection string | `Server={MySQLName}.mysql.database.azure.com;Database={MySQLDbName};Port=3306;SSL Mode=Required;User Id={MySQLUsername};Password={TestDbPassword}` |
+| Default environment variable name | Description | Example value |
+|--||-|
+| AZURE_MYSQL_CONNECTIONSTRING | ADO.NET MySQL connection string | `Server=<MySQL-DB-name>.mysql.database.azure.com;Database=<MySQL-DB-name>;Port=3306;SSL Mode=Required;User Id=<MySQL-DB-username>;Password=<MySQL-DB-password>` |
### Java (JDBC) secret / connection string
-| Default environment variable name | Description | Example value |
-| | | |
-| AZURE_MYSQL_CONNECTIONSTRING | JDBC MySQL connection string | `jdbc:mysql://{MySQLName}.mysql.database.azure.com:3306/{MySQLDbName}?sslmode=required&user={MySQLUsername}&password={Uri.EscapeDataString(TestDbPassword)}` |
+| Default environment variable name | Description | Example value |
+|--||-|
+| AZURE_MYSQL_CONNECTIONSTRING | JDBC MySQL connection string | `jdbc:mysql://<MySQL-DB-name>.mysql.database.azure.com:3306/<MySQL-DB-name>?sslmode=required&user=<MySQL-DB-username>&password=<MySQL-DB-password>` |
### Java - Spring Boot (JDBC) secret / connection string
-| Application properties | Description | Example value |
-| | | |
-| spring.datatsource.url | Spring Boot JDBC database URL | `jdbc:mysql://{MySQLName}.mysql.database.azure.com:3306/{MySQLDbName}?sslmode=required` |
-| spring.datatsource.username | Database username | `{MySQLUsername}@{MySQLName}` |
-| spring.datatsource.password | Database password | `****` |
+| Application properties | Description | Example value |
+|--|-|--|
+| spring.datasource.url | Spring Boot JDBC database URL | `jdbc:mysql://<MySQL-DB-name>.mysql.database.azure.com:3306/<MySQL-DB-name>?sslmode=required` |
+| spring.datasource.username | Database username | `<MySQL-DB-username>@<MySQL-DB-name>` |
+| spring.datasource.password | Database password | `<MySQL-DB-password>` |
### Node.js (mysql) secret / connection string
-| Default environment variable name | Description | Example value |
-||||
-| AZURE_MYSQL_HOST | Database Host URL | `{MySQLName}.mysql.database.azure.com` |
-| AZURE_MYSQL_USER | Database Username | `MySQLDbName` |
-| AZURE_MYSQL_PASSWORD | Database password | `****` |
-| AZURE_MYSQL_DATABASE | Database name | `{MySQLUsername}@{MySQLName}` |
-| AZURE_MYSQL_PORT | Port number | `3306` |
-| AZURE_MYSQL_SSL | SSL option | `true` |
+| Default environment variable name | Description | Example value |
+|--|-|--|
+| AZURE_MYSQL_HOST | Database Host URL | `<MySQL-DB-name>.mysql.database.azure.com` |
+| AZURE_MYSQL_USER | Database Username | `<MySQL-DB-username>@<MySQL-DB-name>` |
+| AZURE_MYSQL_PASSWORD | Database password | `<MySQL-DB-password>` |
+| AZURE_MYSQL_DATABASE | Database name | `<MySQL-DB-name>` |
+| AZURE_MYSQL_PORT | Port number | `3306` |
+| AZURE_MYSQL_SSL | SSL option | `true` |
### Python (mysql-connector-python) secret / connection string
-| Default environment variable name | Description | Example value |
-| | | |
-| AZURE_MYSQL_HOST | Database Host URL | `{MySQLName}.mysql.database.azure.com` |
-| AZURE_MYSQL_NAME | Database name | `{MySQLDbName}` |
-| AZURE_MYSQL_PASSWORD | Database password | `****` |
-| AZURE_MYSQL_USER | Database Username | `{MySQLUsername}@{MySQLName}` |
+| Default environment variable name | Description | Example value |
+|--|-|--|
+| AZURE_MYSQL_HOST | Database Host URL | `<MySQL-DB-name>.mysql.database.azure.com` |
+| AZURE_MYSQL_NAME | Database name | `<MySQL-DB-name>` |
+| AZURE_MYSQL_PASSWORD | Database password | `<MySQL-DB-password>` |
+| AZURE_MYSQL_USER | Database Username | `<MySQL-DB-username>@<MySQL-DB-name>` |
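A minimal sketch of consuming these variables with the `mysql-connector-python` package named above; note that Azure Database for MySQL enforces SSL/TLS on the server side by default:

```python
import os
import mysql.connector

# Connect using the variables Service Connector injected.
conn = mysql.connector.connect(
    host=os.environ["AZURE_MYSQL_HOST"],
    user=os.environ["AZURE_MYSQL_USER"],
    password=os.environ["AZURE_MYSQL_PASSWORD"],
    database=os.environ["AZURE_MYSQL_NAME"],
)
cursor = conn.cursor()
cursor.execute("SELECT VERSION()")
print(cursor.fetchone())
conn.close()
```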
### Python-Django secret / connection string
-| Default environment variable name | Description | Example value |
-| | | |
-| AZURE_MYSQL_HOST | Database Host URL | `{MySQLName}.mysql.database.azure.com` |
-| AZURE_MYSQL_USER | Database Username | `{MySQLUsername}@{MySQLName}` |
-| AZURE_MYSQL_PASSWORD | Database password | `****` |
-| AZURE_MYSQL_NAME | Database name | `MySQLDbName` |
+| Default environment variable name | Description | Example value |
+|--|-|--|
+| AZURE_MYSQL_HOST | Database Host URL | `<MySQL-DB-name>.mysql.database.azure.com` |
+| AZURE_MYSQL_USER | Database Username | `<MySQL-DB-username>@<MySQL-DB-name>` |
+| AZURE_MYSQL_PASSWORD | Database password | `<MySQL-DB-password>` |
+| AZURE_MYSQL_NAME | Database name | `<MySQL-DB-name>` |
### Go (go-sql-driver for mysql) secret / connection string
-| Default environment variable name | Description | Example value |
-| | | |
-| AZURE_MYSQL_CONNECTIONSTRING | Go-sql-driver connection string | `{MySQLUsername}@{MySQLName}:{Password}@tcp({ServerHost}:{Port})/{Database}?tls=true` |
+| Default environment variable name | Description | Example value |
+|--||--|
+| AZURE_MYSQL_CONNECTIONSTRING | Go-sql-driver connection string | `<MySQL-DB-username>@<MySQL-DB-name>:<MySQL-DB-password>@tcp(<server-host>:<port>)/<MySQL-DB-name>?tls=true` |
### PHP (mysqli) secret / connection string
-| Default environment variable name | Description | Example value |
-||||
-| AZURE_MYSQL_HOST | Database Host URL | `{MySQLName}.mysql.database.azure.com` |
-| AZURE_MYSQL_USERNAME | Database Username | `{MySQLUsername}@{MySQLName}` |
-| AZURE_MYSQL_PASSWORD | Database password | `****` |
-| AZURE_MYSQL_DBNAME | Database name | `{MySQLDbName}` |
-| AZURE_MYSQL_PORT | Port number | `3306` |
-| AZURE_MYSQL_FLAG | SSL or other flags | `MYSQLI_CLIENT_SSL` |
+| Default environment variable name | Description | Example value |
+|--|--|--|
+| AZURE_MYSQL_HOST | Database Host URL | `<MySQL-DB-name>.mysql.database.azure.com` |
+| AZURE_MYSQL_USERNAME | Database Username | `<MySQL-DB-username>@<MySQL-DB-name>` |
+| AZURE_MYSQL_PASSWORD | Database password | `<MySQL-DB-password>` |
+| AZURE_MYSQL_DBNAME | Database name | `<MySQL-DB-name>` |
+| AZURE_MYSQL_PORT | Port number | `3306` |
+| AZURE_MYSQL_FLAG | SSL or other flags | `MYSQLI_CLIENT_SSL` |
### Ruby (mysql2) secret / connection string
-| Default environment variable name | Description | Example value |
-||||
-| AZURE_MYSQL_HOST | Database Host URL | `{MySQLName}.mysql.database.azure.com` |
-| AZURE_MYSQL_USERNAME | Database Username | `{MySQLUsername}@{MySQLName}` |
-| AZURE_MYSQL_PASSWORD | Database password | `****` |
-| AZURE_MYSQL_DATABASE | Database name | `{MySQLDbName}` |
-| AZURE_MYSQL_SSLMODE | SSL option | `required` |
+| Default environment variable name | Description | Example value |
+|--|-|--|
+| AZURE_MYSQL_HOST | Database Host URL | `<MySQL-DB-name>.mysql.database.azure.com` |
+| AZURE_MYSQL_USERNAME | Database Username | `<MySQL-DB-username>@<MySQL-DB-name>` |
+| AZURE_MYSQL_PASSWORD | Database password | `<MySQL-DB-password>` |
+| AZURE_MYSQL_DATABASE | Database name | `<MySQL-DB-name>` |
+| AZURE_MYSQL_SSLMODE | SSL option | `required` |
## Next steps
service-connector How To Integrate Postgres https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-connector/how-to-integrate-postgres.md
This page shows the supported authentication types and client types of Azure Database for PostgreSQL using Service Connector.
## Supported authentication types and client types
-| Client type | System-assigned managed identity | User-assigned managed identity | Secret / connection string | Service principal |
-| | | | | |
-| .NET (ADO.NET) | | | ![yes icon](./media/green-check.png) | |
-| Java (JDBC) | | | ![yes icon](./media/green-check.png) | |
-| Java - Spring Boot (JDBC) | | | ![yes icon](./media/green-check.png) | |
-| Node.js (pg) | | | ![yes icon](./media/green-check.png) | |
-| Python (psycopg2) | | | ![yes icon](./media/green-check.png) | |
-| Python-Django | | | ![yes icon](./media/green-check.png) | |
-| Go (pg) | | | ![yes icon](./media/green-check.png) | |
-| PHP (native) | | | ![yes icon](./media/green-check.png) | |
-| Ruby (ruby-pg) | | | ![yes icon](./media/green-check.png) | |
+| Client type | System-assigned managed identity | User-assigned managed identity | Secret / connection string | Service principal |
+||-|--|--|-|
+| .NET (ADO.NET) | | | ![yes icon](./media/green-check.png) | |
+| Java (JDBC) | | | ![yes icon](./media/green-check.png) | |
+| Java - Spring Boot (JDBC) | | | ![yes icon](./media/green-check.png) | |
+| Node.js (pg) | | | ![yes icon](./media/green-check.png) | |
+| Python (psycopg2) | | | ![yes icon](./media/green-check.png) | |
+| Python-Django | | | ![yes icon](./media/green-check.png) | |
+| Go (pg) | | | ![yes icon](./media/green-check.png) | |
+| PHP (native) | | | ![yes icon](./media/green-check.png) | |
+| Ruby (ruby-pg) | | | ![yes icon](./media/green-check.png) | |
## Default environment variable names or application properties
+Use the connection details below to connect compute services to PostgreSQL. For each example below, replace the placeholder texts `<PostgreSQL-server-name>`, `<database-name>`, `<username>`, and `<password>` with your server name, database name, username, and password.
+ ### .NET (ADO.NET) secret / connection string
-| Default environment variable name | Description | Example value |
-| | | |
-| AZURE_POSTGRESQL_CONNECTIONSTRING | ADO.NET PostgreSQL connection string | `Server={your-postgres-server-name}.postgres.database.azure.com;Database={database-name};Port=5432;Ssl Mode=Require;User Id={username}@{servername};Password=****;` |
+| Default environment variable name | Description | Example value |
+|--|--||
+| AZURE_POSTGRESQL_CONNECTIONSTRING | ADO.NET PostgreSQL connection string | `Server=<PostgreSQL-server-name>.postgres.database.azure.com;Database=<database-name>;Port=5432;Ssl Mode=Require;User Id=<username>@<PostgreSQL-server-name>;Password=<password>;` |
### Java (JDBC) secret / connection string
-| Default environment variable name | Description | Example value |
-| | | |
-| AZURE_POSTGRESQL_CONNECTIONSTRING | JDBC PostgreSQL connection string | `jdbc:postgresql://{your-postgres-server-name}.postgres.database.azure.com:5432/{database-name}?sslmode=require&user={username}%40{servername}l&password=****` |
+| Default environment variable name | Description | Example value |
+|--|--||
+| AZURE_POSTGRESQL_CONNECTIONSTRING | JDBC PostgreSQL connection string | `jdbc:postgresql://<PostgreSQL-server-name>.postgres.database.azure.com:5432/<database-name>?sslmode=require&user=<username>%40<PostgreSQL-server-name>&password=<password>` |
### Java - Spring Boot (JDBC) secret / connection string
-| Application properties | Description | Example value |
-| | | |
-| spring.datatsource.url | Database URL | `jdbc:postgresql://{your-postgres-server-name}.postgres.database.azure.com:5432/{database-name}?sslmode=require` |
-| spring.datatsource.username | Database username | `{username}@{servername}` |
-| spring.datatsource.password | Database password | `****` |
+| Application properties | Description | Example value |
+|--|-||
+| spring.datasource.url | Database URL | `jdbc:postgresql://<PostgreSQL-server-name>.postgres.database.azure.com:5432/<database-name>?sslmode=require` |
+| spring.datasource.username | Database username | `<username>@<PostgreSQL-server-name>` |
+| spring.datasource.password | Database password | `<password>` |
### Node.js (pg) secret / connection string
-| Default environment variable name | Description | Example value |
-||||
-| AZURE_POSTGRESQL_HOST | Database host URL | `{your-postgres-server-name}.postgres.database.azure.com` |
-| AZURE_POSTGRESQL_USER | Database username | `{username}@{servername}` |
-| AZURE_POSTGRESQL_PASSWORD | Database password | `****` |
-| AZURE_POSTGRESQL_DATABASE | Database name | `{database-name}` |
-| AZURE_POSTGRESQL_PORT | Port number | `5432` |
-| AZURE_POSTGRESQL_SSL | SSL option | `true` |
+| Default environment variable name | Description | Example value |
+|--|-|--|
+| AZURE_POSTGRESQL_HOST | Database host URL | `<PostgreSQL-server-name>.postgres.database.azure.com` |
+| AZURE_POSTGRESQL_USER | Database username | `<username>@<PostgreSQL-server-name>` |
+| AZURE_POSTGRESQL_PASSWORD | Database password | `<password>` |
+| AZURE_POSTGRESQL_DATABASE | Database name | `<database-name>` |
+| AZURE_POSTGRESQL_PORT | Port number | `5432` |
+| AZURE_POSTGRESQL_SSL | SSL option | `true` |
### Python (psycopg2) secret / connection string
-| Default environment variable name | Description | Example value |
-| | | |
-| AZURE_POSTGRESQL_CONNECTIONSTRING | psycopg2 connection string | `dbname={database-name} host={your-postgres-server-name}.postgres.database.azure.com port=5432 sslmode=require user={username}@{servername} password=****` |
+| Default environment variable name | Description | Example value |
+|--|-||
+| AZURE_POSTGRESQL_CONNECTIONSTRING | psycopg2 connection string | `dbname=<database-name> host=<PostgreSQL-server-name>.postgres.database.azure.com port=5432 sslmode=require user=<username>@<PostgreSQL-server-name> password=<password>` |
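Because the variable already holds a libpq-style keyword/value string, it can be passed to `psycopg2.connect` unchanged. A minimal sketch, assuming the `psycopg2` package named above:

```python
import os
import psycopg2

# The injected value is already in libpq keyword/value format.
conn = psycopg2.connect(os.environ["AZURE_POSTGRESQL_CONNECTIONSTRING"])
with conn.cursor() as cur:
    cur.execute("SELECT version();")
    print(cur.fetchone())
conn.close()
```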
### Python-Django secret / connection string
-| Default environment variable name | Description | Example value |
-| | | |
-| AZURE_POSTGRESQL_HOST | Database host URL | `{your-postgres-server-name}.postgres.database.azure.com` |
-| AZURE_POSTGRESQL_USER | Database username | `{username}@{servername}` |
-| AZURE_POSTGRESQL_PASSWORD | Database password | `****` |
-| AZURE_POSTGRESQL_NAME | Database name | `{database-name}` |
+| Default environment variable name | Description | Example value |
+|--|-|--|
+| AZURE_POSTGRESQL_HOST | Database host URL | `<PostgreSQL-server-name>.postgres.database.azure.com` |
+| AZURE_POSTGRESQL_USER | Database username | `<username>@<PostgreSQL-server-name>` |
+| AZURE_POSTGRESQL_PASSWORD | Database password | `<password>` |
+| AZURE_POSTGRESQL_NAME | Database name | `<database-name>` |
### Go (pg) secret / connection string
-| Default environment variable name | Description | Example value |
-| | | |
-| AZURE_POSTGRESQL_CONNECTIONSTRING | Go pg connection string | `host={your-postgres-server-name}.postgres.database.azure.com dbname={database-name} sslmode=require user={username}@{servername} password=****` |
+| Default environment variable name | Description | Example value |
+|--|-||
+| AZURE_POSTGRESQL_CONNECTIONSTRING | Go pg connection string | `host=<PostgreSQL-server-name>.postgres.database.azure.com dbname=<database-name> sslmode=require user=<username>@<PostgreSQL-server-name> password=<password>` |
### PHP (native) secret / connection string
-| Default environment variable name | Description | Example value |
-| | | |
-| AZURE_POSTGRESQL_CONNECTIONSTRING | PHP native postgres connection string | `host={your-postgres-server-name}.postgres.database.azure.com port=5432 dbname={database-name} sslmode=requrie user={username}@{servername} password=****` |
+| Default environment variable name | Description | Example value |
+|--|||
+| AZURE_POSTGRESQL_CONNECTIONSTRING | PHP native postgres connection string | `host=<PostgreSQL-server-name>.postgres.database.azure.com port=5432 dbname=<database-name> sslmode=require user=<username>@<PostgreSQL-server-name> password=<password>` |
### Ruby (ruby-pg) secret / connection string
-| Default environment variable name | Description | Example value |
-| | | |
-| AZURE_POSTGRESQL_CONNECTIONSTRING | Ruby pg connection string | `host={your-postgres-server-name}.postgres.database.azure.com port=5432 dbname={database-name} sslmode=require user={username}@{servername} password=****` |
+| Default environment variable name | Description | Example value |
+|--|||
+| AZURE_POSTGRESQL_CONNECTIONSTRING | Ruby pg connection string | `host=<PostgreSQL-server-name>.postgres.database.azure.com port=5432 dbname=<database-name> sslmode=require user=<username>@<PostgreSQL-server-name> password=<password>` |
## Next steps
service-connector How To Integrate Redis Cache https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-connector/how-to-integrate-redis-cache.md
This page shows the supported authentication types and client types of Azure Cache for Redis using Service Connector.
## Default environment variable names or application properties
+Use the connection details below to connect compute services to Azure Cache for Redis. For each example below, replace the placeholder texts `<redis-server-name>` and `<redis-key>` with your own Redis server name and key.
+ ### .NET (StackExchange.Redis) secret / connection string
-| Default environment variable name | Description | Example value |
-| | | |
-| AZURE_REDIS_CONNECTIONSTRING | StackExchange.Redis connection string | `{redis-server}.redis.cache.windows.net:6380,password={redis-key},ssl=True,defaultDatabase=0` |
+| Default environment variable name | Description | Example value |
+|--|-|-|
+| AZURE_REDIS_CONNECTIONSTRING | StackExchange.Redis connection string | `<redis-server-name>.redis.cache.windows.net:6380,password=<redis-key>,ssl=True,defaultDatabase=0` |
### Java (Jedis) secret / connection string
-| Default environment variable name | Description | Example value |
-| | | |
-| AZURE_REDIS_CONNECTIONSTRING | Jedis connection string | `rediss://:{redis-key}@{redis-server}.redis.cache.windows.net:6380/0` |
+| Default environment variable name | Description | Example value |
+|--|-|-|
+| AZURE_REDIS_CONNECTIONSTRING | Jedis connection string | `rediss://:<redis-key>@<redis-server-name>.redis.cache.windows.net:6380/0` |
### Java - Spring Boot (spring-boot-starter-data-redis) secret / connection string
-| Application properties | Description | Example value |
-| | | |
-| spring.redis.host | Redis host | `{redis-server}.redis.cache.windows.net` |
-| spring.redis.port | Redis port | `6380` |
-| spring.redis.database | Redis database | `0` |
-| spring.redis.password | Redis key | `{redis-key}` |
-| spring.redis.ssl | SSL setting | `true` |
+| Application properties | Description | Example value |
+||-|--|
+| spring.redis.host | Redis host | `<redis-server-name>.redis.cache.windows.net` |
+| spring.redis.port | Redis port | `6380` |
+| spring.redis.database | Redis database | `0` |
+| spring.redis.password | Redis key | `<redis-key>` |
+| spring.redis.ssl | SSL setting | `true` |
### Node.js (node-redis) secret / connection string
-| Default environment variable name | Description | Example value |
-||||
-| AZURE_REDIS_CONNECTIONSTRING | node-redis connection string | `rediss://:{redis-key}@{redis-server}.redis.cache.windows.net:6380/0` |
+| Default environment variable name | Description | Example value |
+|--||-|
+| AZURE_REDIS_CONNECTIONSTRING | node-redis connection string | `rediss://:<redis-key>@<redis-server-name>.redis.cache.windows.net:6380/0` |
### Python (redis-py) secret / connection string
-| Default environment variable name | Description | Example value |
-||||
-| AZURE_REDIS_CONNECTIONSTRING | redis-py connection string | `rediss://:{redis-key}@{redis-server}.redis.cache.windows.net:6380/0` |
+| Default environment variable name | Description | Example value |
+|--|-|-|
+| AZURE_REDIS_CONNECTIONSTRING | redis-py connection string | `rediss://:<redis-key>@<redis-server-name>.redis.cache.windows.net:6380/0` |
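Because the value is a standard `rediss://` URL, `redis-py` can consume it directly. A minimal sketch:

```python
import os
import redis

# The rediss:// URL carries the key, host, port, TLS, and database.
client = redis.from_url(os.environ["AZURE_REDIS_CONNECTIONSTRING"])
client.set("greeting", "hello")
print(client.get("greeting"))
```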
### Go (go-redis) secret / connection string
-| Default environment variable name | Description | Example value |
-||||
-| AZURE_REDIS_CONNECTIONSTRING | redis-py connection string | `rediss://:{redis-key}@{redis-server}.redis.cache.windows.net:6380/0` |
+| Default environment variable name | Description | Example value |
+|--|-|-|
+| AZURE_REDIS_CONNECTIONSTRING | go-redis connection string | `rediss://:<redis-key>@<redis-server-name>.redis.cache.windows.net:6380/0` |
## Next steps
service-connector How To Integrate Service Bus https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-connector/how-to-integrate-service-bus.md
This page shows the supported authentication types and client types of Azure Service Bus using Service Connector.
## Supported authentication types and client types
-| Client type | System-assigned managed identity | User-assigned managed identity | Secret/connection string | Service principal |
-| | :-: | :--: | :--: | :--: |
-| .NET | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
-| Go | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
-| Java | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
-| Java - Spring Boot | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
-| Node.js | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
-| Python | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
+| Client type | System-assigned managed identity | User-assigned managed identity | Secret/connection string | Service principal |
+|--| :-: | :-: | :-: | :-: |
+| .NET | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
+| Go | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
+| Java | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
+| Java - Spring Boot | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
+| Node.js | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
+| Python | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
## Default environment variable names or application properties
+Use the connection details below to connect compute services to Service Bus. For each example below, replace the placeholder texts `<Service-Bus-namespace>`, `<access-key-name>`, `<access-key-value>`, `<client-ID>`, `<client-secret>`, and `<tenant-id>` with your own Service Bus namespace, shared access key name, shared access key value, client ID, client secret, and tenant ID.
+ ### .NET, Java, Node.js, Python
+ #### Secret/connection string
> [!div class="mx-tdBreakAll"]
> | Default environment variable name | Description | Sample value |
> | -- | -- | -- |
-> | AZURE_SERVICEBUS_CONNECTIONSTRING | Service Bus connection string | `Endpoint=sb://{ServiceBusNamespace}.servicebus.windows.net/;SharedAccessKeyName=RootManageSharedAccessKey;SharedAccessKey={****}` |
+> | AZURE_SERVICEBUS_CONNECTIONSTRING | Service Bus connection string | `Endpoint=sb://<Service-Bus-namespace>.servicebus.windows.net/;SharedAccessKeyName=<access-key-name>;SharedAccessKey=<access-key-value>` |
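As a minimal Python sketch (assuming the `azure-servicebus` package), read the variable and send one message; `<queue-name>` is a placeholder for an existing queue in the namespace:

```python
import os
from azure.servicebus import ServiceBusClient, ServiceBusMessage

# Read the connection string that Service Connector injected.
client = ServiceBusClient.from_connection_string(
    os.environ["AZURE_SERVICEBUS_CONNECTIONSTRING"])

with client:
    # <queue-name> is a placeholder for an existing queue.
    sender = client.get_queue_sender(queue_name="<queue-name>")
    with sender:
        sender.send_messages(ServiceBusMessage("hello"))
```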
#### System-assigned managed identity
| Default environment variable name | Description | Sample value |
| -- | -- | -- |
-| AZURE_SERVICEBUS_FULLYQUALIFIEDNAMESPACE | Service Bus namespace | `{ServiceBusNamespace}.servicebus.windows.net` |
+| AZURE_SERVICEBUS_FULLYQUALIFIEDNAMESPACE | Service Bus namespace | `<Service-Bus-namespace>.servicebus.windows.net` |
#### User-assigned managed identity
| Default environment variable name | Description | Sample value |
| -- | -- | -- |
-| AZURE_SERVICEBUS_FULLYQUALIFIEDNAMESPACE | Service Bus namespace | `{ServiceBusNamespace}.servicebus.windows.net` |
-| AZURE_SERVICEBUS_CLIENTID | Your client ID | `{yourClientID}` |
+| AZURE_SERVICEBUS_FULLYQUALIFIEDNAMESPACE | Service Bus namespace | `<Service-Bus-namespace>.servicebus.windows.net` |
+| AZURE_SERVICEBUS_CLIENTID | Your client ID | `<client-ID>` |
#### Service principal
| Default environment variable name | Description | Sample value |
| -- | -- | -- |
-| AZURE_SERVICEBUS_FULLYQUALIFIEDNAMESPACE | Service Bus namespace | `{ServiceBusNamespace}.servicebus.windows.net` |
-| AZURE_SERVICEBUS_CLIENTID | Your client ID | `{yourClientID}` |
-| AZURE_SERVICEBUS_CLIENTSECRET | Your client secret | `{yourClientSecret}` |
-| AZURE_SERVICEBUS_TENANTID | Your tenant ID | `{yourTenantID}` |
+| AZURE_SERVICEBUS_FULLYQUALIFIEDNAMESPACE | Service Bus namespace | `<Service-Bus-namespace>.servicebus.windows.net` |
+| AZURE_SERVICEBUS_CLIENTID | Your client ID | `<client-ID>` |
+| AZURE_SERVICEBUS_CLIENTSECRET | Your client secret | `<client-secret>` |
+| AZURE_SERVICEBUS_TENANTID | Your tenant ID | `<tenant-id>` |
### Java - Spring Boot
> [!div class="mx-tdBreakAll"]
> | Default environment variable name | Description | Sample value |
> | -- | -- | -- |
-> | spring.cloud.azure.servicebus.connection-string | Service Bus connection string | `Endpoint=sb://{ServiceBusNamespace}.servicebus.windows.net/;SharedAccessKeyName=RootManageSharedAccessKey;SharedAccessKey=***` |
+> | spring.cloud.azure.servicebus.connection-string | Service Bus connection string | `Endpoint=sb://<Service-Bus-namespace>.servicebus.windows.net/;SharedAccessKeyName=<access-key-name>;SharedAccessKey=<access-key-value>` |
#### Spring Boot system-assigned managed identity
-| Default environment variable name | Description | Sample value |
-| | | - |
-| spring.cloud.azure.servicebus.namespace | Service Bus namespace | `{ServiceBusNamespace}.servicebus.windows.net` |
+| Default environment variable name | Description | Sample value |
+|--|--|--|
+| spring.cloud.azure.servicebus.namespace | Service Bus namespace | `<Service-Bus-namespace>.servicebus.windows.net` |
#### Spring Boot user-assigned managed identity
-| Default environment variable name | Description | Sample value |
-| | | - |
-| spring.cloud.azure.servicebus.namespace | Service Bus namespace | `{ServiceBusNamespace}.servicebus.windows.net` |
-| spring.cloud.azure.client-id | Your client ID | `{yourClientID}` |
+| Default environment variable name | Description | Sample value |
+|--|--|--|
+| spring.cloud.azure.servicebus.namespace | Service Bus namespace | `<Service-Bus-namespace>.servicebus.windows.net` |
+| spring.cloud.azure.client-id | Your client ID | `<client-ID>` |
#### Spring Boot service principal
-| Default environment variable name | Description | Sample value |
-| | | - |
-| spring.cloud.azure.servicebus.namespace | Service Bus namespace | `{ServiceBusNamespace}.servicebus.windows.net` |
-| spring.cloud.azure.client-id | Your client ID | `{yourClientID}` |
-| spring.cloud.azure.tenant-id | Your client secret | `******` |
-| spring.cloud.azure.client-secret | Your tenant ID | `{yourTenantID}` |
+| Default environment variable name | Description | Sample value |
+|--|--|--|
+| spring.cloud.azure.servicebus.namespace | Service Bus namespace | `<Service-Bus-namespace>.servicebus.windows.net` |
+| spring.cloud.azure.client-id | Your client ID | `<client-ID>` |
+| spring.cloud.azure.tenant-id | Your tenant ID | `<tenant-id>` |
+| spring.cloud.azure.client-secret | Your client secret | `<client-secret>` |
## Next steps
service-connector How To Integrate Signalr https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-connector/how-to-integrate-signalr.md
This article shows the supported authentication types and client types of Azure SignalR Service using Service Connector.
## Default environment variable names or application properties
+Use the connection details below to connect compute services to SignalR. For each example below, replace the placeholder texts `<SignalR-name>`, `<access-key>`, `<client-ID>`, `<tenant-ID>`, and `<client-secret>` with your own SignalR name, access key, client ID, tenant ID, and client secret.
+ ### .NET
-- Secret/ConnectionString
+#### Secret / Connection string
| Default environment variable name | Description | Example value |
| -- | -- | -- |
- | AZURE_SIGNALR_CONNECTIONSTRING | SignalR Service connection string | `Endpoint=https://{signalrName}.service.signalr.net;AccessKey={};Version=1.0;` |
+ | AZURE_SIGNALR_CONNECTIONSTRING | SignalR Service connection string | `Endpoint=https://<SignalR-name>.service.signalr.net;AccessKey=<access-key>;Version=1.0;` |
-- System-assigned Managed Identity
+#### System-assigned Managed Identity
| Default environment variable name | Description | Example value |
| -- | -- | -- |
- | AZURE_SIGNALR_CONNECTIONSTRING | SignalR Service connection string with Managed Identity | `Endpoint=https://{signalrName}.service.signalr.net;AuthType=aad;ClientId={};Version=1.0;` |
+ | AZURE_SIGNALR_CONNECTIONSTRING | SignalR Service connection string with Managed Identity | `Endpoint=https://<SignalR-name>.service.signalr.net;AuthType=aad;Version=1.0;` |
-- User-assigned Managed Identity
+#### User-assigned Managed Identity
| Default environment variable name | Description | Example value |
| -- | -- | -- |
- | AZURE_SIGNALR_CONNECTIONSTRING | SignalR Service connection string with Managed Identity | `Endpoint=https://{signalrName}.service.signalr.net;AuthType=aad;ClientId={};Version=1.0;` |
+ | AZURE_SIGNALR_CONNECTIONSTRING | SignalR Service connection string with Managed Identity | `Endpoint=https://<SignalR-name>.service.signalr.net;AuthType=aad;ClientId=<client-ID>;Version=1.0;` |
-- Service Principal
+#### Service Principal
| Default environment variable name | Description | Example value |
| -- | -- | -- |
- | AZURE_SIGNALR_CONNECTIONSTRING | SignalR Service connection string with Service Principal | `Endpoint=https://{signalrName}.service.signalr.net;AuthType=aad;ClientId={};ClientSecret={};TenantId={};Version=1.0;` |
+ | AZURE_SIGNALR_CONNECTIONSTRING | SignalR Service connection string with Service Principal | `Endpoint=https://<SignalR-name>.service.signalr.net;AuthType=aad;ClientId=<client-ID>;ClientSecret=<client-secret>;TenantId=<tenant-ID>;Version=1.0;` |
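The connection string is a semicolon-delimited list of `key=value` pairs. As a standard-library-only Python sketch, you can split it into a dictionary before handing the values to a SignalR client or REST call:

```python
import os

# Parse "Endpoint=...;AccessKey=...;Version=1.0;" into a dict.
raw = os.environ["AZURE_SIGNALR_CONNECTIONSTRING"]
parts = dict(
    segment.split("=", 1)                      # split on the first '=' only,
    for segment in raw.rstrip(";").split(";")  # so base64 keys survive
    if segment
)
endpoint = parts["Endpoint"]
access_key = parts.get("AccessKey")  # absent for AAD-based strings
```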
## Next steps
service-connector How To Integrate Sql Database https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-connector/how-to-integrate-sql-database.md
This page shows all the supported compute services, clients, and authentication types for connecting services to Azure SQL Database using Service Connector.
## Default environment variable names or application properties
-Use the environment variable names and application properties listed below to connect a service to Azure SQL Database using a secret and a connection string.
+Use the environment variable names and application properties listed below to connect compute services to Azure SQL Database using a secret and a connection string.
### Connect an Azure App Service instance
-Use the connection details below to connect Azure App Service instances with .NET, Go, Java, Java - Spring Boot, PHP, Node.js, Python, Python - Django and Ruby. For each example below, replace the placeholder texts `<sql-server>`, `<sql-db>`, `<sql-user>`, and `<sql-pass>` with your server name, database name, user ID and password.
+Use the connection details below to connect Azure App Service instances with .NET, Go, Java, Java - Spring Boot, PHP, Node.js, Python, Python - Django, and Ruby. For each example below, replace the placeholder texts `<sql-server>`, `<sql-database>`, `<sql-username>`, and `<sql-password>` with your own server name, database name, user ID, and password.
#### Azure App Service with .NET (sqlClient)
> [!div class="mx-tdBreakAll"]
> | Default environment variable name | Description | Sample value |
> | -- | -- | -- |
-> | AZURE_SQL_CONNECTIONSTRING | Azure SQL Database connection string | `Data Source=<sql-server>.database.windows.net,1433;Initial Catalog=<sql-db>;User ID=<sql-user>;Password=<sql-pass>` |
+> | AZURE_SQL_CONNECTIONSTRING | Azure SQL Database connection string | `Data Source=<sql-server>.database.windows.net,1433;Initial Catalog=<sql-database>;User ID=<sql-username>;Password=<sql-password>` |
#### Azure App Service with Java Database Connectivity (JDBC)
> [!div class="mx-tdBreakAll"]
> | Default environment variable name | Description | Sample value |
> | -- | -- | -- |
-> | AZURE_SQL_CONNECTIONSTRING | Azure SQL Database connection string | `jdbc:sqlserver://<sql-server>.database.windows.net:1433;databaseName=<sql-db>;user=<sql-user>;password=<sql-pass>;` |
+> | AZURE_SQL_CONNECTIONSTRING | Azure SQL Database connection string | `jdbc:sqlserver://<sql-server>.database.windows.net:1433;databaseName=<sql-database>;user=<sql-username>;password=<sql-password>;` |
#### Azure App Service with Java Spring Boot (spring-boot-starter-jdbc)
> |--|-|-|
> | spring.datasource.url | Azure SQL Database datasource URL | `jdbc:sqlserver://<sql-server>.database.windows.net:1433;databaseName=<sql-database>;` |
> | spring.datasource.username | Azure SQL Database datasource username | `<sql-username>` |
-> | spring.datasource.password | Azure SQL Database datasource password | `<sql-pass>` |
+> | spring.datasource.password | Azure SQL Database datasource password | `<sql-password>` |
#### Azure App Service with Go (go-mssqldb)
> [!div class="mx-tdBreakAll"]
> | Default environment variable name | Description | Sample value |
> | -- | -- | -- |
-> | AZURE_SQL_CONNECTIONSTRING | Azure SQL Database connection string | `server=<sql-server>.database.windows.net;port=1433;database=<sql-db>;user id=<sql-user>;password=<sql-pass>;` |
+> | AZURE_SQL_CONNECTIONSTRING | Azure SQL Database connection string | `server=<sql-server>.database.windows.net;port=1433;database=<sql-database>;user id=<sql-username>;password=<sql-password>;` |
#### Azure App Service with Node.js
> |--|--|-|
> | AZURE_SQL_SERVER | Azure SQL Database server | `<sql-server>.database.windows.net` |
> | AZURE_SQL_PORT | Azure SQL Database port | `1433` |
-> | AZURE_SQL_DATABASE | Azure SQL Database database | `<sql-db>` |
-> | AZURE_SQL_USERNAME | Azure SQL Database username | `<sql-user>` |
-> | AZURE_SQL_PASSWORD | Azure SQL Database password | `<sql-pass>` |
+> | AZURE_SQL_DATABASE | Azure SQL Database database | `<sql-database>` |
+> | AZURE_SQL_USERNAME | Azure SQL Database username | `<sql-username>` |
+> | AZURE_SQL_PASSWORD | Azure SQL Database password | `<sql-password>` |
#### Azure App Service with PHP
> | Default environment variable name | Description | Sample value |
> |--|--|-|
> | AZURE_SQL_SERVERNAME | Azure SQL Database server name | `<sql-server>.database.windows.net` |
-> | AZURE_SQL_DATABASE | Azure SQL Database database | `<sql-db>` |
-> | AZURE_SQL_UID | Azure SQL Database unique identifier (UID) | `<sql-user>` |
-> | AZURE_SQL_PASSWORD | Azure SQL Database password | `<sql-pass>` |
+> | AZURE_SQL_DATABASE | Azure SQL Database database | `<sql-database>` |
+> | AZURE_SQL_UID | Azure SQL Database unique identifier (UID) | `<sql-username>` |
+> | AZURE_SQL_PASSWORD | Azure SQL Database password | `<sql-password>` |
#### Azure App Service with Python (pyodbc)
> |--|--|-|
> | AZURE_SQL_SERVER | Azure SQL Database server | `<sql-server>.database.windows.net` |
> | AZURE_SQL_PORT | Azure SQL Database port | `1433` |
-> | AZURE_SQL_DATABASE | Azure SQL Database database | `<sql-db>` |
-> | AZURE_SQL_USER | Azure SQL Database user | `<sql-user>` |
-> | AZURE_SQL_PASSWORD | Azure SQL Database password | `<sql-pass>` |
+> | AZURE_SQL_DATABASE | Azure SQL Database database | `<sql-database>` |
+> | AZURE_SQL_USER | Azure SQL Database user | `<sql-username>` |
+> | AZURE_SQL_PASSWORD | Azure SQL Database password | `<sql-password>` |
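A minimal sketch of assembling those variables into a `pyodbc` connection; the "ODBC Driver 18 for SQL Server" name is an assumption about which driver is installed on the host:

```python
import os
import pyodbc

# Assumes "ODBC Driver 18 for SQL Server" is installed on the host.
conn = pyodbc.connect(
    "DRIVER={ODBC Driver 18 for SQL Server};"
    f"SERVER={os.environ['AZURE_SQL_SERVER']},{os.environ['AZURE_SQL_PORT']};"
    f"DATABASE={os.environ['AZURE_SQL_DATABASE']};"
    f"UID={os.environ['AZURE_SQL_USER']};"
    f"PWD={os.environ['AZURE_SQL_PASSWORD']};"
    "Encrypt=yes;"
)
cursor = conn.cursor()
cursor.execute("SELECT @@VERSION")
print(cursor.fetchone()[0])
```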
#### Azure App Service with Django (mssql-django)
> |--|--|-|
> | AZURE_SQL_HOST | Azure SQL Database host | `<sql-server>.database.windows.net` |
> | AZURE_SQL_PORT | Azure SQL Database port | `1433` |
-> | AZURE_SQL_NAME | Azure SQL Database name | `<sql-db>` |
-> | AZURE_SQL_USER | Azure SQL Database user | `<sql-user>` |
-> | AZURE_SQL_PASSWORD | Azure SQL Database password | `<sql-pass>` |
+> | AZURE_SQL_NAME | Azure SQL Database name | `<sql-database>` |
+> | AZURE_SQL_USER | Azure SQL Database user | `<sql-username>` |
+> | AZURE_SQL_PASSWORD | Azure SQL Database password | `<sql-password>` |
#### Azure App Service with Ruby
> |--|--|-|
> | AZURE_SQL_HOST | Azure SQL Database host | `<sql-server>.database.windows.net` |
> | AZURE_SQL_PORT | Azure SQL Database port | `1433` |
-> | AZURE_SQL_DATABASE | Azure SQL Database database | `<sql-db>` |
-> | AZURE_SQL_USERNAME | Azure SQL Database username | `<sql-user>` |
-> | AZURE_SQL_PASSWORD | Azure SQL Database password | `<sql-pass>` |
+> | AZURE_SQL_DATABASE | Azure SQL Database database | `<sql-database>` |
+> | AZURE_SQL_USERNAME | Azure SQL Database username | `<sql-username>` |
+> | AZURE_SQL_PASSWORD | Azure SQL Database password | `<sql-password>` |
### Connect an Azure Spring Cloud instance
Use the connection details below to connect Azure Spring Cloud instances with Java Spring Boot.
> [!div class="mx-tdBreakAll"]
> | Default environment variable name | Description | Sample value |
> |--|-|-|
-> | spring.datasource.url | Azure SQL Database datasource URL | `jdbc:sqlserver://<sql-server>.database.windows.net:1433;databaseName=<sql-db>;` |
-> | spring.datasource.username | Azure SQL Database datasource username | `<sql-user>` |
-> | spring.datasource.password | Azure SQL Database datasource password | `<sql-pass>` |
+> | spring.datasource.url | Azure SQL Database datasource URL | `jdbc:sqlserver://<sql-server>.database.windows.net:1433;databaseName=<sql-database>;` |
+> | spring.datasource.username | Azure SQL Database datasource username | `<sql-username>` |
+> | spring.datasource.password | Azure SQL Database datasource password | `<sql-password>` |
## Next steps
service-connector How To Integrate Storage Blob https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-connector/how-to-integrate-storage-blob.md
This page shows the supported authentication types and client types of Azure Blob Storage using Service Connector.
## Default environment variable names or application properties
+Use the connection details below to connect compute services to Blob Storage. For each example below, replace the placeholder texts `<account-name>`, `<account-key>`, `<client-ID>`, `<client-secret>`, `<tenant-ID>`, and `<storage-account-name>` with your own account name, account key, client ID, client secret, tenant ID, and storage account name.
+ ### .NET, Java, Node.js, Python
+ #### Secret / connection string
-| Default environment variable name | Description | Example value |
-| | | |
-| AZURE_STORAGEBLOB_CONNECTIONSTRING | Blob storage connection string | `DefaultEndpointsProtocol=https;AccountName={accountName};AccountKey={****};EndpointSuffix=core.windows.net` |
+| Default environment variable name | Description | Example value |
+||--||
+| AZURE_STORAGEBLOB_CONNECTIONSTRING | Blob Storage connection string | `DefaultEndpointsProtocol=https;AccountName=<account-name>;AccountKey=<account-key>;EndpointSuffix=core.windows.net` |
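A minimal Python sketch (assuming the `azure-storage-blob` package); `<container-name>` is a placeholder for an existing container:

```python
import os
from azure.storage.blob import BlobServiceClient

# Read the connection string that Service Connector injected.
service = BlobServiceClient.from_connection_string(
    os.environ["AZURE_STORAGEBLOB_CONNECTIONSTRING"])

# <container-name> is a placeholder for an existing container.
container = service.get_container_client("<container-name>")
for blob in container.list_blobs():
    print(blob.name)
```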
#### System-assigned managed identity
-| Default environment variable name | Description | Example value |
-| | | |
-| AZURE_STORAGEBLOB_RESOURCEENDPOINT | Blob storage endpoint | `https://{storageAccountName}.blob.core.windows.net/` |
+| Default environment variable name | Description | Example value |
+||--||
+| AZURE_STORAGEBLOB_RESOURCEENDPOINT | Blob Storage endpoint | `https://<storage-account-name>.blob.core.windows.net/` |
#### User-assigned managed identity
-| Default environment variable name | Description | Example value |
-| | | |
-| AZURE_STORAGEBLOB_RESOURCEENDPOINT | Blob storage endpoint | `https://{storageAccountName}.blob.core.windows.net/` |
-| AZURE_STORAGEBLOB_CLIENTID | Your client ID | `{yourClientID}` |
+| Default environment variable name | Description | Example value |
+||--||
+| AZURE_STORAGEBLOB_RESOURCEENDPOINT | Blob Storage endpoint | `https://<storage-account-name>.blob.core.windows.net/` |
+| AZURE_STORAGEBLOB_CLIENTID | Your client ID | `<client-ID>` |
#### Service principal
-| Default environment variable name | Description | Example value |
-| | | |
-| AZURE_STORAGEBLOB_RESOURCEENDPOINT | Blob storage endpoint | `https://{storageAccountName}.blob.core.windows.net/` |
-| AZURE_STORAGEBLOB_CLIENTID | Your client ID | `{yourClientID}` |
-| AZURE_STORAGEBLOB_CLIENTSECRET | Your client secret | `{yourClientSecret}` |
-| AZURE_STORAGEBLOB_TENANTID | Your tenant ID | `{yourTenantID}` |
+| Default environment variable name | Description | Example value |
+||--||
+| AZURE_STORAGEBLOB_RESOURCEENDPOINT | Blob Storage endpoint | `https://<storage-account-name>.blob.core.windows.net/` |
+| AZURE_STORAGEBLOB_CLIENTID | Your client ID | `<client-ID>` |
+| AZURE_STORAGEBLOB_CLIENTSECRET | Your client secret | `<client-secret>` |
+| AZURE_STORAGEBLOB_TENANTID | Your tenant ID | `<tenant-ID>` |
### Java - Spring Boot
#### Java - Spring Boot secret / connection string
-| Application properties | Description | Example value |
-| | | |
-| azure.storage.account-name | Your blob storage account name | `{storageAccountName}` |
-| azure.storage.account-key | Your blob storage account key | `{yourSecret}` |
-| azure.storage.blob-endpoint | Your blob storage endpoint | `https://{storageAccountName}.blob.core.windows.net/` |
+| Application properties | Description | Example value |
+|--|--||
+| azure.storage.account-name | Your Blob Storage account name | `<storage-account-name>` |
+| azure.storage.account-key | Your Blob Storage account key | `<account-key>` |
+| azure.storage.blob-endpoint | Your Blob Storage endpoint | `https://<storage-account-name>.blob.core.windows.net/` |
## Next steps
service-connector How To Integrate Storage File https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-connector/how-to-integrate-storage-file.md
Title: Integrate Azure File Storage with Service Connector
-description: Integrate Azure File Storage into your application with Service Connector
+ Title: Integrate Azure Files with Service Connector
+description: Integrate Azure Files into your application with Service Connector
Last updated 06/13/2022
-# Integrate Azure File Storage with Service Connector
+# Integrate Azure Files with Service Connector
-This page shows the supported authentication types and client types of Azure File Storage using Service Connector. You might still be able to connect to Azure File Storage in other programming languages without using Service Connector. This page also shows default environment variable names and values (or Spring Boot configuration) you get when you create the service connection. You can learn more about [Service Connector environment variable naming convention](concept-service-connector-internals.md).
+This page shows the supported authentication types and client types of Azure Files using Service Connector. You might still be able to connect to Azure Files in other programming languages without using Service Connector. This page also shows default environment variable names and values (or Spring Boot configuration) you get when you create the service connection. You can learn more about [Service Connector environment variable naming convention](concept-service-connector-internals.md).
## Supported compute service
## Supported authentication types and client types
-| Client Type | System-assigned managed identity | User-assigned managed identity | Secret / connection string | Service principal |
+| Client Type | System-assigned managed identity | User-assigned managed identity | Secret / connection string | Service principal |
|--|-|--|--|-|
| .NET | | | ![yes icon](./media/green-check.png) | |
| Java | | | ![yes icon](./media/green-check.png) | |
## Default environment variable names or application properties
+Use the connection details below to connect compute services to Azure Files. For each example below, replace the placeholder texts `<account-name>`, `<account-key>`, `<storage-account-name>`, and `<storage-account-key>` with your own account name, account key, storage account name, and storage account key.
+ ### .NET, Java, Node.js, Python, PHP, and Ruby secret / connection string
-| Default environment variable name | Description | Example value |
-| | | |
-| AZURE_STORAGEFILE_CONNECTIONSTRING | File storage connection string | `DefaultEndpointsProtocol=https;AccountName={accountName};AccountKey={****};EndpointSuffix=core.windows.net` |
+| Default environment variable name | Description | Example value |
+||--|-|
+| AZURE_STORAGEFILE_CONNECTIONSTRING | File storage connection string | `DefaultEndpointsProtocol=https;AccountName=<account-name>;AccountKey=<account-key>;EndpointSuffix=core.windows.net` |
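A minimal Python sketch (assuming the `azure-storage-file-share` package) that lists the shares reachable through the injected connection string:

```python
import os
from azure.storage.fileshare import ShareServiceClient

# Read the connection string that Service Connector injected.
service = ShareServiceClient.from_connection_string(
    os.environ["AZURE_STORAGEFILE_CONNECTIONSTRING"])
for share in service.list_shares():
    print(share.name)
```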
### Java - Spring Boot secret / connection string
-| Application properties | Description | Example value |
-| | | |
-| azure.storage.account-name | File storage account name | `{storageAccountName}` |
-| azure.storage.account-key | File storage account key | `{yourSecret}` |
-| azure.storage.file-endpoint | File storage endpoint | `https://{storageAccountName}.file.core.windows.net/` |
+| Application properties | Description | Example value |
+|--|||
+| azure.storage.account-name | File storage account name | `<storage-account-name>` |
+| azure.storage.account-key | File storage account key | `<storage-account-key>` |
+| azure.storage.file-endpoint | File storage endpoint | `https://<storage-account-name>.file.core.windows.net/` |
## Next steps
service-connector How To Integrate Storage Queue https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-connector/how-to-integrate-storage-queue.md
This page shows the supported authentication types and client types of Azure Queue Storage using Service Connector.
## Supported authentication types and client types
-| Client type | System-assigned managed identity | User-assigned managed identity | Secret / connection string | Service principal |
-| | | | | |
-| .NET | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
-| Java | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
-| Java - Spring Boot | ![yes icon](./media/green-check.png) | | | |
-| Node.js | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
-| Python | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
+| Client type | System-assigned managed identity | User-assigned managed identity | Secret / connection string | Service principal |
+|--|--|--|--|--|
+| .NET | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
+| Java | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
+| Java - Spring Boot | ![yes icon](./media/green-check.png) | | | |
+| Node.js | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
+| Python | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
## Default environment variable names or application properties
+Use the connection details below to connect compute services to Queue Storage. For each example below, replace the placeholder text `<account-name>`, `<account-key>`, `<client-ID>`, `<client-secret>`, `<tenant-ID>`, and `<storage-account-name>` with your own account name, account key, client ID, client secret, tenant ID, and storage account name.
+### .NET, Java, Node.js, Python
+
+#### Secret / connection string
-| Default environment variable name | Description | Example value |
-| | | |
-| AZURE_STORAGEQUEUE_CONNECTIONSTRING | Queue storage connection string | `DefaultEndpointsProtocol=https;AccountName={accountName};AccountKey={****};EndpointSuffix=core.windows.net` |
+| Default environment variable name | Description | Example value |
+|-||-|
+| AZURE_STORAGEQUEUE_CONNECTIONSTRING | Queue storage connection string | `DefaultEndpointsProtocol=https;AccountName=<account-name>;AccountKey=<account-key>;EndpointSuffix=core.windows.net` |
#### System-assigned managed identity
-| Default environment variable name | Description | Example value |
-| | | |
-| AZURE_STORAGEQUEUE_RESOURCEENDPOINT | Queue storage endpoint | `https://{StorageAccountName}.queue.core.windows.net/` |
+| Default environment variable name | Description | Example value |
+|-||-|
+| AZURE_STORAGEQUEUE_RESOURCEENDPOINT | Queue storage endpoint | `https://<storage-account-name>.queue.core.windows.net/` |
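As a hedged illustration, assuming a Python client with the `azure-identity` and `azure-storage-queue` packages, the injected endpoint can be combined with a managed identity credential; `DefaultAzureCredential` resolves to the system-assigned identity when the code runs on the connected compute service:

```python
import os
from azure.identity import DefaultAzureCredential
from azure.storage.queue import QueueServiceClient

# Endpoint injected by Service Connector.
endpoint = os.environ["AZURE_STORAGEQUEUE_RESOURCEENDPOINT"]

# On Azure compute with a system-assigned managed identity enabled,
# DefaultAzureCredential picks up that identity automatically.
service = QueueServiceClient(account_url=endpoint, credential=DefaultAzureCredential())

for queue in service.list_queues():
    print(queue.name)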
#### User-assigned managed identity
-| Default environment variable name | Description | Example value |
-| | | |
-| AZURE_STORAGEQUEUE_RESOURCEENDPOINT | Queue storage endpoint | `https://{storageAccountName}.queue.core.windows.net/` |
-| AZURE_STORAGEQUEUE_CLIENTID | Your client ID | `{yourClientID}` |
+| Default environment variable name | Description | Example value |
+|-||-|
+| AZURE_STORAGEQUEUE_RESOURCEENDPOINT | Queue storage endpoint | `https://<storage-account-name>.queue.core.windows.net/` |
+| AZURE_STORAGEQUEUE_CLIENTID | Your client ID | `<client-ID>` |
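For a user-assigned identity, the client ID must be passed explicitly. A sketch under the same Python assumptions:

```python
import os
from azure.identity import ManagedIdentityCredential
from azure.storage.queue import QueueServiceClient

endpoint = os.environ["AZURE_STORAGEQUEUE_RESOURCEENDPOINT"]
client_id = os.environ["AZURE_STORAGEQUEUE_CLIENTID"]

# Select the user-assigned identity by its client ID.
credential = ManagedIdentityCredential(client_id=client_id)
service = QueueServiceClient(account_url=endpoint, credential=credential)
```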
#### Service principal
-| Default environment variable name | Description | Example value |
-| | | |
-| AZURE_STORAGEQUEUE_RESOURCEENDPOINT | Queue storage endpoint | `https://{storageAccountName}.queue.core.windows.net/` |
-| AZURE_STORAGEQUEUE_CLIENTID | Your client ID | `{yourClientID}` |
-| AZURE_STORAGEQUEUE_CLIENTSECRET | Your client secret | `{yourClientSecret}` |
-| AZURE_STORAGEQUEUE_TENANTID | Your tenant ID | `{yourTenantID}` |
+| Default environment variable name | Description | Example value |
+|-||-|
+| AZURE_STORAGEQUEUE_RESOURCEENDPOINT | Queue storage endpoint | `https://<storage-account-name>.queue.core.windows.net/` |
+| AZURE_STORAGEQUEUE_CLIENTID | Your client ID | `<client-ID>` |
+| AZURE_STORAGEQUEUE_CLIENTSECRET | Your client secret | `<client-secret>` |
+| AZURE_STORAGEQUEUE_TENANTID | Your tenant ID | `<tenant-ID>` |
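For a service principal, the three identity variables map directly onto a client secret credential. A sketch, again assuming a Python client with `azure-identity` and `azure-storage-queue`:

```python
import os
from azure.identity import ClientSecretCredential
from azure.storage.queue import QueueServiceClient

# All four variables below are injected by Service Connector.
credential = ClientSecretCredential(
    tenant_id=os.environ["AZURE_STORAGEQUEUE_TENANTID"],
    client_id=os.environ["AZURE_STORAGEQUEUE_CLIENTID"],
    client_secret=os.environ["AZURE_STORAGEQUEUE_CLIENTSECRET"],
)

service = QueueServiceClient(
    account_url=os.environ["AZURE_STORAGEQUEUE_RESOURCEENDPOINT"],
    credential=credential,
)
```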
### Java - Spring Boot

#### Java - Spring Boot secret / connection string
-| Application properties | Description | Example value |
-| | | |
-| azure.storage.account-name | Queue storage account name | `{storageAccountName}` |
-| azure.storage.account-key | Queue storage account key | `{yourSecret}` |
+| Application properties | Description | Example value |
+|-|-|--|
+| azure.storage.account-name | Queue storage account name | `<storage-account-name>` |
+| azure.storage.account-key | Queue storage account key | `<account-key>` |
## Next steps
service-connector How To Integrate Storage Table https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-connector/how-to-integrate-storage-table.md
This page shows the supported authentication types and client types of Azure Tab
## Default environment variable names or application properties
+Use the connection details below to connect compute services to Azure Table Storage. For each example below, replace the placeholder text `<account-name>` and `<account-key>` with your own account name and account key.
+### .NET, Java, Node.js, and Python secret / connection string
-| Default environment variable name | Description | Example value |
-| | | |
-| AZURE_STORAGETABLE_CONNECTIONSTRING | Table storage connection string | `DefaultEndpointsProtocol=https;AccountName={accountName};AccountKey={****};EndpointSuffix=core.windows.net` |
+| Default environment variable name | Description | Example value |
+|-||-|
+| AZURE_STORAGETABLE_CONNECTIONSTRING | Table storage connection string | `DefaultEndpointsProtocol=https;AccountName=<account-name>;AccountKey=<account-key>;EndpointSuffix=core.windows.net` |
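As a minimal sketch, assuming a Python client and the `azure-data-tables` package, the connection string can be consumed like this:

```python
import os
from azure.data.tables import TableServiceClient

# Connection string injected by Service Connector.
conn_str = os.environ["AZURE_STORAGETABLE_CONNECTIONSTRING"]

service = TableServiceClient.from_connection_string(conn_str)

# List tables to confirm the connection works.
for table in service.list_tables():
    print(table.name)
```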
## Next steps
service-connector How To Integrate Web Pubsub https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-connector/how-to-integrate-web-pubsub.md
This page shows all the supported compute services, clients, and authentication
## Default environment variable names or application properties
-Use the environment variable names and application properties listed below to connect an Azure service to Web PubSub using .NET, Java, Node.js, or Python. For each example below, replace the placeholder texts `<name>`, `<client-id>`, `<client-secret`, `<access-key>`, and `<tenant-id>` with your resource name, client ID, client secret, access-key, and tenant ID.
+Use the environment variable names and application properties listed below to connect an Azure service to Web PubSub using .NET, Java, Node.js, or Python. For each example below, replace the placeholder text `<name>`, `<client-id>`, `<client-secret>`, `<access-key>`, and `<tenant-id>` with your own resource name, client ID, client secret, access key, and tenant ID.
### System-assigned managed identity
spring-cloud Tutorial Circuit Breaker https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-cloud/tutorial-circuit-breaker.md
# Tutorial: Use Circuit Breaker Dashboard with Azure Spring Apps
+> [!WARNING]
+> Hystrix is no longer in active development and is currently in maintenance mode.
+
> [!NOTE]
> Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams.
storage Storage Account Recover https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-account-recover.md
To restore a deleted storage account from within another storage account, follow
- [Storage account overview](storage-account-overview.md)
- [Create a storage account](storage-account-create.md)
-- [Upgrade to a general-purpose v2 storage account](storage-account-upgrade.md)
- [Move an Azure Storage account to another region](storage-account-move.md)
storage Storage Solution Large Dataset Moderate High Network https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-solution-large-dataset-moderate-high-network.md
Previously updated : 04/01/2019 Last updated : 06/28/2022
If the available network bandwidth is high, use one of the following tools.
- **AzCopy** - Use this command-line tool to easily copy data to and from Azure Blobs, Files, and Table storage with optimal performance. AzCopy supports concurrency and parallelism, and the ability to resume copy operations when interrupted.
- **Azure Storage REST APIs/SDKs** – When building an application, you can develop the application against Azure Storage REST APIs and use the Azure SDKs offered in multiple languages.
-- **Azure Data Box family for online transfers** – Data Box Edge and Data Box Gateway are online network devices that can move data into and out of Azure. Use Data Box Edge physical device when there is a simultaneous need for continuous ingestion and pre-processing of the data prior to upload. Data Box Gateway is a virtual version of the device with the same data transfer capabilities. In each case, the data transfer is managed by the device.
+- **Azure Data Box family for online transfers** – Azure Stack Edge and Data Box Gateway are online network devices that can move data into and out of Azure. Use the Azure Stack Edge physical device when there is a simultaneous need for continuous ingestion and pre-processing of the data prior to upload. Data Box Gateway is a virtual version of the device with the same data transfer capabilities. In each case, the data transfer is managed by the device.
- **Azure Data Factory** – Data Factory should be used to scale out a transfer operation, and if there is a need for orchestration and enterprise-grade monitoring capabilities. Use Data Factory to regularly transfer files between several Azure services, on-premises, or a combination of the two. With Data Factory, you can create and schedule data-driven workflows (called pipelines) that ingest data from disparate data stores and automate data movement and data transformation.

## Comparison of key capabilities
If using online data transfer, use the table in the following section for high n
### High network bandwidth
-| | Tools AzCopy, <br>Azure PowerShell, <br>Azure CLI | Azure Storage REST APIs, SDKs | Data Box Gateway or Data Box Edge | Azure Data Factory |
+| | Tools AzCopy, <br>Azure PowerShell, <br>Azure CLI | Azure Storage REST APIs, SDKs | Data Box Gateway or Azure Stack Edge | Azure Data Factory |
|-||-|-|--|
| **Data type** | Azure Blobs, Azure Files, Azure Tables | Azure Blobs, Azure Files, Azure Tables | Azure Blobs, Azure Files | Supports 70+ data connectors for data stores and formats |
| **Form factor** | Command-line tools | Programmatic interface | Microsoft supplies a virtual <br>or physical device | Service in Azure portal |
If using online data transfer, use the table in the following section for high n
| **Data pre-processing** | No | No | Yes (With Edge compute) | Yes |
| **Transfer from other clouds** | No | No | No | Yes |
| **User type** | IT Pro or dev | Dev | IT Pro | IT Pro |
-| **Pricing** | Free, data egress charges apply | Free, data egress charges apply | [Pricing](https://azure.microsoft.com/pricing/details/storage/databox/edge/) | [Pricing](https://azure.microsoft.com/pricing/details/data-factory/) |
+| **Pricing** | Free, data egress charges apply | Free, data egress charges apply | [Azure Stack Edge pricing](https://azure.microsoft.com/pricing/details/azure-stack/edge/) <br> [Data Box Gateway pricing](https://azure.microsoft.com/pricing/details/databox/gateway/) | [Pricing](https://azure.microsoft.com/pricing/details/data-factory/) |
## Next steps
If using online data transfer, use the table in the following section for high n
- [Transfer data with Data Box](../../databox/data-box-quickstart-portal.md).
- [Transfer data with AzCopy](./storage-use-azcopy-v10.md).
- [Transfer data with Data Box Gateway](../../databox-gateway/data-box-gateway-deploy-add-shares.md).
- - [Transform data with Data Box Edge before sending to Azure](../../databox-online/azure-stack-edge-deploy-configure-compute.md).
+ - [Transform data with Azure Stack Edge before sending to Azure](../../databox-online/azure-stack-edge-deploy-configure-compute.md).
- [Learn how to transfer data with Azure Data Factory](../../data-factory/quickstart-create-data-factory-portal.md).
storage Partner Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/solution-integration/validated-partners/container-solutions/partner-overview.md
This article highlights Microsoft partner solutions that enable automation, data
| ![Portworx company logo](./media/portworx-logo.png) |**Portworx**<br>Portworx by Pure Storage is the Kubernetes Data Services Platform enterprises trust to run mission-critical applications in containers in production.<br><br>Portworx provides a fully integrated solution for persistent storage, data protection, disaster recovery, data security, cross-cloud and data migrations, and automated capacity management for applications running on Kubernetes.|[Partner page](https://portworx.com/azure/)<br>[Azure Marketplace](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/portworx.portworx_enterprise?tab=overview)|
| ![Robin.io company logo](./media/robin-logo.png) |**Robin.io**<br>Robin.io provides an application and data management platform that enables enterprises and 5G service providers to deliver complex application pipelines as a service.<br><br>Robin Cloud Native Storage (CNS) brings advanced data management capabilities to Azure Kubernetes Service. Robin CNS seamlessly integrates with Azure Disk Storage to simplify management of stateful applications. Developers and DevOps teams can deploy Robin CNS as a standard Kubernetes operator on AKS. Robin Cloud Native Storage helps simplify data management operations such as BCDR and cloning of entire applications. |[Partner page](https://robin.io/robin-cloud-native-storage-for-microsoft-aks/)|
| ![NetApp company logo](./media/astra-logo.jpg) |**NetApp**<br>NetApp is a global cloud-led, data-centric software company that empowers organizations to lead with data in the age of accelerated digital transformation.<br><br>NetApp Astra Control Service is a fully managed service that makes it easier for customers to manage, protect, and move their data-rich containerized workloads running on Kubernetes within and across public clouds and on-premises. Astra Control provides persistent container storage with Azure NetApp Files offering advanced application-aware data management functionality (like snapshot-revert, backup-restore, activity log, and active cloning) for data protection, disaster recovery, data audit, and migration use-cases for your modern apps. |[Partner page](https://cloud.netapp.com/astra)<br>[Azure Marketplace](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/netapp.astra-info?tab=Overview)|
-| ![Rackware company logo](./media/rackware-logo.png) |**Rackware**<br>RackWare provides an intelligent highly automated Hybrid Cloud Management Platform that extends across physical and virtual environments.<br><br>RackWare SWIFT is a converged disaster recovery, backup and migration solution for Kubernetes and OpenShift. It is a cross-platform, cross-cloud and cross-version solution that enables you to move and protect your stateful Kubernetes applications from any on-premises or cloud environment to Azure Kubernetes Service (AKS) and Azure Storage.|[Partner page](https://www.rackwareinc.com/rackware-swift-microsoft-azure)|
+| ![Rackware company logo](./media/rackware-logo.png) |**Rackware**<br>RackWare provides an intelligent highly automated Hybrid Cloud Management Platform that extends across physical and virtual environments.<br><br>RackWare SWIFT is a converged disaster recovery, backup and migration solution for Kubernetes and OpenShift. It is a cross-platform, cross-cloud and cross-version solution that enables you to move and protect your stateful Kubernetes applications from any on-premises or cloud environment to Azure Kubernetes Service (AKS) and Azure Storage.|[Partner page](https://www.rackwareinc.com/rackware-swift-microsoft-azure)<br>[Azure Marketplace](https://azuremarketplace.microsoft.com/en-us/marketplace/apps?search=rackware%20swift&page=1&filters=virtual-machine-images)|
| ![Ondat company logo](./media/ondat-logo.png) |**Ondat**<br>Ondat, formerly StorageOS, provides an agnostic platform to run any data service anywhere, while ensuring industry-leading levels of application performance, availability and security.<br><br>Ondat cloud native storage solution delivers persistent container storage for your stateful applications in production. Fast, scalable, software-based block storage, Ondat delivers high availability, rapid application failover, replication, encryption of data in-transit & at-rest, data reduction with access controls and native Kubernetes integration.|[Partner page](https://www.ondat.io/platform/how-it-works)<br>[Azure Marketplace](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/storageosinc.storageos_sds?tab=Overview)|

Are you a storage partner but your solution is not listed yet? Send us your info [here](https://forms.office.com/pages/responsepage.aspx?id=v4j5cvGGr0GRqy180BHbR3i8TQB_XnRAsV3-7XmQFpFUQjY4QlJYUzFHQ0ZBVDNYWERaUlNRVU5IMyQlQCN0PWcu).
storage Table Storage How To Use Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/tables/table-storage-how-to-use-powershell.md
Title: Perform Azure Table storage operations with PowerShell | Microsoft Docs description: Learn how to run common tasks such as creating, querying, deleting data from Azure Table storage account by using PowerShell.-+ Previously updated : 04/05/2019- Last updated : 06/23/2022+
Azure Table storage is a NoSQL datastore that you can use to store and query huge sets of structured, non-relational data. The main components of the service are tables, entities, and properties. A table is a collection of entities. An entity is a set of properties. Each entity can have up to 252 properties, which are all name-value pairs. This article assumes that you are already familiar with the Azure Table Storage Service concepts. For detailed information, see [Understanding the Table Service Data Model](/rest/api/storageservices/Understanding-the-Table-Service-Data-Model) and [Get started with Azure Table storage using .NET](../../cosmos-db/tutorial-develop-table-dotnet.md).
-This how-to article covers common Azure Table storage operations. You learn how to:
+This how-to article covers common Azure Table storage operations. You learn how to:
> [!div class="checklist"]
> * Create a table
This how-to article covers common Azure Table storage operations. You learn how
> * Delete table entities
> * Delete a table
-This how-to article shows you how to create a new Azure Storage account in a new resource group so you can easily remove it when you're done. If you'd rather use an existing Storage account, you can do that instead.
+This how-to article shows you how to create a new storage account in a new resource group so you can easily remove it when you're done. You can also use an existing storage account.
The examples require Az PowerShell modules `Az.Storage (1.1.0 or greater)` and `Az.Resources (1.2.0 or greater)`. In a PowerShell window, run `Get-Module -ListAvailable Az*` to find the version. If nothing is displayed, or you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-az-ps).

> [!IMPORTANT]
-> Using this Azure feature from PowerShell requires that you have the `Az` module installed. The current version of `AzTable` is not compatible with the older AzureRM module.
-> Follow the [latest install instructions for installing Az module](/powershell/azure/install-az-ps) if needed.
+> Using this Azure feature from PowerShell requires that you have the `Az` module installed. The current version of `AzTable` is not compatible with the older AzureRM module. Follow the [latest install instructions for installing Az module](/powershell/azure/install-az-ps) if needed.
+>
+> For module name compatibility reasons, this module is also published under the previous name `AzureRmStorageTables` in PowerShell Gallery. This document will reference the new name only.
After Azure PowerShell is installed or updated, you must install module **AzTable**, which has the commands for managing the entities. To install this module, run PowerShell as an administrator and use the **Install-Module** command.
-> [!IMPORTANT]
-> For module name compatibility reasons we are still publishing this same module under the old name `AzureRmStorageTables` in PowerShell Gallery. This document will reference the new name only.
-
```powershell
Install-Module AzTable
```
+## Authorizing table data operations
+
+The AzTable PowerShell module supports authorization with the account access key via Shared Key authorization. The examples in this article show how to authorize table data operations via Shared Key.
+
+Azure Table Storage supports authorization with Azure AD. However, the AzTable PowerShell module does not natively support authorization with Azure AD. Using Azure AD with the AzTable module requires that you call methods in the .NET client library from PowerShell.
+ ## Sign in to Azure
-Sign in to your Azure subscription with the `Add-AzAccount` command and follow the on-screen directions.
+To get started, sign in to your Azure subscription with the `Add-AzAccount` command and follow the on-screen directions.
```powershell
Add-AzAccount
```
$location = "eastus"
## Create resource group
-Create a resource group with the [New-AzResourceGroup](/powershell/module/az.resources/new-azresourcegroup) command.
+Create a resource group with the [New-AzResourceGroup](/powershell/module/az.resources/new-azresourcegroup) command.
An Azure resource group is a logical container into which Azure resources are deployed and managed. Store the resource group name in a variable for future use. In this example, a resource group named *pshtablesrg* is created in the *eastus* region.
To perform operations on a table, you need a reference to the specific table. Ge
$storageTable = Get-AzStorageTable -Name $tableName -Context $ctx
```
-## Reference CloudTable property of a specific table
+## Reference the CloudTable property of a specific table
> [!IMPORTANT]
-> Usage of CloudTable is mandatory when working with **AzTable** PowerShell module. Call the **Get-AzStorageTable** command to get the reference to this object. This command also creates the table if it does not already exist.
+> Using the **CloudTable** property is mandatory when working with table data via the **AzTable** PowerShell module. Call the **Get-AzStorageTable** command to get the reference to this object. This command also creates the table if it does not already exist.
-To perform operations on a table using **AzTable**, you need a reference to CloudTable property of a specific table.
+To perform operations on a table using **AzTable**, return a reference to the **CloudTable** property of a specific table. The **CloudTable** property exposes the .NET methods available for managing table data from PowerShell.
```powershell
-$cloudTable = (Get-AzStorageTable ΓÇôName $tableName ΓÇôContext $ctx).CloudTable
+$cloudTable = $storageTable.CloudTable
```

[!INCLUDE [storage-table-entities-powershell-include](../../../includes/storage-table-entities-powershell-include.md)]
Remove-AzResourceGroup -Name $resourceGroup
## Next steps
-In this how-to article, you learned about common Azure Table storage operations with PowerShell, including how to:
+In this how-to article, you learned about common Azure Table storage operations with PowerShell, including how to:
> [!div class="checklist"]
> * Create a table
stream-analytics Stream Analytics Managed Identities Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/stream-analytics-managed-identities-overview.md
Previously updated : 06/09/2022 Last updated : 06/28/2022

# Managed identities for Azure Stream Analytics
Below is a table that shows Azure Stream Analytics inputs and outputs that suppo
| | IoT Hubs | No (available with a workaround: users can route events to Event Hubs) | No |
| | Blob/ADLS Gen 2 | Yes | Yes |
| Reference Data | Blob/ADLS Gen 2 | Yes | Yes |
-| | SQL | Yes (preview) | Yes |
+| | SQL | Yes | Yes |
| Outputs | Event Hubs | Yes | Yes |
| | SQL Database | Yes | Yes |
| | Blob/ADLS Gen 2 | Yes | Yes |
virtual-machine-scale-sets Virtual Machine Scale Sets Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/virtual-machine-scale-sets-troubleshoot.md
Title: Troubleshoot autoscale with Virtual Machine Scale Sets description: Troubleshoot autoscale with Virtual Machine Scale Sets. Understand typical problems encountered and how to resolve them. -+ Last updated 06/25/2020 ms.reviwer: jushiman-+

# Troubleshooting autoscale with Virtual Machine Scale Sets
Some things to consider include:
If the data is not there, it implies the problem is with the diagnostic extension running in the VMs. If the data is there, it implies there is either a problem with your scale rules or with the Insights service. Check [Azure Status](https://azure.microsoft.com/status/).

Once you've been through these steps, if you're still having autoscale problems, you can try the following resources:
- * Read the forums on [Microsoft Q&A question page](/answers/topics/azure-virtual-machines.html), or [Stack overflow](https://stackoverflow.com/questions/tagged/azure)
+ * Visit the [Troubleshooting common issues with VM Scale Sets](https://docs.microsoft.com/troubleshoot/azure/virtual-machine-scale-sets/welcome-virtual-machine-scale-sets) page
+ * Read the forums on [Microsoft Q&A question page](/answers/topics/azure-virtual-machines.html), or [Stack Overflow](https://stackoverflow.com/questions/tagged/azure)
* Log a support call. Be prepared to share the template and a view of your performance data.

[audit]: ./media/virtual-machine-scale-sets-troubleshoot/image3.png
virtual-machines Oms Windows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/extensions/oms-windows.md
The following table provides a mapping of the version of the Windows Log Analyti
| Log Analytics Windows agent version | Log Analytics Windows VM extension version | Release Date | Release Notes |
|--|--|--|--|
+| 10.20.18067.0|1.0.18067 | March 2022 | <ul><li>Bug fix for perf counters</li><li>Enhancements to Agent Troubleshooter</li></ul> |
| 10.20.18064.0|1.0.18064 | December 2021 | <ul><li>Bug fix for intermittent crashes</li></ul> |
| 10.20.18062.0| 1.0.18062 | November 2021 | <ul><li>Minor bug fixes and stabilization improvements</li></ul> |
| 10.20.18053| 1.0.18053.0 | October 2020 | <ul><li>New Agent Troubleshooter</li><li>Updates to how the agent handles certificate changes to Azure services</li></ul> |
virtual-machines Tutorial Manage Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/tutorial-manage-vm.md
Debian credativ 8 credativ:Debian:8:lat
CoreOS CoreOS Stable CoreOS:CoreOS:Stable:latest CoreOS latest
```
-A full list can be seen by adding the `--all` argument. The image list can also be filtered by `--publisher` or `ΓÇô-offer`. In this example, the list is filtered for all images with an offer that matches *CentOS*.
+A full list can be seen by adding the `--all` parameter. The image list can also be filtered by `--publisher` or `--offer`. In this example, the list is filtered for all images with an offer that matches *CentOS*.
```azurecli-interactive
az vm image list --offer CentOS --all --output table
CentOS OpenLogic 6.5 OpenLogic:CentOS:6.5:6.5.20160309
CentOS OpenLogic 6.5 OpenLogic:CentOS:6.5:6.5.20170207 6.5.20170207
```
-To deploy a VM using a specific image, take note of the value in the *Urn* column, which consists of the publisher, offer, SKU, and optionally a version number to [identify](cli-ps-findimage.md#terminology) the image. When specifying the image, the image version number can be replaced with ΓÇ£latestΓÇ¥, which selects the latest version of the distribution. In this example, the `--image` argument is used to specify the latest version of a CentOS 6.5 image.
+To deploy a VM using a specific image, take note of the value in the *Urn* column, which consists of the publisher, offer, SKU, and optionally a version number to [identify](cli-ps-findimage.md#terminology) the image. When specifying the image, the image version number can be replaced with `latest`, which selects the latest version of the distribution. In this example, the `--image` parameter is used to specify the latest version of a CentOS 6.5 image.
```azurecli-interactive
az vm create --resource-group myResourceGroupVM --name myVM2 --image OpenLogic:CentOS:6.5:latest --generate-ssh-keys
```
Partial output:
### Create VM with specific size
-In the previous VM creation example, a size was not provided, which results in a default size. A VM size can be selected at creation time using [az vm create](/cli/azure/vm) and the `--size` argument.
+In the previous VM creation example, a size was not provided, which results in a default size. A VM size can be selected at creation time using [az vm create](/cli/azure/vm) and the `--size` parameter.
```azurecli-interactive
az vm create \
In this tutorial, you learned about basic VM creation and management such as how
Advance to the next tutorial to learn about VM disks. > [!div class="nextstepaction"]
-> [Create and Manage VM disks](./tutorial-manage-disks.md)
+> [Create and Manage VM disks](./tutorial-manage-disks.md)
virtual-machines States Billing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/states-billing.md
The following table provides a description of each instance state and indicates
| Power state | Description | Billing |
||||
+| Creating | Virtual machine is allocating resources. | Not Billed* |
| Starting| Virtual machine is powering up. | Billed |
| Running | Virtual machine is fully up. This state is the standard working state. | Billed |
| Stopping | This state is transitional between running and stopped. | Billed |
virtual-machines Vm Applications How To https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/vm-applications-how-to.md
Choose an option below for creating your VM application definition and version:
1. On the page for the application, select **Create a VM application version**. The **Create a VM Application Version** page will open.
1. Enter a version number like 1.0.0.
1. Select the region where you've uploaded your application package.
-1. Under **Source application package**, select **Browse**. Select the storage account, then the container where your package is located. Select the package from the list and then click **Select** when you're done.
+1. Under **Source application package**, select **Browse**. Select the storage account, then the container where your package is located. Select the package from the list and then click **Select** when you're done. Alternatively, you can paste the SAS URI in this field if preferred.
1. Type in the **Install script**. You can also provide the **Uninstall script** and **Update script**. See the [Overview](vm-applications.md#command-interpreter) for information on how to create the scripts.
1. If you have a default configuration file uploaded to a storage account, you can select it in **Default configuration**.
1. Select **Exclude from latest** if you don't want this version to appear as the latest version when you create a VM.
virtual-machines Redhat Rhui https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/redhat/redhat-rhui.md
This procedure is provided for reference only. RHEL PAYG images already have the
## Next steps
-* To create a Red Hat Enterprise Linux VM from an Azure Marketplace PAYG image and to use Azure-hosted RHUI, go to the [Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/RedHat.RHEL_6).
+* To create a Red Hat Enterprise Linux VM from an Azure Marketplace PAYG image and to use Azure-hosted RHUI, go to the [Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/RedHat.RedHatEnterpriseLinux610).
* To learn more about the Red Hat images in Azure, go to the [documentation page](./redhat-images.md). * Information on Red Hat support policies for all versions of RHEL can be found on the [Red Hat Enterprise Linux Life Cycle](https://access.redhat.com/support/policy/updates/errata) page.
virtual-network Manage Virtual Network https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/manage-virtual-network.md
Learn how to create and delete a virtual network and change settings, like DNS s
Complete the following tasks before completing steps in any section of this article:

- If you don't already have an Azure account, sign up for a [free trial account](https://azure.microsoft.com/free).
-- If using the portal, open https://portal.azure.com, and log in with your Azure account.
-- If using PowerShell commands to complete tasks in this article, either run the commands in the [Azure Cloud Shell](https://shell.azure.com/powershell), or by running PowerShell from your computer. The Azure Cloud Shell is a free interactive shell that you can use to run the steps in this article. It has common Azure tools preinstalled and configured to use with your account. This tutorial requires the Azure PowerShell module version 1.0.0 or later. Run `Get-Module -ListAvailable Az` to find the installed version. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-az-ps). If you are running PowerShell locally, you also need to run `Connect-AzAccount` to create a connection with Azure.
+- If using the portal, open https://portal.azure.com, and sign in with your Azure account.
+- If using PowerShell commands to complete tasks in this article, either run the commands in the [Azure Cloud Shell](https://shell.azure.com/powershell), or run PowerShell from your computer. The Azure Cloud Shell is a free interactive shell that you can use to run the steps in this article. It has common Azure tools preinstalled and configured to use with your account. This tutorial requires the Azure PowerShell module version 1.0.0 or later. Run `Get-Module -ListAvailable Az` to find the installed version. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-az-ps). If you're running PowerShell locally, you also need to run `Connect-AzAccount` to create a connection with Azure.
- If using Azure CLI commands to complete tasks in this article, run the commands via either [Azure Cloud Shell](https://shell.azure.com/bash) or the Azure CLI running locally. This tutorial requires the Azure CLI version 2.0.31 or later. Run `az --version` to find the installed version. If you need to install or upgrade, see [Install Azure CLI](/cli/azure/install-azure-cli). If you're running the Azure CLI locally, you also need to run `az login` to create a connection with Azure.
- The account you log into, or connect to Azure with, must be assigned to the [network contributor](../role-based-access-control/built-in-roles.md?toc=%2fazure%2fvirtual-network%2ftoc.json#network-contributor) role or to a [custom role](../role-based-access-control/custom-roles.md?toc=%2fazure%2fvirtual-network%2ftoc.json) that is assigned the appropriate actions listed in [Permissions](#permissions).
Complete the following tasks before completing steps in any section of this arti
1. Select **+ Create a resource** > **Networking** > **Virtual network**.
2. Enter or select values for the following settings, then select **Create**:
- - **Name**: The name must be unique in the [resource group](../azure-glossary-cloud-terminology.md?toc=%2fazure%2fvirtual-network%2ftoc.json#resource-group) that you select to create the virtual network in. You cannot change the name after the virtual network is created. You can create multiple virtual networks over time. For naming suggestions, see [Naming conventions](/azure/cloud-adoption-framework/ready/azure-best-practices/naming-and-tagging#naming-and-tagging-resources). Following a naming convention can help make it easier to manage multiple virtual networks.
- - **Address space**: The address space for a virtual network is composed of one or more non-overlapping address ranges that are specified in CIDR notation. The address range you define can be public or private (RFC 1918). Whether you define the address range as public or private, the address range is reachable only from within the virtual network, from interconnected virtual networks, and from any on-premises networks that you have connected to the virtual network. You cannot add the following address ranges:
+ - **Name**: The name must be unique in the [resource group](../azure-glossary-cloud-terminology.md?toc=%2fazure%2fvirtual-network%2ftoc.json#resource-group) that you select to create the virtual network in. You can't change the name after the virtual network is created. You can create multiple virtual networks over time. For naming suggestions, see [Naming conventions](/azure/cloud-adoption-framework/ready/azure-best-practices/naming-and-tagging#naming-and-tagging-resources). Following a naming convention can help make it easier to manage multiple virtual networks.
+ - **Address space**: The address space for a virtual network is composed of one or more non-overlapping address ranges that are specified in CIDR notation. The address range you define can be public or private (RFC 1918). Whether you define the address range as public or private, the address range is reachable only from within the virtual network, from interconnected virtual networks, and from any on-premises networks that you've connected to the virtual network. You can't add the following address ranges:
  - 224.0.0.0/4 (Multicast)
  - 255.255.255.255/32 (Broadcast)
  - 127.0.0.0/8 (Loopback)
Complete the following tasks before completing steps in any section of this arti
> If a virtual network has address ranges that overlap with another virtual network or on-premises network, the two networks can't be connected. Before you define an address range, consider whether you might want to connect the virtual network to other virtual networks or on-premises networks in the future. Microsoft recommends configuring virtual network address ranges with private address space or public address space owned by your organization.
>
- - **Subnet name**: The subnet name must be unique within the virtual network. You cannot change the subnet name after the subnet is created. The portal requires that you define one subnet when you create a virtual network, even though a virtual network isn't required to have any subnets. In the portal, you can define one or more subnets when you create a virtual network. You can add more subnets to the virtual network later, after the virtual network is created. To add a subnet to a virtual network, see [Manage subnets](virtual-network-manage-subnet.md). You can create a virtual network that has multiple subnets by using Azure CLI or PowerShell.
+ - **Subnet name**: The subnet name must be unique within the virtual network. You can't change the subnet name after the subnet is created. The portal requires that you define one subnet when you create a virtual network, even though a virtual network isn't required to have any subnets. In the portal, you can define one or more subnets when you create a virtual network. You can add more subnets to the virtual network later, after the virtual network is created. To add a subnet to a virtual network, see [Manage subnets](virtual-network-manage-subnet.md). You can create a virtual network that has multiple subnets by using Azure CLI or PowerShell.
>[!TIP]
>Sometimes, administrators create different subnets to filter or control traffic routing between the subnets. Before you define subnets, consider how you might want to filter and route traffic between your subnets. To learn more about filtering traffic between subnets, see [Network security groups](./network-security-groups-overview.md). Azure automatically routes traffic between subnets, but you can override Azure default routes. To learn more about Azure's default subnet traffic routing, see [Routing overview](virtual-networks-udr-overview.md).
>
- - **Subnet address range**: The range must be within the address space you entered for the virtual network. The smallest range you can specify is /29, which provides eight IP addresses for the subnet. Azure reserves the first and last address in each subnet for protocol conformance. Three additional addresses are reserved for Azure service usage. As a result, a virtual network with a subnet address range of /29 has only three usable IP addresses. If you plan to connect a virtual network to a VPN gateway, you must create a gateway subnet. Learn more about [specific address range considerations for gateway subnets](../vpn-gateway/vpn-gateway-about-vpn-gateway-settings.md?toc=%2fazure%2fvirtual-network%2ftoc.json#gwsub). You can change the address range after the subnet is created, under specific conditions. To learn how to change a subnet address range, see [Manage subnets](virtual-network-manage-subnet.md).
- - **Subscription**: Select a [subscription](../azure-glossary-cloud-terminology.md?toc=%2fazure%2fvirtual-network%2ftoc.json#subscription). You cannot use the same virtual network in more than one Azure subscription. However, you can connect a virtual network in one subscription to virtual networks in other subscriptions with [virtual network peering](virtual-network-peering-overview.md). Any Azure resource that you connect to the virtual network must be in the same subscription as the virtual network.
+ - **Subnet address range**: The range must be within the address space you entered for the virtual network. The smallest range you can specify is /29, which provides eight IP addresses for the subnet. Azure reserves the first and last address in each subnet for protocol conformance. Three more addresses are reserved for Azure service usage. As a result, a virtual network with a subnet address range of /29 has only three usable IP addresses. If you plan to connect a virtual network to a VPN gateway, you must create a gateway subnet. Learn more about [specific address range considerations for gateway subnets](../vpn-gateway/vpn-gateway-about-vpn-gateway-settings.md?toc=%2fazure%2fvirtual-network%2ftoc.json#gwsub). You can change the address range after the subnet is created, under specific conditions. To learn how to change a subnet address range, see [Manage subnets](virtual-network-manage-subnet.md).
+ - **Subscription**: Select a [subscription](../azure-glossary-cloud-terminology.md?toc=%2fazure%2fvirtual-network%2ftoc.json#subscription). You can't use the same virtual network in more than one Azure subscription. However, you can connect a virtual network in one subscription to virtual networks in other subscriptions with [virtual network peering](virtual-network-peering-overview.md). Any Azure resource that you connect to the virtual network must be in the same subscription as the virtual network.
- **Resource group**: Select an existing [resource group](../azure-resource-manager/management/overview.md?toc=%2fazure%2fvirtual-network%2ftoc.json#resource-groups) or create a new one. An Azure resource that you connect to the virtual network can be in the same resource group as the virtual network or in a different resource group.
- **Location**: Select an Azure [location](https://azure.microsoft.com/regions/), also known as a region. A virtual network can be in only one Azure location. However, you can connect a virtual network in one location to a virtual network in another location by using a VPN gateway. Any Azure resource that you connect to the virtual network must be in the same location as the virtual network.
Complete the following tasks before completing steps in any section of this arti
3. The following settings are listed for the virtual network you selected:

   - **Overview**: Provides information about the virtual network, including address space and DNS servers. The following screenshot shows the overview settings for a virtual network named **MyVNet**:
- ![Network interface overview](./media/manage-virtual-network/vnet-overview.png)
+ :::image type="content" source="media/manage-virtual-network/vnet-overview-inline.png" alt-text="Screenshot of the Virtual Network overview page. It includes essential information including resource group, subscription info, and DNS information." lightbox="media/manage-virtual-network/vnet-overview-expanded.png":::
- You can move a virtual network to a different subscription or resource group by selecting **Change** next to **Resource group** or **Subscription name**. To learn how to move a virtual network, see [Move resources to a different resource group or subscription](../azure-resource-manager/management/move-resource-group-and-subscription.md?toc=%2fazure%2fvirtual-network%2ftoc.json). The article lists prerequisites, and how to move resources by using the Azure portal, PowerShell, and Azure CLI. All resources that are connected to the virtual network must move with the virtual network.
+ You can move a virtual network to a different subscription, region, or resource group by selecting **Move** next to **Resource group**, **Location**, or **Subscription**. To learn how to move a virtual network, see [Move resources to a different resource group or subscription](../azure-resource-manager/management/move-resource-group-and-subscription.md?toc=%2fazure%2fvirtual-network%2ftoc.json). The article lists prerequisites, and how to move resources by using the Azure portal, PowerShell, and Azure CLI. All resources that are connected to the virtual network must move with the virtual network.
- **Address space**: The address spaces that are assigned to the virtual network are listed. To learn how to add and remove an address range to the address space, complete the steps in [Add or remove an address range](#add-or-remove-an-address-range).
- - **Connected devices**: Any resources that are connected to the virtual network are listed. In the preceding screenshot, three network interfaces and one load balancer are connected to the virtual network. Any new resources that you create and connect to the virtual network are listed. If you delete a resource that was connected to the virtual network, it no longer appear in the list.
+ - **Connected devices**: Any resources that are connected to the virtual network are listed. In the preceding screenshot, three network interfaces and one load balancer are connected to the virtual network. Any new resources that you create and connect to the virtual network are listed. If you delete a resource that was connected to the virtual network, it no longer appears in the list.
- **Subnets**: A list of subnets that exist within the virtual network is shown. To learn how to add and remove a subnet, see [Manage subnets](virtual-network-manage-subnet.md).
- **DNS servers**: You can specify whether the Azure internal DNS server or a custom DNS server provides name resolution for devices that are connected to the virtual network. When you create a virtual network by using the Azure portal, Azure's DNS servers are used for name resolution within a virtual network, by default. To modify the DNS servers, complete the steps in [Change DNS servers](#change-dns-servers) in this article.
- - **Peerings**: If there are existing peerings in the subscription, they are listed here. You can view settings for existing peerings, or create, change, or delete peerings. To learn more about peerings, see [Virtual network peering](virtual-network-peering-overview.md).
+ - **Peerings**: If there are existing peerings in the subscription, they're listed here. You can view settings for existing peerings, or create, change, or delete peerings. To learn more about peerings, see [Virtual network peering](virtual-network-peering-overview.md).
- **Properties**: Displays settings about the virtual network, including the virtual network's resource ID and the Azure subscription it is in.
- **Diagram**: The diagram provides a visual representation of all devices that are connected to the virtual network. The diagram has some key information about the devices. To manage a device in this view, in the diagram, select the device.
- **Common Azure settings**: To learn more about common Azure settings, see the following information:
Complete the following tasks before completing steps in any section of this arti
## Add or remove an address range
-You can add and remove address ranges for a virtual network. An address range must be specified in CIDR notation, and cannot overlap with other address ranges within the same virtual network. The address ranges you define can be public or private (RFC 1918). Whether you define the address range as public or private, the address range is reachable only from within the virtual network, from interconnected virtual networks, and from any on-premises networks that you have connected to the virtual network.
+You can add and remove address ranges for a virtual network. An address range must be specified in CIDR notation, and can't overlap with other address ranges within the same virtual network. The address ranges you define can be public or private (RFC 1918). Whether you define the address range as public or private, the address range is reachable only from within the virtual network, from interconnected virtual networks, and from any on-premises networks that you've connected to the virtual network.
You can decrease the address range for a virtual network as long as it still includes the ranges of any associated subnets. Additionally, you can extend the address range, for example, changing a /16 to /8. <!-- the above statement has been edited to reflect the most recent comments on the reopened issue: https://github.com/MicrosoftDocs/azure-docs/issues/20572 -->
-You cannot add the following address ranges:
+You can't add the following address ranges:
- 224.0.0.0/4 (Multicast)
- 255.255.255.255/32 (Broadcast)
To add or remove an address range:
2. From the list of virtual networks, select the virtual network for which you want to add or remove an address range.
3. Select **Address space**, under **SETTINGS**.
4. Complete one of the following options:
- - **Add an address range**: Enter the new address range. The address range cannot overlap with an existing address range that is defined for the virtual network.
- - **Remove an address range**: On the right of the address range you want to remove, select **...**, then select **Remove**. If a subnet exists in the address range, you cannot remove the address range. To remove an address range, you must first delete any subnets (and any resources in the subnets) that exist in the address range.
+ - **Add an address range**: Enter the new address range. The address range can't overlap with an existing address range that is defined for the virtual network.
+ - **Remove an address range**: On the right of the address range you want to remove, select **...**, then select **Remove**. If a subnet exists in the address range, you can't remove the address range. To remove an address range, you must first delete any subnets (and any resources in the subnets) that exist in the address range.
5. Select **Save**.

**Commands**
To add or remove an address range:
All VMs that are connected to the virtual network register with the DNS servers that you specify for the virtual network. They also use the specified DNS server for name resolution. Each network interface (NIC) in a VM can have its own DNS server settings. If a NIC has its own DNS server settings, they override the DNS server settings for the virtual network. To learn more about NIC DNS settings, see [Network interface tasks and settings](virtual-network-network-interface.md#change-dns-servers). To learn more about name resolution for VMs and role instances in Azure Cloud Services, see [Name resolution for VMs and role instances](virtual-networks-name-resolution-for-vms-and-role-instances.md).

To add, change, or remove a DNS server:

1. In the search box at the top of the portal, enter *virtual networks*. When **Virtual networks** appears in the search results, select it.
-2. From the list of virtual networks, select the virtual network for which you want to change DNS servers for.
+2. From the list of virtual networks, select the virtual network for which you want to change DNS servers.
3. Select **DNS servers**, under **SETTINGS**.
4. Select one of the following options:
- - **Default (Azure-provided)**: All resource names and private IP addresses are automatically registered to the Azure DNS servers. You can resolve names between any resources that are connected to the same virtual network. You cannot use this option to resolve names across virtual networks. To resolve names across virtual networks, you must use a custom DNS server.
+ - **Default (Azure-provided)**: All resource names and private IP addresses are automatically registered to the Azure DNS servers. You can resolve names between any resources that are connected to the same virtual network. You can't use this option to resolve names across virtual networks. To resolve names across virtual networks, you must use a custom DNS server.
- **Custom**: You can add one or more servers, up to the Azure limit for a virtual network. To learn more about DNS server limits, see [Azure limits](../azure-resource-manager/management/azure-subscription-service-limits.md?toc=%2fazure%2fvirtual-network%2ftoc.json#virtual-networking-limits-classic). You have the following options:
  - **Add an address**: Adds the server to your virtual network DNS servers list. This option also registers the DNS server with Azure. If you've already registered a DNS server with Azure, you can select that DNS server in the list.
  - **Remove an address**: Next to the server that you want to remove, select **...**, then **Remove**. Deleting the server removes the server only from this virtual network list. The DNS server remains registered in Azure for your other virtual networks to use.
- - **Reorder DNS server addresses**: It's important to verify that you list your DNS servers in the correct order for your environment. DNS server lists are used in the order that they are specified. They do not work as a round-robin setup. If the first DNS server in the list can be reached, the client uses that DNS server, regardless of whether the DNS server is functioning properly. Remove all the DNS servers that are listed, and then add them back in the order that you want.
+ - **Reorder DNS server addresses**: It's important to verify that you list your DNS servers in the correct order for your environment. DNS server lists are used in the order that they're specified. They don't work as a round-robin setup. If the first DNS server in the list can be reached, the client uses that DNS server, regardless of whether the DNS server is functioning properly. Remove all the DNS servers that are listed, and then add them back in the order that you want.
  - **Change an address**: Highlight the DNS server in the list, and then enter the new address.
5. Select **Save**.
-6. Restart the VMs that are connected to the virtual network, so they are assigned the new DNS server settings. VMs continue to use their current DNS settings until they are restarted.
+6. Restart the VMs that are connected to the virtual network, so they're assigned the new DNS server settings. VMs continue to use their current DNS settings until they're restarted.
**Commands**
virtual-network Tutorial Filter Network Traffic https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/tutorial-filter-network-traffic.md
Title: Filter network traffic - tutorial - Azure portal
+ Title: 'Tutorial: Filter network traffic with a network security group (NSG) - Azure portal'
-description: In this tutorial, you learn how to filter network traffic to a subnet, with a network security group, using the Azure portal.
+description: In this tutorial, you learn how to filter network traffic to a subnet, with a network security group (NSG), using the Azure portal.
-# Customer intent: I want to filter network traffic to virtual machines that perform similar functions, such as web servers.
Previously updated : 03/06/2021 Last updated : 06/28/2022 -+
+# Customer intent: I want to filter network traffic to virtual machines that perform similar functions, such as web servers.
# Tutorial: Filter network traffic with a network security group using the Azure portal
-You can use a network security group to filter network traffic inbound and outbound from a virtual network subnet.
+You can use a network security group to filter inbound and outbound network traffic to and from Azure resources in an Azure virtual network.
-Network security groups contain security rules that filter network traffic by IP address, port, and protocol. Security rules are applied to resources deployed in a subnet.
+Network security groups contain security rules that filter network traffic by IP address, port, and protocol. When a network security group is associated with a subnet, security rules are applied to resources deployed in that subnet.
In this tutorial, you learn how to:

> [!div class="checklist"]
> * Create a network security group and security rules
+> * Create application security groups
> * Create a virtual network and associate a network security group to a subnet
-> * Deploy virtual machines (VM) into a subnet
+> * Deploy virtual machines and associate their network interfaces to the application security groups
> * Test traffic filters

If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.

## Prerequisites

-- An Azure subscription.
+- An Azure subscription
## Sign in to Azure
-Sign in to the Azure portal at https://portal.azure.com.
+Sign in to the [Azure portal](https://portal.azure.com).
## Create a virtual network
-1. Select **Create a resource** in the upper left-hand corner of the portal.
+1. From the Azure portal menu, select **+ Create a resource** > **Networking** > **Virtual network**, or search for *Virtual Network* in the portal search box.
-2. In the search box, enter **Virtual Network**. Select **Virtual Network** in the search results.
+1. Select **Create**.
-3. In the **Virtual Network** page, select **Create**.
-
-4. In **Create virtual network**, enter or select this information in the **Basics** tab:
+1. On the **Basics** tab of **Create virtual network**, enter or select this information:
| Setting | Value |
| - | -- |
| **Project details** | |
| Subscription | Select your subscription. |
- | Resource group | Select **Create new**. </br> Enter **myResourceGroup**. </br> Select **OK**. |
+ | Resource group | Select **Create new**. </br> Enter *myResourceGroup*. </br> Select **OK**. |
| **Instance details** | |
- | Name | Enter **myVNet**. |
- | Region | Select **(US) East US**. |
+ | Name | Enter *myVNet*. |
+ | Region | Select **East US**. |
-5. Select the **Review + create** tab, or select the blue **Review + create** button at the bottom of the page.
+1. Select the **Review + create** tab, or select the blue **Review + create** button at the bottom of the page.
-6. Select **Create**.
+1. Select **Create**.
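For readers who prefer scripting, a minimal Az PowerShell sketch of these portal steps (the address prefixes are illustrative assumptions, not values from the tutorial):

```powershell
# Create the resource group and virtual network used in this tutorial.
New-AzResourceGroup -Name myResourceGroup -Location eastus

# Address prefixes below are assumed, not taken from the portal defaults.
$subnet = New-AzVirtualNetworkSubnetConfig -Name default -AddressPrefix 10.0.0.0/24

New-AzVirtualNetwork -Name myVNet -ResourceGroupName myResourceGroup `
    -Location eastus -AddressPrefix 10.0.0.0/16 -Subnet $subnet
```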
## Create application security groups
-An application security group enables you to group together servers with similar functions, such as web servers.
+An [application security group (ASG)](application-security-groups.md) enables you to group together servers with similar functions, such as web servers.
-1. Select **Create a resource** in the upper left-hand corner of the portal.
+1. From the Azure portal menu, select **+ Create a resource** > **Networking** > **Application security group**, or search for *Application security group* in the portal search box.
-2. In the search box, enter **Application security group**. Select **Application security group** in the search results.
+2. Select **Create**.
-3. In the **Application security group** page, select **Create**.
-
-4. In **Create an application security group**, enter or select this information in the **Basics** tab:
+3. On the **Basics** tab of **Create an application security group**, enter or select this information:
| Setting | Value |
| - | -- |
An application security group enables you to group together servers with similar
| Subscription | Select your subscription. |
| Resource group | Select **myResourceGroup**. |
| **Instance details** | |
- | Name | Enter **myAsgWebServers**. |
+ | Name | Enter *myAsgWebServers*. |
| Region | Select **(US) East US**. |
-5. Select the **Review + create** tab, or select the blue **Review + create** button at the bottom of the page.
+4. Select the **Review + create** tab, or select the blue **Review + create** button at the bottom of the page.
-6. Select **Create**.
+5. Select **Create**.
-7. Repeat step 4 again, specifying the following values:
+6. Repeat the previous steps, specifying the following values:
| Setting | Value |
| - | -- |
An application security group enables you to group together servers with similar
| Subscription | Select your subscription. |
| Resource group | Select **myResourceGroup**. |
| **Instance details** | |
- | Name | Enter **myAsgMgmtServers**. |
+ | Name | Enter *myAsgMgmtServers*. |
| Region | Select **(US) East US**. |

8. Select the **Review + create** tab, or select the blue **Review + create** button at the bottom of the page.
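The same two groups can be created with Az PowerShell; a minimal sketch under the same names and resource group:

```powershell
# Create both application security groups used later in the tutorial.
foreach ($asgName in 'myAsgWebServers', 'myAsgMgmtServers') {
    New-AzApplicationSecurityGroup -ResourceGroupName myResourceGroup `
        -Name $asgName -Location eastus
}
```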
An application security group enables you to group together servers with similar
## Create a network security group
-A network security group secures network traffic in your virtual network.
-
-1. Select **Create a resource** in the upper left-hand corner of the portal.
+A [network security group (NSG)](network-security-groups-overview.md) secures network traffic in your virtual network.
-2. In the search box, enter **Network security group**. Select **Network security group** in the search results.
+1. From the Azure portal menu, select **+ Create a resource** > **Networking** > **Network security group**, or search for *Network security group* in the portal search box.
-3. In the **Network security group** page, select **Create**.
+1. Select **Create**.
-4. In **Create network security group**, enter or select this information in the **Basics** tab:
+1. On the **Basics** tab of **Create network security group**, enter or select this information:
| Setting | Value |
| - | -- |
A network security group secures network traffic in your virtual network.
| Subscription | Select your subscription. |
| Resource group | Select **myResourceGroup**. |
| **Instance details** | |
- | Name | Enter **myNSG**. |
+ | Name | Enter *myNSG*. |
| Location | Select **(US) East US**. |

5. Select the **Review + create** tab, or select the blue **Review + create** button at the bottom of the page.
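A scripted equivalent, as a sketch (security rules are added in a later section):

```powershell
# Create an empty network security group; rules come later.
New-AzNetworkSecurityGroup -Name myNSG -ResourceGroupName myResourceGroup -Location eastus
```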
A network security group secures network traffic in your virtual network.
## Associate network security group to subnet
-In this section, we'll associate the network security group with the subnet of the virtual network we created earlier.
+In this section, you'll associate the network security group with the subnet of the virtual network you created earlier.
-1. In the **Search resources, services, and docs** box at the top of the portal, begin typing **myNsg**. When **myNsg** appears in the search results, select it.
+1. Search for *myNSG* in the portal search box.
-2. In the overview page of **myNSG**, select **Subnets** in **Settings**.
+2. Select **Subnets** from the **Settings** section of **myNSG**.
-3. In the **Settings** page, select **Associate**:
+3. In the **Subnets** page, select **+ Associate**:
- :::image type="content" source="./media/tutorial-filter-network-traffic/associate-nsg-subnet.png" alt-text="Associate NSG to subnet." border="true":::
+ :::image type="content" source="./media/tutorial-filter-network-traffic/associate-nsg-subnet.png" alt-text="Screenshot of Associate a network security group to a subnet." border="true":::
-3. Under **Associate subnet**, select **Virtual network** and then select **myVNet**.
+3. Under **Associate subnet**, select **myVNet** for **Virtual network**.
-4. Select **Subnet**, select **default**, and then select **OK**.
+4. Select **default** for **Subnet**, and then select **OK**.
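The association can also be scripted; a sketch that assumes the names above and carries over the assumed subnet prefix from the earlier sketch:

```powershell
# Attach myNSG to the default subnet of myVNet and persist the change.
$vnet = Get-AzVirtualNetwork -Name myVNet -ResourceGroupName myResourceGroup
$nsg  = Get-AzNetworkSecurityGroup -Name myNSG -ResourceGroupName myResourceGroup

# The address prefix must be restated when updating the subnet config.
Set-AzVirtualNetworkSubnetConfig -VirtualNetwork $vnet -Name default `
    -AddressPrefix 10.0.0.0/24 -NetworkSecurityGroup $nsg | Out-Null

$vnet | Set-AzVirtualNetwork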
## Create security rules
-1. In **Settings** of **myNSG**, select **Inbound security rules**.
-2. In **Inbound security rules**, select **+ Add**:
+1. Select **Inbound security rules** from the **Settings** section of **myNSG**.
+
+1. On the **Inbound security rules** page, select **+ Add**:
- :::image type="content" source="./media/tutorial-filter-network-traffic/add-inbound-rule.png" alt-text="Add inbound security rule." border="true":::
+ :::image type="content" source="./media/tutorial-filter-network-traffic/add-inbound-rule.png" alt-text="Screenshot of Inbound security rules in a network security group." border="true":::
-3. Create a security rule that allows ports 80 and 443 to the **myAsgWebServers** application security group. In **Add inbound security rule**, enter or select the following information:
+1. Create a security rule that allows ports 80 and 443 to the **myAsgWebServers** application security group. On the **Add inbound security rule** page, enter or select this information:
| Setting | Value |
| - | -- |
| Source | Leave the default of **Any**. |
- | Source port ranges | Leave the default of **(*)** |
+ | Source port ranges | Leave the default of **(*)**. |
| Destination | Select **Application security group**. |
- | Destination application security group | Select **myAsgWebServers**. |
+ | Destination application security groups | Select **myAsgWebServers**. |
| Service | Leave the default of **Custom**. |
- | Destination port ranges | Enter **80,443**. |
+ | Destination port ranges | Enter *80,443*. |
| Protocol | Select **TCP**. |
| Action | Leave the default of **Allow**. |
| Priority | Leave the default of **100**. |
- | Name | Enter **Allow-Web-All**. |
+ | Name | Enter *Allow-Web-All*. |
- :::image type="content" source="./media/tutorial-filter-network-traffic/inbound-security-rule.png" alt-text="Inbound security rule." border="true":::
+ :::image type="content" source="./media/tutorial-filter-network-traffic/inbound-security-rule-inline.png" alt-text="Screenshot of Add inbound security rule in a network security group." lightbox="./media/tutorial-filter-network-traffic/inbound-security-rule-expanded.png":::
-3. Complete step 2 again, using the following values:
+1. Select **Add**.
+
+1. Complete steps 3-4 again using this information:
| Setting | Value |
| - | -- |
| Source | Leave the default of **Any**. |
- | Source port ranges | Leave the default of **(*)** |
+ | Source port ranges | Leave the default of **(*)**. |
| Destination | Select **Application security group**. |
| Destination application security group | Select **myAsgMgmtServers**. |
| Service | Leave the default of **Custom**. |
- | Destination port ranges | Enter **3389**. |
+ | Destination port ranges | Enter *3389*. |
| Protocol | Select **Any**. |
| Action | Leave the default of **Allow**. |
| Priority | Leave the default of **110**. |
- | Name | Enter **Allow-RDP-All**. |
+ | Name | Enter *Allow-RDP-All*. |
+
+1. Select **Add**.
> [!CAUTION]
> In this article, RDP (port 3389) is exposed to the internet for the VM that is assigned to the **myAsgMgmtServers** application security group.
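Both rules map naturally onto Az PowerShell; a sketch that assumes the NSG and ASGs created above:

```powershell
# Sketch: add the two inbound rules to myNSG and save.
$asgWeb  = Get-AzApplicationSecurityGroup -Name myAsgWebServers  -ResourceGroupName myResourceGroup
$asgMgmt = Get-AzApplicationSecurityGroup -Name myAsgMgmtServers -ResourceGroupName myResourceGroup
$nsg     = Get-AzNetworkSecurityGroup -Name myNSG -ResourceGroupName myResourceGroup

# Allow-Web-All: TCP 80 and 443 from anywhere to the web servers ASG.
$nsg | Add-AzNetworkSecurityRuleConfig -Name Allow-Web-All -Direction Inbound `
    -Access Allow -Priority 100 -Protocol Tcp `
    -SourceAddressPrefix '*' -SourcePortRange '*' `
    -DestinationApplicationSecurityGroup $asgWeb -DestinationPortRange 80,443 | Out-Null

# Allow-RDP-All: any protocol, port 3389, to the management servers ASG.
$nsg | Add-AzNetworkSecurityRuleConfig -Name Allow-RDP-All -Direction Inbound `
    -Access Allow -Priority 110 -Protocol '*' `
    -SourceAddressPrefix '*' -SourcePortRange '*' `
    -DestinationApplicationSecurityGroup $asgMgmt -DestinationPortRange 3389 | Out-Null

# Persist both rules.
$nsg | Set-AzNetworkSecurityGroup
```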
In this section, we'll associate the network security group with the subnet of t
Once you've completed steps 1-3, review the rules you created. Your list should look like the list in the following example:

## Create virtual machines
-Create two VMs in the virtual network.
+Create two virtual machines (VMs) in the virtual network.
-### Create the first VM
+### Create the first virtual machine
-1. Select **Create a resource** in the upper left-hand corner of the portal.
+1. From the Azure portal menu, select **+ Create a resource** > **Compute** > **Virtual machine**, or search for *Virtual machine* in the portal search box.
-2. Select **Compute**, then select **Virtual machine**.
-
-3. In **Create a virtual machine**, enter or select this information in the **Basics** tab:
+2. In **Create a virtual machine**, enter or select this information in the **Basics** tab:
| Setting | Value |
| - | -- |
Create two VMs in the virtual network.
| Subscription | Select your subscription. |
| Resource group | Select **myResourceGroup**. |
| **Instance details** | |
- | Virtual machine name | Enter **myVMWeb**. |
+ | Virtual machine name | Enter *myVMWeb*. |
| Region | Select **(US) East US**. |
- | Availability options | Leave the default of no redundancy required. |
- | Image | Select **Windows Server 2019 Datacenter - Gen1**. |
+ | Availability options | Leave the default of **No infrastructure redundancy required**. |
+ | Security type | Leave the default of **Standard**. |
+ | Image | Select **Windows Server 2019 Datacenter - Gen2**. |
| Azure Spot instance | Leave the default of unchecked. |
| Size | Select **Standard_D2s_V3**. |
| **Administrator account** | |
Create two VMs in the virtual network.
| Password | Enter a password. |
| Confirm password | Reenter password. |
| **Inbound port rules** | |
- | Public inbound ports | Select **None**. |
+ | Select inbound ports | Select **None**. |
-4. Select the **Networking** tab.
+3. Select the **Networking** tab.
-5. In the **Networking** tab, enter or select the following information:
+4. In the **Networking** tab, enter or select the following information:
| Setting | Value |
| - | -- |
Create two VMs in the virtual network.
| Public IP | Leave the default of a new public IP. |
| NIC network security group | Select **None**. |
-6. Select the **Review + create** tab, or select the blue **Review + create** button at the bottom of the page.
+5. Select the **Review + create** tab, or select the blue **Review + create** button at the bottom of the page.
-7. Select **Create**.
+6. Select **Create**. The VM may take a few minutes to deploy.
-### Create the second VM
+### Create the second virtual machine
-Complete steps 1-7 again, but in step 3, name the VM **myVMMgmt**. The VM takes a few minutes to deploy.
+Complete steps 1-6 again, but in step 2, enter *myVMMgmt* for **Virtual machine name**.
-Don't continue to the next step until the VM is deployed.
+Wait for the VMs to complete deployment before advancing to the next section.
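A compact Az PowerShell sketch of the two deployments (note: the simplified `New-AzVM` parameter set creates its own public IP and may attach a NIC-level NSG, which differs slightly from the portal flow above):

```powershell
# Sketch: deploy both VMs into the default subnet of myVNet.
$cred = Get-Credential   # prompts for the admin username and password

foreach ($vmName in 'myVMWeb', 'myVMMgmt') {
    New-AzVM -ResourceGroupName myResourceGroup -Location eastus -Name $vmName `
        -VirtualNetworkName myVNet -SubnetName default `
        -Image Win2019Datacenter -Size Standard_D2s_v3 -Credential $cred
}
```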
## Associate network interfaces to an ASG
-When the portal created the VMs, it created a network interface for each VM, and attached the network interface to the VM.
+When you created the VMs, Azure created a network interface for each VM, and attached it to the VM.
-Add the network interface for each VM to one of the application security groups you created previously:
+Add the network interface of each VM to one of the application security groups you created previously:
-1. In the **Search resources, services, and docs** box at the top of the portal, begin typing **myVMWeb**. When the **myVMWeb** virtual machine appears in the search results, select it.
+1. Search for *myVMWeb* in the portal search box.
-2. In **Settings**, select **Networking**.
+2. Select **Networking** from the **Settings** section of the **myVMWeb** VM.
3. Select the **Application security groups** tab, then select **Configure the application security groups**.
- :::image type="content" source="./media/tutorial-filter-network-traffic/configure-app-sec-groups.png" alt-text="Configure application security groups." border="true":::
+ :::image type="content" source="./media/tutorial-filter-network-traffic/configure-app-sec-groups.png" alt-text="Screenshot of Configure application security groups." border="true":::
4. In **Configure the application security groups**, select **myAsgWebServers**. Select **Save**.
- :::image type="content" source="./media/tutorial-filter-network-traffic/select-asgs.png" alt-text="Select application security groups." border="true":::
+ :::image type="content" source="./media/tutorial-filter-network-traffic/select-application-security-groups-inline.png" alt-text="Screenshot showing how to associate application security groups to a network interface." border="true" lightbox="./media/tutorial-filter-network-traffic/select-application-security-groups-expanded.png":::
-5. Complete steps 1 and 2 again, searching for the **myVMMgmt** virtual machine and selecting the **myAsgMgmtServers** ASG.
+5. Complete steps 1 through 4 again, searching for the *myVMMgmt* virtual machine and selecting the **myAsgMgmtServers** ASG.
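The association can be scripted as well; a sketch that assumes each VM has a single NIC with a single IP configuration and looks the NIC up through the VM object:

```powershell
# Sketch: bind each VM's NIC to its application security group.
$pairs = @{ myVMWeb = 'myAsgWebServers'; myVMMgmt = 'myAsgMgmtServers' }

foreach ($vmName in $pairs.Keys) {
    $asg = Get-AzApplicationSecurityGroup -Name $pairs[$vmName] -ResourceGroupName myResourceGroup
    $vm  = Get-AzVM -Name $vmName -ResourceGroupName myResourceGroup
    $nic = Get-AzNetworkInterface -ResourceId $vm.NetworkProfile.NetworkInterfaces[0].Id

    # Assumes a single IP configuration per NIC.
    $nic.IpConfigurations[0].ApplicationSecurityGroups = @($asg)
    $nic | Set-AzNetworkInterface
}
```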
## Test traffic filters
-1. Connect to the **myVMMgmt** VM. Enter **myVMMgmt** in the search box at the top of the portal. When **myVMMgmt** appears in the search results, select it. Select the **Connect** button.
+1. Search for *myVMMgmt* in the portal search box.
-2. Select **Download RDP file**.
+1. On the **Overview** page, select the **Connect** button and then select **RDP**.
-3. Open the downloaded rdp file and select **Connect**. Enter the user name and password you specified when creating the VM.
+1. Select **Download RDP file**.
+
+1. Open the downloaded rdp file and select **Connect**. Enter the username and password you specified when creating the VM.
4. Select **OK**.

5. You may receive a certificate warning during the connection process. If you receive the warning, select **Yes** or **Continue** to continue with the connection.
- The connection succeeds, because port 3389 is allowed inbound from the internet to the **myAsgMgmtServers** application security group.
+ The connection succeeds, because inbound traffic from the internet to the **myAsgMgmtServers** application security group is allowed through port 3389.
The network interface for **myVMMgmt** is associated with the **myAsgMgmtServers** application security group and allows the connection.
-6. Open a PowerShell session on **myVMMgmt**. Connect to **myVMWeb** using the following example:
+6. Open a PowerShell session on **myVMMgmt**. Connect to **myVMWeb** using the following command:
```powershell
mstsc /v:myVmWeb
```
- The RDP connection from **myVMMgmt** to **myVMWeb** succeeds because virtual machines in the same network can communicate with each over any port by default.
+ The RDP connection from **myVMMgmt** to **myVMWeb** succeeds because virtual machines in the same network can communicate with each other over any port by default.
You can't create an RDP connection to the **myVMWeb** virtual machine from the internet. The security rule for **myAsgWebServers** prevents connections to port 3389 inbound from the internet. Inbound traffic from the internet is denied to all resources by default.
Add the network interface for each VM to one of the application security groups
9. Disconnect from the **myVMMgmt** VM.
-10. In the **Search resources, services, and docs** box at the top of the Azure portal, begin typing **myVMWeb** from your computer. When **myVMWeb** appears in the search results, select it. Note the **Public IP address** for your VM. The address shown in the following example is 23.96.39.113, but your address is different:
+10. Search for *myVMWeb* in the portal search box.
+
+11. On the **Overview** page of **myVMWeb**, note the **Public IP address** for your VM. The address shown in the following example is 23.96.39.113, but your address is different:
- :::image type="content" source="./media/tutorial-filter-network-traffic/public-ip-address.png" alt-text="Public IP address." border="true":::
+ :::image type="content" source="./media/tutorial-filter-network-traffic/public-ip-address.png" alt-text="Screenshot of Public IP address of a virtual machine in the Overview page." border="true":::
12. To confirm that you can access the **myVMWeb** web server from the internet, open an internet browser on your computer and browse to `http://<public-ip-address-from-previous-step>`.
-You see the IIS welcome screen, because port 80 is allowed inbound from the internet to the **myAsgWebServers** application security group.
+You see the IIS default page, because inbound traffic from the internet to the **myAsgWebServers** application security group is allowed through port 80.
The network interface attached for **myVMWeb** is associated with the **myAsgWebServers** application security group and allows the connection.
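From your own computer, you can also verify the filtering with `Test-NetConnection`; the IP address below is the illustrative one from the screenshot, so substitute your own:

```powershell
$publicIp = '23.96.39.113'   # replace with your myVMWeb public IP address

Test-NetConnection -ComputerName $publicIp -Port 80    # expect TcpTestSucceeded : True
Test-NetConnection -ComputerName $publicIp -Port 3389  # expect TcpTestSucceeded : False
```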
The network interface attached for **myVMWeb** is associated with the **myAsgWeb
When no longer needed, delete the resource group and all of the resources it contains:
-1. Enter **myResourceGroup** in the **Search** box at the top of the portal. When you see **myResourceGroup** in the search results, select it.
+1. Enter *myResourceGroup* in the **Search** box at the top of the portal. When you see **myResourceGroup** in the search results, select it.
2. Select **Delete resource group**.

3. Enter **myResourceGroup** for **TYPE THE RESOURCE GROUP NAME:** and select **Delete**.
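Or, as a one-line scripted cleanup:

```powershell
# Delete the resource group and everything in it without a confirmation prompt.
Remove-AzResourceGroup -Name myResourceGroup -Force
```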
In this tutorial, you:
* Created a network security group and associated it to a virtual network subnet.
* Created application security groups for web and management.
-* Created two virtual machines.
+* Created two virtual machines and associated their network interfaces with the application security groups.
* Tested the application security group network filtering.

To learn more about network security groups, see [Network security group overview](./network-security-groups-overview.md) and [Manage a network security group](manage-network-security-group.md).
virtual-network Virtual Network Manage Peering https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/virtual-network-manage-peering.md
# Create, change, or delete a virtual network peering
-Learn how to create, change, or delete a virtual network peering. Virtual network peering enables you to connect virtual networks in the same region and across regions (also known as Global VNet Peering) through the Azure backbone network. Once peered, the virtual networks are still managed as separate resources. If you're new to virtual network peering, you can learn more about it in the [virtual network peering overview](virtual-network-peering-overview.md) or by completing a [tutorial](tutorial-connect-virtual-networks-portal.md).
+Learn how to create, change, or delete a virtual network peering. Virtual network peering enables you to connect virtual networks in the same region and across regions (also known as Global VNet Peering) through the Azure backbone network. Once peered, the virtual networks are still managed as separate resources. If you're new to virtual network peering, you can learn more about it in the [virtual network peering overview](virtual-network-peering-overview.md) or by completing the [virtual network peering tutorial](tutorial-connect-virtual-networks-portal.md).
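As a quick orientation, a minimal Az PowerShell sketch of creating a peering (the virtual network names are placeholders; a peering must be created in both directions to be fully connected):

```powershell
# Sketch: peer two virtual networks in both directions.
$vnet1 = Get-AzVirtualNetwork -Name vnet1 -ResourceGroupName myResourceGroup
$vnet2 = Get-AzVirtualNetwork -Name vnet2 -ResourceGroupName myResourceGroup

Add-AzVirtualNetworkPeering -Name 'vnet1-to-vnet2' -VirtualNetwork $vnet1 -RemoteVirtualNetworkId $vnet2.Id
Add-AzVirtualNetworkPeering -Name 'vnet2-to-vnet1' -VirtualNetwork $vnet2 -RemoteVirtualNetworkId $vnet1.Id
```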
## Before you begin
virtual-wan Virtual Wan Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/virtual-wan-faq.md
Yes, BGP communities generated by on-premises will be preserved in Virtual WAN.
### <a name="update-router"></a>Why am I seeing a message and button called "Update router to latest software version" in portal?
-The Virtual WAN team has been working on upgrading virtual routers from their current Cloud Services infrastructure to Virtual Machine Scale Sets based deployments. This will enable the virtual hub router to now be availability zone aware. If you navigate to your Virtual WAN hub resource and see this message and button, then you can upgrade your router to the latest version by clicking on the button. The Cloud Services infrastructure will be deprecated soon. If you would like to take advantage of new Virtual WAN features, such as [BGP peering with the hub](create-bgp-peering-hub-portal.md), you'll have to update your virtual hub router via Azure Portal.
+The Virtual WAN team has been upgrading virtual routers from their current Cloud Services infrastructure to Virtual Machine Scale Sets based deployments. This upgrade enables the virtual hub router to be availability zone aware. If you navigate to your Virtual WAN hub resource and see this message and button, you can upgrade your router to the latest version by selecting the button. Azure-wide Cloud Services-based infrastructure is being deprecated. If you would like to take advantage of new Virtual WAN features, such as [BGP peering with the hub](create-bgp-peering-hub-portal.md), you'll have to update your virtual hub router via the Azure portal.
You'll only be able to update your virtual hub router if all the resources (gateways/route tables/VNet connections) in your hub are in a succeeded state. Additionally, because this operation requires deployment of new Virtual Machine Scale Sets based virtual hub routers, you'll face an expected downtime of 30 minutes per hub. Within a single Virtual WAN resource, hubs should be updated one at a time instead of updating multiple at the same time. When the Router Version says "Latest", the hub is done updating. There will be no routing behavior changes after this update. If the update fails for any reason, your hub will be automatically recovered to the old version to ensure there is still a working setup.
vpn-gateway Point To Site Vpn Client Cert Windows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/point-to-site-vpn-client-cert-windows.md
This section helps you configure the native VPN client on your Windows computer
Unzip the configuration file to view the following folders:
-* **WindowsAmd64** and **WindowsX86**, which contain the Windows 32-bit and 64-bit installer packages, respectively. The **WindowsAmd64** installer package is for all supported 64-bit Windows clients, not just Amd.
+* **WindowsAmd64** and **WindowsX86**, which contain the Windows 64-bit and 32-bit installer packages, respectively. The **WindowsAmd64** installer package is for all supported 64-bit Windows clients, not just AMD.
* **Generic**, which contains general information used to create your own VPN client configuration. The Generic folder is provided if IKEv2 or SSTP+IKEv2 was configured on the gateway. If only SSTP is configured, then the Generic folder isn't present.

### <a name="install"></a>Configure VPN client profile