Updates from: 07/08/2022 01:06:16
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-b2c Configure A Sample Node Web App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/configure-a-sample-node-web-app.md
Previously updated : 06/08/2022 Last updated : 07/07/2022
In this article, you'll do the following tasks:
## Step 1: Configure your user flows

- A combined **Sign in and sign up** user flow, such as `susi`. This user flow also supports the **Forgot your password** experience.
- A **Profile editing** user flow, such as `edit_profile`.
- A **Password reset** user flow, such as `reset_password`.
-Azure AD B2C prepends `B2C_1_` to the user flow name. For example, `susi` becomes `B2C_1_susi`.
## Step 2: Register a web application

To enable your application to sign in with Azure AD B2C, register your app in the Azure AD B2C directory. The app registration establishes a trust relationship between the app and Azure AD B2C.
-During app registration, you'll specify the *Redirect URI*. The redirect URI is the endpoint to which the user is redirected by Azure AD B2C after they authenticate with Azure AD B2C. The app registration process generates an *Application ID*, also known as the *client ID*, that uniquely identifies your app. After your app is registered, Azure AD B2C uses both the application ID and the redirect URI to create authentication requests.
+During app registration, you'll specify the *Redirect URI*. The redirect URI is the endpoint to which the user is redirected by Azure AD B2C after they authenticate with Azure AD B2C. The app registration process generates an *Application ID*, also known as the *client ID*, that uniquely identifies your app. After your app is registered, Azure AD B2C uses both the application ID, and the redirect URI to create authentication requests.
### Step 2.1: Register the app
Open your web app in a code editor such as Visual Studio Code. Under the project
|`EDIT_PROFILE_POLICY_AUTHORITY`|The **Profile editing** user flow authority, such as `https://<your-tenant-name>.b2clogin.com/<your-tenant-name>.onmicrosoft.com/<profile-edit-user-flow-name>`. Replace `<your-tenant-name>` with the name of your tenant and `<profile-edit-user-flow-name>` with the name of your profile editing user flow, such as `B2C_1_edit_profile_node_app`. |
|`AUTHORITY_DOMAIN`| The Azure AD B2C authority domain, such as `https://<your-tenant-name>.b2clogin.com`. Replace `<your-tenant-name>` with the name of your tenant.|
|`APP_REDIRECT_URI`| The application redirect URI where Azure AD B2C will return authentication responses (tokens). It matches the **Redirect URI** you set while registering your app in the Azure portal, and it must be publicly accessible. Leave the value as is.|
-|`LOGOUT_ENDPOINT`| The Azure AD B2C sign out endpoint such as `https://<your-tenant-name>.b2clogin.com/<your-tenant-name>.onmicrosoft.com/<sign-in-sign-up-user-flow-name>/oauth2/v2.0/logout?post_logout_redirect_uri=http://localhost:3000`. Replace `<your-tenant-name>` with the name of your tenant and `<sign-in-sign-up-user-flow-name>` with the name of your Sign in and Sign up user flow such as `B2C_1_susi`.|
+|`LOGOUT_ENDPOINT`| The Azure AD B2C sign-out endpoint such as `https://<your-tenant-name>.b2clogin.com/<your-tenant-name>.onmicrosoft.com/<sign-in-sign-up-user-flow-name>/oauth2/v2.0/logout?post_logout_redirect_uri=http://localhost:3000`. Replace `<your-tenant-name>` with the name of your tenant and `<sign-in-sign-up-user-flow-name>` with the name of your Sign in and Sign up user flow such as `B2C_1_susi`.|
Your final configuration file should look like the following sample:
active-directory-b2c Configure Authentication In Sample Node Web App With Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/configure-authentication-in-sample-node-web-app-with-api.md
Previously updated : 06/08/2022 Last updated : 07/07/2022
The application registrations and the application architecture are described in
## Step 1: Configure your user flow

- A combined **Sign in and sign up** user flow, such as `susi`. This user flow also supports the **Forgot your password** experience.
- A **Profile editing** user flow, such as `edit_profile`.
- A **Password reset** user flow, such as `reset_password`.
-Azure AD B2C prepends `B2C_1_` to the user flow name. For example, `susi` becomes `B2C_1_susi`.
## Step 2: Register your web app and API
active-directory-b2c Configure Authentication Sample React Spa App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/configure-authentication-sample-react-spa-app.md
+
+ Title: Configure authentication in a sample React SPA by using Azure Active Directory B2C
+description: Learn how to use Azure Active Directory B2C to sign in and sign up users in a React SPA.
++++++ Last updated : 07/07/2022+++++
+# Configure authentication in a sample React single-page application by using Azure Active Directory B2C
+
+This article uses a sample React single-page application (SPA) to illustrate how to add Azure Active Directory B2C (Azure AD B2C) authentication to your React apps.
+
+## Overview
+
+OpenID Connect (OIDC) is an authentication protocol built on OAuth 2.0 that you can use to securely sign in a user to an application. This React sample uses [MSAL React](https://www.npmjs.com/package/@azure/msal-react) and the [MSAL Browser](https://www.npmjs.com/package/@azure/msal-browser) Node packages. MSAL is a Microsoft-provided library that simplifies adding authentication and authorization support to React SPAs.
+
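+The following minimal sketch (illustrative only, not the sample's actual code; the `clientId` and authority values are placeholders) shows how the two packages typically fit together: `@azure/msal-browser` supplies the `PublicClientApplication`, and `@azure/msal-react` exposes it to your components through `MsalProvider`.
+
+```javascript
+// Hedged sketch: create the MSAL instance once, outside the component tree,
+// and make it available to React components through MsalProvider.
+import React from "react";
+import { PublicClientApplication } from "@azure/msal-browser";
+import { MsalProvider } from "@azure/msal-react";
+
+const msalInstance = new PublicClientApplication({
+    auth: {
+        clientId: "<your-spa-client-id>", // placeholder
+        authority: "https://<your-tenant-name>.b2clogin.com/<your-tenant-name>.onmicrosoft.com/<your-user-flow>", // placeholder
+        knownAuthorities: ["<your-tenant-name>.b2clogin.com"],
+        redirectUri: "/",
+    },
+});
+
+const App = () => (
+    <MsalProvider instance={msalInstance}>
+        {/* Components rendered here can use MSAL React hooks such as useMsal. */}
+    </MsalProvider>
+);
+
+export default App;
+```
+
+The sample itself splits this wiring across *src/index.js* and *src/App.jsx*, but the essential pattern is the same.
+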
+### Sign in flow
+
+The sign-in flow involves the following steps:
+
+1. The user opens the app and selects **Sign in**.
+1. The app starts an authentication request to Azure AD B2C.
+1. The user [signs up or signs in](add-sign-up-and-sign-in-policy.md) and [resets the password](add-password-reset-policy.md), or signs in with a [social account](add-identity-provider.md).
+1. Upon successful sign-in, Azure AD B2C returns an authorization code to the app. The app takes the following actions:
+ 1. Exchanges the authorization code for an ID token, access token, and refresh token.
+ 1. Reads the ID token claims.
+ 1. Stores the access token and refresh token in an in-memory cache for later use. The access token allows the user to call protected resources, such as a web API. The refresh token is used to acquire a new access token.
+
+### App registration
+
+To enable your app to sign in with Azure AD B2C and call a web API, you must register two applications in the Azure AD B2C directory:
+
+- The *single-page application* (React) registration enables your app to sign in with Azure AD B2C. During app registration, you specify the *redirect URI*. The redirect URI is the endpoint to which the user is redirected after they authenticate with Azure AD B2C. The app registration process generates an *application ID*, also known as the *client ID*, that uniquely identifies your app. This article uses the example **App ID: 1**.
+
+- The *web API* registration enables your app to call a protected web API. The registration exposes the web API permissions (scopes). The app registration process generates an application ID that uniquely identifies your web API. This article uses the example **App ID: 2**. Grant your app (**App ID: 1**) permissions to the web API scopes (**App ID: 2**).
+
+The following diagram describes the app registrations and the app architecture.
+
+![Diagram that describes a single-page application with web A P I, registrations, and tokens.](./media/configure-authentication-sample-react-spa-app/spa-app-with-api-architecture.png)
+
+### Call to a web API
++
+### Sign out flow
++
+## Prerequisites
+
+Before you follow the procedures in this article, make sure that your computer is running:
+
+* [Visual Studio Code](https://code.visualstudio.com/) or another code editor.
+* [Node.js runtime](https://nodejs.org/en/download/) and [npm](https://docs.npmjs.com/downloading-and-installing-node-js-and-npm). To test that you have Node.js and npm correctly installed on your machine, you can type `node --version` and `npm --version` in a terminal or command prompt.
+
+## Step 1: Configure your user flow
++
+## Step 2: Register your React SPA and API
+
+In this step, you create the registrations for the React SPA and the web API app. You also specify the scopes of your web API.
+
+### 2.1 Register the web API application
++
+### 2.2 Configure scopes
++
+### 2.3 Register the React app
+
+Follow these steps to create the React app registration:
+
+1. Sign in to the [Azure portal](https://portal.azure.com).
+1. Make sure you're using the directory that contains your Azure AD B2C tenant. Select the **Directories + subscriptions** icon in the portal toolbar.
+1. On the **Portal settings | Directories + subscriptions** page, find your Azure AD B2C directory in the **Directory name** list, and then select **Switch**.
+1. In the Azure portal, search for and select **Azure AD B2C**.
+1. Select **App registrations**, and then select **New registration**.
+1. For **Name**, enter a name for the application. For example, enter **MyApp**.
+1. Under **Supported account types**, select **Accounts in any identity provider or organizational directory (for authenticating users with user flows)**.
+1. Under **Redirect URI**, select **Single-page application (SPA)**, and then enter `http://localhost:3000` in the URL box.
+1. Under **Permissions**, select the **Grant admin consent to openid and offline_access permissions** checkbox.
+1. Select **Register**.
+1. Record the **Application (client) ID** value for use in a later step when you configure the web application.
+
+ ![Screenshot that shows how to get the React application I D.](./media/configure-authentication-sample-react-spa-app/get-azure-ad-b2c-app-id.png)
+
+### 2.5 Grant permissions
++
+## Step 3: Get the React sample code
+
+This sample demonstrates how a React single-page application can use Azure AD B2C for user sign-up and sign-in. Then the app acquires an access token and calls a protected web API.
+
+ [Download a .zip file](https://github.com/Azure-Samples/ms-identity-javascript-react-tutorial/archive/refs/heads/main.zip) of the sample, or clone the sample from the [GitHub repository](https://github.com/Azure-Samples/ms-identity-javascript-react-tutorial) by using the following command:
+
+ ```
+ git clone https://github.com/Azure-Samples/ms-identity-javascript-react-tutorial
+ ```
+
+Open the *3-Authorization-II/2-call-api-b2c/SPA* folder with Visual Studio Code.
+
+### 3.1 Configure the React sample
+
+Now that you've obtained the SPA sample, update the code with your Azure AD B2C and web API values. In the *3-Authorization-II/2-call-api-b2c/SPA* folder, under the *src* folder, open the *authConfig.js* file. Update the keys with the corresponding values:
++
+|Section |Key |Value |
+||||
+| b2cPolicies | names |The user flows or custom policies that you created in [step 1](#step-1-configure-your-user-flow). |
+| b2cPolicies | authorities | Replace `your-tenant-name` with your Azure AD B2C [tenant name](tenant-management.md#get-your-tenant-name). For example, use `contoso.onmicrosoft.com`. Then, replace the policy name with the user flow or custom policy that you created in [step 1](#step-1-configure-your-user-flow). For example: `https://<your-tenant-name>.b2clogin.com/<your-tenant-name>.onmicrosoft.com/<your-sign-in-sign-up-policy>`. |
+| b2cPolicies | authorityDomain|Your Azure AD B2C [tenant name](tenant-management.md#get-your-tenant-name). For example: `contoso.onmicrosoft.com`. |
+| Configuration | clientId | The React application ID from [step 2.3](#23-register-the-react-app). |
+| protectedResources| endpoint| The URL of the web API: `http://localhost:5000/api/todolist`. |
+| protectedResources| scopes| The web API scopes that you created in [step 2.2](#22-configure-scopes). For example: `b2cScopes: ["https://<your-tenant-name>.onmicrosoft.com/tasks-api/tasks.read"]`. |
+
+Your resulting *src/authConfig.js* code should look similar to the following sample:
+
+```typescript
+export const b2cPolicies = {
+ names: {
+ signUpSignIn: "b2c_1_susi_reset_v2",
+ editProfile: "b2c_1_edit_profile_v2"
+ },
+ authorities: {
+ signUpSignIn: {
+ authority: "https://your-tenant-name.b2clogin.com/your-tenant-name.onmicrosoft.com/b2c_1_susi_reset_v2",
+ },
+ editProfile: {
+ authority: "https://your-tenant-name.b2clogin.com/your-tenant-name.onmicrosoft.com/b2c_1_edit_profile_v2"
+ }
+ },
+ authorityDomain: "your-tenant-name.b2clogin.com"
+ };
+
+
+export const msalConfig: Configuration = {
+ auth: {
+ clientId: '<your-MyApp-application-ID>',
+ authority: b2cPolicies.authorities.signUpSignIn.authority,
+ knownAuthorities: [b2cPolicies.authorityDomain],
+ redirectUri: '/',
+ },
+ // More configuration here
+ }
+
+export const protectedResources = {
+ todoListApi: {
+ endpoint: "http://localhost:5000/api/todolist",
+        scopes: ["https://your-tenant-name.onmicrosoft.com/tasks-api/tasks.read"],
+ },
+}
+```
+
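+With this configuration in place, a component can acquire an access token for the scopes defined in `protectedResources` and call the API endpoint. The following is a hedged sketch rather than the sample's source; the hook name and import path are illustrative:
+
+```javascript
+// Illustrative only: acquire a token silently for the configured scopes,
+// fall back to an interactive pop-up, then call the protected endpoint.
+import { useMsal } from "@azure/msal-react";
+import { InteractionRequiredAuthError } from "@azure/msal-browser";
+import { protectedResources } from "../authConfig"; // adjust the path to where authConfig.js lives
+
+export function useTodoListApi() {
+    const { instance, accounts } = useMsal();
+
+    const callTodoListApi = async () => {
+        const request = {
+            scopes: protectedResources.todoListApi.scopes,
+            account: accounts[0],
+        };
+
+        let tokenResponse;
+        try {
+            tokenResponse = await instance.acquireTokenSilent(request);
+        } catch (error) {
+            // Fall back to interaction when silent acquisition fails.
+            if (error instanceof InteractionRequiredAuthError) {
+                tokenResponse = await instance.acquireTokenPopup(request);
+            } else {
+                throw error;
+            }
+        }
+
+        const apiResponse = await fetch(protectedResources.todoListApi.endpoint, {
+            headers: { Authorization: `Bearer ${tokenResponse.accessToken}` },
+        });
+        return apiResponse.json();
+    };
+
+    return { callTodoListApi };
+}
+```
+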
+## Step 4: Configure the web API
+
+Now that the web API is registered and you've defined its scopes, configure the web API code to work with your Azure AD B2C tenant. Open the *3-Authorization-II/2-call-api-b2c/API* folder with Visual Studio Code.
++
+In the sample folder, open the *config.json* file. This file contains information about your Azure AD B2C identity provider. The web API app uses this information to validate the access token that the web app passes as a bearer token. Update the following properties of the app settings:
+
+|Section |Key |Value |
+||||
+|credentials|tenantName| The first part of your Azure AD B2C [tenant name](tenant-management.md#get-your-tenant-name). For example: `contoso`.|
+|credentials|clientID| The web API application ID from step [2.1](#21-register-the-web-api-application). In the [earlier diagram](#app-registration), it's the application with **App ID: 2**.|
+|credentials| issuer| (Optional) The token issuer `iss` claim value. Azure AD B2C by default returns the token in the following format: `https://<your-tenant-name>.b2clogin.com/<your-tenant-ID>/v2.0/`. Replace `<your-tenant-name>` with the first part of your Azure AD B2C [tenant name](tenant-management.md#get-your-tenant-name). Replace `<your-tenant-ID>` with your [Azure AD B2C tenant ID](tenant-management.md#get-your-tenant-id). |
+|policies|policyName|The user flow or custom policy that you created in [step 1](#step-1-configure-your-user-flow). If your application uses multiple user flows or custom policies, specify only one. For example, use the sign-up or sign-in user flow.|
+| protectedRoutes| scopes | The scopes of your web API application registration from [step 2.5](#25-grant-permissions). |
+
+Your final configuration file should look like the following JSON:
+
+```json
+{
+ "credentials": {
+ "tenantName": "<your-tenant-name>",
+ "clientID": "<your-webapi-application-ID>",
+ "issuer": "https://<your-tenant-name>.b2clogin.com/<your-tenant-ID>/v2.0/"
+ },
+ "policies": {
+ "policyName": "b2c_1_susi"
+ },
+ "protectedRoutes": {
+ "hello": {
+ "endpoint": "/hello",
+ "scopes": ["demo.read"]
+ }
+ }
+ // More settings here
+}
+```
+
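+For context, the following is a minimal sketch of how an Express-based web API might consume these settings to validate incoming bearer tokens. It assumes the `passport-azure-ad` library and a valid (comment-free) *config.json*; the sample's actual implementation may be organized differently:
+
+```javascript
+// Hedged sketch: validate Azure AD B2C access tokens in an Express API
+// by using passport-azure-ad's BearerStrategy and the config.json values above.
+const express = require("express");
+const passport = require("passport");
+const { BearerStrategy } = require("passport-azure-ad");
+const config = require("./config.json");
+
+const bearerStrategy = new BearerStrategy(
+    {
+        identityMetadata: `https://${config.credentials.tenantName}.b2clogin.com/${config.credentials.tenantName}.onmicrosoft.com/${config.policies.policyName}/v2.0/.well-known/openid-configuration`,
+        clientID: config.credentials.clientID,
+        audience: config.credentials.clientID,
+        policyName: config.policies.policyName,
+        isB2C: true,
+        validateIssuer: true,
+        loggingLevel: "info",
+        passReqToCallback: false,
+    },
+    // Passing the decoded token as the third argument makes it available as req.authInfo.
+    (token, done) => done(null, {}, token)
+);
+
+const app = express();
+app.use(passport.initialize());
+passport.use(bearerStrategy);
+
+app.get("/hello", passport.authenticate("oauth-bearer", { session: false }), (req, res) => {
+    // A production API would also verify that req.authInfo.scp contains the expected scope.
+    res.json({ name: req.authInfo.name });
+});
+
+app.listen(5000, () => console.log("Listening on port 5000..."));
+```
+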
+## Step 5: Run the React SPA and web API
+
+You're now ready to test the React app's scoped access to the API. In this step, run both the web API and the sample React application on your local machine. Then, sign in to the React application, and select the **TodoList** button to start a request to the protected API.
+
+### Run the web API
+
+1. Open a console window and change to the directory that contains the web API sample. For example:
+
+ ```console
+ cd 3-Authorization-II/2-call-api-b2c/API
+ ```
+
+1. Run the following commands:
+
+ ```console
+ npm install && npm update
+ node index.js
+ ```
+
+ The console window displays the port number where the application is hosted:
+
+ ```console
+ Listening on port 5000...
+ ```
+
+### Run the React application
+
+1. Open another console window and change to the directory that contains the React sample. For example:
+
+ ```console
+ cd 3-Authorization-II/2-call-api-b2c/SPA
+ ```
+
+1. Run the following commands:
+
+ ```console
+ npm install && npm update
+ npm start
+ ```
+
+    The console window displays the port number where the application is hosted:
+
+ ```console
+ Listening on port 3000...
+ ```
+
+1. In your browser, go to `http://localhost:3000` to view the application.
+1. Select **Sign In**.
+
+ ![Screenshot that shows the React sample app with the login link.](./media/configure-authentication-sample-react-spa-app/sample-app-sign-in.png)
+
+1. Choose **Sign in using Popup** or **Sign in using Redirect**.
+1. Complete the sign-up or sign-in process. Upon successful sign-in, you should see your profile.
+1. From the menu, select **Hello API**.
+1. Check out the result of the REST API call. The following screenshot shows the React sample REST API return value.
+
+ ![Screenshot that shows the React sample app with the user profile, and the call to the A P I.](./media/configure-authentication-sample-react-spa-app/sample-app-call-api.png)
+
+## Deploy your application
+
+In a production application, the redirect URI for the app registration is typically a publicly accessible endpoint where your app is running, like `https://contoso.com`.
+
+You can add and modify redirect URIs in your registered applications at any time. The following restrictions apply to redirect URIs:
+
+* The reply URL must begin with the scheme `https`.
+* The reply URL is case-sensitive. Its case must match the case of the URL path of your running application.
+
+## Next steps
+
+* [Learn more about the code sample](https://github.com/Azure-Samples/ms-identity-javascript-react-tutorial)
+* [Enable authentication in your own React application](enable-authentication-react-spa-app.md)
+* [Configure authentication options in your React application](enable-authentication-react-spa-app-options.md)
+* [Enable authentication in your own web API](enable-authentication-web-api.md)
active-directory-b2c Enable Authentication React Spa App Options https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/enable-authentication-react-spa-app-options.md
+
+ Title: Enable React application options by using Azure Active Directory B2C
+description: Enable the use of React application options in several ways.
++++++ Last updated : 07/07/2022+++++
+# Configure authentication options in a React application by using Azure Active Directory B2C
+
+This article describes ways you can customize and enhance the Azure Active Directory B2C (Azure AD B2C) authentication experience for your React single-page application (SPA). Before you start, familiarize yourself with the article [Configure authentication in a sample React SPA](configure-authentication-sample-react-spa-app.md) or [Enable authentication in your own React SPA](enable-authentication-react-spa-app.md).
++
+## Sign-in and sign-out behavior
++
+You can configure your single-page application to sign in users with MSAL.js in two ways:
+
+- **Pop-up window**: The authentication happens in a pop-up window, and the state of the application is preserved. Use this approach if you don't want users to move away from your application page during authentication. There are [known issues with pop-up windows on Internet Explorer](https://github.com/AzureAD/microsoft-authentication-library-for-js/blob/dev/lib/msal-browser/docs/internet-explorer.md#popups).
+ - To sign in with pop-up windows, use the `loginPopup` method.
+ - To sign out with pop-up windows, use the `logoutPopup` method.
+- **Redirect**: The user is redirected to Azure AD B2C to complete the authentication flow. Use this approach if users have browser constraints or policies where pop-up windows are disabled.
+ - To sign in with redirection, use the `loginRedirect` method.
+ - To sign out with redirection, use the `logoutRedirect` method.
+
+The following sample demonstrates how to sign in and sign out:
+
+#### [Pop-up](#tab/popup)
++
+```javascript
+// src/components/NavigationBar.jsx
+instance.loginPopup(loginRequest)
+ .catch((error) => console.log(error))
+
+instance.logoutPopup({ postLogoutRedirectUri: "/", mainWindowRedirectUri: "/" })
+```
+
+#### [Redirect](#tab/redirect)
+
+```javascript
+// src/components/NavigationBar.jsx
+instance.loginRedirect(loginRequest)
+
+instance.logoutRedirect({ postLogoutRedirectUri: "/" })
+```
++++
+To use your custom domain and your tenant ID in the authentication URL, follow the guidance in [Enable custom domains](custom-domain.md). Open the `src/authConfig.js` MSAL configuration object and change `authorities` and `knownAuthorities` to use your custom domain name and tenant ID.
+
+The following JavaScript shows the MSAL configuration object before the change:
+
+```javascript
+const msalConfig = {
+ auth: {
+ ...
+ authority: "https://fabrikamb2c.b2clogin.com/fabrikamb2c.onmicrosoft.com/B2C_1_susi",
+ knownAuthorities: ["fabrikamb2c.b2clogin.com"],
+ ...
+ },
+ ...
+}
+```
+
+The following JavaScript shows the MSAL configuration object after the change:
+
+```javascript
+export const b2cPolicies = {
+ names: {
+ signUpSignIn: "b2c_1_susi",
+ forgotPassword: "b2c_1_reset",
+ editProfile: "b2c_1_edit_profile"
+ },
+ authorities: {
+ signUpSignIn: {
+ authority: "https://custom.domain.com/00000000-0000-0000-0000-000000000000/b2c_1_susi",
+ },
+ forgotPassword: {
+ authority: "https://custom.domain.com/00000000-0000-0000-0000-000000000000/b2c_1_reset",
+ },
+ editProfile: {
+ authority: "https://custom.domain.com/00000000-0000-0000-0000-000000000000/b2c_1_edit_profile"
+ }
+ },
+ authorityDomain: "custom.domain.com"
+}
+```
++
+1. If you use a custom policy, add the required input claim as described in [Set up direct sign-in](direct-signin.md#prepopulate-the-sign-in-name).
+1. Create or use an existing `PopupRequest` or `RedirectRequest` MSAL configuration object.
+1. Set the `loginHint` attribute with the corresponding sign-in hint.
+
+The following code snippets demonstrate how to pass the sign-in hint parameter. They use `bob@contoso.com` as the attribute value.
+
+#### [Pop-up](#tab/popup)
+
+```javascript
+// src/components/NavigationBar.jsx
+loginRequest.loginHint = "bob@contoso.com";
+instance.loginPopup(loginRequest);
+```
+
+#### [Redirect](#tab/redirect)
+
+```javascript
+// src/components/NavigationBar.jsx
+loginRequest.loginHint = "bob@contoso.com";
+instance.loginRedirect(loginRequest);
+```
+
+
+++
+1. Check the domain name of your external identity provider. For more information, see [Redirect sign-in to a social provider](direct-signin.md#redirect-sign-in-to-a-social-provider).
+1. Create or use an existing `PopupRequest` or `RedirectRequest` MSAL configuration object.
+1. Set the `domainHint` attribute with the corresponding domain hint.
+
+The following code snippets demonstrate how to pass the domain hint parameter. They use `facebook.com` as the attribute value.
+
+#### [Pop-up](#tab/popup)
+
+```javascript
+// src/components/NavigationBar.jsx
+loginRequest.domainHint = "facebook.com";
+instance.loginPopup(loginRequest);
+```
+
+#### [Redirect](#tab/redirect)
+
+```javascript
+loginRequest.domainHint = "facebook.com";
+instance.loginRedirect(loginRequest);
+```
+
+
++
+1. [Configure Language customization](language-customization.md).
+1. Create or use an existing `PopupRequest` or `RedirectRequest` MSAL configuration object with `extraQueryParameters` attributes.
+1. Add the `ui_locales` parameter with the corresponding language code to the `extraQueryParameters` attributes.
+
+The following code snippets demonstrate how to pass the `ui_locales` parameter. They use `es-es` as the attribute value.
+
+#### [Pop-up](#tab/popup)
+
+```javascript
+// src/components/NavigationBar.jsx
+loginRequest.extraQueryParameters = {"ui_locales" : "es-es"};
+instance.loginPopup(loginRequest);
+```
+
+#### [Redirect](#tab/redirect)
+
+```javascript
+loginRequest.extraQueryParameters = {"ui_locales" : "es-es"};
+instance.loginRedirect(loginRequest);
+```
+
+
+
++
+1. Configure the [ContentDefinitionParameters](customize-ui-with-html.md#configure-dynamic-custom-page-content-uri) element.
+1. Create or use an existing `PopupRequest` or `RedirectRequest` MSAL configuration object with `extraQueryParameters` attributes.
+1. Add the custom query string parameter, such as `campaignId`. Set the parameter value.
+
+The following code snippets demonstrate how to pass a custom query string parameter. They use `germany-promotion` as the attribute value.
++
+#### [Pop-up](#tab/popup)
+
+```javascript
+// src/components/NavigationBar.jsx
+loginRequest.extraQueryParameters = {"campaignId": 'germany-promotion'};
+instance.loginPopup(loginRequest);
+```
+
+#### [Redirect](#tab/redirect)
+
+```javascript
+loginRequest.extraQueryParameters = {"campaignId": 'germany-promotion'};
+instance.loginRedirect(loginRequest);
+```
+
+
+++
+1. In your custom policy, define the [technical profile of an ID token hint](id-token-hint.md).
+1. Create or use an existing `PopupRequest` or `RedirectRequest` MSAL configuration object with `extraQueryParameters` attributes.
+1. Add the `id_token_hint` parameter with the corresponding variable that stores the ID token.
+
+The following code snippets demonstrate how to define an ID token hint:
+
+#### [Pop-up](#tab/popup)
+
+```javascript
+// src/components/NavigationBar.jsx
+loginRequest.extraQueryParameters = {"id_token_hint": idToken};
+instance.loginPopup(loginRequest);
+```
+
+#### [Redirect](#tab/redirect)
+
+```javascript
+loginRequest.extraQueryParameters = {"id_token_hint": idToken};
+instance.loginRedirect(loginRequest);
+```
+
+
+++
+To configure MSAL logging, in *src/authConfig.js*, configure the following keys:
+
+- `loggerCallback` is the logger callback function.
+- `logLevel` lets you specify the level of logging. Possible values: `Error`, `Warning`, `Info`, and `Verbose`.
+- `piiLoggingEnabled` enables the logging of personal data. Possible values: `true` or `false`.
+
+The following code snippet demonstrates how to configure MSAL logging:
+
+```javascript
+export const msalConfig = {
+ ...
+ system: {
+ loggerOptions: {
+ loggerCallback: (logLevel, message, containsPii) => {
+ console.log(message);
+ },
+ logLevel: LogLevel.Verbose,
+ piiLoggingEnabled: false
+ }
+ }
+ ...
+}
+```
+
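+In this snippet, `LogLevel` is assumed to be imported from `@azure/msal-browser` (for example, `import { LogLevel } from "@azure/msal-browser";`); add that import to *src/authConfig.js* if it isn't already there.
+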
+## Next steps
+
+- Learn more: [MSAL.js configuration options](https://github.com/AzureAD/microsoft-authentication-library-for-js/tree/dev/lib/msal-react/docs).
+
active-directory-b2c Enable Authentication React Spa App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/enable-authentication-react-spa-app.md
+
+ Title: Enable authentication in a React application by using Azure Active Directory B2C building blocks
+description: Use the building blocks of Azure Active Directory B2C to sign in and sign up users in a React application.
++++++ Last updated : 07/07/2022+++++
+# Enable authentication in your own React application by using Azure Active Directory B2C
+
+This article shows you how to add Azure Active Directory B2C (Azure AD B2C) authentication to your own React single-page application (SPA). Learn how to integrate a React application with the [MSAL for React](https://www.npmjs.com/package/@azure/msal-react) authentication library.
+
+Use this article with the related article titled [Configure authentication in a sample React single-page application](./configure-authentication-sample-react-spa-app.md). Substitute the sample React app with your own React app. After you complete the steps in this article, your application will accept sign-ins via Azure AD B2C.
+
+## Prerequisites
+
+Review the prerequisites and integration steps in the [Configure authentication in a sample React single-page application](configure-authentication-sample-react-spa-app.md) article.
+
+## Step 1: Create a React app project
+
+You can use an existing React app, or [create a new React App](https://reactjs.org/docs/create-a-new-react-app.html). To create a new project, run the following commands in your command shell:
++
+```
+npx create-react-app my-app
+cd my-app
+npm start
+```
+
+## Step 2: Install the dependencies
+
+To install the [MSAL Browser](https://www.npmjs.com/package/@azure/msal-browser) and [MSAL React](https://www.npmjs.com/package/@azure/msal-react) libraries in your application, run the following command in your command shell:
+
+```
+npm i @azure/msal-browser @azure/msal-react
+```
+
+Install version 5.* of the [react-router-dom](https://www.npmjs.com/package/react-router-dom) package. The react-router-dom package contains bindings for using React Router in web applications. Run the following command in your command shell:
+
+```
+npm i react-router-dom@5.3.3
+```
++
+Install the [Bootstrap for React](https://www.npmjs.com/package/react-bootstrap) (optional, for UI):
+
+```
+npm i bootstrap react-bootstrap
+```
+
+## Step 3: Add the authentication components
+
+The sample code is made up of the following components. Add these components from the sample React app to your own app:
+
+- [public/index.html](https://github.com/Azure-Samples/ms-identity-javascript-react-tutorial/blob/main/3-Authorization-II/2-call-api-b2c/SPA/public/index.html) - The [bundling process](https://reactjs.org/docs/code-splitting.html) uses this file as a template and injects the React components into the `<div id="root">` element. If you open it directly in the browser, you'll see an empty page.
+
+- [src/authConfig.js](https://github.com/Azure-Samples/ms-identity-javascript-react-tutorial/blob/main/3-Authorization-II/2-call-api-b2c/SPA/src/authConfig.js) - A configuration file that contains information about your Azure AD B2C identity provider and the web API service. The React app uses this information to establish a trust relationship with Azure AD B2C, sign in and sign out the user, acquire tokens, and validate the tokens.
+
+- [src/index.js](https://github.com/Azure-Samples/ms-identity-javascript-react-tutorial/blob/main/3-Authorization-II/2-call-api-b2c/SPA/src/index.js) - The JavaScript entry point to your application. This JavaScript file:
+  - Mounts the `App` as the root component into the *public/index.html* page's `<div id="root">` element.
+  - Initializes the MSAL `PublicClientApplication` library with the configuration defined in the *authConfig.js* file. The `PublicClientApplication` instance should be created outside of the component tree to prevent it from being reinstantiated on re-renders.
+  - After instantiation of the MSAL library, the JavaScript code passes the `msalInstance` as a prop to your application components. For example, `<App instance={msalInstance} />`.
+
+- [src/App.jsx](https://github.com/Azure-Samples/ms-identity-javascript-react-tutorial/blob/main/3-Authorization-II/2-call-api-b2c/SPA/src/App.jsx) - Defines the **App** and **Pages** components:
+
+  - The **App** component is the top-level component of your app. It wraps everything in the `MsalProvider` component. All components underneath `MsalProvider` have access to the `PublicClientApplication` instance via context, along with all hooks and components provided by MSAL React. The App component mounts the [PageLayout](https://github.com/Azure-Samples/ms-identity-javascript-react-tutorial/blob/main/3-Authorization-II/2-call-api-b2c/SPA/src/components/PageLayout.jsx) and its [Pages](https://github.com/Azure-Samples/ms-identity-javascript-react-tutorial/blob/main/3-Authorization-II/2-call-api-b2c/SPA/src/App.jsx#L18) child element.
+
+  - The **Pages** component registers and unregisters the MSAL event callbacks. The events are used to handle MSAL errors. It also defines the routing logic of the app.
+
+ > [!IMPORTANT]
+ > If the App component file name is `App.js`, change it to `App.jsx`.
+
+- [src/pages/Hello.jsx](https://github.com/Azure-Samples/ms-identity-javascript-react-tutorial/blob/main/3-Authorization-II/2-call-api-b2c/SPA/src/pages/Hello.jsx) - Demonstrates how to call a protected resource with an OAuth 2.0 bearer token.
+  - It uses the [useMsal](https://github.com/AzureAD/microsoft-authentication-library-for-js/blob/dev/lib/msal-react/docs/hooks.md) hook, which returns the `PublicClientApplication` instance.
+  - With that instance, it acquires an access token to call the REST API.
+  - It invokes the [callApiWithToken](https://github.com/Azure-Samples/ms-identity-javascript-react-tutorial/blob/main/3-Authorization-II/2-call-api-b2c/SPA/src/fetch.js) function to fetch the data from the REST API and renders the result by using the **DataDisplay** component.
+
+- [src/components/NavigationBar.jsx](https://github.com/Azure-Samples/ms-identity-javascript-react-tutorial/blob/main/3-Authorization-II/2-call-api-b2c/SPA/src/components/NavigationBar.jsx) - The app's top navigation bar with the sign-in, sign-out, edit profile, and call REST API buttons (a simplified sketch appears after this list).
+  - It uses the [AuthenticatedTemplate](https://github.com/AzureAD/microsoft-authentication-library-for-js/blob/dev/lib/msal-react/docs/getting-started.md#authenticatedtemplate-and-unauthenticatedtemplate) and `UnauthenticatedTemplate` components, which render their children only if a user is authenticated or unauthenticated, respectively.
+  - It handles sign-in and sign-out with both redirect and pop-up window flows.
+
+- [src/components/PageLayout.jsx](https://github.com/Azure-Samples/ms-identity-javascript-react-tutorial/blob/main/3-Authorization-II/2-call-api-b2c/SPA/src/components/PageLayout.jsx)
+ - The common layout that provides the user with a consistent experience as they navigate from page to page. The layout includes common user interface elements such as the app header, **NavigationBar** component, footer and its child components.
+ - It uses the [AuthenticatedTemplate](https://github.com/AzureAD/microsoft-authentication-library-for-js/blob/dev/lib/msal-react/docs/getting-started.md#authenticatedtemplate-and-unauthenticatedtemplate) which renders its children only if a user is authenticated.
+
+- [src/components/DataDisplay.jsx](https://github.com/Azure-Samples/ms-identity-javascript-react-tutorial/blob/main/3-Authorization-II/2-call-api-b2c/SPA/src/components/DataDisplay.jsx) - Renders the data returned from the REST API call.
+
+- [src/styles/App.css](https://github.com/Azure-Samples/ms-identity-javascript-react-tutorial/blob/main/3-Authorization-II/2-call-api-b2c/SPA/src/styles/App.css) and [src/styles/index.css](https://github.com/Azure-Samples/ms-identity-javascript-react-tutorial/blob/main/3-Authorization-II/2-call-api-b2c/SPA/src/styles/index.css) - CSS styling files for the app.
+
+- [src/fetch.js](https://github.com/Azure-Samples/ms-identity-javascript-react-tutorial/blob/main/3-Authorization-II/2-call-api-b2c/SPA/src/fetch.js) - Makes the HTTP requests to the REST API.
+
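+To make the **NavigationBar** pattern above concrete, here's a simplified sketch. It isn't copied from the sample; the button labels, the `loginRequest` import, and the component structure are illustrative:
+
+```javascript
+// Illustrative sketch: render different actions depending on whether a user is signed in.
+import React from "react";
+import { AuthenticatedTemplate, UnauthenticatedTemplate, useMsal } from "@azure/msal-react";
+import { loginRequest } from "../authConfig"; // assumed export; adjust to your configuration
+
+export const NavigationBar = () => {
+    const { instance } = useMsal();
+
+    return (
+        <nav>
+            <AuthenticatedTemplate>
+                {/* Rendered only for signed-in users. */}
+                <button onClick={() => instance.logoutRedirect({ postLogoutRedirectUri: "/" })}>
+                    Sign out
+                </button>
+            </AuthenticatedTemplate>
+            <UnauthenticatedTemplate>
+                {/* Rendered only for anonymous users. */}
+                <button onClick={() => instance.loginRedirect(loginRequest)}>
+                    Sign in
+                </button>
+            </UnauthenticatedTemplate>
+        </nav>
+    );
+};
+```
+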
+## Step 4: Configure your React app
+
+After you [add the authentication components](#step-3-add-the-authentication-components), configure your React app with your Azure AD B2C settings. The Azure AD B2C identity provider settings are configured in the *src/authConfig.js* file.
+
+For guidance, see [Configure the React sample](configure-authentication-sample-react-spa-app.md#31-configure-the-react-sample).
+
+## Step 5: Run the React application
+
+1. From Visual Studio Code, open a new terminal and run the following commands:
+
+ ```console
+ npm install && npm update
+ npm start
+ ```
+
+    The console window displays the port number where the application is hosted:
+
+ ```console
+ Listening on port 3000...
+ ```
+
+1. To call the REST API, follow the guidance in [Run the web API](configure-authentication-sample-react-spa-app.md#run-the-web-api).
+
+1. In your browser, go to `http://localhost:3000` to view the application.
++
+## Next steps
+
+- [Configure authentication options in your own React application by using Azure AD B2C](enable-authentication-react-spa-app-options.md)
+- Check out the [MSAL for React documentation](https://github.com/AzureAD/microsoft-authentication-library-for-js/tree/dev/lib/msal-react/docs)
+- [Enable authentication in your own web API](enable-authentication-web-api.md)
active-directory Datawiza With Azure Ad https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/datawiza-with-azure-ad.md
-# Tutorial: Configure Datawiza with Azure Active Directory for secure hybrid access
+# Tutorial: Configure Secure Hybrid Access with Azure Active Directory and Datawiza
-In this sample tutorial, learn how to integrate Azure Active Directory (Azure AD) with [Datawiza](https://www.datawiza.com/) for secure hybrid access.
+Datawiza's [Datawiza Access Broker (DAB)](https://www.datawiza.com/access-broker) extends Azure AD to enable single sign-on (SSO) and provide granular access controls to protect on-premises and cloud-hosted applications, such as Oracle E-Business Suite, Microsoft IIS, and SAP. By using this solution, enterprises can quickly transition from legacy web access managers (WAMs), such as Symantec SiteMinder, NetIQ, Oracle, and IBM, to Azure AD without rewriting applications. Enterprises can also use Datawiza as a no-code or low-code solution to integrate new applications to Azure AD. This approach enables enterprises to implement their Zero Trust strategy while saving engineering time and reducing costs.
-Datawiza's [Datawiza Access Broker (DAB)](https://www.datawiza.com/access-broker) extends Azure AD to enable single sign-on (SSO) and provide granular access controls to protect on-premises and cloud-hosted applications, such as Oracle E-Business Suite, Microsoft IIS, and SAP.
+In this tutorial, learn how to integrate Azure Active Directory (Azure AD) with [Datawiza](https://www.datawiza.com/) for [hybrid access](../devices/concept-azure-ad-join-hybrid.md).
-By using this solution, enterprises can quickly transition from legacy web access managers (WAMs), such as Symantec SiteMinder, NetIQ, Oracle, and IBM, to Azure AD without rewriting applications. Enterprises can also use Datawiza as a no-code or low-code solution to integrate new applications to Azure AD. This approach saves engineering time, reduces cost significantly, and delivers the project in a secured manner.
-
-## Prerequisites
-
-To get started, you need:
-- An Azure subscription. If you don't have a subscription, you can get a [trial account](https://azure.microsoft.com/free/).
-- An [Azure AD tenant](../fundamentals/active-directory-access-create-new-tenant.md)
-that's linked to your Azure subscription.
-- [Docker](https://docs.docker.com/get-docker/) and [docker-compose](https://docs.docker.com/compose/install/), which are required to run DAB. Your applications can run on any platform, such as a virtual machine and bare metal.
-- An application that you'll transition from a legacy identity system to Azure AD. In this example, DAB is deployed on the same server as the application. The application runs on localhost: 3001, and DAB proxies traffic to the application via localhost: 9772. The traffic to the application reaches DAB first and is then proxied to the application.
-## Scenario description
+## Datawiza with Azure AD Authentication Architecture
Datawiza integration includes the following components:
Datawiza integration includes the following components:
- Datawiza Cloud Management Console (DCMC) - A centralized management console that manages DAB. DCMC provides UI and RESTful APIs for administrators to manage the DAB configuration and access control policies.
-The following architecture diagram shows the implementation.
+The following diagram describes the authentication architecture orchestrated by Datawiza in a hybrid environment.
![Architecture diagram that shows the authentication process that gives a user access to an on-premises application.](./media/datawiza-with-azure-active-directory/datawiza-architecture-diagram.png)
The following architecture diagram shows the implementation.
| 4. | DAB evaluates access policies and calculates attribute values to be included in HTTP headers forwarded to the application. During this step, DAB may call out to the identity provider to retrieve the information needed to set the header values correctly. DAB sets the header values and sends the request to the application. |
| 5. | The user is authenticated and has access to the application.|
-## Onboard with Datawiza
+## Prerequisites
-To integrate your on-premises or cloud-hosted application with Azure AD, sign in to [Datawiza Cloud Management
-Console](https://console.datawiza.com/) (DCMC).
+To get started, you need:
-## Create an application on DCMC
+- An Azure subscription. If you don't have a subscription, you can get a [trial account](https://azure.microsoft.com/free/).
-In the next step, you create an application on DCMC and generate a key pair for the app. The key pair consists of a `PROVISIONING_KEY` and `PROVISIONING_SECRET`. To create the app and generate the key pair, follow the instructions in [Datawiza Cloud Management Console](https://docs.datawiza.com/step-by-step/step2.html).
+- An [Azure AD tenant](../fundamentals/active-directory-access-create-new-tenant.md)
+that's linked to your Azure subscription.
-For Azure AD, Datawiza offers a convenient [one-click integration](https://docs.datawiza.com/tutorial/web-app-azure-one-click.html). This method to integrate Azure AD with DCMC can create an application registration on your behalf in your Azure AD tenant.
+- [Docker](https://docs.docker.com/get-docker/) and [docker-compose](https://docs.docker.com/compose/install/), which are required to run DAB. Your applications can run on any platform, such as a virtual machine and bare metal.
-![Screenshot of the Datawiza Configure I D P page. Boxes for name, protocol, and other values are visible. An automatic generator option is turned on.](./media/datawiza-with-azure-active-directory/configure-idp.png)
+- An on-premises or cloud-hosted application that you'll transition from a legacy identity system to Azure AD. In this example, DAB is deployed on the same server as the application. The application runs on localhost: 3001, and DAB proxies traffic to the application via localhost: 9772. The traffic to the application reaches DAB first and is then proxied to the application.
-Instead, if you want to use an existing web application in your Azure AD tenant, you can disable the option and populate the fields of the form. You need the tenant ID, client ID, and client secret. For more information about creating a web application and getting these values, see [Microsoft Azure AD in the Datawiza documentation](https://docs.datawiza.com/idp/azure.html).
+## Configure Datawiza Cloud Management Console
-![Screenshot of the Datawiza Configure I D P page. Boxes for name, protocol, and other values are visible. An automatic generator option is turned off.](./media/datawiza-with-azure-active-directory/use-form.png)
+1. Sign in to [Datawiza Cloud Management Console](https://console.datawiza.com/) (DCMC).
-## Run DAB with a header-based application
+2. Create an application on DCMC and generate a key pair for the app. The key pair consists of a `PROVISIONING_KEY` and `PROVISIONING_SECRET`. To create the app and generate the key pair, follow the instructions in [Datawiza Cloud Management Console](https://docs.datawiza.com/step-by-step/step2.html).
-You can use either Docker or Kubernetes to run DAB. The docker image is needed to create a sample header-based application.
+3. Register your application in Azure AD by using Datawiza's convenient [one-click integration](https://docs.datawiza.com/tutorial/web-app-azure-one-click.html).
-To run DAB with a header-based application, follow these steps:
+![Screenshot of the Datawiza Configure I D P page. Boxes for name, protocol, and other values are visible. An automatic generator option is turned on.](./media/datawiza-with-azure-active-directory/configure-idp.png)
+
+To use an existing web application, you can manually populate the fields of the form. You'll need the tenant ID, client ID, and client secret. For more information about creating a web application and getting these values, see [Microsoft Azure AD in the Datawiza documentation](https://docs.datawiza.com/idp/azure.html).
+
+![Screenshot of the Datawiza Configure I D P page. Boxes for name, protocol, and other values are visible. An automatic generator option is turned off.](./media/datawiza-with-azure-active-directory/use-form.png)
-1. Use either Docker or Kubernetes to run DAB:
+4. Run DAB by using either Docker or Kubernetes. The Docker image is needed to create a sample header-based application.
- For Docker-specific instructions, see [Deploy Datawiza Access Broker With Your App](https://docs.datawiza.com/step-by-step/step3.html). - For Kubernetes-specific instructions, see [Deploy Datawiza Access Broker with a Web App using Kubernetes](https://docs.datawiza.com/tutorial/web-app-AKS.html).
To run DAB with a header-based application, follow these steps:
- "3001:3001" ```
-1. To sign in to the container registry and download the images of DAB and the header-based application, follow the instructions in [Important Step](https://docs.datawiza.com/step-by-step/step3.html#important-step).
+5. Sign in to the container registry and download the images of DAB and the header-based application by following the instructions in this [Important Step](https://docs.datawiza.com/step-by-step/step3.html#important-step).
-1. Run the following command:
+6. Run the following command:
   `docker-compose -f docker-compose.yml up`

   The header-based application should now have SSO enabled with Azure AD.
-1. In a browser, go to `http://localhost:9772/`. An Azure AD sign-in page appears.
-
-## Pass user attributes to the header-based application
-
-DAB gets user attributes from Azure AD and can pass these attributes to the application via a header or cookie.
+7. In a browser, go to `http://localhost:9772/`. An Azure AD sign-in page appears.
-To pass user attributes such as an email address, a first name, and a last name to the header-based application, follow the instructions in [Pass User Attributes](https://docs.datawiza.com/step-by-step/step4.html).
+8. Pass user attributes to the header-based application. DAB gets user attributes from Azure AD and can pass these attributes to the application via a header or cookie. To pass user attributes such as an email address, a first name, and a last name to the header-based application, follow the instructions in [Pass User Attributes](https://docs.datawiza.com/step-by-step/step4.html).
-After successfully configuring the user attributes, you should see a green check mark next to each attribute.
+9. Confirm you have successfully configured user attributes by observing a green check mark next to each attribute.
![Screenshot that shows the Datawiza application home page. Green check marks are visible next to the host, email, firstname, and lastname attributes.](./media/datawiza-with-azure-active-directory/datawiza-application-home-page.png)
After successfully configuring the user attributes, you should see a green check
1. Go to the application URL. DAB should redirect you to the Azure AD sign-in page.
-1. After successfully authenticating, you should be redirected to DAB.
+2. After successfully authenticating, you should be redirected to DAB.
DAB evaluates policies, calculates headers, and sends you to the upstream application. Your requested application should appear.
DAB evaluates policies, calculates headers, and sends you to the upstream applic
- [Configure Datawiza with Azure AD B2C](../../active-directory-b2c/partner-datawiza.md) -- [Datawiza documentation](https://docs.datawiza.com)
+- [Datawiza documentation](https://docs.datawiza.com)
active-directory How To View Associated Resources For An Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/managed-identities-azure-resources/how-to-view-associated-resources-for-an-identity.md
+
+ Title: View associated resources for a user-assigned managed identity
+description: Step-by-step instructions for viewing the Azure resources that are associated with a user-assigned managed identity
+
+documentationcenter: ''
++
+editor: ''
+++
+ms.devlang: na
+
+ na
+ Last updated : 06/20/2022++++
+# View associated Azure resources for a user-assigned managed identity (Preview)
+
+This article will explain how to view the Azure resources that are associated with a user-assigned managed identity. This feature is available in public preview.
+
+## Prerequisites
+
+- If you're unfamiliar with managed identities for Azure resources, check out the [overview section](overview.md).
+- If you don't already have an Azure account, [sign up for a free account](https://azure.microsoft.com/free/).
++
+## View resources for a user-assigned managed identity
+
+Being able to quickly see which Azure resources are associated with a user-assigned managed identity gives you greater visibility into your environment. You can quickly identify unused identities that can be safely deleted, and know which resources will be affected by changing the permissions or group membership of a managed identity.
+
+### Portal
+
+- From the **Azure portal**, search for **Managed Identities**.
+- Select a managed identity.
+- In the left-hand menu, select the **Associated resources** link.
+- A list of the Azure resources associated with the managed identity is displayed.
++
+Select the resource name to be brought to its summary page.
+
+#### Filtering and sorting by resource type
+Filter the resources by typing in the filter box at the top of the summary page. You can filter by the name, type, resource group, and subscription ID.
+
+Select the column title to sort alphabetically, ascending or descending.
+
+### REST API
+
+The list of associated resources can also be accessed by using the REST API. This endpoint is separate from the API endpoint used to retrieve a list of user-assigned managed identities. You'll need the following information:
+ - Subscription ID
+ - Resource name of the user-assigned managed identity that you want to view the resources for
+ - Resource group of the user-assigned managed identity
+
+*Request format*
+```
+https://management.azure.com{resourceID of user-assigned identity}/listAssociatedResources?$filter={filter}&$orderby={orderby}&$skip={skip}&$top={top}&$skiptoken={skiptoken}&api-version=2021-09-30-preview
+```
+
+*Parameters*
+
+| Parameter | Example |Description |
+||||
+| $filter | ```'type' eq 'microsoft.cognitiveservices/account' and contains(name, 'test')``` | An OData expression that allows you to filter any of the available fields: name, type, resourceGroup, subscriptionId, subscriptionDisplayName<br/><br/>The following operations are supported: ```and```, ```or```, ```eq``` and ```contains``` |
+| $orderby | ```name asc``` | An OData expression that allows you to order by any of the available fields |
+| $skip | 50 | The number of items you want to skip while paging through the results. |
+| $top | 10 | The number of resources to return. 0 will return only a count of the resources. |
+
+Below is a sample request to the REST API:
+```http
+POST https://management.azure.com/subscriptions/aab111d1-1111-43e2-8d11-3bfc47ab8111/resourceGroups/devrg/providers/Microsoft.ManagedIdentity/userAssignedIdentities/devIdentity/listAssociatedResources?$filter={filter}&$orderby={orderby}&$skip={skip}&$top={top}&skipToken={skipToken}&api-version=2021-09-30-preview
+```
+
+Below is a sample response from the REST API:
+```json
+{
+ "totalCount": 2,
+ "value": [{
+ "id": "/subscriptions/{subId}/resourceGroups/testrg/providers/Microsoft.CognitiveServices/accounts/test1",
+ "name": "test1",
+ "type": "microsoft.cognitiveservices/accounts",
+ "resourceGroup": "testrg",
+ "subscriptionId": "{subId}",
+ "subscriptionDisplayName": "TestSubscription"
+ },
+ {
+ "id": "/subscriptions/{subId}/resourceGroups/testrg/providers/Microsoft.CognitiveServices/accounts/test2",
+ "name": "test2",
+ "type": "microsoft.cognitiveservices/accounts",
+ "resourceGroup": "testrg",
+ "subscriptionId": "{subId}",
+ "subscriptionDisplayName": "TestSubscription"
+ }
+ ],
+ "nextLink": "https://management.azure.com/subscriptions/{subId}/resourceGroups/testrg/providers/Microsoft.ManagedIdentity/userAssignedIdentities/testid?skiptoken=ew0KICAiJGlkIjogIjEiLA0KICAiTWF4Um93cyI6IDIsDQogICJSb3dzVG9Ta2lwIjogMiwNCiAgIkt1c3RvQ2x1c3RlclVybCI6ICJodHRwczovL2FybXRvcG9sb2d5Lmt1c3RvLndpbmRvd3MubmV0Ig0KfQ%253d%253d&api-version=2021"
+}
+
+```
+
+### Command Line Interface
+To view the associated resources for a user-assigned managed identity, run the following command:
+```azurecli
+az identity list-resources --resource-group <ResourceGroupName> --name <ManagedIdentityName>
+```
+
+The response will look like this:
+```json
+[
+ {
+ "id": "/subscriptions/XXXX-XXXX-XXXX-XXXX-XXXfc47ab8130/resourceGroups/ProductionServices/providers/Microsoft.Compute/virtualMachines/linux-prod-1-US",
+ "name": "linux-prod-1-US",
+ "resourceGroup": "productionservices",
+ "subscriptionDisplayName": "Visual Studio Enterprise Subscription",
+ "subscriptionId": "XXXX-XXXX-XXXX-XXXX-XXXfc47ab8130",
+ "type": "microsoft.compute/virtualmachines"
+ },
+ {
+ "id": "/subscriptions/XXXX-XXXX-XXXX-XXXX-XXXfc47ab8130/resourceGroups/ProductionServices/providers/Microsoft.Web/sites/prodStatusCheck-US",
+ "name": "prodStatusCheck-US",
+ "resourceGroup": "productionservices",
+ "subscriptionDisplayName": "Visual Studio Enterprise Subscription",
+ "subscriptionId": "XXXX-XXXX-XXXX-XXXX-XXXfc47ab8130",
+ "type": "microsoft.web/sites"
+ },
+ {
+ "id": "/subscriptions/XXXX-XXXX-XXXX-XXXX-XXXfc47ab8130/resourceGroups/ProductionServices/providers/Microsoft.Web/sites/salesApp-US-1",
+ "name": "salesApp-US-1",
+ "resourceGroup": "productionservices",
+ "subscriptionDisplayName": "Visual Studio Enterprise Subscription",
+ "subscriptionId": "XXXX-XXXX-XXXX-XXXX-XXXfc47ab8130",
+ "type": "microsoft.web/sites"
+ },
+ {
+ "id": "/subscriptions/XXXX-XXXX-XXXX-XXXX-XXXfc47ab8130/resourceGroups/ProductionServices/providers/Microsoft.Web/sites/salesPortal-us-2",
+ "name": "salesPortal-us-2",
+ "resourceGroup": "productionservices",
+ "subscriptionDisplayName": "Visual Studio Enterprise Subscription",
+ "subscriptionId": "XXXX-XXXX-XXXX-XXXX-XXXfc47ab8130",
+ "type": "microsoft.web/sites"
+ },
+ {
+ "id": "/subscriptions/XXXX-XXXX-XXXX-XXXX-XXXfc47ab8130/resourceGroups/vmss/providers/Microsoft.Compute/virtualMachineScaleSets/vmsstest",
+ "name": "vmsstest",
+ "resourceGroup": "vmss",
+ "subscriptionDisplayName": "Visual Studio Enterprise Subscription",
+ "subscriptionId": "XXXX-XXXX-XXXX-XXXX-XXXfc47ab8130",
+ "type": "microsoft.compute/virtualmachinescalesets"
+ }
+]
+```
+
+### REST API using PowerShell
+There's no specific PowerShell command for returning the associated resources of a managed identity, but you can use the REST API in PowerShell by using the following command:
+
+```PowerShell
+Invoke-AzRestMethod -Path "/subscriptions/XXX-XXX-XXX-XXX/resourceGroups/test-rg/providers/Microsoft.ManagedIdentity/userAssignedIdentities/test-identity-name/listAssociatedResources?api-version=2021-09-30-PREVIEW&%24orderby=name%20asc&%24skip=0&%24top=100" -Method Post
+```
+
+>[!NOTE]
+> All resources associated with an identity will be returned, regardless of the user's permissions. The user only needs to have access to read the managed identity. This means that more resources may be visible than the user can see elsewhere in the portal. This is to provide full visibility of the identity's usage. If the user doesn't have access to an associated resource, an error will be displayed if they try to access it from the list.
+
+## Delete a user-assigned managed identity
+When you select the delete button for a user-assigned managed identity, you'll see a list of up to 10 associated resources for that identity. The full count will be displayed at the top of the pane. This list allows you to see which resources will be affected by deleting the identity. You'll be asked to confirm your decision.
++
+This confirmation process is only available in the portal. To view an identity's resources before deleting it using the REST API, retrieve the list of resources manually in advance.
+
+## Limitations
+ - This functionality is available in all public regions, and will be available in USGov and China in the coming weeks.
+ - API requests for associated resources are limited to one per second per tenant. If you exceed this limit, you may receive a `HTTP 429` error. This limit doesn't apply to retrieving a list of user-assigned managed identities.
+ - Azure resource types that are in preview, or whose support for managed identities is in preview, may not appear in the associated resources list until they're fully generally available. This list includes Service Fabric clusters, Blueprints, and Machine Learning services.
+ - This functionality is limited to tenants with fewer than 5,000 subscriptions. An error will be displayed if the tenant has more than 5,000 subscriptions.
+ - The list of associated resources displays the resource type, not the display name.
+ - Azure Policy assignments appear in the list, but their names aren't displayed correctly.
+ - This functionality isn't yet available through PowerShell.
+
+## Next steps
+
+* [Managed identities for Azure resources](./overview.md)
aks Dapr https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/dapr.md
The Dapr extension support varies depending on how you manage the runtime.
**Self-managed** For self-managed runtime, the Dapr extension supports:-- [The latest version of Dapr and 1 previous version (N-1)][dapr-supported-version]
+- [The latest version of Dapr and 2 previous versions (N-2)][dapr-supported-version]
- Upgrading minor version incrementally (for example, 1.5 -> 1.6 -> 1.7) Self-managed runtime requires manual upgrade to remain in the support window. To upgrade Dapr via the extension, follow the [Update extension instance instructions][update-extension].
api-management Virtual Network Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/virtual-network-reference.md
The following IP addresses are divided by **Azure Environment**. When allowing i
| Azure Public| South Central US| 20.188.77.119, 20.97.32.190| | Azure Public| South India| 20.44.33.246| | Azure Public| Southeast Asia| 40.90.185.46|
-| Azure Public| Switzerland North| 51.107.0.91|
+| Azure Public| Switzerland North| 51.107.246.176, 51.107.0.91|
| Azure Public| Switzerland West| 51.107.96.8| | Azure Public| UAE Central| 20.37.81.41| | Azure Public| UAE North| 20.46.144.85|
app-service Configure Connect To Azure Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/configure-connect-to-azure-storage.md
The following features are supported for Linux containers:
| **Storage accounts** | Azure Storage account. It must contain an Azure Files share. | | **Share name** | Files share to mount. | | **Access key** (Advanced only) | [Access key](../storage/common/storage-account-keys-manage.md) for your storage account. |
- | **Mount path** | Directory inside your Windows container that you want to mount. Do not use a root directory (`[C-Z]:\` or `/`) or the `home` directory (`[C-Z]:\home`, or `/home`).|
+ | **Mount path** | Directory inside your Windows container that you want to mount. Do not use a root directory (`[C-Z]:\` or `/`) or the `home` directory (`[C-Z]:\home` or `/home`), as they're not supported.|
::: zone-end ::: zone pivot="container-linux" | Setting | Description |
application-gateway Mutual Authentication Troubleshooting https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/mutual-authentication-troubleshooting.md
There are a number of potential causes for failures in the access logs. Below is
* **Unable to get local issuer certificate:** Similar to unable to get issuer certificate, the issuer certificate of the client certificate couldn't be found. This normally means the trusted client CA certificate chain is not complete on the Application Gateway. Validate that the trusted client CA certificate chain uploaded on the Application Gateway is complete. * **Unable to verify the first certificate:** Unable to verify the client certificate. This error occurs specifically when the client presents only the leaf certificate, whose issuer is not trusted. Validate that the trusted client CA certificate chain uploaded on the Application Gateway is complete. * **Unable to verify the client certificate issuer:** This error occurs when the configuration *VerifyClientCertIssuerDN* is set to true. This typically happens when the Issuer DN of the client certificate doesn't match the *ClientCertificateIssuerDN* extracted from the trusted client CA certificate chain uploaded by the customer. For more information about how Application Gateway extracts the *ClientCertificateIssuerDN*, check out [Application Gateway extracting issuer DN](./mutual-authentication-overview.md#verify-client-certificate-dn). As best practice, make sure you're uploading one certificate chain per file to Application Gateway.
+* **Unsupported certificate purpose:** Ensure the client certificate designates Extended Key Usage for Client Authentication ([1.3.6.1.5.5.7.3.2](https://oidref.com/1.3.6.1.5.5.7.3.2)). For more information about the definition of extended key usage and the object identifier for client authentication, see [RFC 3280](https://www.rfc-editor.org/rfc/rfc3280) and [RFC 5280](https://www.rfc-editor.org/rfc/rfc5280).
For more information on how to extract the entire trusted client CA certificate chain to upload to Application Gateway, see [how to extract trusted client CA certificate chains](./mutual-authentication-certificate-management.md).
applied-ai-services Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/whats-new.md
Form Recognizer service is updated on an ongoing basis. Bookmark this page to st
The June release is the latest update to the Form Recognizer Studio. This update addresses considerable UX and accessibility improvements:
-* 🆕 **Code sample for Javascript and C#**. Studio code tab now includes sample codes written in Javascript and C# in addition to the already existing Python code.
-* 🆕 **New document upload UI**. Studio now supports uploading a document with drag & drop into the new upload UI.
-* 🆕 **New feature for custom projects**. Custom projects now support creating storage account and file directories when configuring the project. In addition, custom project now supports uploading training files directly within the Studio and copying the existing custom model.
+* 🆕 **Code sample for JavaScript and C#**. The Studio code tab now includes JavaScript and C# code samples in addition to the existing Python one.
+* 🆕 **New document upload UI**. Studio now supports uploading a document with drag & drop into the new upload user interface.
+* 🆕 **New feature for custom projects**. Custom projects now support creating a storage account and blobs when configuring the project. In addition, custom projects now support uploading training files directly within the Studio and copying an existing custom model.
### Form Recognizer v3.0 preview release
automation Source Control Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/source-control-integration.md
Azure Automation supports three types of source control:
> > If you have both a Run As account and managed identity enabled, then managed identity is given preference. If you want to use a Run As account instead, you can [create an Automation variable](./shared-resources/variables.md) of BOOLEAN type named `AUTOMATION_SC_USE_RUNAS` with a value of `true`.
+> [!NOTE]
+> According to [this](https://docs.microsoft.com/azure/devops/organizations/accounts/change-application-access-policies?view=azure-devops#application-connection-policies) Azure DevOps documentation, the **Third-party application access via OAuth** policy defaults to **off** for all new organizations. If you configure source control in Azure Automation with **Azure DevOps (Git)** as the source control type without enabling **Third-party application access via OAuth** under the Policies tile of Organization Settings in Azure DevOps, you might get a **SourceControl securityToken is invalid** error. To avoid this error, make sure you first enable **Third-party application access via OAuth** under the Policies tile of Organization Settings in Azure DevOps.
+ ## Configure source control This section tells how to configure source control for your Automation account. You can use either the Azure portal or PowerShell.
Currently, you can't use the Azure portal to update the PAT in source control. W
## Next steps * For integrating source control in Azure Automation, see [Azure Automation: Source Control Integration in Azure Automation](https://azure.microsoft.com/blog/azure-automation-source-control-13/).
-* For integrating runbook source control with Visual Studio Codespaces, see [Azure Automation: Integrating Runbook Source Control using Visual Studio Codespaces](https://azure.microsoft.com/blog/azure-automation-integrating-runbook-source-control-using-visual-studio-online/).
+* For integrating runbook source control with Visual Studio Codespaces, see [Azure Automation: Integrating Runbook Source Control using Visual Studio Codespaces](https://azure.microsoft.com/blog/azure-automation-integrating-runbook-source-control-using-visual-studio-online/).
azure-arc Azure Rbac https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/azure-rbac.md
A conceptual overview of this feature is available in the [Azure RBAC on Azure A
- [Upgrade your agents](agent-upgrade.md#manually-upgrade-agents) to version 1.1.0 or later. > [!NOTE]
-> You can't set up this feature for managed Kubernetes offerings of cloud providers like Elastic Kubernetes Service or Google Kubernetes Engine where the user doesn't have access to the API server of the cluster. For Azure Kubernetes Service (AKS) clusters, this [feature is available natively](../../aks/manage-azure-rbac.md) and doesn't require the AKS cluster to be connected to Azure Arc.
+> You can't set up this feature for managed Kubernetes offerings of cloud providers like Elastic Kubernetes Service or Google Kubernetes Engine where the user doesn't have access to the API server of the cluster. For Azure Kubernetes Service (AKS) clusters, this [feature is available natively](../../aks/manage-azure-rbac.md) and doesn't require the AKS cluster to be connected to Azure Arc. This feature isn't supported on AKS on Azure Stack HCI.
## Set up Azure AD applications
azure-arc Cluster Connect https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/cluster-connect.md
A conceptual overview of this feature is available in [Cluster connect - Azure A
-## Enable Cluster Connect feature
-
-You can enable the Cluster Connect on any Azure Arc-enabled Kubernetes cluster by running the following command on a machine where the `kubeconfig` file is pointed to the cluster of concern:
-
-```azurecli
-az connectedk8s enable-features --features cluster-connect -n $CLUSTER_NAME -g $RESOURCE_GROUP
-```
- ## Azure Active Directory authentication option ### [Azure CLI](#tab/azure-cli)
az connectedk8s enable-features --features cluster-connect -n $CLUSTER_NAME -g $
- For an Azure AD user account: ```azurecli
- AAD_ENTITY_OBJECT_ID=$(az ad signed-in-user show --query objectId -o tsv)
+ AAD_ENTITY_OBJECT_ID=$(az ad signed-in-user show --query userPrincipalName -o tsv)
``` - For an Azure AD application:
azure-arc Kubernetes Resource View https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/kubernetes-resource-view.md
The Azure portal includes a Kubernetes resource view for easy access to the Kube
- An existing Kubernetes cluster [connected](quickstart-connect-cluster.md) to Azure as an Azure Arc-enabled Kubernetes resource. -- [Cluster Connect feature has to be enabled](cluster-connect.md#enable-cluster-connect-feature) on the Azure Arc-enabled Kubernetes cluster.- - [Service account token](cluster-connect.md#service-account-token-authentication-option) for authentication to the cluster. ## View Kubernetes resources
azure-arc Quickstart Connect Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/quickstart-connect-cluster.md
For a conceptual look at connecting clusters to Azure Arc, see [Azure Arc-enable
* [Kubernetes in Docker (KIND)](https://kind.sigs.k8s.io/) * Create a Kubernetes cluster using Docker for [Mac](https://docs.docker.com/docker-for-mac/#kubernetes) or [Windows](https://docs.docker.com/docker-for-windows/#kubernetes) * Self-managed Kubernetes cluster using [Cluster API](https://cluster-api.sigs.k8s.io/user/quick-start.html)
- * If you want to connect a OpenShift cluster to Azure Arc, execute the following command one time on your cluster before running `az connectedk8s connect`:
-
- ```bash
- oc adm policy add-scc-to-user privileged system:serviceaccount:azure-arc:azure-arc-kube-aad-proxy-sa
- ```
>[!NOTE] > The cluster needs to have at least one node of operating system and architecture type `linux/amd64`. Clusters with only `linux/arm64` nodes aren't yet supported.
azure-arc Troubleshooting https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/troubleshooting.md
If your cluster is behind an outbound proxy or firewall, verify that websocket c
### Cluster Connect feature disabled
-If the Cluster Connect feature is disabled on the cluster, then `az connectedk8s proxy` will fail to establish a session with the cluster.
+If the `clusterconnect-agent` and `kube-aad-proxy` pods are missing, the Cluster Connect feature is likely disabled on the cluster, and `az connectedk8s proxy` will fail to establish a session with the cluster.
```azurecli az connectedk8s proxy -n AzureArcTest -g AzureArcTest
az connectedk8s proxy -n AzureArcTest -g AzureArcTest
Cannot connect to the hybrid connection because no agent is connected in the target arc resource. ```
-To resolve this error, [enable the Cluster Connect feature](cluster-connect.md#enable-cluster-connect-feature) on your cluster.
+To resolve this error, enable the Cluster Connect feature on your cluster.
+
+```azurecli
+az connectedk8s enable-features --features cluster-connect -n $CLUSTER_NAME -g $RESOURCE_GROUP
+```
## Enable custom locations using service principal
azure-functions Functions Manually Run Non Http https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-manually-run-non-http.md
Open Postman and follow these steps:
:::image type="content" source="./media/functions-manually-run-non-http/functions-manually-run-non-http-body.png" alt-text="Postman body settings." border="true":::
+ > [!NOTE]
+ > If you don't want to pass data into the function, you must still pass an empty dictionary `{}` as the body of the POST request.
+ 1. Select **Send**. :::image type="content" source="./media/functions-manually-run-non-http/functions-manually-run-non-http-send.png" alt-text="Send a request with Postman." border="true":::
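+ If you prefer to make the same call outside Postman, a minimal PowerShell sketch might look like this. The app name, function name, and key are placeholders; the master host key goes in the `x-functions-key` header, and the body is the empty dictionary mentioned in the note above.
+
+ ```powershell
+ # POST to the function's admin endpoint with an empty JSON body.
+ $masterKey = "<your-master-key>"
+ Invoke-RestMethod -Method Post `
+     -Uri "https://<APP_NAME>.azurewebsites.net/admin/functions/<FUNCTION_NAME>" `
+     -Headers @{ "x-functions-key" = $masterKey } `
+     -ContentType "application/json" `
+     -Body '{}'
+ ```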
azure-functions Functions Reference Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-reference-python.md
The following table lists preinstalled system libraries in Docker images for the
| Functions runtime | Debian version | Python versions | ||||
-| Version 2.x | Stretch | [Python 3.6](https://github.com/Azure/azure-functions-docker/blob/master/host/2.0/stretch/amd64/python/python36/python36.Dockerfile)<br/>[Python 3.7](https://github.com/Azure/azure-functions-docker/blob/master/host/2.0/stretch/amd64/python/python37/python37.Dockerfile) |
+| Version 2.x | Stretch | [Python 3.7](https://github.com/Azure/azure-functions-docker/blob/dev/host/4/bullseye/amd64/python/python37/python37.Dockerfile) |
| Version 3.x | Buster | [Python 3.6](https://github.com/Azure/azure-functions-docker/blob/master/host/3.0/buster/amd64/python/python36/python36.Dockerfile)<br/>[Python 3.7](https://github.com/Azure/azure-functions-docker/blob/master/host/3.0/buster/amd64/python/python37/python37.Dockerfile)<br />[Python 3.8](https://github.com/Azure/azure-functions-docker/blob/master/host/3.0/buster/amd64/python/python38/python38.Dockerfile)<br/> [Python 3.9](https://github.com/Azure/azure-functions-docker/blob/master/host/3.0/buster/amd64/python/python39/python39.Dockerfile)| ## Python worker extensions
azure-maps Power Bi Visual Add Bar Chart Layer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/power-bi-visual-add-bar-chart-layer.md
description: In this article, you will learn how to use the bar chart layer in a
Last updated 11/29/2021-+
azure-maps Power Bi Visual Add Bubble Layer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/power-bi-visual-add-bubble-layer.md
description: In this article, you will learn how to use the bubble layer in an A
Last updated 11/29/2021-+
azure-maps Power Bi Visual Add Reference Layer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/power-bi-visual-add-reference-layer.md
description: In this article, you will learn how to use the reference layer in A
Last updated 11/29/2021-+
azure-maps Power Bi Visual Add Tile Layer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/power-bi-visual-add-tile-layer.md
description: In this article, you will learn how to use the tile layer in Azure
Last updated 11/29/2021-+
azure-maps Power Bi Visual Geocode https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/power-bi-visual-geocode.md
description: In this article, you'll learn about geocoding in Azure Maps Power B
Last updated 03/16/2022-+
azure-maps Power Bi Visual Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/power-bi-visual-get-started.md
description: In this article, you will learn how to use Azure Maps Power BI visu
Last updated 11/29/2021-+
azure-maps Power Bi Visual Manage Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/power-bi-visual-manage-access.md
description: In this article, you will learn how to manage Azure Maps Power BI v
Last updated 11/29/2021-++
azure-maps Power Bi Visual Show Real Time Traffic https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/power-bi-visual-show-real-time-traffic.md
description: In this article, you will learn how to show real-time traffic on an
Last updated 11/29/2021-+
azure-maps Power Bi Visual Understanding Layers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/power-bi-visual-understanding-layers.md
description: In this article, you will learn about the different layers availabl
Last updated 11/29/2021-+
azure-monitor Azure Monitor Agent Migration Tools https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/azure-monitor-agent-migration-tools.md
Title: Migration tools for legacy agent to Azure Monitor agent
-description: This article describes various migration tools and helpers available for migrating from the existing legacy agents to the new Azure Monitor agent (AMA) and data collection rules (DCR).
+ Title: Tools for migrating to Azure Monitor Agent from legacy agents
+description: This article describes various migration tools and helpers available for migrating from existing legacy agents to the new Azure Monitor agent (AMA) and data collection rules (DCR).
-- Previously updated : 7/6/2022 +++ Last updated : 6/22/2022
+# Customer intent: As an Azure account administrator, I want to use the available Azure Monitor tools to migrate from Log Analytics Agent to Azure Monitor Agent and track the status of the migration in my account.
+
-# Migration tools for Log Analytics agent to Azure Monitor Agent
-The [Azure Monitor agent (AMA)](azure-monitor-agent-overview.md) collects monitoring data from the guest operating system of Azure virtual machines, scale sets, on premise and multi-cloud servers and Windows client devices. It uploads the data to Azure Monitor destinations where it can be used by different features, insights, and other services such as [Microsoft Sentinel](../../sentintel/../sentinel/overview.md) and [Microsoft Defender for Cloud](../../defender-for-cloud/defender-for-cloud-introduction.md). All of the data collection configuration is handled via [Data Collection Rules](../essentials/data-collection-rule-overview.md).
-The Azure Monitor agent is meant to replace the Log Analytics agent (also known as MMA and OMS) for both Windows and Linux machines. By comparison, it is more **secure, cost-effective, performant, manageable and reliable**. You must migrate from [Log Analytics agent] to [Azure Monitor agent] before **August 2024**. To make this process easier and automated, use agent migration described in this article.
+# Migration tools for Log Analytics Agent to Azure Monitor Agent
+
+Azure Monitor Agent (AMA) replaces the Log Analytics Agent (MMA), offering enhanced security, cost-effectiveness, performance, manageability, and reliability. This article explains how to use the AMA Migration Helper and DCR Config Generator tools to help automate and track the migration from Log Analytics Agent to Azure Monitor Agent.
+
+![Flow diagram that shows the steps involved in agent migration and how the migration tools help in generating DCRs and tracking the entire migration process.](media/azure-monitor-agent-migration/mma-to-ama-migration-steps.png)
+
+> [!IMPORTANT]
+> Do not remove the legacy agents if they're being used by other [Azure solutions or services](./azure-monitor-agent-overview.md#supported-services-and-features), such as Microsoft Defender for Cloud, Microsoft Sentinel, or VM Insights. You can use the migration helper to discover which of the solutions and services you use today depend on the legacy agents.
+
+## Installing and using AMA Migration Helper (preview)
-## AMA Migration Helper (preview)
-A workbook-based solution in Azure Monitor that helps you discover **what to migrate** and **track progress** as you move from legacy Log Analytics agents to Azure Monitor agent on your virtual machines, scale sets, on premise and Arc-enabled servers in your subscriptions. Use this single glass pane view to expedite your agent migration journey.
+AMA Migration Helper is a workbook-based Azure Monitor solution that helps you **discover what to migrate** and **track progress** as you move from Log Analytics Agent to Azure Monitor Agent. Use this single pane of glass view to expedite and track the status of your agent migration journey.
-The workbook is available under the *Community Git repo > Azure Monitor* option, linked [here](https://github.com/microsoft/AzureMonitorCommunity/tree/master/Azure%20Services/Azure%20Monitor/Agents/Migration%20Tools/Migration%20Helper%20Workbook)
+To set up the AMA Migration Helper workbook in the Azure portal:
-1. Open Azure portal > Monitor > Workbooks
-2. Click ΓÇÿ+ NewΓÇÖ
-3. Click on the ΓÇÿAdvanced EditorΓÇÖ </> button
-4. Copy and paste the workbook JSON content here.
-5. Click ΓÇÿApplyΓÇÖ to load the workbook. Finally click ΓÇÿDone EditingΓÇÖ. YouΓÇÖre now ready to use the workbook
-6. Select subscriptions and workspaces drop-downs to view relevant information
+1. From the **Monitor** menu, select **Workbooks** > **+ New** > **Advanced Editor** (**</>**).
+1. Copy and paste the content from the [AMA Migration Helper file in the Azure Monitor Community GitHub repository](https://github.com/microsoft/AzureMonitorCommunity/tree/master/Azure%20Services/Azure%20Monitor/Agents/Migration%20Tools/Migration%20Helper%20Workbook) into the editor.
+1. Select **Apply** to load the workbook.
+1. Select **Done Editing**.
+ You're now ready to use the workbook.
-## DCR Config Generator (preview)
-The Azure Monitor agent relies only on [Data Collection rules](../essentials/data-collection-rule-overview.md) for configuration, whereas the legacy agent pulls all its configuration from Log Analytics workspaces. Use this tool to parse legacy agent configuration from your workspaces and automatically generate corresponding rules. You can then associate the rules to machines running the new agent using built-in association policies.
+1. Select the **Subscriptions** and **Workspaces** dropdowns to view relevant information.
+
+ :::image type="content" source="media/azure-monitor-migration-tools/ama-migration-helper.png" lightbox="media/azure-monitor-migration-tools/ama-migration-helper.png" alt-text="Screenshot of the Azure Monitor Agent Migration Helper workbook. The screenshot highlights the Subscription and Workspace dropdowns and shows the Azure Virtual Machines tab, on which you can track which agent is deployed on each virtual machine.":::
+
+## Installing and using DCR Config Generator (preview)
+Azure Monitor Agent relies only on [data collection rules (DCRs)](../essentials/data-collection-rule-overview.md) for configuration, whereas Log Analytics Agent inherits its configuration from Log Analytics workspaces.
+
+Use the DCR Config Generator tool to parse Log Analytics Agent configuration from your workspaces and generate corresponding data collection rules automatically. You can then associate the rules to machines running the new agent using built-in association policies.
> [!NOTE]
-> Additional configuration for [Azure solutions or services](./azure-monitor-agent-overview.md#supported-services-and-features) dependent on agent are not yet supported in this tool.
--
-1. **Prerequisites**
- - PowerShell version 7.1.3 or higher is recommended (minimum version 5.1)
- - Primarily uses `Az Powershell module` to pull workspace agent configuration information
- - You must have read access for the specified workspace resource
- - `Connect-AzAccount` and `Select-AzSubscription` will be used to set the context for the script to run so proper Azure credentials will be needed
-2. [Download the PowerShell script](https://github.com/microsoft/AzureMonitorCommunity/tree/master/Azure%20Services/Azure%20Monitor/Agents/Migration%20Tools/DCR%20Config%20Generator)
-2. Run the script using one of the options below:
- - Option 1
- # [PowerShell](#tab/ARMAgentPowerShell)
- ```powershell
- .\WorkspaceConfigToDCRMigrationTool.ps1 -SubscriptionId $subId -ResourceGroupName $rgName -WorkspaceName $workspaceName -DCRName $dcrName -Location $location -FolderPath $folderPath
- ```
- - Option 2 (if you are just looking for the DCR payload json)
- # [PowerShell](#tab/ARMAgentPowerShell)
- ```powershell
- $dcrJson = Get-DCRJson -ResourceGroupName $rgName -WorkspaceName $workspaceName -PlatformType $platformType $dcrJson | ConvertTo-Json -Depth 10 | Out-File "<filepath>\OutputFiles\dcr_output.json"
- ```
+> DCR Config Generator does not currently support additional configuration for [Azure solutions or services](./azure-monitor-agent-overview.md#supported-services-and-features) dependent on Log Analytics Agent.
+
+### Prerequisites
+To install DCR Config Generator, you need:
+
+1. PowerShell version 5.1 or higher. We recommend using PowerShell version 7.1.3 or higher.
+1. Read access for the specified workspace resources.
+1. The `Az PowerShell` module to pull workspace agent configuration information.
+1. The Azure credentials for running `Connect-AzAccount` and `Select-AzSubscription`, which set the context for the script to run.
+
+To install DCR Config Generator:
+
+1. [Download the PowerShell script](https://github.com/microsoft/AzureMonitorCommunity/tree/master/Azure%20Services/Azure%20Monitor/Agents/Migration%20Tools/DCR%20Config%20Generator).
+
+1. Run the script:
+
+ Option 1:
+
+ ```powershell
+ .\WorkspaceConfigToDCRMigrationTool.ps1 -SubscriptionId $subId -ResourceGroupName $rgName -WorkspaceName $workspaceName -DCRName $dcrName -Location $location -FolderPath $folderPath
+ ```
+ Option 2 (if you just want the DCR payload JSON file):
+
+ ```powershell
+ $dcrJson = Get-DCRJson -ResourceGroupName $rgName -WorkspaceName $workspaceName -PlatformType $platformType
+ $dcrJson | ConvertTo-Json -Depth 10 | Out-File "<filepath>\OutputFiles\dcr_output.json"
+ ```
+
+ **Parameters**
- **Parameters**
-
- | Parameter | Required? | Description |
- ||||
- | SubscriptionId | Yes | Subscription ID that contains the target workspace |
- | ResourceGroupName | Yes | Resource Group that contains the target workspace |
- | WorkspaceName | Yes | Name of the target workspace |
- | DCRName | Yes | Name of the new generated DCR to create |
- | Location | Yes | Region location for the new DCR |
- | FolderPath | No | Local path to store the output. Current directory will be used if nothing is provided |
+ | Parameter | Required? | Description |
+ ||||
+ | `SubscriptionId` | Yes | ID of the subscription that contains the target workspace. |
+ | `ResourceGroupName` | Yes | Resource group that contains the target workspace. |
+ | `WorkspaceName` | Yes | Name of the target workspace. |
+ | `DCRName` | Yes | Name of the new DCR. |
+ | `Location` | Yes | Region location for the new DCR. |
+ | `FolderPath` | No | Path in which to save the new data collection rules. By default, Azure Monitor uses the current directory. |
-3. Review the output data collection rule(s). There are two separate ARM templates that can be produced (based on agent configuration of the target workspace):
- - Windows ARM Template and Parameter Files: will be created if target workspace contains Windows Performance Counters and/or Windows Events
- - Linux ARM Template and Parameter Files: will be created if target workspace contains Linux Performance Counters and/or Linux Syslog Events
+1. Review the output data collection rules. The script can produce two types of ARM template files, depending on the agent configuration in the target workspace:
+
+ - Windows ARM template and parameter files - if the target workspace contains Windows performance counters or Windows events.
+ - Linux ARM template and parameter files - if the target workspace contains Linux performance counters or Linux Syslog events.
-4. Use the rule association built-in policies and other available methods to associate generated rules with machines running the new agent. [Learn more](./data-collection-rule-azure-monitor-agent.md#create-data-collection-rule-and-association)
+1. Use the built-in rule association policies to [associate the generated data collection rules with virtual machines](./data-collection-rule-azure-monitor-agent.md#create-data-collection-rule-and-association) running the new agent.
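+If you'd rather script an association than rely on policy, a minimal PowerShell sketch might look like the following. The resource IDs are placeholders, and `New-AzDataCollectionRuleAssociation` (from the `Az.Monitor` module) is used here as one possible way to create the association.
+
+```powershell
+# Associate a generated DCR with a virtual machine that runs Azure Monitor Agent.
+$vmId  = "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.Compute/virtualMachines/<vm-name>"
+$dcrId = "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.Insights/dataCollectionRules/<dcr-name>"
+
+New-AzDataCollectionRuleAssociation -TargetResourceId $vmId -AssociationName "<vm-name>-dcr-association" -RuleId $dcrId
+```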
azure-monitor Azure Monitor Agent Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/azure-monitor-agent-migration.md
Title: Migrate from legacy agents to the new Azure Monitor agent description: This article provides guidance for migrating from the existing legacy agents to the new Azure Monitor agent (AMA) and data collection rules (DCR). -- Previously updated : 02/09/2022 -++ - Last updated : 6/22/2022 +
+# Customer intent: As an Azure account administrator, I want to use the available Azure Monitor tools to migrate from Log Analytics agent to Azure Monitor Agent and track the status of the migration in my account.
# Migrate to Azure Monitor agent from Log Analytics agent
azure-monitor Alerts Processing Rules https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-processing-rules.md
+
+ Title: Alert processing rules for Azure Monitor alerts
+description: Understanding what alert processing rules in Azure Monitor are and how to configure and manage them.
+ Last updated : 2/23/2022++++
+# Alert processing rules
+
+<a name="configuring-an-action-rule"></a>
+<a name="suppression-of-alerts"></a>
+
+> [!NOTE]
+> The previous name for alert processing rules was **action rules**. The Azure resource type of these rules remains **Microsoft.AlertsManagement/actionRules** for backward compatibility.
+
+Alert processing rules allow you to apply processing on **fired alerts**. You may be familiar with Azure Monitor alert rules, which are rules that generate new alerts. Alert processing rules are different; they modify the fired alerts themselves as they're being fired. You can use alert processing rules to add [action groups](./action-groups.md) or remove (suppress) action groups from your fired alerts. Alert processing rules can be applied to different resource scopes, from a single resource to an entire subscription. You can also use them to apply various filters or have the rule work on a pre-defined schedule.
+
+## What are alert processing rules useful for?
+
+Some common use cases for alert processing rules include:
+
+### Notification suppression during planned maintenance
+
+Many customers set up a planned maintenance time for their resources, either on a one-off basis or on a regular schedule. The planned maintenance may cover a single resource like a virtual machine, or multiple resources like all virtual machines in a resource group. So, you may want to stop receiving alert notifications for those resources during the maintenance window. In other cases, you may prefer to not receive alert notifications at all outside of your business hours. Alert processing rules allow you to achieve that.
+
+You could alternatively suppress alert notifications by disabling the alert rules themselves at the beginning of the maintenance window, and re-enabling them once the maintenance is over. In that case, the alerts won't fire in the first place. However, that approach has several limitations:
+ * This approach is only practical if the scope of the alert rule is exactly the scope of the resources under maintenance. For example, a single alert rule might cover multiple resources, but only a few of those resources are going through maintenance. So, if you disable the alert rule, you will not be alerted when the remaining resources covered by that rule run into issues.
+ * You may have many alert rules that cover the resource. Updating all of them is time consuming and error prone.
+ * You might have some alerts that are not created by an alert rule at all, like alerts from Azure Backup.
+
+In all these cases, an alert processing rule provides an easy way to achieve the notification suppression goal.
+
+### Management at scale
+
+Most customers tend to define a few action groups that are used repeatedly in their alert rules. For example, they may want to call a specific action group whenever any high severity alert is fired. As the number of alert rules grows, manually making sure that each alert rule has the right set of action groups becomes harder.
+
+Alert processing rules allow you to specify that logic in a single rule, instead of having to set it consistently in all your alert rules. They also cover alert types that are not generated by an alert rule.
+
+### Add action groups to all alert types
+
+Azure Monitor alert rules let you select which action groups will be triggered when their alerts are fired. However, not all Azure alert sources let you specify action groups. Some examples of such alerts include [Azure Backup alerts](../../backup/backup-azure-monitoring-built-in-monitor.md), [VM Insights guest health alerts](../vm/vminsights-health-alerts.md), [Azure Stack Edge](../../databox-online/azure-stack-edge-gpu-manage-device-event-alert-notifications.md), and Azure Stack Hub.
+
+For those alert types, you can use alert processing rules to add action groups.
+
+> [!NOTE]
+> Alert processing rules do not affect [Azure Service Health](../../service-health/service-health-overview.md) alerts.
+
+## Alert processing rule properties
+<a name="filter-criteria"></a>
+
+An alert processing rule definition covers several aspects:
+
+### Which fired alerts are affected by this rule?
+
+**SCOPE**
+Each alert processing rule has a scope. A scope is a list of one or more specific Azure resources, specific resource groups, or an entire subscription. **The alert processing rule will apply to alerts that fired on resources within that scope**.
+
+**FILTERS**
+You can also define filters to narrow down which specific subset of alerts are affected within the scope. The available filters are:
+
+* **Alert Context (payload)** - the rule will apply only to alerts that contain any of the filter's strings within the [alert context](./alerts-common-schema-definitions.md#alert-context) section of the alert. This section includes fields specific to each alert type.
+* **Alert rule id** - the rule will apply only to alerts from a specific alert rule. The value should be the full resource ID, for example `/subscriptions/SUB1/resourceGroups/RG1/providers/microsoft.insights/metricalerts/MY-API-LATENCY`.
+You can locate the alert rule ID by opening a specific alert rule in the portal, clicking "Properties", and copying the "Resource ID" value. You can also locate it by listing your alert rules from PowerShell or CLI.
+* **Alert rule name** - the rule will apply only to alerts with this alert rule name. Can also be useful with a "Contains" operator.
+* **Description** - the rule will apply only to alerts that contain the specified string within the alert rule description field.
+* **Monitor condition** - the rule will apply only to alerts with the specified monitor condition, either "Fired" or "Resolved".
+* **Monitor service** - the rule will apply only to alerts from any of the specified monitor services.
+For example, use "Platform" to have the rule apply only to metric alerts.
+* **Resource** - the rule will apply only to alerts from the specified Azure resource.
+For example, you can use this filter with "Does not equal" to exclude one or more resources when the rule's scope is a subscription.
+* **Resource group** - the rule will apply only to alerts from the specified resource groups.
+For example, you can use this filter with "Does not equal" to exclude one or more resource groups when the rule's scope is a subscription.
+* **Resource type** - the rule will apply only to alerts on resources of the specified resource types, such as virtual machines. You can use "Equals" to match one or more specific resource types, or you can use "Contains" to match a resource type and all its child resources.
+For example, use `resource type contains "MICROSOFT.SQL/SERVERS"` to match both SQL servers and all their child resources, like databases.
+* **Severity** - the rule will apply only to alerts with the selected severities.
+
+**FILTERS BEHAVIOR**
+* If you define multiple filters in a rule, all of them apply - there is a logical AND between all filters.
+ For example, if you set both `resource type = "Virtual Machines"` and `severity = "Sev0"`, then the rule will apply only for Sev0 alerts on virtual machines in the scope.
+* Each filter may include up to five values, and there is a logical OR between the values.
+ For example, if you set `description contains ["this", "that"]`, then the rule will apply only to alerts whose description contains either "this" or "that".
+
+### What should this rule do?
+
+Choose one of the following actions:
+
+* **Suppression**
+This action removes all the action groups from the affected fired alerts. So, the fired alerts will not invoke any of their action groups (not even at the end of the maintenance window). Those fired alerts will still be visible when you list your alerts in the portal, Azure Resource Graph, API, PowerShell etc.
+The suppression action takes priority over the "apply action groups" action - if a single fired alert is affected by alert processing rules of both types, the action groups of that alert will be suppressed.
+
+* **Apply action groups**
+This action adds one or more action groups to the affected fired alerts.
+
+### When should this rule apply?
+
+You can optionally control when the rule applies. By default, the rule is always active. However, you can select a one-off window for this rule to apply, or set up a recurring window such as a weekly recurrence.
+
+## Configuring an alert processing rule
+
+### [Portal](#tab/portal)
+
+You can access alert processing rules by navigating to the **Alerts** home page in Azure Monitor.
+Once there, you can click **Alert processing rules** to see and manage your existing rules, or click **Create** --> **Alert processing rules** to open the new alert processing rule wizard.
++
+Let's review the new alert processing rule wizard.
+In the first tab (**Scope**), you select which fired alerts are covered by this rule. Pick the **scope** of resources whose alerts will be covered - you may choose multiple resources and resource groups, or an entire subscription. You may also optionally add **filters**, as documented above.
++
+In the second tab (**Rule settings**), you select which action to apply on the affected alerts. Choose between **Suppression** or **Apply action group**. If you choose the apply action group, you can either select existing action groups by clicking **Add action groups**, or create a new action group.
++
+In the third tab (**Scheduling**), you select an optional schedule for the rule. By default, the rule works all the time, unless you disable it. However, you can set it to work **at a specific time**, or **set up a recurring schedule**.
+Let's see an example of a schedule for a one-off, overnight, planned maintenance. It starts in the evening and lasts until the next morning, in a specific time zone:
++
+Let's see an example of a more complex schedule, covering an "outside of business hours" case. It has a recurring schedule with two recurrences - a daily one from the afternoon until the morning, and a weekly one covering Saturday and Sunday (full days).
++
+In the fourth tab (**Details**), you give this rule a name, pick where it will be stored, and optionally add a description for your reference. In the fifth tab (**Tags**), you optionally add tags to the rule, and finally in the last tab you can review and create the alert processing rule.
+
+### [Azure CLI](#tab/azure-cli)
+
+You can use the Azure CLI to work with alert processing rules. See the `az monitor alert-processing-rule` [page in the Azure CLI docs](/cli/azure/monitor/alert-processing-rule) for detailed documentation and examples.
+
+### Prepare your environment
+
+1. **Install the Azure CLI**
+
+ Follow the [Installation instructions for the Azure CLI](/cli/azure/install-azure-cli).
+
+ Alternatively, you can use Azure Cloud Shell, which is an interactive shell environment that you use through your browser. To start a Cloud Shell:
+
+ - Open Cloud Shell by going to [https://shell.azure.com](https://shell.azure.com)
+
+ - Select the **Cloud Shell** button on the menu bar at the upper right corner in the [Azure portal](https://portal.azure.com)
+
+1. **Sign in**
+
+ If you're using a local installation of the CLI, sign in using the `az login` [command](/cli/azure/reference-index#az-login). Follow the steps displayed in your terminal to complete the authentication process.
+
+ ```azurecli
+ az login
+ ```
+
+1. **Install the `alertsmanagement` extension**
+
+ In order to use the `az monitor alert-processing-rule` commands, install the `alertsmanagement` preview extension.
+
+ ```azurecli
+ az extension add --name alertsmanagement
+ ```
+
+ The following output is expected.
+
+ ```output
+ The installed extension 'alertsmanagement' is in preview.
+ ```
+
+ To learn more about Azure CLI extensions, see [Use extensions with Azure CLI](/cli/azure/azure-cli-extensions-overview).
+
+### Create an alert processing rule with the Azure CLI
+
+Use the `az monitor alert-processing-rule create` command to create alert processing rules.
+For example, to create a rule that adds an action group to all alerts in a subscription, run:
+
+```azurecli
+az monitor alert-processing-rule create \
+ --name 'AddActionGroupToSubscription' \
+ --rule-type AddActionGroups \
+ --scopes "/subscriptions/SUB1" \
+ --action-groups "/subscriptions/SUB1/resourcegroups/RG1/providers/microsoft.insights/actiongroups/AG1" \
+ --resource-group RG1 \
+ --description "Add action group AG1 to all alerts in the subscription"
+```
+
+The [CLI documentation](/cli/azure/monitor/alert-processing-rule#az-monitor-alert-processing-rule-create) includes more examples and an explanation of each parameter.
+
+### [PowerShell](#tab/powershell)
+
+You can use PowerShell to work with alert processing rules. See the `*-AzAlertProcessingRule` commands [in the PowerShell docs](/powershell/module/az.alertsmanagement) for detailed documentation and examples.
++
+### Create an alert processing rule using PowerShell
+
+Use the `Set-AzAlertProcessingRule` command to create alert processing rules.
+For example, to create a rule that adds an action group to all alerts in a subscription, run:
+
+```powershell
+Set-AzAlertProcessingRule `
+ -Name AddActionGroupToSubscription `
+ -AlertProcessingRuleType AddActionGroups `
+ -Scope /subscriptions/SUB1 `
+ -ActionGroupId /subscriptions/SUB1/resourcegroups/RG1/providers/microsoft.insights/actiongroups/AG1 `
+ -ResourceGroupName RG1 `
+ -Description "Add action group AG1 to all alerts in the subscription"
+```
+
+The [PowerShell documentation](/powershell/module/az.alertsmanagement) includes more examples and an explanation of each parameter.
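+For the notification-suppression scenario described earlier in this article, a minimal sketch along the same lines might look like this. The names are placeholders, and the `RemoveAllActionGroups` rule type is assumed to be the suppression counterpart of `AddActionGroups`:
+
+```powershell
+# Suppress notifications by removing all action groups from alerts fired on resource group RG1.
+Set-AzAlertProcessingRule `
+    -Name RemoveActionGroupsFromRG1 `
+    -AlertProcessingRuleType RemoveAllActionGroups `
+    -Scope /subscriptions/SUB1/resourceGroups/RG1 `
+    -ResourceGroupName RG1 `
+    -Description "Suppress notifications for all alerts fired on resource group RG1"
+```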
+
+* * *
+
+## Managing alert processing rules
+
+### [Portal](#tab/portal)
+
+You can view and manage your alert processing rules from the list view:
++
+From here, you can enable, disable, or delete alert processing rules at scale by selecting the check box next to them. Clicking on an alert processing rule will open it for editing - you can enable or disable the rule in the fourth tab (**Details**).
+
+### [Azure CLI](#tab/azure-cli)
+
+You can view and manage your alert processing rules using the [az monitor alert-processing-rule](/cli/azure/monitor/alert-processing-rule) commands from the Azure CLI.
+
+Before you manage alert processing rules with the Azure CLI, prepare your environment using the instructions provided in [Configuring an alert processing rule](#configuring-an-alert-processing-rule).
+
+```azurecli
+# List all alert processing rules for a subscription
+az monitor alert-processing-rule list
+
+# Get details of an alert processing rule
+az monitor alert-processing-rule show --resource-group RG1 --name MyRule
+
+# Update an alert processing rule
+az monitor alert-processing-rule update --resource-group RG1 --name MyRule --status Disabled
+
+# Delete an alert processing rule
+az monitor alert-processing-rule delete --resource-group RG1 --name MyRule
+```
+
+### [PowerShell](#tab/powershell)
+
+You can view and manage your alert processing rules using the [\*-AzAlertProcessingRule](/powershell/module/az.alertsmanagement) commands from Azure PowerShell.
+
+Before you manage alert processing rules with PowerShell, prepare your environment using the instructions provided in [Configuring an alert processing rule](#configuring-an-alert-processing-rule).
+
+```powershell
+# List all alert processing rules for a subscription
+Get-AzAlertProcessingRule
+
+# Get details of an alert processing rule
+Get-AzAlertProcessingRule -ResourceGroupName RG1 -Name MyRule | Format-List
+
+# Update an alert processing rule
+Update-AzAlertProcessingRule -ResourceGroupName RG1 -Name MyRule -Enabled False
+
+# Delete an alert processing rule
+Remove-AzAlertProcessingRule -ResourceGroupName RG1 -Name MyRule
+```
+
+* * *
+
+## Next steps
+
+- [Learn more about alerts in Azure](./alerts-overview.md)
azure-monitor Availability Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/availability-overview.md
You can set up availability tests for any HTTP or HTTPS endpoint that's accessib
There are four types of availability tests: * [URL ping test (classic)](monitor-web-app-availability.md): You can create this simple test through the portal to validate whether an endpoint is responding and measure performance associated with that response. You can also set custom success criteria coupled with more advanced features, like parsing dependent requests and allowing for retries.
-* [Standard test (Preview)](availability-standard-tests.md): This single request test is similar to the URL ping test. It includes SSL certificate validity, proactive lifetime check, HTTP request verb (for example `GET`, `HEAD`, or `POST`), custom headers, and custom data associated with your HTTP request.
+* [Standard test](availability-standard-tests.md): This single request test is similar to the URL ping test. It includes SSL certificate validity, proactive lifetime check, HTTP request verb (for example `GET`, `HEAD`, or `POST`), custom headers, and custom data associated with your HTTP request.
* [Multi-step web test (classic)](availability-multistep.md): You can play back this recording of a sequence of web requests to test more complex scenarios. Multi-step web tests are created in Visual Studio Enterprise and uploaded to the portal, where you can run them. * [Custom TrackAvailability test](availability-azure-functions.md): If you decide to create a custom application to run availability tests, you can use the [TrackAvailability()](/dotnet/api/microsoft.applicationinsights.telemetryclient.trackavailability) method to send the results to Application Insights.
See the dedicated [troubleshooting article](https://docs.microsoft.com/troublesh
## Next steps * [Availability alerts](availability-alerts.md)
-* [Multi-step web tests](availability-multistep.md)
* [URL tests](monitor-web-app-availability.md)
+* [Standard Tests](availability-standard-tests.md)
+* [Multi-step web tests](availability-multistep.md)
* [Create and run custom availability tests using Azure Functions](availability-azure-functions.md) * [Web Tests Azure Resource Manager template](/azure/templates/microsoft.insights/webtests?tabs=json)
azure-monitor Availability Standard Tests https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/availability-standard-tests.md
Last updated 07/13/2021
Standard tests are single request tests that are similar to the [URL ping test](monitor-web-app-availability.md) but more advanced. In addition to validating whether an endpoint is responding and measuring the performance, Standard tests also include SSL certificate validity, proactive lifetime check, HTTP request verb (for example `GET`, `HEAD`, `POST`, etc.), custom headers, and custom data associated with your HTTP request.
-> [!NOTE]
-> Standard tests are currently in public preview. These preview versions are provided without a service level agreement. Certain features might not be supported or might have constrained capabilities.
-
-> [!NOTE]
-> There are currently no additional charges for the preview feature Standard tests. Pricing for features that are in preview will be announced in the future and a notice provided prior to start of billing. Should you choose to continue using Standard tests after the notice period, you will be billed at the applicable rate.
- To create an availability test, you need use an existing Application Insights resource or [create an Application Insights resource](create-new-resource.md). > [!TIP]
To create an availability test, you need use an existing Application Insights re
To create a standard test: 1. Go to your Application Insights resource and select the **Availability** pane.
-1. Select **Add Standard (preview) test**.
+1. Select **Add Standard test**.
:::image type="content" source="./media/availability-standard-test/standard-test.png" alt-text="Screenshot of Availability pane with add standard test tab open." lightbox="./media/availability-standard-test/standard-test.png":::
-1. Input your test name, URL and additional settings (explanation below), then select **Create**.
+1. Input your test name, URL and other settings (explanation below), then select **Create**.
|Setting | Explanation | |--|-| |**URL** | The URL can be any web page you want to test, but it must be visible from the public internet. The URL can include a query string. So, for example, you can exercise your database a little. If the URL resolves to a redirect, we follow it up to 10 redirects.|
-|**Parse dependent requests**| Test requests images, scripts, style files, and other files that are part of the web page under test. The recorded response time includes the time taken to get these files. The test fails if any of these resources cannot be successfully downloaded within the timeout for the whole test. If the option isn't checked, the test only requests the file at the URL you specified. Enabling this option results in a stricter check. The test could fail for cases, which may not be noticeable when manually browsing the site. |
+|**Parse dependent requests**| Test requests images, scripts, style files, and other files that are part of the web page under test. The recorded response time includes the time taken to get these files. The test fails if any of these resources can't be successfully downloaded within the timeout for the whole test. If the option isn't checked, the test only requests the file at the URL you specified. Enabling this option results in a stricter check. The test could fail for cases, which may not be noticeable when manually browsing the site. |
|**Enable retries**| When the test fails, it's retried after a short interval. A failure is reported only if three successive attempts fail. Subsequent tests are then performed at the usual test frequency. Retry is temporarily suspended until the next success. This rule is applied independently at each test location. **We recommend this option**. On average, about 80% of failures disappear on retry.| | **SSL certificate validation test** | You can verify the SSL certificate on your website to make sure it's correctly installed, valid, trusted, and doesn't give any errors to any of your users. |
-| **Proactive lifetime check** | This enables you to define a set time period before your SSL certificate expires. Once it expires, your test will fail. |
+| **Proactive lifetime check** | This setting enables you to define a set time period before your SSL certificate expires. Once it expires, your test will fail. |
|**Test frequency**| Sets how often the test is run from each test location. With a default frequency of five minutes and five test locations, your site is tested on average every minute.| |**Test locations**| The places from where our servers send web requests to your URL. **Our minimum number of recommended test locations is five** to ensure that you can distinguish problems in your website from network issues. You can select up to 16 locations.| | **Custom headers** | Key value pairs that define the operating parameters. |
To create a standard test:
|Setting| Explanation| |-||
-| **Test timeout** |Decrease this value to be alerted about slow responses. The test is counted as a failure if the responses from your site have not been received within this period. If you selected **Parse dependent requests**, then all the images, style files, scripts, and other dependent resources must have been received within this period.|
+| **Test timeout** |Decrease this value to be alerted about slow responses. The test is counted as a failure if the responses from your site haven't been received within this period. If you selected **Parse dependent requests**, then all the images, style files, scripts, and other dependent resources must have been received within this period.|
| **HTTP response** | The returned status code that is counted as a success. 200 is the code that indicates that a normal web page has been returned.| | **Content match** | A string, like "Welcome!" We test that an exact case-sensitive match occurs in every response. It must be a plain string, without wildcards. Don't forget that if your page content changes you might have to update it. **Only English characters are supported with content match** |
To create a standard test:
|Setting| Explanation| |-||
-|**Near-realtime (Preview)** | We recommend using Near-realtime alerts. Configuring this type of alert is done after your availability test is created. |
+|**Near-realtime** | We recommend using Near-realtime alerts. Configuring this type of alert is done after your availability test is created. |
|**Alert location threshold**|We recommend a minimum of 3/5 locations. The optimal relationship between alert location threshold and the number of test locations is **alert location threshold** = **number of test locations - 2, with a minimum of five test locations.**| ## Location population tags The following population tags can be used for the geo-location attribute when deploying an availability URL ping test using Azure Resource Manager.
-### Azure gov
+### Azure Government
| Display Name | Population Name | |-||
To edit, temporarily disable, or delete a test, select the ellipses next to a te
:::image type="content" source="./media/monitor-web-app-availability/edit.png" alt-text="View test details. Edit and Disable a web test." border="false":::
-You might want to disable availability tests or the alert rules associated with them while you are performing maintenance on your service.
+You might want to disable availability tests or the alert rules associated with them while you're performing maintenance on your service.
## If you see failures
azure-monitor Convert Classic Resource https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/convert-classic-resource.md
This guide will walk you through migrating a classic Application Insights resour
Workspace-based resources enable common Azure role-based access control (Azure RBAC) across your resources, and eliminate the need for cross-app/workspace queries.
-**Workspace-based resources are currently available in all commercial regions and Azure US Government**
+**Workspace-based resources are currently available in all commercial regions and Azure US Government.**
## New capabilities
-Workspace-based Application Insights allows you to take advantage of all the latest capabilities of Azure Monitor and Log Analytics including:
+Workspace-based Application Insights allows you to take advantage of all the latest capabilities of Azure Monitor and Log Analytics, including:
* [Customer-Managed Keys (CMK)](../logs/customer-managed-keys.md) provides encryption at rest for your data with encryption keys that only you have access to. * [Azure Private Link](../logs/private-link-security.md) allows you to securely link Azure PaaS services to your virtual network using private endpoints.
Workspace-based Application Insights allows you to take advantage of all the lat
* Faster data ingestion via Log Analytics streaming ingestion. > [!NOTE]
-> After migrating to a workspace-based Application Insights resource, telemetry from multiple Application Insights resources may be stored in a common Log Analytics workspace. You will still be able to pull data from a specific Application Insights resource, [as described here](#understanding-log-queries).
+> After migrating to a workspace-based Application Insights resource, telemetry from multiple Application Insights resources may be stored in a common Log Analytics workspace. You will still be able to pull data from a specific Application Insights resource, as described under [Understanding log queries](#understanding-log-queries).
## Migration process
-When you migrate to a workspace-based resource, no data is transferred from your classic resource's storage to the new workspace-based storage. Choosing to migrate will change the location where new data is written to a Log Analytics workspace while preserving access to your classic resource data.
+When you migrate to a workspace-based resource, no data is transferred from your classic resource's storage to the new workspace-based storage. Choosing to migrate will change the location where new data is written to a Log Analytics workspace while preserving access to your classic resource data.
Your classic resource data will persist and be subject to the retention settings on your classic Application Insights resource. All new data ingested post migration will be subject to the [retention settings](../logs/data-retention-archive.md) of the associated Log Analytics workspace, which also supports [different retention settings by data type](../logs/data-retention-archive.md#set-retention-and-archive-policy-by-table). The migration process is **permanent, and cannot be reversed**. Once you migrate a resource to workspace-based Application Insights, it will always be a workspace-based resource. However, once you migrate you're able to change the target workspace as often as needed.
-If you don't need to migrate an existing resource and instead want to create a new workspace-based Application Insights resource, use the [workspace-based resource creation guide](create-workspace-resource.md).
+If you don't need to migrate an existing resource, and instead want to create a new workspace-based Application Insights resource, use the [workspace-based resource creation guide](create-workspace-resource.md).
## Pre-requisites
This section walks through migrating a classic Application Insights resource to
1. From your Application Insights resource, select **Properties** under the **Configure** heading in the left-hand menu bar.
- ![Properties highlighted in red box](./media/convert-classic-resource/properties.png)
+![Properties highlighted in red box](./media/convert-classic-resource/properties.png)
2. Select **`Migrate to Workspace-based`**.
- ![Migrate resource button](./media/convert-classic-resource/migrate.png)
+![Migrate resource button](./media/convert-classic-resource/migrate.png)
3. Choose the Log Analytics workspace where you want all future ingested Application Insights telemetry to be stored. It can either be a Log Analytics workspace in the same subscription, or in a different subscription that shares the same Azure AD tenant. The Log Analytics workspace does not have to be in the same resource group as the Application Insights resource.
- ![Migration wizard UI with option to select targe workspace](./media/convert-classic-resource/migration.png)
-
+> [!NOTE]
+> Migrating to a workspace-based resource can take up to 24 hours, but is usually faster than that. Continue to access data through your Application Insights resource while you wait for the migration process to complete. Once the migration is complete, you'll start seeing new data stored in the Log Analytics workspace tables.
+![Migration wizard UI with option to select target workspace](./media/convert-classic-resource/migration.png)
+
Once your resource is migrated, you'll see the corresponding workspace info in the **Overview** pane:

![Workspace Name](./media/create-workspace-resource/workspace-name.png)

Clicking the blue link text will take you to the associated Log Analytics workspace where you can take advantage of the new unified workspace query environment.
-> [!NOTE]
+> [!TIP]
> After migrating to a workspace-based Application Insights resource, we recommend using the [workspace's daily cap](../logs/daily-cap.md) to limit ingestion and costs instead of the cap in Application Insights. ## Understanding log queries
To write queries against the [new workspace-based table structure/schema](#works
To ensure the queries successfully run, validate that the query's fields align with the [new schema fields](#appmetrics).
-If you have multiple Application Insights resources store their telemetry in one Log Analytics workspace, but you only want to query data from one specific Application Insights resource, you have two options:
+If multiple Application Insights resources store telemetry in one Log Analytics workspace, but you want to query data from only one specific Application Insights resource, you have two options:
- Option 1: Go to the desired Application Insights resource and open the **Logs** tab. All queries from this tab will automatically pull data from the selected Application Insights resource.
- Option 2: Go to the Log Analytics workspace that you configured as the destination for your Application Insights telemetry and open the **Logs** tab. To query data from a specific Application Insights resource, filter for the built-in ```_ResourceId``` property that is available in all application-specific tables, as shown in the sketch below.
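A minimal sketch of option 2 follows. It assumes the workspace-based `AppRequests` table and uses a placeholder path that you'd replace with the full Azure resource ID of your Application Insights resource.

```kusto
// Sketch: restrict a workspace query to telemetry from one Application Insights resource.
// Replace the placeholder with the full resource ID of your Application Insights resource.
AppRequests
| where _ResourceId =~ "/subscriptions/<subscription-id>/resourcegroups/<resource-group>/providers/microsoft.insights/components/<app-insights-name>"
| summarize Requests = count() by bin(TimeGenerated, 1h)
```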
You can check your current retention settings for Log Analytics under **General*
## Workspace-based resource changes
-Prior to the introduction of [workspace-based Application Insights resources](create-workspace-resource.md), Application Insights data was stored separate from other log data in Azure Monitor. Both are based on Azure Data Explorer and use the same Kusto Query Language (KQL). Workspace-based Application Insights resources data is stored in a Log Analytics workspace, together with other monitoring data and application data. This simplifies your configuration by allowing you to more easily analyze data across multiple solutions and to leverage the capabilities of workspaces.
+Prior to the introduction of [workspace-based Application Insights resources](create-workspace-resource.md), Application Insights data was stored separately from other log data in Azure Monitor. Both are based on Azure Data Explorer and use the same Kusto Query Language (KQL). Data from workspace-based Application Insights resources is stored in a Log Analytics workspace, together with other monitoring data and application data. This simplifies your configuration by allowing you to analyze data across multiple solutions more easily, and to leverage the capabilities of workspaces.
### Classic data structure

The structure of a Log Analytics workspace is described in [Log Analytics workspace overview](../logs/log-analytics-workspace-overview.md). For a classic application, the data is not stored in a Log Analytics workspace. It uses the same query language, and you create and run queries by using the same Log Analytics tool in the Azure portal. Data items for classic applications are stored separately from each other. The general structure is the same as for workspace-based applications, although the table and column names are different.
The structure of a Log Analytics workspace is described in [Log Analytics worksp
| exceptions | AppExceptions | Exceptions thrown by the application runtime, captures both server side and client-side (browsers) exceptions. |
| traces | AppTraces | Detailed logs (traces) emitted through application code/logging frameworks recorded via TrackTrace(). |
+> [!CAUTION]
+> Don't take a production dependency on the Log Analytics tables until you see new telemetry records appear directly in Log Analytics. This can take up to 24 hours after the migration process starts.
+ ### Table schemas
+
+The following sections show the mapping between the classic property names and the new workspace-based Application Insights property names. Use this information to convert any queries using legacy tables.
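As a rough illustration of that conversion, the sketch below shows the same trace lookup written first against the classic `traces` table and then against the workspace-based `AppTraces` table. The column names follow the classic-to-workspace mapping (`timestamp` to `TimeGenerated`, `message` to `Message`, `severityLevel` to `SeverityLevel`); verify them against the schema tables before converting your own queries.

```kusto
// Classic Application Insights resource: legacy table and property names.
traces
| where timestamp > ago(1d)
| project timestamp, message, severityLevel

// Workspace-based resource: the equivalent query against the Log Analytics table.
AppTraces
| where TimeGenerated > ago(1d)
| project TimeGenerated, Message, SeverityLevel
```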
azure-monitor Ip Addresses https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/ip-addresses.md
Title: IP addresses used by Azure Monitor
+ Title: IP addresses used by Azure Monitor | Microsoft Docs
description: This article discusses server firewall exceptions that are required by Application Insights. Last updated 01/27/2020
You need to open some outgoing ports in your server's firewall to allow the Appl
| Purpose | URL | IP | Ports | | | | | | | Telemetry | dc.applicationinsights.azure.com<br/>dc.applicationinsights.microsoft.com<br/>dc.services.visualstudio.com<br/>*.in.applicationinsights.azure.com<br/><br/> || 443 |
-| Live Metrics | live.applicationinsights.azure.com<br/>rt.applicationinsights.microsoft.com<br/>rt.services.visualstudio.com<br/><br/>{region}.livediagnostics.monitor.azure.com<br/>*Example for {region}: westus2*<br/><br/> |20.49.111.32/29<br/>13.73.253.112/29| 443 |
+| Live Metrics | live.applicationinsights.azure.com<br/>rt.applicationinsights.microsoft.com<br/>rt.services.visualstudio.com<br/><br/>{region}.livediagnostics.monitor.azure.com<br/><br/>*Example for {region}: westus2<br/>Find all supported regions in [this table](#addresses-grouped-by-region-azure-public-cloud).*|20.49.111.32/29<br/>13.73.253.112/29| 443 |
++ > [!NOTE] > These addresses are listed by using Classless Interdomain Routing notation. As an example, an entry like `51.144.56.112/28` is equivalent to 16 IPs that start at `51.144.56.112` and end at `51.144.56.127`.
You need to open some outgoing ports in your server's firewall to allow the Appl
Status Monitor configuration is needed only when you're making changes.
-| Purpose | URL | IP | Ports |
-| | | | |
-| Configuration |`management.core.windows.net` | |`443` |
-| Configuration |`management.azure.com` | |`443` |
-| Configuration |`login.windows.net` | |`443` |
-| Configuration |`login.microsoftonline.com` | |`443` |
-| Configuration |`secure.aadcdn.microsoftonline-p.com` | |`443` |
-| Configuration |`auth.gfx.ms` | |`443` |
-| Configuration |`login.live.com` | |`443` |
-| Installation | `globalcdn.nuget.org`, `packages.nuget.org` ,`api.nuget.org/v3/index.json` `nuget.org`, `api.nuget.org`, `dc.services.vsallin.net` | |`443` |
+| Purpose | URL | Ports |
+| | | |
+| Configuration |`management.core.windows.net` |`443` |
+| Configuration |`management.azure.com` |`443` |
+| Configuration |`login.windows.net` |`443` |
+| Configuration |`login.microsoftonline.com` |`443` |
+| Configuration |`secure.aadcdn.microsoftonline-p.com` |`443` |
+| Configuration |`auth.gfx.ms` |`443` |
+| Configuration |`login.live.com` |`443` |
+| Installation | `globalcdn.nuget.org`, `packages.nuget.org`, `api.nuget.org/v3/index.json`, `nuget.org`, `api.nuget.org`, `dc.services.vsallin.net` |`443` |
## Availability tests
Download [China cloud IP addresses](https://www.microsoft.com/download/details.a
#### Addresses grouped by region (Azure public cloud)
-| Continent/Country | Region | IP |
-| | | |
-|Asia|East Asia|52.229.216.48/28<br/>20.189.111.16/29|
-||Southeast Asia|52.139.250.96/28<br/>23.98.106.152/29|
-|Australia|Australia Central|20.37.227.104/29<br/><br/>|
-||Australia Central 2|20.53.60.224/31<br/><br/>|
-||Australia East|20.40.124.176/28<br/>20.37.198.232/29|
-||Australia Southeast|20.42.230.224/29<br/><br/>|
-|Brazil|Brazil South|191.233.26.176/28<br/>191.234.137.40/29|
-||Brazil Southeast|20.206.0.196/31<br/><br/>|
-|Canada|Canada Central|52.228.86.152/29<br/><br/>|
-||Canada East|52.242.40.208/31<br/><br/>|
-|Europe|North Europe|52.158.28.64/28<br/>20.50.68.128/29|
-||West Europe|51.144.56.96/28<br/>40.113.178.32/29|
-|France|France Central|20.40.129.32/28<br/>20.43.44.216/29|
-||France South|20.40.129.96/28<br/>52.136.191.12/31|
-|Germany|Germany North|51.116.75.92/31<br/><br/>|
-||Germany West Central|20.52.95.50/31<br/><br/>|
-|India|Central India|52.140.108.216/29<br/><br/>|
-||South India|20.192.153.106/31<br/><br/>|
-||West India|20.192.84.164/31<br/><br/>|
-||Jio India Central|20.192.50.200/29<br/><br/>|
-||Jio India West|20.193.194.32/29<br/><br/>|
-|Israel|Israel Central|20.217.44.250/31<br/><br/>|
-|Japan|Japan East|52.140.232.160/28<br/>20.43.70.224/29|
-||Japan West|20.189.194.102/31<br/><br/>|
-|Korea|Korea Central|20.41.69.24/29<br/><br/>|
-|Norway|Norway East|51.120.235.248/29<br/><br/>|
-||Norway West|51.13.143.48/31<br/><br/>|
-|Poland|Poland Central|20.215.4.250/31<br/><br/>|
-|Qatar|Qatar Central|20.21.39.224/29<br/><br/>|
-|South Africa|South Africa North|102.133.219.136/29<br/><br/>|
-||South Africa West|102.37.86.196/31<br/><br/>|
-|Sweden|Sweden Central|51.12.25.192/29<br/><br/>|
-||Sweden South|51.12.17.128/29<br/><br/>|
-|Switzerland|Switzerland North|51.107.52.200/29<br/><br/>|
-||Switzerland West|51.107.148.8/29<br/><br/>|
-|Taiwan|Taiwan North|51.53.28.214/31<br/><br/>|
-||Taiwan Northwest|51.53.172.214/31<br/><br/>|
-|United Arab Emirates|UAE Central|20.45.95.68/31<br/><br/>|
-||UAE North|20.38.143.44/31<br/>40.120.87.204/31|
-|United Kingdom|UK South|51.105.9.128/28<br/>51.104.30.160/29|
-||UK West|20.40.104.96/28<br/>51.137.164.200/29|
-|United States|Central US|13.86.97.224/28<br/>20.40.206.232/29|
-||East US|20.42.35.32/28<br/>20.49.111.32/29|
-||East US 2|20.49.102.24/29<br/><br/>|
-||North Central US|23.100.224.16/28<br/>20.49.114.40/29|
-||South Central US|20.45.5.160/28<br/>13.73.253.112/29|
-||West Central US|52.150.154.24/29<br/><br/>|
-||West US|40.91.82.48/28<br/>52.250.228.8/29|
-||West US 2|40.64.134.128/29<br/><br/>|
-||West US 3|20.150.241.64/29<br/><br/>|
+> [!NOTE]
+> Add the subdomain of the corresponding region to the Live Metrics URL from the [Outgoing ports](#outgoing-ports) table.
+
+| Continent/Country | Region | Subdomain | IP |
+| | | | |
+|Asia|East Asia|eastasia|52.229.216.48/28<br/>20.189.111.16/29|
+||Southeast Asia|southeastasia|52.139.250.96/28<br/>23.98.106.152/29|
+|Australia|Australia Central|australiacentral|20.37.227.104/29<br/><br/>|
+||Australia Central 2|australiacentral2|20.53.60.224/31<br/><br/>|
+||Australia East|australiaeast|20.40.124.176/28<br/>20.37.198.232/29|
+||Australia Southeast|australiasoutheast|20.42.230.224/29<br/><br/>|
+|Brazil|Brazil South|brazilsouth|191.233.26.176/28<br/>191.234.137.40/29|
+||Brazil Southeast|brazilsoutheast|20.206.0.196/31<br/><br/>|
+|Canada|Canada Central|canadacentral|52.228.86.152/29<br/><br/>|
+|Europe|North Europe|northeurope|52.158.28.64/28<br/>20.50.68.128/29|
+||West Europe|westeurope|51.144.56.96/28<br/>40.113.178.32/29|
+|France|France Central|francecentral|20.40.129.32/28<br/>20.43.44.216/29|
+||France South|francesouth|20.40.129.96/28<br/>52.136.191.12/31|
+|Germany|Germany West Central|germanywestcentral|20.52.95.50/31<br/><br/>|
+|India|Central India|centralindia|52.140.108.216/29<br/><br/>|
+||South India|southindia|20.192.153.106/31<br/><br/>|
+|Japan|Japan East|japaneast|52.140.232.160/28<br/>20.43.70.224/29|
+||Japan West|japanwest|20.189.194.102/31<br/><br/>|
+|Korea|Korea Central|koreacentral|20.41.69.24/29<br/><br/>|
+|Norway|Norway East|norwayeast|51.120.235.248/29<br/><br/>|
+||Norway West|norwaywest|51.13.143.48/31<br/><br/>|
+|Qatar|Qatar Central|qatarcentral|20.21.39.224/29<br/><br/>|
+|South Africa|South Africa North|southafricanorth|102.133.219.136/29<br/><br/>|
+|Switzerland|Switzerland North|switzerlandnorth|51.107.52.200/29<br/><br/>|
+||Switzerland West|switzerlandwest|51.107.148.8/29<br/><br/>|
+|United Arab Emirates|UAE North|uaenorth|20.38.143.44/31<br/>40.120.87.204/31|
+|United Kingdom|UK South|uksouth|51.105.9.128/28<br/>51.104.30.160/29|
+||UK West|ukwest|20.40.104.96/28<br/>51.137.164.200/29|
+|United States|Central US|centralus|13.86.97.224/28<br/>20.40.206.232/29|
+||East US|eastus|20.42.35.32/28<br/>20.49.111.32/29|
+||East US 2|eastus2|20.49.102.24/29<br/><br/>|
+||North Central US|northcentralus|23.100.224.16/28<br/>20.49.114.40/29|
+||South Central US|southcentralus|20.45.5.160/28<br/>13.73.253.112/29|
+||West US|westus|40.91.82.48/28<br/>52.250.228.8/29|
+||West US 2|westus2|40.64.134.128/29<br/><br/>|
+||West US 3|westus3|20.150.241.64/29<br/><br/>|
+
+#### Upcoming regions (Azure public cloud)
+
+> [!NOTE]
+> The following regions are not supported yet, but will be added in the near future.
+
+| Continent/Country | Region | Subdomain | IP |
+| | | | |
+|Canada|Canada East|TBD|52.242.40.208/31<br/><br/>|
+|Germany|Germany North|TBD|51.116.75.92/31<br/><br/>|
+|India|West India|TBD|20.192.84.164/31<br/><br/>|
+||Jio India Central|TBD|20.192.50.200/29<br/><br/>|
+||Jio India West|TBD|20.193.194.32/29<br/><br/>|
+|Israel|Israel Central|TBD|20.217.44.250/31<br/><br/>|
+|Poland|Poland Central|TBD|20.215.4.250/31<br/><br/>|
+|South Africa|South Africa West|TBD|102.37.86.196/31<br/><br/>|
+|Sweden|Sweden Central|TBD|51.12.25.192/29<br/><br/>|
+||Sweden South|TBD|51.12.17.128/29<br/><br/>|
+|Taiwan|Taiwan North|TBD|51.53.28.214/31<br/><br/>|
+||Taiwan Northwest|TBD|51.53.172.214/31<br/><br/>|
+|United Arab Emirates|UAE Central|TBD|20.45.95.68/31<br/><br/>|
+|United States|West Central US|TBD|52.150.154.24/29<br/><br/>|
### Discovery API
azure-monitor Metrics Supported https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/metrics-supported.md
This latest update adds a new column and reorders the metrics to be alphabetical
|Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions| ||||||||
-|Replicas|Yes|Replica Count|Count|Maximum|Number of replicas count of container app|revisionName|
-|Requests|Yes|Requests|Count|Total|Requests processed|revisionName, podName, statusCodeCategory, statusCode|
-|RestartCount|Yes|Replica Restart Count|Count|Maximum|Restart count of container app replicas|revisionName, podName|
-|RxBytes|Yes|Network In Bytes|Bytes|Total|Network received bytes|revisionName, podName|
-|TxBytes|Yes|Network Out Bytes|Bytes|Total|Network transmitted bytes|revisionName, podName|
-|UsageNanoCores|Yes|CPU Usage|NanoCores|Average|CPU consumed by the container app, in nano cores. 1,000,000,000 nano cores = 1 core|revisionName, podName|
-|WorkingSetBytes|Yes|Memory Working Set Bytes|Bytes|Average|Container App working set memory used in bytes.|revisionName, podName|
+|Replicas|No|Replica Count|Count|Maximum|Number of replicas count of container app|revisionName|
+|Requests|No|Requests|Count|Total|Requests processed|revisionName, podName, statusCodeCategory, statusCode|
+|RestartCount|No|Replica Restart Count|Count|Maximum|Restart count of container app replicas|revisionName, podName|
+|RxBytes|No|Network In Bytes|Bytes|Total|Network received bytes|revisionName, podName|
+|TxBytes|No|Network Out Bytes|Bytes|Total|Network transmitted bytes|revisionName, podName|
+|UsageNanoCores|No|CPU Usage|NanoCores|Average|CPU consumed by the container app, in nano cores. 1,000,000,000 nano cores = 1 core|revisionName, podName|
+|WorkingSetBytes|No|Memory Working Set Bytes|Bytes|Average|Container App working set memory used in bytes.|revisionName, podName|
## Microsoft.AppConfiguration/configurationStores
azure-monitor Cost Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/cost-logs.md
The charge for searching against Basic Logs is based on the GB of data scanned i
See [Configure Basic Logs in Azure Monitor](basic-logs-configure.md) for details on Basic Logs including how to configure them and query their data. ## Log data retention and archive
-In addition to data ingestion, there is a charge for the retention of data in each Log Analytics workspace. You can set the retention period for the entire workspace or for each table. After this period, the data is either removed or archived. Archived Logs have a reduced retention charge, and there is a charge search against them. Use Archive Logs to reduce your costs for data that you must store for compliance or occasional investigation.
+In addition to data ingestion, there is a charge for the retention of data in each Log Analytics workspace. You can set the retention period for the entire workspace or for each table. After this period, the data is either removed or archived. Archived Logs have a reduced retention charge, and there is a charge to search against them. Use Archive Logs to reduce your costs for data that you must store for compliance or occasional investigation.
See [Configure data retention and archive policies in Azure Monitor Logs](data-retention-archive.md) for details on data retention and archiving including how to configure these settings and access archived data.
azure-monitor Workbooks Chart Visualizations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/visualize/workbooks-chart-visualizations.md
Last updated 07/05/2022
# Chart visualizations
-Workbooks allow monitoring data to be presented as charts. Supported chart types include line, bar, bar categorical, area, scatter plots, pie, and time. Authors can choose to customize the height, width, color palette, legend, titles, no-data message, etc. of the chart and customize axis types and series colors using chart settings.
+Workbooks can take the data returned from queries in various formats to create different visualizations from that data, such as area, line, bar, or time visualizations.
+
+You can present monitoring data as charts. Supported chart types include:
+
+- Line
+- Bar
+- Bar categorical
+- Area
+- Scatter plot
+- Pie
+- Time
+
+You can choose to customize the:
+
+- Height
+- Width
+- Color palette
+- Legend
+- Titles
+- No-data message
+- Other characteristics
+
+You can also customize axis types and series colors by using chart settings.
Workbooks support charts for both logs and metric data sources. ## Log charts
-Azure Monitor logs gives resources owners detailed information about the workings of their apps and infrastructure. Unlike metrics, log information is not collected by default and requires some kind of collection on-boarding. However, when present logs provide a lot of information about the state of the resource and data useful for diagnostics. Workbooks allow presenting log data as visual charts for user analysis.
+Azure Monitor logs give you detailed information about your apps and infrastructure. Log information isn't collected by default, and you have to configure data collection. Logs provide information about the state of the resource and data that's useful for diagnostics. You can use workbooks to present log data as visual charts for user analysis.
-### Adding a log chart
+### Add a log chart
-The example below shows the trend of requests to an app over the previous days.
+The following example shows the trend of requests to an app over the previous days.
-1. Switch the workbook to edit mode by selecting **Edit** in the toolbar.
-2. Use the **Add query** link to add a log query control to the workbook.
-3. Select the query type as **Log**, resource type (for example, Application Insights) and the resources to target.
-4. Use the Query editor to enter the [KQL](/azure/kusto/query/) for your analysis (for example, trend of requests).
-5. Set the visualization to one of: **Area**, **Bar**, **Bar (categorical)**, **Line**, **Pie**, **Scatter**, or **Time**.
-6. Set other parameters if needed - like time range, visualization, size, color palette, and legend.
+1. Switch the workbook to edit mode by selecting **Edit** on the toolbar.
+1. Use the **Add query** link to add a log query control to the workbook.
+1. For **Query type**, select **Log**. For **Resource type**, select, for example, **Application Insights**, and select the resources to target.
+1. Use the query editor to enter the [KQL](/azure/kusto/query/) for your analysis. An example is trend of requests.
+1. Set **Visualization** to **Area**, **Bar**, **Bar (categorical)**, **Line**, **Pie**, **Scatter**, or **Time**.
+1. Set other parameters like time range, visualization, size, color palette, and legend, if needed.
-[![Screenshot of log chart in edit mode](./media/workbooks-chart-visualizations/log-chart.png)](./media/workbooks-chart-visualizations/log-chart.png#lightbox)
+[![Screenshot that shows a log chart in edit mode.](./media/workbooks-chart-visualizations/log-chart.png)](./media/workbooks-chart-visualizations/log-chart.png#lightbox)
### Log chart parameters
-| Parameter | Explanation | Example |
+| Parameter | Description | Examples |
| - |:-|:-|
-| `Query Type` | The type of query to use. | Log, Azure Resource Graph, etc. |
-| `Resource Type` | The resource type to target. | Application Insights, Log Analytics, or Azure-first |
-| `Resources` | A set of resources to get the metrics value from. | MyApp1 |
-| `Time Range` | The time window to view the log chart. | Last hour, Last 24 hours, etc. |
-| `Visualization` | The visualization to use. | Area, Bar, Line, Pie, Scatter, Time, bar categorical |
-| `Size` | The vertical size of the control. | Small, medium, large, or full |
-| `Color palette` | The color palette to use in the chart. Ignored in multi-metric or segmented mode. | Blue, green, red, etc. |
-| `Legend` | The aggregation function to use for the legend. | Sum or Average of values or Max, Min, First, Last value |
-| `Query` | Any KQL query that returns data in the format expected by the chart visualization. | _requests \| make-series Requests = count() default = 0 on timestamp from ago(1d) to now() step 1h_ |
+| Query type | The type of query to use. | Logs, Azure Resource Graph |
+| Resource type | The resource type to target. | Application Insights, Log Analytics, or Azure-first |
+| Resources | A set of resources to get the metrics value from. | MyApp1 |
+| Time range | The time window to view the log chart. | Last hour, last 24 hours |
+| Visualization | The visualization to use. | Area, bar, line, pie, scatter, time, bar (categorical) |
+| Size | The vertical size of the control. | Small, medium, large, or full |
+| Color palette | The color palette to use in the chart. Ignored in multi-metric or segmented mode. | Blue, green, red |
+| Legend | The aggregation function to use for the legend. | Sum or average of values or max, min, first, last value |
+| Query | Any KQL query that returns data in the format expected by the chart visualization. | _requests \| make-series Requests = count() default = 0 on timestamp from ago(1d) to now() step 1h_ |
### Time-series charts
-Time series charts like area, bar, line, scatter, and time can be easily created using the query control in Workbooks. The key is having time and metric information in the result set.
+You can use the workbook's query control to create time-series charts such as area, bar, line, scatter, and time. You must have time and metric information in the result set to create a time-series chart.
-#### Simple time-series
+#### Simple time series
-The query below returns a table with two columns: *timestamp* and *Requests*. The query control uses *timestamp* for the X-axis and *Requests* for the Y-axis.
+The following query returns a table with two columns: `timestamp` and `Requests`. The query control uses `timestamp` for the x-axis and `Requests` for the y-axis.
```kusto
requests
| summarize Requests = count() by bin(timestamp, 1h)
```
-[![Screenshot of a simple time-series log line chart.](./media/workbooks-chart-visualizations/log-chart-line-simple.png)](./media/workbooks-chart-visualizations/log-chart-line-simple.png#lightbox)
+[![Screenshot that shows a simple time-series log line chart.](./media/workbooks-chart-visualizations/log-chart-line-simple.png)](./media/workbooks-chart-visualizations/log-chart-line-simple.png#lightbox)
-#### Time-series with multiple metrics
+#### Time series with multiple metrics
-The query below returns a table with three columns: *timestamp*, *Requests*, and *Users*. The query control uses *timestamp* for the X-axis and *Requests* & *Users* as separate series on the Y-axis.
+The following query returns a table with three columns: `timestamp`, `Requests`, and `Users`. The query control uses `timestamp` for the x-axis and `Requests` and `Users` as separate series on the y-axis.
```kusto
requests
| summarize Requests = count(), Users = dcount(user_Id) by bin(timestamp, 1h)
```
-[![Screenshot of a time-series with multiple metrics log line chart.](./media/workbooks-chart-visualizations/log-chart-line-multi-metric.png)](./media/workbooks-chart-visualizations/log-chart-line-multi-metric.png#lightbox)
+[![Screenshot that shows a time series with multiple metrics log line chart.](./media/workbooks-chart-visualizations/log-chart-line-multi-metric.png)](./media/workbooks-chart-visualizations/log-chart-line-multi-metric.png#lightbox)
-#### Segmented Time-series
+#### Segmented time series
-The query below returns a table with three columns: *timestamp*, *Requests*, and *RequestName* where *RequestName* is a categorical column with the names of requests. The query control here uses *timestamp* for the X-axis and adds a series per value of *RequestName*.
+The following query returns a table with three columns: `timestamp`, `Requests`, and `RequestName`, where `RequestName` is a categorical column with the names of requests. The query control here uses `timestamp` for the x-axis and adds a series per value of `RequestName`.
```kusto
requests
| summarize Request = count() by bin(timestamp, 1h), RequestName = name
```
-[![Screenshot of a segmented time-series log line chart.](./media/workbooks-chart-visualizations/log-chart-line-segmented.png)](./media/workbooks-chart-visualizations/log-chart-line-segmented.png#lightbox)
+[![Screenshot that shows a segmented time-series log line chart.](./media/workbooks-chart-visualizations/log-chart-line-segmented.png)](./media/workbooks-chart-visualizations/log-chart-line-segmented.png#lightbox)
### Summarize vs. make-series
-The examples in the previous section use the `summarize` operator because it is easier to understand. However, summarize does have a major limitation as it omits the results row if there are no items in the bucket. It can have the effect of shifting the chart time window depending on whether the empty buckets are in the front or backside of the time range.
+The examples in the previous section use the `summarize` operator because it's easier to understand. The `summarize` operator's major limitation is that it omits the results row if there are no items in the bucket. If the results row is omitted, depending on where the empty buckets are in the time range, the chart time window might shift.
-It is usually better to use the `make-series` operator to create time series data, which has the option to provide default values for empty buckets.
+We recommend using the `make-series` operator to create time-series data. You can provide default values for empty buckets.
-The following query uses the `make-series` operator.
+The following query uses the `make-series` operator:
```kusto
requests
| make-series Requests = count() default = 0 on timestamp from ago(1d) to now() step 1h by RequestName = name
```
-The query below shows a similar chart with the `summarize` operator
+The following query shows a similar chart with the `summarize` operator:
```kusto
requests
| summarize Request = count() by bin(timestamp, 1h), RequestName = name
```
-Even though the queries return results in different formats, when a user sets the visualization to area, line, bar, or time, Workbooks understands how to handle the data to create the visualization.
-
-[![Screenshot of a log line chart made from a make-series query](./media/workbooks-chart-visualizations/log-chart-line-make-series.png)](./media/workbooks-chart-visualizations/log-chart-line-make-series.png#lightbox)
+[![Screenshot that shows a log line chart made from a make-series query.](./media/workbooks-chart-visualizations/log-chart-line-make-series.png)](./media/workbooks-chart-visualizations/log-chart-line-make-series.png#lightbox)
### Categorical bar chart or histogram
-Categorical charts allow users to represent a dimension or column on the X-axis of a chart, this is especially useful in histograms. The example below shows the distribution of requests by their result code.
+You can represent a dimension or column on the x-axis by using categorical charts. Categorical charts are useful for histograms. The following example shows the distribution of requests by their result code:
```kusto requests
requests
| order by Requests desc ```
-The query returns two columns: *Requests* metric and *Result* category. Each value of the *Result* column will get its own bar in the chart with height proportional to the *Requests metric*.
+The query returns two columns: the `Requests` metric and the `Result` category. Each value of the `Result` column is represented by a bar in the chart with height proportional to the `Requests` metric.
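A minimal query that yields those two columns, assuming the result code lives in the `resultCode` column of the standard `requests` table, might look like this sketch:

```kusto
// Sketch: count requests per result code; each Result value becomes one bar in the categorical chart.
requests
| summarize Requests = count() by Result = resultCode
| order by Requests desc
```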
-[![Screenshot of a categorical bar chart for requests by result code](./media/workbooks-chart-visualizations/log-chart-categorical-bar.png)](./media/workbooks-chart-visualizations/log-chart-categorical-bar.png#lightbox)
+[![Screenshot that shows a categorical bar chart for requests by result code.](./media/workbooks-chart-visualizations/log-chart-categorical-bar.png)](./media/workbooks-chart-visualizations/log-chart-categorical-bar.png#lightbox)
### Pie charts
-Pie charts allow the visualization of numerical proportion. The example below shows the proportion of requests by their result code.
+Pie charts allow the visualization of numerical proportion. The following example shows the proportion of requests by their result code:
```kusto requests
requests
| order by Requests desc ```
-The query returns two columns: *Requests* metric and *Result* category. Each value of the *Result* column will get its own slice in the pie with size proportional to the *Requests* metric.
+The query returns two columns: `Requests` metric and `Result` category. Each value of the `Result` column gets its own slice in the pie with size proportional to the `Requests` metric.
-[![Screenshot of a pie chart with slices representing result code](./media/workbooks-chart-visualizations/log-chart-pie-chart.png)](./media/workbooks-chart-visualizations/log-chart-pie-chart.png#lightbox)
+[![Screenshot that shows a pie chart with slices representing result code.](./media/workbooks-chart-visualizations/log-chart-pie-chart.png)](./media/workbooks-chart-visualizations/log-chart-pie-chart.png#lightbox)
## Metric charts
-Most Azure resources emit metric data about state and health (for example, CPU utilization, storage availability, count of database transactions, failing app requests, etc.). Workbooks allow the visualization of this data as time-series charts.)
+Most Azure resources emit metric data about their state and health. Examples include CPU utilization, storage availability, count of database transactions, and failing app requests. You can use workbooks to create visualizations of this data as time-series charts.
-### Adding a metric chart
+### Add a metric chart
-The following example will show the number of transactions in a storage account over the prior hour. This allows the storage owner to see the transaction trend and look for anomalies in behavior.
+The following example shows the number of transactions in a storage account over the prior hour. This information allows the storage owner to see the transaction trend and look for anomalies in behavior.
-1. Switch the workbook to edit mode by selecting **Edit** in the toolbar.
-2. Use the **Add metric** link to add a metric control to the workbook.
-3. Select a resource type (for example, Storage Account), the resources to target, the metric namespace and name, and the aggregation to use.
-4. Set other parameters if needed - like time range, split-by, visualization, size, and color palette.
+1. Switch the workbook to edit mode by selecting **Edit** on the toolbar.
+1. Use the **Add metric** link to add a metric control to the workbook.
+1. Select a resource type, for example, **Storage account**. Select the resources to target, the metric namespace and name, and the aggregation to use.
+1. Set other parameters like time range, split by, visualization, size, and color palette, if needed.
-[![Screenshot of metric chart in edit mode](./media/workbooks-chart-visualizations/metric-chart.png)](./media/workbooks-chart-visualizations/metric-chart.png#lightbox)
+[![Screenshot that shows a metric chart in edit mode.](./media/workbooks-chart-visualizations/metric-chart.png)](./media/workbooks-chart-visualizations/metric-chart.png#lightbox)
### Metric chart parameters
-| Parameter | Explanation | Example |
+| Parameter | Description | Examples |
| - |:-|:-|
-| `Resource Type` | The resource type to target. | Storage or Virtual Machine. |
-| `Resources` | A set of resources to get the metrics value from. | MyStorage1 |
-| `Namespace` | The namespace with the metric. | Storage > Blob |
-| `Metric` | The metric to visualize. | Storage > Blob > Transactions |
-| `Aggregation` | The aggregation function to apply to the metric. | Sum, Count, Average, etc. |
-| `Time Range` | The time window to view the metric in. | Last hour, Last 24 hours, etc. |
-| `Visualization` | The visualization to use. | Area, Bar, Line, Scatter, Grid |
-| `Split By` | Optionally split the metric on a dimension. | Transactions by Geo type |
-| `Size` | The vertical size of the control. | Small, medium, or large |
-| `Color palette` | The color palette to use in the chart. Ignored if the `Split by` parameter is used. | Blue, green, red, etc. |
+| Resource type | The resource type to target. | Storage or virtual machine |
+| Resources | A set of resources to get the metrics value from. | MyStorage1 |
+| Namespace | The namespace with the metric. | Storage > Blob |
+| Metric | The metric to visualize. | Storage > Blob > Transactions |
+| Aggregation | The aggregation function to apply to the metric. | Sum, count, average |
+| Time range | The time window to view the metric in. | Last hour, last 24 hours |
+| Visualization | The visualization to use. | Area, bar, line, scatter, grid |
+| Split by | Optionally split the metric on a dimension. | Transactions by geo type |
+| Size | The vertical size of the control. | Small, medium, or large |
+| Color palette | The color palette to use in the chart. Ignored if the `Split by` parameter is used. | Blue, green, red |
### Examples Transactions split by API name as a line chart:
-[![Screenshot of a metric line chart for Storage transactions split by API name](./media/workbooks-chart-visualizations/metric-chart-storage-split-line.png)](./media/workbooks-chart-visualizations/metric-chart-storage-split-line.png#lightbox)
+[![Screenshot that shows a metric line chart for storage transactions split by API name.](./media/workbooks-chart-visualizations/metric-chart-storage-split-line.png)](./media/workbooks-chart-visualizations/metric-chart-storage-split-line.png#lightbox)
Transactions split by response type as a large bar chart:
-[![Screenshot of a large metric bar chart for Storage transactions split by response type](./media/workbooks-chart-visualizations/metric-chart-storage-bar-large.png)](./media/workbooks-chart-visualizations/metric-chart-storage-bar-large.png#lightbox)
+[![Screenshot that shows a large metric bar chart for storage transactions split by response type.](./media/workbooks-chart-visualizations/metric-chart-storage-bar-large.png)](./media/workbooks-chart-visualizations/metric-chart-storage-bar-large.png#lightbox)
Average latency as a scatter chart:
-[![Screenshot of a metric scatter chart for Storage latency](./media/workbooks-chart-visualizations/metric-chart-storage-scatter.png)](./media/workbooks-chart-visualizations/metric-chart-storage-scatter.png#lightbox)
+[![Screenshot that shows a metric scatter chart for storage latency.](./media/workbooks-chart-visualizations/metric-chart-storage-scatter.png)](./media/workbooks-chart-visualizations/metric-chart-storage-scatter.png#lightbox)
## Chart settings
-Authors can use chart settings to customize which fields are used in the chart axes, the axis units, custom formatting, ranges, grouping behaviors, legends, and series colors.
+You can use chart settings to customize which fields are used in the:
+
+- Chart axes
+- Axis units
+- Custom formatting
+- Ranges
+- Grouping behaviors
+- Legends
+- Series colors
-### The settings tab
+### Settings tab
-The settings tab controls:
+The **Settings** tab controls:
-- The axis settings, including which fields, custom formatting that allows users to set the number formatting to the axis values and custom ranges.-- Grouping settings, including which field, the limits before an "Others" group is created.-- Legend settings, including showing metrics (series name, colors, and numbers) at the bottom, and/or a legend (series names and colors).
+- **X-axis Settings**, **Y-axis Settings**: Control which fields are used for each axis. You can use custom formatting to set the number format for the axis values and to define custom ranges.
+- **Grouping Settings**: Control which field is used for grouping and the limit before an "Others" group is created.
+- **Legend Settings**: Control whether metrics (series name, colors, and numbers) are shown at the bottom, and/or a legend (series names and colors).
-![Screenshot of chart settings.](./media/workbooks-chart-visualizations/chart-settings.png)
+![Screenshot that shows chart settings.](./media/workbooks-chart-visualizations/chart-settings.png)
#### Custom formatting
-Number formatting options include:
+Number formatting options are shown in this table.
-| Formatting option | Explanation |
+| Formatting option | Description |
|:- |:-|
-| `Units` | The units for the column - various options for percentage, counts, time, byte, count/time, bytes/time, etc. For example, the unit for a value of 1234 can be set to milliseconds and it's rendered as 1.234s. |
-| `Style` | The format to render it as - decimal, currency, percent. |
-| `Show group separator` | Checkbox to show group separators. Renders 1234 as 1,234 in the US. |
-| `Minimum integer digits` | Minimum number of integer digits to use (default 1). |
-| `Minimum fractional digits` | Minimum number of fractional digits to use (default 0). |
-| `Maximum fractional digits` | Maximum number of fractional digits to use. |
-| `Minimum significant digits` | Minimum number of significant digits to use (default 1). |
-| `Maximum significant digits` | Maximum number of significant digits to use. |
+| Units | The units for the column, such as various options for percentage, counts, time, byte, count/time, and bytes/time. For example, the unit for a value of 1234 can be set to milliseconds and it's rendered as 1.234s. |
+| Style | The format to render it as, such as decimal, currency, and percent. |
+| Show grouping separators | Checkbox to show group separators. Renders 1234 as 1,234 in the US. |
+| Minimum integer digits | Minimum number of integer digits to use (default 1). |
+| Minimum fractional digits | Minimum number of fractional digits to use (default 0). |
+| Maximum fractional digits | Maximum number of fractional digits to use. |
+| Minimum significant digits | Minimum number of significant digits to use (default 1). |
+| Maximum significant digits | Maximum number of significant digits to use. |
+
+![Screenshot that shows x-axis settings.](./media/workbooks-chart-visualizations/number-format-settings.png)
+
+### Series Settings tab
-![Screenshot of X axis settings.](./media/workbooks-chart-visualizations/number-format-settings.png)
+You can adjust the labels and colors shown for series in the chart with the **Series Settings** tab:
-### The series tab
+- **Series name**: This field is used to match a series in the data and, if matched, the display label and color are displayed.
+- **Comment**: This field is useful for template authors because this comment might be used by translators to localize the display labels.
-The series setting tab lets you adjust the labels and colors shown for series in the chart.
+![Screenshot that shows series settings.](./media/workbooks-chart-visualizations/series-settings.png)
-- The `Series name` field is used to match a series in the data and if matched, the display label and color will be displayed.-- The `Comment` field is useful for template authors, as this comment may be used by translators to localize the display labels.
+## Next steps
-![Screenshot of series settings.](./media/workbooks-chart-visualizations/series-settings.png)
+- Learn how to create a [tile in workbooks](workbooks-tile-visualizations.md).
+- Learn how to create [interactive workbooks](workbooks-interactive.md).
azure-monitor Workbooks Grid Visualizations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/visualize/workbooks-grid-visualizations.md
# Grid visualizations
-Grids or tables are a common way to present data to users. Workbooks allow users to individually style the columns of the grid to provide a rich UI for their reports.
+Grids or tables are a common way to present data to users. You can individually style the columns of grids in workbooks to provide a rich UI for your reports. While a plain table shows data, it's hard to read and insights won't always be apparent. Styling the grid can help make it easier to read and interpret the data.
-The example below shows a grid that combines icons, heatmaps, and spark-bars to present complex information. The workbook also provides sorting, a search box and a go-to-analytics button.
+The following example shows a grid that combines icons, heatmaps, and spark bars to present complex information. The workbook also provides sorting, a search box, and a go-to-analytics button.
-[![Screenshot of log based grid](./media/workbooks-grid-visualizations/grid.png)](./media/workbooks-grid-visualizations/grid.png#lightbox)
+[![Screenshot that shows a log-based grid.](./media/workbooks-grid-visualizations/grid.png)](./media/workbooks-grid-visualizations/grid.png#lightbox)
-## Adding a log-based grid
+## Add a log-based grid
-1. Switch the workbook to edit mode by selecting **Edit** in the toolbar.
-2. Select **Add query** to add a log query control to the workbook.
-3. Select the query type as **Log**, resource type (for example, Application Insights) and the resources to target.
-4. Use the Query editor to enter the KQL for your analysis (for example, VMs with memory below a threshold)
-5. Set the visualization to **Grid**
-6. Set other parameters if needed - like time range, size, color palette, and legend.
+1. Switch the workbook to edit mode by selecting **Edit** on the toolbar.
+1. Select **Add query** to add a log query control to the workbook.
+1. For **Query type**, select **Log**. For **Resource type**, select, for example, **Application Insights**, and select the resources to target.
+1. Use the query editor to enter the KQL for your analysis. An example is VMs with memory below a threshold.
+1. Set **Visualization** to **Grid**.
+1. Set parameters like time range, size, color palette, and legend, if needed.
-[![Screenshot of log based grid query](./media/workbooks-grid-visualizations/grid-query.png)](./media/workbooks-grid-visualizations/grid-query.png#lightbox)
+[![Screenshot that shows a log-based grid query.](./media/workbooks-grid-visualizations/grid-query.png)](./media/workbooks-grid-visualizations/grid-query.png#lightbox)
## Log chart parameters
-| Parameter | Explanation | Example |
+| Parameter | Description | Examples |
| - |:-|:-|
-|Query Type| The type of query to use. | Log, Azure Resource Graph, etc. |
-|Resource Type| The resource type to target. | Application Insights, Log Analytics, or Azure-first |
+|Query type| The type of query to use. | Logs, Azure Resource Graph |
+|Resource type| The resource type to target. | Application Insights, Log Analytics, or Azure-first |
|Resources| A set of resources to get the metrics value from. | MyApp1 |
-|Time Range| The time window to view the log chart. | Last hour, Last 24 hours, etc. |
+|Time range| The time window to view the log chart. | Last hour, last 24 hours |
|Visualization| The visualization to use. | Grid | |Size| The vertical size of the control. | Small, medium, large, or full | |Query| Any KQL query that returns data in the format expected by the chart visualization. | _requests \| summarize Requests = count() by name_ |
-## Simple Grid
+## Simple grid
-Workbooks can render KQL results as a simple table. The grid below shows the count of requests and unique users per requests type in an app.
+Workbooks can render KQL results as a simple table. The following grid shows the count of requests and unique users per request type in an app:
```kusto requests
requests
| order by Requests desc ```
-[![Screenshot of a log based grid in edit mode](./media/workbooks-grid-visualizations/log-chart-simple-grid.png)](./media/workbooks-grid-visualizations/log-chart-simple-grid.png#lightbox)
+[![Screenshot that shows a log-based grid in edit mode.](./media/workbooks-grid-visualizations/log-chart-simple-grid.png)](./media/workbooks-grid-visualizations/log-chart-simple-grid.png#lightbox)
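Spelled out in full, a query that produces this kind of grid, assuming the standard `requests` table with a `user_Id` column, might look like the following sketch:

```kusto
// Sketch: one row per request name with its total request count and distinct user count.
requests
| summarize Requests = count(), Users = dcount(user_Id) by name
| order by Requests desc
```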
## Grid styling
-While a plain table shows data, it is hard to read and insights won't always be apparent. Styling the grid can help make it easier to read and interpret the data.
+Columns styled as heatmaps:
-Below is the same grid from the previous section styled as heatmaps.
+[![Screenshot that shows a log-based grid with columns styled as heatmaps.](./media/workbooks-grid-visualizations/log-chart-grid-heatmap.png)](./media/workbooks-grid-visualizations/log-chart-grid-heatmap.png#lightbox)
-[![Screenshot of a log based grid with columns styled as heatmaps](./media/workbooks-grid-visualizations/log-chart-grid-heatmap.png)](./media/workbooks-grid-visualizations/log-chart-grid-heatmap.png#lightbox)
+Columns styled as bars:
+[![Screenshot that shows a log-based grid with columns styled as bars.](./media/workbooks-grid-visualizations/log-chart-grid-bar.png)](./media/workbooks-grid-visualizations/log-chart-grid-bar.png#lightbox)
-Here is the same grid styled as bars:
-[![Screenshot of a log based grid with columns styled as bars](./media/workbooks-grid-visualizations/log-chart-grid-bar.png)](./media/workbooks-grid-visualizations/log-chart-grid-bar.png#lightbox)
-
-### Styling a grid column
+### Style a grid column
1. Select the **Column Setting** button on the query control toolbar.
-2. In the **Edit column settings**, select the column to style.
-3. Choose a column renderer (for example heatmap, bar, bar underneath, etc.) and related settings to style your column.
+1. In the **Edit column settings** pane, select the column to style.
+1. In **Column renderer**, select **Heatmap**, **Bar**, or **Bar underneath** and select related settings to style your column.
-Below is an example that styles the **Request** column as a bar:
+The following example shows the **Requests** column styled as a bar:
-[![Screenshot of a log based grid with request column styled as a bar.](./media/workbooks-grid-visualizations/log-chart-grid-column-settings-start.png)](./media/workbooks-grid-visualizations/log-chart-grid-column-settings-start.png#lightbox)
+[![Screenshot that shows a log-based grid with the Requests column styled as a bar.](./media/workbooks-grid-visualizations/log-chart-grid-column-settings-start.png)](./media/workbooks-grid-visualizations/log-chart-grid-column-settings-start.png#lightbox)
-This usually is taking the user to some other view with context coming from the cell or may open up a url.
+This option usually takes you to some other view with context coming from the cell, or it might open a URL.
### Custom formatting
-Workbooks also allow users to set the number formatting of their cell values. They can do so by clicking on the **Custom formatting** checkbox when available.
+You can also set the number formatting of your cell values in workbooks. To set this formatting, select the **Custom formatting** checkbox when it's available.
-| Formatting option | Explanation |
+| Formatting option | Description |
|:- |:-|
-|Units| The units for the column - various options for percentage, counts, time, byte, count/time, bytes/time, etc. For example, the unit for a value of 1234 can be set to milliseconds and it's rendered as 1.234 s. |
-|Style| The format to render it as - decimal, currency, percent. |
+|Units| The units for the column with various options for percentage, counts, time, byte, count/time, and bytes/time. For example, the unit for a value of 1234 can be set to milliseconds and it's rendered as 1.234 s. |
+|Style| The format used to render it, such as decimal, currency, percent. |
|Show group separator| Checkbox to show group separators. Renders 1234 as 1,234 in the US. | |Minimum integer digits| Minimum number of integer digits to use (default 1). | |Minimum fractional digits| Minimum number of fractional digits to use (default 0). | |Maximum fractional digits| Maximum number of fractional digits to use. | |Minimum significant digits| Minimum number of significant digits to use (default 1). | |Maximum significant digits| Maximum number of significant digits to use. |
-|Custom text for missing values| When a data point does not have a value, show this custom text instead of a blank. |
+|Custom text for missing values| When a data point doesn't have a value, show this custom text instead of a blank. |
### Custom date formatting
-When the author has specified that a column is set to the Date/Time renderer, the author can specify custom date formatting options by using the *Custom date formatting* checkbox.
+When you've specified that a column is set to the date/time renderer, you can specify custom date formatting options by using the **Custom date formatting** checkbox.
-| Formatting option | Explanation |
+| Formatting option | Description |
|:- |:-|
-|Style| The format to render a date as short, long, full formats, or a time as short or long time formats. |
-|Show time as| Allows the author to decide between showing the time in local time (default), or as UTC. Depending on the date format style selected, the UTC/time zone information may not be displayed. |
+|Style| The format to render a date as short, long, or full, or a time as short or long. |
+|Show time as| Allows you to decide between showing the time in local time (default) or as UTC. Depending on the date format style selected, the UTC/time zone information might not be displayed. |
## Custom column width setting
-The author can customize the width of any column in the grid using the **Custom Column Width** field in **Column Settings**.
+You can customize the width of any column in the grid by using the **Custom Column Width** field in **Column Settings**.
-![Screenshot of column settings with the custom column width field indicated in a red box](./media/workbooks-grid-visualizations/custom-column-width-setting.png)
+![Screenshot that shows column settings with the Custom Column Width field indicated in a red box.](./media/workbooks-grid-visualizations/custom-column-width-setting.png)
-If the field is left blank, then the width will be automatically determined based on the number of characters in the column and the number of visible columns. The default unit is "ch" (characters).
+If the field is left blank, the width is automatically determined based on the number of characters in the column and the number of visible columns. The default unit is "ch," which is an abbreviation for "characters."
-Selecting the blue **(Current Width)** button in the label will fill the text field with the selected column's current width. If a value is present in the custom width field with no unit of measurement, then the default will be used.
+Selecting the **(Current Width)** button in the label fills the text field with the selected column's current width. If a value is present in the **Custom Column Width** field with no unit of measurement, the default is used.
The units of measurement available are:
| fr | fractional units |
| % | percentage |
-Input validation - if validation fails, a red guidance message will popup below the field, but the user can still apply the width. If a value is present in the input, it will be parsed out. If no valid unit of measure is found, then the default will be used.
+**Input validation**: If validation fails, a red guidance message appears underneath the field, but you can still apply the width. If a value is present in the input, it's parsed out. If no valid unit of measure is found, the default is used.
-There is no minimum/maximum width as this is left up to the author's discretion. The custom column width field is disabled for hidden columns.
+You can set the width to any value. There's no minimum or maximum width. The **Custom Column Width** field is disabled for hidden columns.
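+
+For example, all of the following are valid entries for the **Custom Column Width** field (the values here are illustrative only):
+
+```
+250px    a fixed width of 250 pixels
+20ch     the width of 20 characters (the default unit if none is specified)
+50%      half of the total grid width
+2fr      two fractional units of the remaining grid space
+```
+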
## Examples
+Here are some examples.
+ ### Spark lines and bar underneath
-The example below shows requests counts and its trend by request name.
+The following example shows request counts and the trend by request name:
```kusto
requests
| order by Requests desc
```
-[![Screenshot of a log based grid with a bar underneath and a spark line](./media/workbooks-grid-visualizations/log-chart-grid-spark-line.png)](./media/workbooks-grid-visualizations/log-chart-grid-spark-line.png#lightbox)
+[![Screenshot that shows a log-based grid with a bar underneath and a spark line.](./media/workbooks-grid-visualizations/log-chart-grid-spark-line.png)](./media/workbooks-grid-visualizations/log-chart-grid-spark-line.png#lightbox)
### Heatmap with shared scales and custom formatting
-This example shows various request duration metrics and its counts. The heatmap renderer uses the minimum values set in settings or calculates a minimum and maximum value for the column, and assigns a background color from the selected palette for the cell based on the value of the cell relative to the minimum and maximum value of the column.
+This example shows various request duration metrics and the counts. The heatmap renderer uses the minimum values set in settings or calculates a minimum and maximum value for the column. It assigns a background color from the selected palette for the cell. The color is based on the value of the cell relative to the minimum and maximum value of the column.
```
requests
| order by Requests desc
```
-[![Screenshot of a log based grid with a heatmap having a shared scale across columns](./media/workbooks-grid-visualizations/log-chart-grid-shared-scale.png)](./media/workbooks-grid-visualizations/log-chart-grid-shared-scale.png#lightbox)
+[![Screenshot that shows a log-based grid with a heatmap that has a shared scale across columns.](./media/workbooks-grid-visualizations/log-chart-grid-shared-scale.png)](./media/workbooks-grid-visualizations/log-chart-grid-shared-scale.png#lightbox)
-In the above example, a shared palette (green or red) and scale is used to color the columns (mean, median, p80, p95, and p99). A separate palette (blue) used for the request column.
+In the preceding example, a shared palette in green or red and a scale are used to color the columns **Mean**, **Median**, **p80**, **p95**, and **p99**. A separate palette in blue is used for the **Requests** column.
#### Shared scale

To get a shared scale:
-1. Use regular expressions to select more than one column to apply a setting to. For example, set the column name to `Mean|Median|p80|p95|p99` to select them all.
-2. Delete default settings for the individual columns.
+1. Use regular expressions to select more than one column to apply a setting to. For example, set the column name to **Mean|Median|p80|p95|p99** to select them all.
+1. Delete default settings for the individual columns.
-This will cause the new multi-column setting to apply its settings to include a shared scale.
+The new multi-column setting applies its settings to include a shared scale.
-[![Screenshot of a log based grid setting to get a shared scale across columns](./media/workbooks-grid-visualizations/log-chart-grid-shared-scale-settings.png)](./media/workbooks-grid-visualizations/log-chart-grid-shared-scale-settings.png#lightbox)
+[![Screenshot that shows a log-based grid setting to get a shared scale across columns.](./media/workbooks-grid-visualizations/log-chart-grid-shared-scale-settings.png)](./media/workbooks-grid-visualizations/log-chart-grid-shared-scale-settings.png#lightbox)
### Icons to represent status
-The example below shows custom status of requests based on the p95 duration.
+The following example shows custom status of requests based on the p95 duration:
```
requests
| project Status = case(p95 > 5000, 'critical', p95 > 1000, 'error', 'success'), name, p95
```
-[![Screenshot of a log based grid with a heatmap having a shared scale across columns using the query above.](./media/workbooks-grid-visualizations/log-chart-grid-icons.png)](./media/workbooks-grid-visualizations/log-chart-grid-icons.png#lightbox)
+[![Screenshot that shows a log-based grid with a heatmap that has a shared scale across columns using the preceding query.](./media/workbooks-grid-visualizations/log-chart-grid-icons.png)](./media/workbooks-grid-visualizations/log-chart-grid-icons.png#lightbox)
+
+Supported icon names:
-Supported icon names include:
- cancelled
- critical
- disabled
- Unknown
- Blank
+## Fractional unit percentages
+
+The fractional unit, abbreviated as "fr," is a commonly used dynamic unit of measurement in various types of grids. As the window size or resolution changes, the fr width changes too.
+
+The following screenshot shows a table with eight columns that are 1fr width each and all are equal widths. As the window size changes, the width of each column changes proportionally.
+
+[![Screenshot that shows columns in a grid with a column-width value of 1fr each.](./media/workbooks-grid-visualizations/custom-column-width-fr.png)](./media/workbooks-grid-visualizations/custom-column-width-fr.png#lightbox)
+
+The following image shows the same table, except the first column is set to 50% width. This setting dynamically sets the column to half of the total grid width. Resizing the window continues to retain the 50% width unless the window size gets too small. These dynamic columns have a minimum width based on their contents.
-## Fractional units percentages
+The remaining 50% of the grid is divided up by the eight total fractional units. The **Kind** column is set to 2fr, so it takes up one-fourth of the remaining space. Because the other columns are 1fr each, they each take up one-eighth of the right half of the grid.
-The fractional unit (fr) is a commonly used dynamic unit of measurement in various types of grids. As the window size/resolution changes, the fr width changes as well.
+[![Screenshot that shows columns in a grid with one column-width value of 50% and the rest as 1fr each.](./media/workbooks-grid-visualizations/custom-column-width-fr2.png)](./media/workbooks-grid-visualizations/custom-column-width-fr2.png#lightbox)
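+
+As a worked sketch of that split, assuming for illustration a total grid width of 1,200 px:
+
+```
+Total grid width       1200 px  (assumed value)
+Column 1 (50%)          600 px
+Remaining width         600 px, split across 8 fr (2fr + 6 x 1fr)
+Kind column (2fr)       600 x 2/8 = 150 px
+Each 1fr column         600 x 1/8 =  75 px
+```
+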
-The screenshot below shows a table with eight columns that are 1fr width each and all equal widths. As the window size changes, the width of each column changes proportionally.
+Combining fr, %, px, and ch widths is possible and works similarly to the previous examples. The widths that are set by the static units (ch and px) are hard constants that won't change even if the window or resolution is changed.
-[![Screenshot of columns in grid with column width value of 1fr each](./media/workbooks-grid-visualizations/custom-column-width-fr.png)](./media/workbooks-grid-visualizations/custom-column-width-fr.png#lightbox)
+The columns set by % take up their percentage based on the total grid width. This width might not be exact because of the minimum widths described previously.
-The image below shows the same table, except the first column is set to 50% width. This will set the column to half of the total grid width dynamically. Resizing the window will continue to retain the 50% width unless the window size gets too small. These dynamic columns have a minimum width based on their contents. The remaining 50% of the grid is divided up by the eight total fractional units. The "kind" column below is set to 2fr, so it takes up one-fourth of the remaining space. As the other columns are 1fr each, they each take up one-eighth of the right half of the grid.
+The columns set with fr split up the remaining grid space based on the number of fractional units they're allotted.
-[![Screenshot of columns in grid with 1 column width value of 50% and the rest as 1fr each](./media/workbooks-grid-visualizations/custom-column-width-fr2.png)](./media/workbooks-grid-visualizations/custom-column-width-fr2.png#lightbox)
+[![Screenshot that shows columns in a grid with an assortment of different width units used.](./media/workbooks-grid-visualizations/custom-column-width-fr3.png)](./media/workbooks-grid-visualizations/custom-column-width-fr3.png#lightbox)
-Combining fr, %, px, and ch widths is possible and works similarly to the previous examples. The widths that are set by the static units (ch and px) are hard constants that won't change even if the window/resolution is changed. The columns set by % will take up their percentage based on the total grid width (might not be exact due to previously minimum widths). The columns set with fr will just split up the remaining grid space based on the number of fractional units they are allotted.
+## Next steps
-[![Screenshot of columns in grid with assortment of different width units used](./media/workbooks-grid-visualizations/custom-column-width-fr3.png)](./media/workbooks-grid-visualizations/custom-column-width-fr3.png#lightbox)
+* Learn how to create a [graph in workbooks](workbooks-graph-visualizations.md).
+* Learn how to create a [tile in workbooks](workbooks-tile-visualizations.md).
azure-monitor Workbooks Text Visualizations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/visualize/workbooks-text-visualizations.md
Last updated 07/05/2022
# Text visualizations
-Workbooks allow authors to include text blocks in their workbooks. The text can be human analysis of telemetry, information to help users interpret your data, section headings, etc.
+You can include text blocks in your workbooks. The text can be human analysis of telemetry, information to help users interpret your data, and section headings.
-[![Screenshot of Apdex table of text](./media/workbooks-text-visualizations/apdex.png)](./media/workbooks-text-visualizations/apdex.png#lightbox)
+[![Screenshot that shows the Apdex table of text.](./media/workbooks-text-visualizations/apdex.png)](./media/workbooks-text-visualizations/apdex.png#lightbox)
-Text is added through a Markdown control, which provides full formatting control. These include different heading and font styles, hyperlinks, tables, etc.
+Text is added through a Markdown control, which provides full formatting control. Formatting includes different heading and font styles, hyperlinks, and tables.
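+
+For example, a hypothetical snippet that you might paste into the Markdown control could combine a heading, a hyperlink, and a table:
+
+```
+## Overnight telemetry summary
+
+Error rates stayed within target. See the [failures view](https://portal.azure.com) for live data.
+
+| Check      | Status |
+|------------|--------|
+| Error rate | OK     |
+| Latency    | OK     |
+```
+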
-Edit Mode:
+Edit mode:
-![Screenshot of a text step in edit mode.](./media/workbooks-text-visualizations/text-edit-mode.png)
+![Screenshot that shows a text step in edit mode.](./media/workbooks-text-visualizations/text-edit-mode.png)
-Preview Mode:
+Preview mode:
![Screenshot of a text component in edit mode on the preview tab.](./media/workbooks-text-visualizations/text-edit-mode-preview.png)

## Add a text control
-1. Switch the workbook to edit mode by clicking on **Edit** in the toolbar.
-2. Use the **Add text** link to add a text control to the workbook.
-3. Add Markdown in the editor field.
-4. Use the *Text Style* option to switch between plain markdown and markdown wrapped with the Azure portal's standard info/warning/success/error styling.
-5. Use the **Preview** tab to see how your content will look. While editing, the preview will show the content inside a scrollbar area to limit its size; however, at runtime the markdown content will expand to fill whatever space it needs, with no scrollbars.
-6. Select the **Done Editing** button to complete editing the component.
+1. Switch the workbook to edit mode by selecting **Edit** on the toolbar.
+1. Use the **Add text** link to add a text control to the workbook.
+1. Add Markdown in the editor field.
+1. Use the **Text Style** option to switch between plain Markdown and Markdown wrapped with the Azure portal's standard info, warning, success, or error styling.
+1. Use the **Preview** tab to see how your content will look. While you edit, the preview shows the content inside a scrollbar area to limit its size. At runtime, the Markdown content expands to fill whatever space it needs, with no scrollbars.
+1. Select **Done Editing** to finish editing the component.
> [!TIP]
> Use this [Markdown cheat sheet](https://github.com/adam-p/markdown-here/wiki/Markdown-Cheatsheet) to learn about different formatting options.

## Text styles
-The following text styles are available for text component:
+The following text styles are available for the text component:
-| Style | Explanation |
+| Style | Description |
|--|-|
-| `plain` | No additional formatting is applied. |
-| `info` | The portal's "info" style, with a `ℹ` or similar icon and generally blue background. |
-| `error` | The portal's "error" style, with a `❌` or similar icon and generally red background. |
-| `success` | The portal's "success" style, with a `✔` or similar icon and generally green background. |
-| `upsell` | The portal's "upsell" style, with a `🚀` or similar icon and generally purple background. |
-| `warning` | The portal's "warning" style, with a `⚠` or similar icon and generally blue background. |
+| plain | No other formatting is applied. |
+| info | The portal's info style, with an `ℹ` or similar icon and generally a blue background. |
+| error | The portal's error style, with an `❌` or similar icon and generally a red background. |
+| success | The portal's success style, with a `✔` or similar icon and generally a green background. |
+| upsell | The portal's upsell style, with a `🚀` or similar icon and generally a purple background. |
+| warning | The portal's warning style, with a `⚠` or similar icon and generally a blue background. |
-Instead of picking a specific style, you may also choose a text parameter as the source of the style. The parameter value must be one of the above text values. The absence of a value or any unrecognized value will be treated as `plain` style.
+Instead of selecting a specific style, you can also choose a text parameter as the source of the style. The parameter value must be one of the preceding text values. The absence of a value or any unrecognized value is treated as plain style.
-Info style example:
+An info style example:
-![Screenshot of what the info style looks like.](./media/workbooks-text-visualizations/text-preview-info-style.png)
+![Screenshot that shows what the info style looks like.](./media/workbooks-text-visualizations/text-preview-info-style.png)
-Warning style example:
+A warning style example:
-![Screenshot of what the warning style looks like.](./media/workbooks-text-visualizations/text-warning-style.png)
+![Screenshot that shows what the warning style looks like.](./media/workbooks-text-visualizations/text-warning-style.png)
## Next steps
azure-monitor Workbooks Tile Visualizations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/visualize/workbooks-tile-visualizations.md
Last updated 07/05/2022
# Tile visualizations
-Tiles are a useful way to present summary data in workbooks. The image below shows a common use case of tiles with app level summary on top of a detailed grid.
+Tiles are a useful way to present summary data in workbooks. The following example shows a common use case of tiles with app-level summary on top of a detailed grid.
-[![Screenshot of tile summary view](./media/workbooks-tile-visualizations/tiles-summary.png)](./media/workbooks-tile-visualizations/tiles-summary.png#lightbox)
+[![Screenshot that shows a tile summary view.](./media/workbooks-tile-visualizations/tiles-summary.png)](./media/workbooks-tile-visualizations/tiles-summary.png#lightbox)
-Workbook tiles support showing a title, subtitle, large text, icons, metric based gradients, spark line/bars, footer, etc.
+Workbook tiles support showing items like a title, subtitle, large text, icons, metric-based gradients, spark lines or bars, and footers.
-## Adding a tile
+## Add a tile
-1. Switch the workbook to edit mode by clicking on the _Edit_ toolbar item.
-2. Select **Add** then *Add query* to add a log query control to the workbook.
-3. Select the query type as **Log**, resource type (for example, Application Insights) and the resources to target.
-4. Use the Query editor to enter the KQL for your analysis.
+1. Switch the workbook to edit mode by selecting the **Edit** toolbar button.
+1. Select **Add** > **Add query** to add a log query control to the workbook.
+1. For **Query type**, select **Logs**. For **Resource type**, select, for example, **Application Insights**, and select the resources to target.
+1. Use the query editor to enter the KQL for your analysis.
```kusto
requests
| top 7 by Requests desc
```
-5. Set Size to **Full**.
-6. Set the visualization to **Tiles**.
-7. Select the **Tile Settings** button to open the settings pane.
- 1. In *Title*, set:
- * Use column: `name`.
- 2. In *Left*, set:
- * Use column: `Requests`.
- * Column renderer: `Big Number`.
- * Color palette: `Green to Red`
- * Minimum value: `0`.
- 3. In *Bottom*, set:
- * Use column: `appName`.
-8. Select the **Save and Close** button at the bottom of the pane.
-
-[![Screenshot of tile summary view with above query and tile settings.](./media/workbooks-tile-visualizations/tile-settings.png)](./media/workbooks-tile-visualizations/tile-settings.png#lightbox)
+1. Set **Size** to **Full**.
+1. Set **Visualization** to **Tiles**.
+1. Select the **Tile Settings** button to open the **Tile Settings** pane:
+ 1. In **Title**, set:
+ * **Use column**: `name`
+ 1. In **Left**, set:
+ * **Use column**: `Requests`
+ * **Column renderer**: `Big Number`
+ * **Color palette**: `Green to Red`
+ * **Minimum value**: `0`
+ 1. In **Bottom**, set:
+ * **Use column**: `appName`
+1. Select the **Save and Close** button at the bottom of the pane.
+
+[![Screenshot that shows a tile summary view with query and tile settings.](./media/workbooks-tile-visualizations/tile-settings.png)](./media/workbooks-tile-visualizations/tile-settings.png#lightbox)
The tiles in read mode:
-[![Screenshot of tile summary view in read mode.](./media/workbooks-tile-visualizations/tiles-read-mode.png)](./media/workbooks-tile-visualizations/tiles-read-mode.png#lightbox)
+[![Screenshot that shows a tile summary view in read mode.](./media/workbooks-tile-visualizations/tiles-read-mode.png)](./media/workbooks-tile-visualizations/tiles-read-mode.png#lightbox)
## Spark lines in tiles
-1. Switch the workbook to edit mode by clicking on the _Edit_ toolbar item.
-2. Add a time range parameter called `TimeRange`.
- 1. Select **Add** and then *Add parameters*.
- 2. In the parameter control, select **Add Parameter**.
- 3. Enter `TimeRange` in the *Parameter name* field and choose `Time range picker` for *Parameter type*.
- 4. Select **Save** at the top of the pane and then select **Done Editing** in the parameter control.
-3. Select **Add** then *Add query* to add a log query control below the parameter control.
-4. Select the query type as **Log**, resource type (for example, Application Insights) and the resources to target.
-5. Use the Query editor to enter the KQL for your analysis.
+1. Switch the workbook to edit mode by selecting **Edit** on the toolbar.
+1. Add a time range parameter called `TimeRange`.
+ 1. Select **Add** > **Add parameters**.
+ 1. In the parameter control, select **Add Parameter**.
+ 1. In the **Parameter name** field, enter `TimeRange`. For **Parameter type**, choose `Time range picker`.
+ 1. Select **Save** at the top of the pane and then select **Done Editing** in the parameter control.
+1. Select **Add** > **Add query** to add a log query control under the parameter control.
+1. For **Query type**, select **Logs**. For **Resource type**, select, for example, **Application Insights**, and select the resources to target.
+1. Use the query editor to enter the KQL for your analysis.
```kusto
let topRequests = requests
| project-away name1, timestamp
```
-6. Select **Run Query**. (Make sure to set `TimeRange` to a value of your choosing before running the query.)
-7. Set the *Visualization* to "Tiles".
-8. Set the *Size* to "Full".
-9. Select **Tile Settings**.
- 1. In *Tile*, set:
- * Use column: `name`.
- 2. In *Subtile*, set:
- * Use column: `appNAme`.
- 3. In *Left*, set:
- * Use column:`Requests`.
- * Column renderer: `Big Number`.
- * Color palette: `Green to Red`.
- * Minimum value: `0`.
- 4. In *Bottom*, set:
- * Use column:`Trend`.
- * Column renderer: `Spark line`.
- * Color palette: `Green to Red`.
- * Minimum value: `0`.
-10. Select **Save and Close** at the bottom of the pane.
-
-![Screenshot of tile visualization with a spark line](./media/workbooks-tile-visualizations/spark-line.png)
+1. Select **Run Query**. Set `TimeRange` to a value of your choosing before you run the query.
+1. Set **Visualization** to **Tiles**.
+1. Set **Size** to **Full**.
+1. Select **Tile Settings**:
+ 1. In **Tile**, set:
+ * **Use column**: `name`
+ 1. In **Subtile**, set:
+ * **Use column**: `appName`
+ 1. In **Left**, set:
+ * **Use column**: `Requests`
+ * **Column renderer**: `Big Number`
+ * **Color palette**: `Green to Red`
+ * **Minimum value**: `0`
+ 1. In **Bottom**, set:
+ * **Use column**: `Trend`
+ * **Column renderer**: `Spark line`
+ * **Color palette**: `Green to Red`
+ * **Minimum value**: `0`
+1. Select **Save and Close** at the bottom of the pane.
+
+![Screenshot that shows tile visualization with a spark line.](./media/workbooks-tile-visualizations/spark-line.png)
## Tile sizes
-The author has an option to set the tile width in the tile settings.
+You have an option to set the tile width in the tile settings:
* `fixed` (default)
  The default behavior of tiles is to be the same fixed width, approximately 160 pixels wide, plus the space around the tiles.
- ![Screenshot displaying fixed width tiles](./media/workbooks-tile-visualizations/tiles-fixed.png)
+ ![Screenshot that shows fixed-width tiles.](./media/workbooks-tile-visualizations/tiles-fixed.png)
* `auto`
- Each title will shrink or grow to fit their contents however, the tiles are limited to the width of the tiles' view (no horizontal scrolling).
+ Each tile shrinks or grows to fit its contents. The tiles are limited to the width of the tiles' view (no horizontal scrolling).
- ![Screenshot displaying auto width tiles](./media/workbooks-tile-visualizations/tiles-auto.png)
+ ![Screenshot that shows auto-width tiles.](./media/workbooks-tile-visualizations/tiles-auto.png)
* `full size`
- Each title will always be the full width of the tiles' view, one title per line.
+ Each tile is always the full width of the tiles' view, with one tile per line.
- ![Screenshot displaying full size width tiles](./media/workbooks-tile-visualizations/tiles-full.png)
+ ![Screenshot that shows full-size-width tiles.](./media/workbooks-tile-visualizations/tiles-full.png)
## Next steps
-* Tiles also support Composite bar renderer. To learn more visit [Composite Bar documentation](workbooks-composite-bar.md).
-* To learn more about time parameters like `TimeRange` visit [workbook time parameters documentation](workbooks-time.md).
+* Tiles also support the composite bar renderer. To learn more, see [Composite bar documentation](workbooks-composite-bar.md).
+* To learn more about time parameters like `TimeRange`, see [Workbook time parameters documentation](workbooks-time.md).
azure-monitor Workbooks Tree Visualizations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/visualize/workbooks-tree-visualizations.md
Last updated 07/05/2022
# Tree visualizations
-Workbooks support hierarchical views via tree-grids. Trees allow some rows to be expandable into the next level for a drill-down experience.
+Workbooks support hierarchical views via tree grids. Trees allow some rows to be expandable into the next level for a drill-down experience.
-The example below shows container health metrics (working set size) visualized as a tree grid. The top-level nodes here are Azure Kubernetes Service (AKS) nodes, the next level are pods, and the final level are containers. Notice that you can still format your columns like in a grid (heatmap, icons, link). The underlying data source in this case is a Log Analytics workspace with AKS logs.
+The following example shows container health metrics of a working set size that are visualized as a tree grid. The top-level nodes here are Azure Kubernetes Service (AKS) nodes. The next-level nodes are pods, and the final-level nodes are containers. Notice that you can still format your columns like you do in a grid with heatmaps, icons, and links. The underlying data source in this case is a Log Analytics workspace with AKS logs.
-[![Screenshot of tile summary view](./media/workbooks-tree-visualizations/trees.png)](./media/workbooks-tree-visualizations/trees.png#lightbox)
+[![Screenshot that shows a tile summary view.](./media/workbooks-tree-visualizations/trees.png)](./media/workbooks-tree-visualizations/trees.png#lightbox)
+
+## Add a tree grid
+
+1. Switch the workbook to edit mode by selecting **Edit** on the toolbar.
+1. Select **Add** > **Add query** to add a log query control to the workbook.
+1. For **Query type**, select **Logs**. For **Resource type**, select, for example, **Application Insights**, and select the resources to target.
+1. Use the query editor to enter the KQL for your analysis.
-## Adding a tree grid
-1. Switch the workbook to edit mode by clicking on the _Edit_ toolbar item.
-2. Select **Add** then *Add query* to add a log query control to the workbook.
-3. Select the query type as **Log**, resource type (for example, Application Insights), and the resources to target.
-4. Use the Query editor to enter the KQL for your analysis
```kusto
requests
| summarize Requests = count() by ParentId = appName, Id = name
| project Name, Kind, Requests, Id, ParentId
| order by Requests desc
```
-5. Set the visualization to **Grid**
-6. Select the **Column Settings** button to open the settings pane
-7. In the **Tree/Group By Settings** section at the bottom, set:
- * Tree Type: `Parent/Child`
- * Id Field: `Id`
- * Parent Id Field: `ParentId`
- * Show the expander on: `Name`
- * Select *Expand the top level of the tree* check box.
-8. In _Columns_ section at the top, set:
- * _Id_ - Column Renderer: `Hidden`
- * _Parent Id_ - Column Renderer: `Hidden`
- * _Requests_ - Column Renderer: `Bar`, Color: `Blue`, Minimum Value: `0`
-9. Select the **Save and Close** button at the bottom of the pane.
-
-[![Screenshot of tile summary view with the above query and settings.](./media/workbooks-tree-visualizations/tree-settings.png)](./media/workbooks-tree-visualizations/tree-settings.png#lightbox)
+
+1. Set **Visualization** to **Grid**.
+1. Select the **Column Settings** button to open the **Edit column settings** pane.
+1. In the **Columns** section at the top, set:
+ * **Id - Column renderer**: `Hidden`
+ * **Parent Id - Column renderer**: `Hidden`
+ * **Requests - Column renderer**: `Bar`
+ * **Color palette**: `Blue`
+ * **Minimum value**: `0`
+1. In the **Tree/Group By Settings** section at the bottom, set:
+ * **Tree type**: `Parent/Child`
+ * **Id Field**: `Id`
+ * **Parent Id Field**: `ParentId`
+ * **Show the expander on**: `Name`
+ * Select the **Expand the top level of the tree** checkbox.
+1. Select the **Save and Close** button at the bottom of the pane.
+
+[![Screenshot that shows a tile summary view with settings.](./media/workbooks-tree-visualizations/tree-settings.png)](./media/workbooks-tree-visualizations/tree-settings.png#lightbox)
## Tree settings
-| Setting | Explanation |
+| Setting | Description |
|:- |:-|
| `Id Field` | The unique ID of every row in the grid. |
| `Parent Id Field` | The ID of the parent of the current row. |
-| `Show the expander on` | The column on which to show the tree expander. It is common for tree grids to hide their ID and parent ID field because they are not very readable. Instead, the expander appears on a field with a more readable value like the name of the entity. |
-| `Expand the top level of the tree` | If checked, the tree grid will be expanded at the top level. Useful if you want to show more information by default. |
+| `Show the expander on` | The column on which to show the tree expander. It's common for tree grids to hide their ID and parent ID fields because they aren't very readable. Instead, the expander appears on a field with a more readable value like the name of the entity. |
+| `Expand the top level of the tree` | If selected, the tree grid expands at the top level. This option is useful if you want to show more information by default. |
## Grouping in a grid
-Grouping allows you to build hierarchical views similar to the ones above with significantly simpler queries. You do lose aggregation at the inner nodes of the tree, but that will be acceptable for some scenarios. Use *Group By* to build tree views when the underlying result set cannot be transformed into a proper free form, for example: alert, health, and metric data.
+You can use grouping to build hierarchical views similar to the ones shown in the preceding example with simpler queries. You do lose aggregation at the inner nodes of the tree, but that's acceptable for some scenarios. Use **Group By** to build tree views when the underlying result set can't be transformed into a proper tree form. Examples are alert, health, and metric data.
-## Adding a tree using grouping
+## Add a tree by using grouping
+
+1. Switch the workbook to edit mode by selecting **Edit** on the toolbar.
+1. Select **Add** > **Add query** to add a log query control to the workbook.
+1. For **Query type**, select **Logs**. For **Resource type**, select, for example, **Application Insights**, and select the resources to target.
+1. Use the query editor to enter the KQL for your analysis.
-1. Switch the workbook to edit mode by clicking on the _Edit_ toolbar item.
-2. Select **Add** then *Add query* to add a log query control to the workbook.
-3. Select the query type as **Log**, resource type (for example, Application Insights) and the resources to target.
-4. Use the Query editor to enter the KQL for your analysis
```kusto
requests
| summarize Requests = count() by App = appName, RequestName = name
| order by Requests desc
```
-1. Set the visualization to *Grid*.
-2. Select the **Column Settings button** to open the settings pane.
-3. In the *Tree/Group By Settings* section at the bottom, set:
- * Tree Type: `Group By`
- * Group By: `App`
- * Then By: `None`
- * Select *Expand the top level of the tree* check box.
-4. In *Columns* section at the top, set:
- * *App* - Column Renderer: `Hidden`
- * *Requests* - Column Renderer: `Bar`, Color: `Blue`, Minimum Value: `0`
-5. Select the **Save and Close** button at the bottom of the pane.
-
-[![Screenshot showing the creation of a tree visualization in workbooks](./media/workbooks-tree-visualizations/tree-group-create.png)](./media/workbooks-tree-visualizations/tree-group-create.png#lightbox)
+
+1. Set **Visualization** to **Grid**.
+1. Select the **Column Settings** button to open the **Edit column settings** pane.
+1. In the **Columns** section at the top, set:
+ * **App - Column renderer**: `Hidden`
+ * **Requests - Column renderer**: `Bar`
+ * **Color palette**: `Blue`
+ * **Minimum value**: `0`
+1. In the **Tree/Group By Settings** section at the bottom, set:
+ * **Tree type**: `Group By`
+ * **Group by**: `App`
+ * **Then by**: `None`
+ * Select the **Expand the top level of the tree** checkbox.
+1. Select the **Save and Close** button at the bottom of the pane.
+
+[![Screenshot that shows the creation of a tree visualization in workbooks.](./media/workbooks-tree-visualizations/tree-group-create.png)](./media/workbooks-tree-visualizations/tree-group-create.png#lightbox)
## Next steps
azure-vmware Enable Hcx Access Over Internet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/enable-hcx-access-over-internet.md
+
+ Title: Enable HCX access over the internet
+description: This article describes how to access HCX over a public IP address by using Azure VMware Solution.
+ Last updated : 06/27/2022+
+# Enable HCX access over the internet
+
+This article describes how to access HCX over a public IP address by using Azure VMware Solution. It also explains how to pair HCX sites and create a service mesh from on-premises to the Azure VMware Solution private cloud by using a public IP address. The service mesh allows you to migrate a workload from an on-premises datacenter to the Azure VMware Solution private cloud over the public internet.
+
+> [!IMPORTANT]
+> This solution is useful when the customer isn't using ExpressRoute or VPN connectivity with the Azure cloud. The on-premises HCX appliance must be reachable from the internet to establish HCX communication from on-premises to the Azure VMware Solution private cloud.
+
+## Configure Public IP block
+
+Configure a Public IP block through the Azure portal by using the Public IP feature of the Azure VMware Solution private cloud.
+
+1. Sign in to Azure VMware Solution portal.
+1. Under **Workload Networking**, select **Public IP (preview)**.
+
+1. Select **+Public IP**.
+1. Enter the **Public IP name** and select the address space from the **Address space** drop-down list according to the number of IPs required, then select **Configure**.
+ >[!Note]
+ > It will take 15-20 minutes to configure the Public IP block on private cloud.
+
+After the Public IP is configured successfully, you should see it appear under the Public IP section. The provisioning state shows **Succeeded**. This Public IP block is configured as an NSX-T segment on the Tier-1 router.
+
+For more information about how to enable a public IP to the NSX Edge for Azure VMware Solution, see [Enable Public IP to the NSX Edge for Azure VMware Solution](https://docs.microsoft.com/azure/azure-vmware/enable-public-ip-nsx-edge).
+
+## Create Public IP segment on NSX-T
+Before you create a Public IP segment, get your credentials for NSX-T Manager from Azure VMware Solution portal.
+
+1. Sign in to NSX-T Manager using credentials provided by the Azure VMware Solution portal.
+1. Under the **Manage** section, select **Identity**.
+1. Copy the NSX-T Manager admin user password.
+1. Browse to NSX-T Manager, paste the admin password in the password field, and select **Login**.
+1. Under the **Networking** section select **Connectivity** and **Segments**, then select **ADD SEGMENT**.
+1. Provide a segment name, select the Tier-1 router as the connected gateway, and provide the public IP segment under subnets.
+1. Select **Save**.
+
+## Assign public IP to HCX manager
+The HCX Manager of the destination Azure VMware Solution SDDC should be reachable from the internet to do site pairing with the source site. HCX Manager can be exposed by way of a DNAT rule and a static null route. Because HCX Manager is in the provider space, not within the NSX-T environment, the null route is necessary to allow HCX Manager to route back to the client by way of the DNAT rule.
+
+### Add static null route to the T1 router
+1. Sign in to NSX-T manager, and select **Networking**.
+1. Under the **Connectivity** section, select **Tier-1 Gateways**.
+1. Edit the existing T1 gateway.
+1. Expand **STATIC ROUTES**.
+1. Select the number next to **Static Routes**.
+1. Select **ADD STATIC ROUTE**.
+ A pop-up window is displayed.
+1. Under **Name**, enter the name of the route.
+1. Under **Network**, enter a non-overlapping /32 IP address.
+ >[!NOTE]
+ > This address should not overlap with any other IP addresses on the network.
+1. Under **Next hops**, select **Set**.
+1. Select **NULL** as IP Address.
+ Leave defaults for Admin distance and scope.
+1. Select **ADD**, then select **APPLY**.
+1. Select **SAVE**, then select **CLOSE**.
+1. Select **CLOSE EDITING**.
+
+### Add NAT rule to T1 gateway
+
+1. Sign in to NSX-T Manager, and select **Networking**.
+1. Select **NAT**.
+1. Select the T1 Gateway.
+1. Select **ADD NAT RULE**.
+1. Add one DNAT rule and one SNAT rule for HCX Manager, as sketched after this procedure.
+ 1. The DNAT Rule Destination is the Public IP for HCX Manager. The Translated IP is the HCX Manager IP in the cloud.
+ 1. The SNAT Rule Source is the HCX Manager IP in the cloud. The Translated IP is the non-overlapping /32 IP from the Static Route.
+1. Make sure to set the Firewall option on DNAT rule to **Match External Address**.
+1. Create T1 Gateway Firewall rules to allow only expected traffic to the Public IP for HCX Manager and drop everything else.
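+
+As a sketch, the two NAT rules from step 5 might look like the following. The addresses here are hypothetical placeholders; replace them with your own values.
+
+```
+DNAT rule
+  Destination IP : 20.51.0.10   (public IP assigned to HCX Manager)
+  Translated IP  : 10.10.10.9   (HCX Manager IP in the cloud)
+
+SNAT rule
+  Source IP      : 10.10.10.9   (HCX Manager IP in the cloud)
+  Translated IP  : 192.0.2.1    (non-overlapping /32 IP from the static route)
+```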
+
+>[!NOTE]
+> HCX Manager can now be accessed over the internet by using the public IP address.
+
+### Create network profile for HCX at destination site
+1. Sign in to Destination HCX Manager.
+1. Select **Interconnect** and then select the **Network Profiles** tab.
+1. Select **Create Network Profile**.
+1. Select **NSX Networks** as network type under **Network**.
+1. Select the **Public-IP-Segment** created on NSX-T.
+1. Enter **Name**.
+1. Under **IP pools**, enter the **IP Ranges** for the HCX uplink, and the **Prefix Length** and **Gateway** of the public IP segment.
+1. Scroll down and select the **HCX Uplink** checkbox under **HCX Traffic Type** as this profile will be used for HCX uplink.
+1. To create the Network Profile, select **Create**.
+
+### Pair site
+Site pairing is required to create service mesh between source and destination sites.
+
+1. Sign in to **Source** site HCX Manager.
+1. Select **Site Pairing** and select **ADD SITE PAIRING**.
+1. Enter the remote HCX URL and sign in credentials, then select **Connect**.
+
+After pairing is done, the new pairing appears under **Site Pairing**.
+
+### Create service mesh
+Service Mesh will deploy HCX WAN Optimizer, HCX Network Extension and HCX-IX appliances.
+1. Sign in to **Source** site HCX Manager.
+1. Select **Interconnect** and then select the **Service Mesh** tab.
+1. Select **CREATE SERVICE MESH**.
+1. Select the **destination** site to create service mesh with and select **Continue**.
+1. Select the compute profiles for both sites and select **Continue**.
+1. Select the HCX services to be activated and select **Continue**.
+ >[!Note]
+ >Premium services require an additional HCX Enterprise license.
+1. Select the Network Profile of source site.
+1. Select the Network Profile of Destination that you created in the Network Profile section.
+1. Select **Continue**.
+1. Review the Transport Zone information, and select **Continue**.
+1. Review the Topological view, and select **Continue**.
+1. Enter the Service Mesh name and select **FINISH**.
+
+### Extend network
+The HCX Network Extension service provides layer 2 connectivity between sites. The extension service also allows you to keep the same IP and MAC addresses during virtual machine migrations.
+1. Sign in to **source** HCX Manager.
+1. Under the **Network Extension** section, select the site for which you want to extend the network, and select **EXTEND NETWORKS**.
+1. Select the network that you want to extend to destination site, and select **Next**.
+1. Enter the subnet details of network that you're extending.
+1. Select the destination first hop route (T1), and select **Submit**.
+1. Sign in to the **destination** NSX-T Manager. You'll see that the network 10.14.27.1/24 has been extended.
+
+After the network is extended to destination site, VMs can be migrated over Layer 2 Extension.
+
+## Next steps
+For detailed information on HCX network underlay minimum requirements, see [Network Underlay Minimum Requirements](https://docs.vmware.com/en/VMware-HCX/4.3/hcx-user-guide/GUID-8128EB85-4E3F-4E0C-A32C-4F9B15DACC6D.html).
+++
azure-web-pubsub Concept Availability Zones https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/concept-availability-zones.md
+
+ Title: Availability zones support in Azure Web PubSub Service
+description: Azure availability zones and zone redundancy in Azure Web PubSub Service
+++ Last updated : 07/06/2022+++
+# Availability zones support in Azure Web PubSub Service
+
+Azure Web PubSub Service uses [Azure availability zones](../availability-zones/az-overview.md#availability-zones) to provide high availability and fault tolerance within an Azure region.
+
+> [!NOTE]
+> Zone redundancy is a Premium tier feature. It is implicitly enabled when you create or upgrade to a Premium tier resource. Standard tier resources can be upgraded to Premium tier without downtime.
+
+## Zone redundancy
+
+Zone-enabled Azure regions (not all [regions support availability zones](../availability-zones/az-region.md)) have a minimum of three availability zones. A zone is one or more datacenters, each with its own independent power and network connections. All the zones in a region are connected by a dedicated low-latency regional network. If a zone fails, Azure Web PubSub Service traffic running on the affected zone is routed to other zones in the region.
+
+Azure Web PubSub Service uses availability zones in a *zone-redundant* manner. Zone redundancy means the service isn't constrained to run in a specific zone. Instead, total service is evenly distributed across multiple zones in a region. Zone redundancy reduces the potential for data loss and service interruption if one of the zones fails.
+
+## Next steps
+
+* Learn more about [regions that support availability zones](../availability-zones/az-region.md).
+* Learn more about designing for [reliability](/azure/architecture/framework/resiliency/app-design) in Azure.
azure-web-pubsub Howto Custom Domain https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/howto-custom-domain.md
+
+ Title: Configure a custom domain for Azure Web PubSub Service
+
+description: How to configure a custom domain for Azure Web PubSub Service
+++ Last updated : 07/07/2022+++
+# Configure a custom domain for Azure Web PubSub Service
+
+In addition to the default domain provided by Azure Web PubSub Service, you can also add custom domains.
+
+## Prerequisites
+
+* The resource must be in the Premium tier.
+* A custom certificate that matches the custom domain must be stored in Azure Key Vault.
+
+## Add a custom certificate
+
+Before you can add a custom domain, you need to add a matching custom certificate first. A custom certificate is a sub resource of your Azure Web PubSub Service. It references a certificate in your Azure Key Vault. For security and compliance reasons, Azure Web PubSub Service doesn't permanently store your certificate. Instead, it fetches the certificate from your Key Vault on the fly and keeps it in memory.
+
+### Step 1: Grant your Azure Web PubSub Service resource access to Key Vault
+
+Azure Web PubSub Service uses Managed Identity to access your Key Vault. In order to authorize, it needs to be granted permissions.
+
+1. In the Azure portal, go to your Azure Web PubSub Service resource.
+1. In the menu pane, select **Identity**.
+1. Turn on either **System assigned** or **User assigned** identity. Click **Save**.
+
+ :::image type="content" alt-text="Screenshot of enabling managed identity." source="media\howto-custom-domain\portal-identity.png" :::
+
+1. Go to your Key Vault resource.
+1. In the menu pane, select **Access configuration**. Click **Go to access policies**.
+1. Click **Create**. Select **Secret Get** permission and **Certificate Get** permission. Click **Next**.
+
+ :::image type="content" alt-text="Screenshot of permissions selection in Key Vault." source="media\howto-custom-domain\portal-key-vault-permissions.png" :::
+
+1. Search for the Azure Web PubSub Service resource name or the user assigned identity name. Click **Next**.
+
+ :::image type="content" alt-text="Screenshot of principal selection in Key Vault." source="media\howto-custom-domain\portal-key-vault-principal.png" :::
+
+1. Skip **Application (optional)**. Click **Next**.
+1. In **Review + create**, click **Create**.
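+
+If you prefer the CLI over the preceding portal steps, a sketch of granting the same two permissions looks like the following. The Key Vault name and the identity's object ID are placeholders.
+
+```azurecli
+az keyvault set-policy --name <your-key-vault-name> \
+    --object-id <managed-identity-object-id> \
+    --secret-permissions get \
+    --certificate-permissions get
+```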
+
+### Step 2: Create a custom certificate
+
+1. In the Azure portal, go to your Azure Web PubSub Service resource.
+1. In the menu pane, select **Custom domain**.
+1. Under **Custom certificate**, click **Add**.
+
+ :::image type="content" alt-text="Screenshot of custom certificate management." source="media\howto-custom-domain\portal-custom-certificate-management.png" :::
+
+1. Fill in a name for the custom certificate.
+1. Click **Select from your Key Vault** to choose a Key Vault certificate. After selection, the **Key Vault Base URI** and **Key Vault Secret Name** fields should be filled in automatically. Alternatively, you can also fill in these fields manually.
+1. Optionally, you can specify a **Key Vault Secret Version** if you want to pin the certificate to a specific version.
+1. Click **Add**.
+
+ :::image type="content" alt-text="Screenshot of adding a custom certificate." source="media\howto-custom-domain\portal-custom-certificate-add.png" :::
+
+Azure Web PubSub Service will then fetch the certificate and validate its content. If everything is good, the **Provisioning State** will be **Succeeded**.
+
+ :::image type="content" alt-text="Screenshot of an added custom certificate." source="media\howto-custom-domain\portal-custom-certificate-added.png" :::
+
+## Create a custom domain CNAME
+
+To validate the ownership of your custom domain, you need to create a CNAME record for the custom domain and point it to the default domain of Azure Web PubSub Service.
+
+For example, if your default domain is `contoso.webpubsub.azure.com`, and your custom domain is `contoso.example.com`, you need to create a CNAME record on `example.com` like:
+
+```
+contoso.example.com. 0 IN CNAME contoso.webpubsub.azure.com.
+```
+
+If you're using Azure DNS Zone, see [manage DNS records](../dns/dns-operations-recordsets-portal.md) for how to add a CNAME record.
+
+ :::image type="content" alt-text="Screenshot of adding a CNAME record in Azure DNS Zone." source="media\howto-custom-domain\portal-dns-cname.png" :::
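+
+As an alternative to the portal, a CLI sketch for Azure DNS might look like the following. It assumes the zone `example.com` lives in a resource group named `myResourceGroup`.
+
+```azurecli
+az network dns record-set cname set-record \
+    --resource-group myResourceGroup \
+    --zone-name example.com \
+    --record-set-name contoso \
+    --cname contoso.webpubsub.azure.com
+```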
+
+If you're using other DNS providers, follow your provider's guide to create a CNAME record.
+
+## Add a custom domain
+
+A custom domain is another sub resource of your Azure Web PubSub Service. It contains all configurations for a custom domain.
+
+1. In the Azure portal, go to your Azure Web PubSub Service resource.
+1. In the menu pane, select **Custom domain**.
+1. Under **Custom domain**, click **Add**.
+
+ :::image type="content" alt-text="Screenshot of custom domain management." source="media\howto-custom-domain\portal-custom-domain-management.png" :::
+
+1. Fill in a name for the custom domain. It's the sub resource name.
+1. Fill in the domain name. It's the full domain name of your custom domain, for example, `contoso.com`.
+1. Select a custom certificate that applies to this custom domain.
+1. Click **Add**.
+
+ :::image type="content" alt-text="Screenshot of adding a custom domain." source="media\howto-custom-domain\portal-custom-domain-add.png" :::
+
+## Verify a custom domain
+
+You can now access your Azure Web PubSub Service endpoint via the custom domain. To verify it, you can access the health API.
+
+Here's an example using cURL:
+
+#### [PowerShell](#tab/azure-powershell)
+
+```powershell
+PS C:\> curl.exe -v https://contoso.example.com/api/health
+...
+> GET /api/health HTTP/1.1
+> Host: contoso.example.com
+
+< HTTP/1.1 200 OK
+...
+PS C:\>
+```
+
+#### [Bash](#tab/azure-bash)
+
+```bash
+$ curl -vvv https://contoso.example.com/api/health
+...
+* SSL certificate verify ok.
+...
+> GET /api/health HTTP/2
+> Host: contoso.example.com
+...
+< HTTP/2 200
+...
+```
+
+--
+
+It should return `200` status code without any certificate error.
+
+## Next steps
+
++ [How to enable managed identity for Azure Web PubSub Service](howto-use-managed-identity.md)
++ [Get started with Key Vault certificates](../key-vault/certificates/certificate-scenarios.md)
++ [What is Azure DNS](../dns/dns-overview.md)
backup Backup Azure Sql Backup Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-sql-backup-cli.md
+
+ Title: Back up SQL server databases in Azure VMs using Azure Backup via CLI
+description: Learn how to use CLI to back up SQL server databases in Azure VMs in the Recovery Services vault.
+ Last updated : 07/07/2022+++++
+# Back up SQL databases in Azure VM using Azure CLI
+
+Azure CLI is used to create and manage Azure resources from the command line or through scripts. This article describes how to back up an SQL database in an Azure VM and trigger on-demand backups by using Azure CLI. You can also perform these actions by using the [Azure portal](backup-sql-server-database-azure-vms.md).
+
+This article assumes that you already have an SQL database installed on an Azure VM. (You can also [create a VM using Azure CLI](../virtual-machines/linux/quick-create-cli.md)).
+
+In this article, you'll learn how to:
+> [!div class="checklist"]
+>
+> * Create a Recovery Services vault
+> * Register SQL server and discover database(s) on it
+> * Enable backup on an SQL database
+> * Trigger an on-demand backup
+
+See the [currently supported scenarios](sql-support-matrix.md) for SQL in Azure VM.
++
+## Create a Recovery Services vault
+
+A Recovery Services vault is a logical container that stores the backup data for each protected resource, such as Azure VMs or workloads running on Azure VMs - for example, SQL or HANA databases. When the backup job for a protected resource runs, it creates a recovery point inside the Recovery Services vault. You can then use one of these recovery points to restore data to a given point in time.
+
+Create a Recovery Services vault with the [az backup vault create](/cli/azure/backup/vault#az-backup-vault-create) command. Use the resource group and location as that of the VM you want to protect. Learn how to create a VM using Azure CLI with [this VM quickstart](../virtual-machines/linux/quick-create-cli.md).
+
+For this article, we'll use:
+
+* A resource group named *SQLResourceGroup*
+* A VM named *testSQLVM*
+* Resources in the *westus2* location.
+
+Run the following command to create a vault named *SQLVault*.
+
+```azurecli-interactive
+az backup vault create --resource-group SQLResourceGroup \
+ --name SQLVault \
+ --location westus2
+```
+
+By default, the Recovery Services vault is set for geo-redundant storage. Geo-redundant storage ensures that your backup data is replicated to a secondary Azure region that's hundreds of miles away from the primary region. If the storage redundancy setting needs to be modified, use the [az backup vault backup-properties set](/cli/azure/backup/vault/backup-properties#az-backup-vault-backup-properties-set) command.
+
+```azurecli
+az backup vault backup-properties set \
+ --name SQLVault \
+ --resource-group SQLResourceGroup \
+ --backup-storage-redundancy "LocallyRedundant/GeoRedundant"
+```
+
+To verify if the vault is successfully created, use the [az backup vault list](/cli/azure/backup/vault#az-backup-vault-list) command. The response appears as:
+
+```output
+Location Name ResourceGroup
+ -
+westus2 SQLVault SQLResourceGroup
+```
+
+## Register and protect the SQL Server
+
+To register the SQL Server with the Recovery Services vault, use the [az backup container register](/cli/azure/backup/container#az-backup-container-register) command. *VMResourceId* is the resource ID of the VM that you created to install SQL.
+
+```azurecli-interactive
+az backup container register --resource-group SQLResourceGroup \
+ --vault-name SQLVault \
+ --workload-type SQLDataBase \
+ --backup-management-type AzureWorkload \
+ --resource-id VMResourceId
+```
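+
+If you don't have the VM resource ID handy, one way to look it up (a sketch that assumes the VM name and resource group used in this article) is:
+
+```azurecli-interactive
+az vm show --resource-group SQLResourceGroup \
+    --name testSQLVM \
+    --query id \
+    --output tsv
+```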
+
+>[!NOTE]
+>If the VM isn't present in the same resource group as the vault, *SQLResourceGroup* refers to the resource group where the vault was created.
+
+Registering the SQL server automatically discovers all its current databases. However, to discover any new databases that may be added in the future, see the [Discovering new databases added to the registered SQL server](backup-azure-sql-manage-cli.md#protect-the-new-databases-added-to-a-sql-instance) section.
+
+Use the [az backup container list](/cli/azure/backup/container#az-backup-container-list) command to verify if the SQL instance is successfully registered with your vault. The response appears as:
+
+```output
+Name Friendly Name Resource Group Type Registration Status
+ -- -- -
+VMAppContainer;Compute;SQLResourceGroup;testSQLVM testSQLVM SQLResourceGroup AzureWorkload Registered
+```
+
+>[!NOTE]
+>The column *name* in the above output refers to the container name. This container name is used in the next sections to enable backups and trigger them. For example, *VMAppContainer;Compute;SQLResourceGroup;testSQLVM*.
+
+## Enable backup on the SQL database
+
+The [az backup protectable-item list](/cli/azure/backup/protectable-item#az-backup-protectable-item-list) command lists all the databases discovered on the SQL instance that you registered in the previous step.
+
+```azurecli-interactive
+az backup protectable-item list --resource-group SQLResourceGroup \
+ --vault-name SQLVault \
+ --workload-type SQLDataBase \
+ --backup-management-type AzureWorkload \
+    --protectable-item-type SQLDataBase \
+ --output table
+```
+
+You should find the database in this list that you want to back up, which appears as:
+
+```output
+Name Protectable Item Type ParentName ServerName IsProtected
+-- - --
+sqldatabase;mssqlserver;master SQLDataBase MSSQLServer testSQLVM NotProtected
+sqldatabase;mssqlserver;model SQLDataBase MSSQLServer testSQLVM NotProtected
+sqldatabase;mssqlserver;msdb SQLDataBase MSSQLServer testSQLVM NotProtected
+```
+
+Now, configure backup for the *sqldatabase;mssqlserver;master* database.
+
+To configure and protect backups on a database, one at a time, use the [az backup protection enable-for-azurewl](/cli/azure/backup/protection#az-backup-protection-enable-for-azurewl) command. Provide the name of the policy that you want to use. To create a policy using CLI, use the [az backup policy create](/cli/azure/backup/policy#az-backup-policy-create) command. For this article, we've used the *SQLPolicy* policy.
+
+```azurecli-interactive
+az backup protection enable-for-azurewl --resource-group SQLResourceGroup \
+ --vault-name SQLVault \
+ --policy-name SQLPolicy \
+ --protectable-item-name "sqldatabase;mssqlserver;master" \
+ --protectable-item-type SQLDataBase \
+ --server-name testSQLVM \
+ --workload-type SQLDataBase \
+ --output table
+```
+
+You can use the same command if you have an *SQL Always On Availability Group* and want to identify the protectable data source within the availability group. Here, the protectable item type is *SQLAG*.
+
+To verify if the above backup configuration is complete, use the [az backup job list](/cli/azure/backup/job#az-backup-job-list) command. The output appears as:
+
+```output
+Name Operation Status Item Name Start Time UTC
+ - -
+e0f15dae-7cac-4475-a833-f52c50e5b6c3 ConfigureBackup Completed master 2019-12-03T03:09:210831+00:00
+```
+
+The [az backup job list](/cli/azure/backup/job#az-backup-job-list) command lists all backup jobs (scheduled or on-demand) that have run or are currently running on the protected database, in addition to other operations, such as register, configure backup, and delete backup data.
+
+>[!NOTE]
+>Azure Backup doesn't automatically adjust for daylight saving time changes when backing up an SQL database running in an Azure VM.
+>
+>Modify the policy manually as needed.
+
+## Enable auto-protection
+
+For seamless backup configuration, all databases added in the future can be automatically protected with a certain policy. To enable auto-protection, use the [az backup protection auto-enable-for-azurewl](/cli/azure/backup/protection#az-backup-protection-auto-enable-for-azurewl) command.
+
+Because the intent is to back up all future databases, the operation is done at the *SQLInstance* level.
+
+```azurecli-interactive
+az backup protection auto-enable-for-azurewl --resource-group SQLResourceGroup \
+ --vault-name SQLVault \
+ --policy-name SQLPolicy \
+ --protectable-item-name "sqlinstance;mssqlserver;" \
+ --protectable-item-type SQLInstance \
+ --server-name testSQLVM \
+ --workload-type MSSQL \
+ --output table
+```
+
+## Trigger an on-demand backup
+
+To trigger an on-demand backup, use the [az backup protection backup-now](/cli/azure/backup/protection#az-backup-protection-backup-now) command.
+
+>[!NOTE]
+>The retention policy of an on-demand backup is determined by the underlying retention policy for the database.
+
+```azurecli-interactive
+az backup protection backup-now --resource-group SQLResourceGroup \
+ --item-name sqldatabase;mssqlserver;master \
+ --vault-name SQLVault \
+ --container-name VMAppContainer;Compute;SQLResourceGroup;testSQLVM \
+ --backup-type Full \
+ --retain-until 01-01-2040 \
+ --output table
+```
+
+The output appears as:
+
+```output
+Name                                  ResourceGroup
+------------------------------------  ----------------
+e0f15dae-7cac-4475-a833-f52c50e5b6c3  sqlResourceGroup
+```
+
+The response provides the job name. You can use this job name to track the job status by using the [az backup job show](/cli/azure/backup/job#az-backup-job-show) command.
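+
+For example, a check along these lines uses the job name from the output above (a sketch using the names from this article):
+
+```azurecli-interactive
+az backup job show --resource-group SQLResourceGroup \
+    --vault-name SQLVault \
+    --name e0f15dae-7cac-4475-a833-f52c50e5b6c3 \
+    --output table
+```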
+
+## Next steps
+
+* Learn how to [restore an SQL database in Azure VM using CLI](backup-azure-sql-restore-cli.md).
+* Learn how to [back up an SQL database running in Azure VM using Azure portal](backup-sql-server-database-azure-vms.md).
backup Backup Azure Sql Manage Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-sql-manage-cli.md
+
+ Title: Manage SQL server databases in Azure VMs using Azure Backup via CLI
+description: Learn how to use CLI to manage SQL server databases in Azure VMs in the Recovery Services vault.
+ Last updated : 07/07/2022+++++
+# Manage SQL databases in an Azure VM using Azure CLI
+
+Azure CLI is used to create and manage Azure resources from the command line or through scripts. This article describes how to manage a backed-up SQL database on an Azure VM using Azure CLI. You can also perform these actions using [the Azure portal](./manage-monitor-sql-database-backup.md).
+
+In this article, you'll learn how to:
+
+> [!div class="checklist"]
+>
+> * Monitor backup and restore jobs
+> * Protect new databases added to an SQL instance
+> * Change the policy
+> * Stop protection
+> * Resume protection
+
+If you've used [Back up an SQL database in Azure using CLI](backup-azure-sql-backup-cli.md) to back up your SQL database, then you're using the following resources:
+
+* A resource group named *SQLResourceGroup*
+* A vault named *SQLVault*
+* Protected container named *VMAppContainer;Compute;SQLResourceGroup;testSQLVM*
+* Backed-up database/item named *sqldatabase;mssqlserver;master*
+* Resources in the *westus2* region
+
+Azure CLI eases the process of managing an SQL database running on an Azure VM that's backed up using Azure Backup. The following sections describe each of the management operations.
+
+## Monitor backup and restore jobs
+
+Use the [az backup job list](/cli/azure/backup/job#az-backup-job-list) command to monitor completed or currently running jobs (backup or restore). CLI also allows you to [suspend a currently running job](/cli/azure/backup/job#az-backup-job-stop) or [wait until a job completes](/cli/azure/backup/job#az-backup-job-wait).
+
+```azurecli-interactive
+az backup job list --resource-group SQLResourceGroup \
+ --vault-name SQLVault \
+ --output table
+```
+
+The output appears as:
+
+```output
+Name                                  Operation              Status     Item Name            Start Time UTC
+------------------------------------  ---------------------  ---------  -------------------  --------------------------------
+e0f15dae-7cac-4475-a833-f52c50e5b6c3  ConfigureBackup        Completed  master [testSQLVM]   2019-12-03T03:09:210831+00:00
+ccdb4dce-8b15-47c5-8c46-b0985352238f  Backup (Full)          Completed  master [testSQLVM]   2019-12-01T10:30:58.867489+00:00
+4980af91-1090-49a6-ab96-13bc905a5282  Backup (Differential)  Completed  master [testSQLVM]   2019-12-01T10:36:00.563909+00:00
+F7c68818-039f-4a0f-8d73-e0747e68a813  Restore (Log)          Completed  master [testSQLVM]   2019-12-03T05:44:51.081607+00:00
+```
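+
+The suspend and wait operations mentioned earlier take a job name from this listing. The following is a sketch using the *ConfigureBackup* job name shown above:
+
+```azurecli-interactive
+# Cancel (suspend) a running job
+az backup job stop --resource-group SQLResourceGroup \
+    --vault-name SQLVault \
+    --name e0f15dae-7cac-4475-a833-f52c50e5b6c3
+
+# Block until a job finishes, with an optional timeout in seconds
+az backup job wait --resource-group SQLResourceGroup \
+    --vault-name SQLVault \
+    --name e0f15dae-7cac-4475-a833-f52c50e5b6c3 \
+    --timeout 3600
+```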
+
+## Change a policy
+
+To change the policy underlying the SQL backup configuration, use the [az backup item set-policy](/cli/azure/backup/item#az-backup-item-set-policy) command. The name parameter in this command refers to the backup item whose policy you want to change. Here, replace the policy of the SQL database *sqldatabase;mssqlserver;master* with the new policy *newSQLPolicy*. You can create new policies using the [az backup policy create](/cli/azure/backup/policy#az-backup-policy-create) command.
+
+```azurecli-interactive
+az backup item set-policy --resource-group SQLResourceGroup \
+ --vault-name SQLVault \
+ --container-name VMAppContainer;Compute;SQLResourceGroup;testSQLVM \
+ --policy-name newSQLPolicy \
+ --name sqldatabase;mssqlserver;master
+```
+
+The output appears as:
+
+```output
+Name                                  Operation        Status     Item Name  Backup Management Type    Start Time UTC                    Duration
+------------------------------------  ---------------  ---------  ---------  ------------------------  --------------------------------  --------------
+ba350996-99ea-46b1-aae2-e2096c1e28cd  ConfigureBackup  Completed  master     AzureWorkload             2022-06-22T08:24:03.958001+00:00  0:01:12.435765
+```
+
+## Create a differential backup policy
+
+To create a differential backup policy, use the [az backup policy create](/cli/azure/backup/policy#az-backup-policy-create) command with the following parameters:
+
+* **--backup-management-type**: `AzureWorkload`.
+* **--workload-type**: `SQLDataBase`.
+* **--name**: Name of the policy.
+* **--policy**: JSON file with the appropriate schedule and retention details.
+* **--resource-group**: Resource group of the vault.
+* **--vault-name**: Name of the vault.
+
+Example:
+
+```azurecli
+az backup policy create --resource-group SQLResourceGroup --vault-name SQLVault --name SQLPolicy --backup-management-type AzureWorkload --policy SQLPolicy.json --workload-type SQLDataBase
+```
+
+Sample JSON (sqlpolicy.json):
+
+```json
+{
+ "eTag": null,
+ "id": "/Subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/SQLResourceGroup/providers/Microsoft.RecoveryServices/vaults/SQLVault/backupPolicies/SQLPolicy",
+ "location": null,
+ "name": "sqlpolicy",
+ "properties": {
+ "backupManagementType": "AzureWorkload",
+ "workLoadType": "SQLDataBase",
+ "settings": {
+ "timeZone": "UTC",
+ "issqlcompression": false,
+ "isCompression": false
+ },
+ "subProtectionPolicy": [
+ {
+ "policyType": "Full",
+ "schedulePolicy": {
+ "schedulePolicyType": "SimpleSchedulePolicy",
+ "scheduleRunFrequency": "Weekly",
+ "scheduleRunDays": [
+ "Sunday"
+ ],
+ "scheduleRunTimes": [
+ "2022-06-13T19:30:00Z"
+ ],
+ "scheduleWeeklyFrequency": 0
+ },
+ "retentionPolicy": {
+ "retentionPolicyType": "LongTermRetentionPolicy",
+ "weeklySchedule": {
+ "daysOfTheWeek": [
+ "Sunday"
+ ],
+ "retentionTimes": [
+ "2022-06-13T19:30:00Z"
+ ],
+ "retentionDuration": {
+ "count": 104,
+ "durationType": "Weeks"
+ }
+ },
+ "monthlySchedule": {
+ "retentionScheduleFormatType": "Weekly",
+ "retentionScheduleWeekly": {
+ "daysOfTheWeek": [
+ "Sunday"
+ ],
+ "weeksOfTheMonth": [
+ "First"
+ ]
+ },
+ "retentionTimes": [
+ "2022-06-13T19:30:00Z"
+ ],
+ "retentionDuration": {
+ "count": 60,
+ "durationType": "Months"
+ }
+ },
+ "yearlySchedule": {
+ "retentionScheduleFormatType": "Weekly",
+ "monthsOfYear": [
+ "January"
+ ],
+ "retentionScheduleWeekly": {
+ "daysOfTheWeek": [
+ "Sunday"
+ ],
+ "weeksOfTheMonth": [
+ "First"
+ ]
+ },
+ "retentionTimes": [
+ "2022-06-13T19:30:00Z"
+ ],
+ "retentionDuration": {
+ "count": 10,
+ "durationType": "Years"
+ }
+ }
+ }
+ },
+ {
+ "policyType": "Differential",
+ "schedulePolicy": {
+ "schedulePolicyType": "SimpleSchedulePolicy",
+ "scheduleRunFrequency": "Weekly",
+ "scheduleRunDays": [
+ "Monday",
+ "Tuesday",
+ "Wednesday",
+ "Thursday",
+ "Friday",
+ "Saturday"
+ ],
+ "scheduleRunTimes": [
+ "2022-06-13T02:00:00Z"
+ ],
+ "scheduleWeeklyFrequency": 0
+ },
+ "retentionPolicy": {
+ "retentionPolicyType": "SimpleRetentionPolicy",
+ "retentionDuration": {
+ "count": 30,
+ "durationType": "Days"
+ }
+ }
+ },
+ {
+ "policyType": "Log",
+ "schedulePolicy": {
+ "schedulePolicyType": "LogSchedulePolicy",
+ "scheduleFrequencyInMins": 120
+ },
+ "retentionPolicy": {
+ "retentionPolicyType": "SimpleRetentionPolicy",
+ "retentionDuration": {
+ "count": 15,
+ "durationType": "Days"
+ }
+ }
+ }
+ ],
+ "protectedItemsCount": 0
+ },
+ "resourceGroup": "SQLResourceGroup",
+ "tags": null,
+ "type": "Microsoft.RecoveryServices/vaults/backupPolicies"
+}
+```
+
+After the policy is created successfully, the command output shows the policy JSON that you passed as a parameter.
+
+You can modify the following section of the policy to specify the required backup frequency and retention for differential backups.
+
+For example:
+
+```json
+{
+ "policyType": "Differential",
+ "retentionPolicy": {
+ "retentionDuration": {
+ "count": 30,
+ "durationType": "Days"
+ },
+ "retentionPolicyType": "SimpleRetentionPolicy"
+ },
+ "schedulePolicy": {
+ "schedulePolicyType": "SimpleSchedulePolicy",
+ "scheduleRunDays": [
+ "Monday",
+ "Tuesday",
+ "Wednesday",
+ "Thursday",
+ "Friday",
+ "Saturday"
+ ],
+ "scheduleRunFrequency": "Weekly",
+ "scheduleRunTimes": [
+ "2017-03-07T02:00:00+00:00"
+ ],
+ "scheduleWeeklyFrequency": 0
+ }
+}
+```
+
+Example:
+
+If you want to have differential backups only on *Saturday* and retain them for *60 days*, make the following changes in the policy:
+
+* Update the **retentionDuration** count to *60* days.
+* Specify only *Saturday* in **scheduleRunDays**.
+
+```json
+ {
+ "policyType": "Differential",
+ "retentionPolicy": {
+ "retentionDuration": {
+ "count": 60,
+ "durationType": "Days"
+ },
+ "retentionPolicyType": "SimpleRetentionPolicy"
+ },
+ "schedulePolicy": {
+ "schedulePolicyType": "SimpleSchedulePolicy",
+ "scheduleRunDays": [
+ "Saturday"
+ ],
+ "scheduleRunFrequency": "Weekly",
+ "scheduleRunTimes": [
+ "2017-03-07T02:00:00+00:00"
+ ],
+ "scheduleWeeklyFrequency": 0
+ }
+}
+```
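+
+After you edit the JSON file, apply the updated schedule and retention to the existing policy with the [az backup policy set](/cli/azure/backup/policy#az-backup-policy-set) command. The following is a sketch that assumes the edited policy is saved back to *SQLPolicy.json*:
+
+```azurecli-interactive
+az backup policy set --resource-group SQLResourceGroup \
+    --vault-name SQLVault \
+    --name SQLPolicy \
+    --policy SQLPolicy.json \
+    --backup-management-type AzureWorkload
+```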
+
+## Protect the new databases added to a SQL instance
+
+[Registering a SQL instance with a Recovery Services vault](backup-azure-sql-backup-cli.md#register-and-protect-the-sql-server) automatically discovers all databases in this instance.
+
+However, if you've added new databases to the SQL instance later, use the [az backup protectable-item initialize](/cli/azure/backup/protectable-item#az-backup-protectable-item-initialize) command. This command discovers the new databases added.
+
+```azurecli-interactive
+az backup protectable-item initialize --resource-group SQLResourceGroup \
+ --vault-name SQLVault \
+ --container-name VMAppContainer;Compute;SQLResourceGroup;testSQLVM \
+ --workload-type SQLDataBase
+```
+
+Then use the [az backup protectable-item list](/cli/azure/backup/protectable-item#az-backup-protectable-item-list) command to list all the databases that have been discovered on your SQL instance. This list excludes databases on which backup has already been configured. Once the database to be backed up is discovered, refer to [Enable backup on SQL database](backup-azure-sql-backup-cli.md#enable-backup-on-the-sql-database).
+
+```azurecli-interactive
+az backup protectable-item list --resource-group SQLResourceGroup \
+ --vault-name SQLVault \
+ --workload-type SQLDataBase \
+ --protectable-item-type SQLDataBase \
+ --output table
+```
+
+The new database that you want to back up appears in this list:
+
+```output
+Name                         Protectable Item Type    ParentName   ServerName    IsProtected
+---------------------------  -----------------------  -----------  ------------  -------------
+sqldatabase;mssqlserver;db1  SQLDataBase              mssqlserver  testSQLVM     NotProtected
+sqldatabase;mssqlserver;db2  SQLDataBase              mssqlserver  testSQLVM     NotProtected
+```
+
+## Stop protection for an SQL database
+
+You can stop protecting an SQL database in the following ways:
+
+* Stop all future backup jobs and delete all recovery points.
+* Stop all future backup jobs and leave the recovery points intact.
+
+If you choose to leave recovery points, keep in mind these details:
+
+* All recovery points remain intact forever, and all pruning stops when you stop protection with retain data.
+* You'll be charged for the protected instance and the consumed storage.
+* If you delete a data source without stopping backups, new backups will fail.
+
+The processes to stop protection are detailed below.
+
+### Stop protection with retain data
+
+To stop protection with retain data, use the [az backup protection disable](/cli/azure/backup/protection#az-backup-protection-disable) command.
+
+```azurecli-interactive
+az backup protection disable --resource-group SQLResourceGroup \
+ --vault-name SQLVault \
+ --container-name VMAppContainer;Compute;SQLResourceGroup;testSQLVM \
+ --item-name sqldatabase;mssqlserver;master \
+ --workload-type SQLDataBase \
+ --output table
+```
+
+The output appears as:
+
+```output
+Name                                  ResourceGroup
+------------------------------------  ----------------
+g0f15dae-7cac-4475-d833-f52c50e5b6c3  SQLResourceGroup
+```
+
+To verify the status of this operation, use the [az backup job show](/cli/azure/backup/job#az-backup-job-show) command.
+
+### Stop protection without retain data
+
+To stop protection without retain data, use the [az backup protection disable](/cli/azure/backup/protection#az-backup-protection-disable) command.
+
+```azurecli-interactive
+az backup protection disable --resource-group SQLResourceGroup \
+ --vault-name SQLVault \
+ --container-name VMAppContainer;Compute;SQLResourceGroup;testSQLVM \
+ --item-name sqldatabase;mssqlserver;master \
+ --workload-type SQLDataBase \
+ --delete-backup-data true \
+ --output table
+```
+
+The output appears as:
+
+```output
+Name                                  ResourceGroup
+------------------------------------  ----------------
+g0f15dae-7cac-4475-d833-f52c50e5b6c3  SQLResourceGroup
+```
+
+To verify the status of this operation, use the [az backup job show](/cli/azure/backup/job#az-backup-job-show) command.
+
+## Resume protection
+
+When you stop protection for the SQL database with retain data, you can resume protection later. If you don't retain the backed-up data, you won't be able to resume protection.
+
+To resume protection, use the [az backup protection resume](/cli/azure/backup/protection#az-backup-protection-resume) command.
+
+```azurecli-interactive
+az backup protection resume --resource-group SQLResourceGroup \
+ --vault-name SQLVault \
+ --container-name VMAppContainer;Compute;SQLResourceGroup;testSQLVM \
+ --item-name sqldatabase;mssqlserver;master \
+ --policy-name SQLPolicy \
+ --output table
+```
+
+The output appears as:
+
+```output
+Name                                  ResourceGroup
+------------------------------------  ----------------
+b2a7f108-1020-4529-870f-6c4c43e2bb9e  SQLResourceGroup
+```
+
+To verify the status of this operation, use the [az backup job show](/cli/azure/backup/job#az-backup-job-show) command.
+
+## Next steps
+
+* Learn how to [back up an SQL database running on Azure VM using the Azure portal](backup-sql-server-database-azure-vms.md).
+* Learn how to [manage a backed-up SQL database running on Azure VM using the Azure portal](manage-monitor-sql-database-backup.md).
backup Backup Azure Sql Restore Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-sql-restore-cli.md
+
+ Title: Restore SQL server databases in Azure VMs using Azure Backup via CLI
+description: Learn how to use CLI to restore SQL server databases in Azure VMs in the Recovery Services vault.
+ Last updated : 07/07/2022+++++
+# Restore SQL databases in an Azure VM using Azure CLI
+
+Azure CLI is used to create and manage Azure resources from the command line or through scripts. This article describes how to restore a backed-up SQL database on an Azure VM using Azure CLI. You can also perform these actions using the [Azure portal](restore-sql-database-azure-vm.md).
+
+Use [Azure Cloud Shell](../cloud-shell/overview.md) to run CLI commands.
+
+In this article, you'll learn how to:
+
+> [!div class="checklist"]
+>
+> * View restore points for a backed-up database
+> * Restore a database
+
+This article assumes you have an SQL database running on an Azure VM that's backed up using Azure Backup. If you've used [Back up an SQL database in Azure using CLI](backup-azure-sql-backup-cli.md) to back up your SQL database, then you're using the following resources:
+
+* A resource group named *SQLResourceGroup*
+* A vault named *SQLVault*
+* Protected container named *VMAppContainer;Compute;SQLResourceGroup;testSQLVM*
+* Backed-up database/item named *sqldatabase;mssqlserver;master*
+* Resources in the *westus2* region
+
+## View restore points for a backed-up database
+
+To view the list of all recovery points for a database, use the [az backup recoverypoint list](/cli/azure/backup/recoverypoint#az-backup-recoverypoint-list) command:
+
+```azurecli-interactive
+az backup recoverypoint list --resource-group SQLResourceGroup \
+ --vault-name SQLVault \
+ --container-name VMAppContainer;Compute;SQLResourceGroup;testSQLVM \
+ --item-name sqldatabase;mssqlserver;master \
+ --output table
+```
+
+The list of recovery points appears as:
+
+```output
+Name                       Time                              BackupManagementType    Item Name                       RecoveryPointType
+-------------------------  --------------------------------  ----------------------  ------------------------------  -------------------
+7660777527047692711        2019-12-10T04:00:32.346000+00:00  AzureWorkload           sqldatabase;mssqlserver;master  Full
+7896624824685666836        2019-12-15T10:33:32.346000+00:00  AzureWorkload           sqldatabase;mssqlserver;master  Differential
+DefaultRangeRecoveryPoint                                    AzureWorkload           sqldatabase;mssqlserver;master  Log
+```
+
+The list above contains three recovery points: one each for full, differential, and log backup.
+
+>[!NOTE]
+>You can also view the start and end points of every unbroken log backup chain, using the [az backup recoverypoint show-log-chain](/cli/azure/backup/recoverypoint#az-backup-recoverypoint-show-log-chain) command.
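+
+For example (a sketch using the container and item names from this article):
+
+```azurecli-interactive
+# Quotes keep the semicolons in the names from being interpreted by the shell
+az backup recoverypoint show-log-chain --resource-group SQLResourceGroup \
+    --vault-name SQLVault \
+    --container-name "VMAppContainer;Compute;SQLResourceGroup;testSQLVM" \
+    --item-name "sqldatabase;mssqlserver;master" \
+    --output table
+```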
+
+## Prerequisites to restore a database
+
+Ensure that the following prerequisites are met before restoring a database:
+
+* You can restore the database only to an SQL instance in the same region.
+* The target instance must be registered with the same vault as the source.
+
+## Restore a database
+
+Azure Backup can restore SQL databases that are running on Azure VMs in the following ways:
+
+* Restore to a specific date or time (to the second) by using log backups. Azure Backup automatically determines the appropriate full and differential backups, and the chain of log backups, that are required to restore based on the selected time.
+* Restore to a specific full or differential backup to restore to a specific recovery point.
+
+To restore a database, use the [az backup restore restore-azurewl](/cli/azure/backup/restore#az-backup-restore-restore-azurewl) command, which requires a recovery config object as one of the inputs. You can generate this object using the [az backup recoveryconfig show](/cli/azure/backup/recoveryconfig#az-backup-recoveryconfig-show) command. The recovery config object contains all the details to perform a restore. One of them is the restore mode: **OriginalWorkloadRestore** or **AlternateWorkloadRestore**.
+
+>[!NOTE]
+> **OriginalWorkloadRestore**: Restores data to the same SQL instance as the original source. This option overwrites the original database.
+> **AlternateWorkloadRestore**: Restores the database to an alternate location and keeps the original source database.
+
+## Restore to alternate location
+
+To restore a database to an alternate location, use **AlternateWorkloadRestore** as the restore mode. You must then choose the restore point, which could be a previous point-in-time or any previous restore points.
+
+Let's proceed to restore to a previous restore point. [View the list of restore points](#view-restore-points-for-a-backed-up-database) for the database and choose the point you want to restore. Here, let's use the restore point with the name *7660777527047692711*.
+
+With the above restore point name and the restore mode, create the recovery config object using the [az backup recoveryconfig show](/cli/azure/backup/recoveryconfig#az-backup-recoveryconfig-show) command. Check the remaining parameters in this command:
+
+* **--target-item-name**: The name to be used by the restored database. In this scenario, we used the name *restored_database*.
+* **--target-server-name**: The name of an SQL server that's successfully registered to a Recovery Services vault and located in the same region as the database to be restored. Here, you're restoring the database to the same SQL server that you've protected, named *testSQLVM*.
+* **--target-server-type**: For the restore of SQL databases, you must use **SQLInstance**.
+
+```azurecli-interactive
+
+az backup recoveryconfig show --resource-group SQLResourceGroup \
+ --vault-name SQLVault \
+ --container-name VMAppContainer;Compute;SQLResourceGroup;testSQLVM \
+ --item-name SQLDataBase;mssqlserver;master \
+ --restore-mode AlternateWorkloadRestore \
+ --rp-name 7660777527047692711 \
+ --target-item-name restored_database \
+ --target-server-name testSQLVM \
+ --target-server-type SQLInstance \
+ --workload-type SQLDataBase \
+ --output json
+```
+
+The response to the above query is a recovery config object that appears as:
+
+```output
+{
+ "container_id": "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/SQLResourceGroup/providers/Microsoft.RecoveryServices/vaults/SQLVault/backupFabrics/Azure/protectionContainers/vmappcontainer;compute;SQLResourceGroup;testSQLVM",
+ "container_uri": "VMAppContainer;compute;SQLResourceGroup;testSQLVM",
+ "database_name": "MSSQLSERVER/restored_database",
+ "filepath": null,
+ "item_type": "SQL",
+ "item_uri": "SQLDataBase;mssqlserver;master",
+ "log_point_in_time": null,
+ "recovery_mode": null,
+ "recovery_point_id": "7660777527047692711",
+ "restore_mode": "AlternateLocation",
+ "source_resource_id": "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/SQLResourceGroup/providers/Microsoft.Compute/virtualMachines/testSQLVM",
+ "workload_type": "SQLDataBase",
+ "alternate_directory_paths": []
+}
+```
+
+Now, to restore the database, run the [az backup restore restore-azurewl](/cli/azure/backup/restore#az-backup-restore-restore-azurewl) command. To use this command, save the above JSON output to a file named *recoveryconfig.json*.
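+
+One way to create that file is to re-run the recovery config command and redirect its JSON output. The following is a sketch using the same parameter values as above:
+
+```azurecli-interactive
+az backup recoveryconfig show --resource-group SQLResourceGroup \
+    --vault-name SQLVault \
+    --container-name "VMAppContainer;Compute;SQLResourceGroup;testSQLVM" \
+    --item-name "SQLDataBase;mssqlserver;master" \
+    --restore-mode AlternateWorkloadRestore \
+    --rp-name 7660777527047692711 \
+    --target-item-name restored_database \
+    --target-server-name testSQLVM \
+    --target-server-type SQLInstance \
+    --workload-type SQLDataBase \
+    --output json > recoveryconfig.json
+```
+
+Then run the restore command: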
+
+```azurecli-interactive
+az backup restore restore-azurewl --resource-group SQLResourceGroup \
+ --vault-name SQLVault \
+ --recovery-config recoveryconfig.json \
+ --output table
+```
+
+The output appears as:
+
+```output
+Name                                  Operation    Status      Item Name            Backup Management Type    Start Time UTC                    Duration
+------------------------------------  -----------  ----------  -------------------  ------------------------  --------------------------------  --------------
+be7ea4a4-0752-4763-8570-a306b0a0106f  Restore      InProgress  master [testSQLVM]   AzureWorkload             2022-06-21T03:51:06.898981+00:00  0:00:05.652967
+```
+
+The response provides the job name. You can use this job name to track the job status using the [az backup job show](/cli/azure/backup/job#az-backup-job-show) command.
+
+## Restore and overwrite
+
+To restore to the original location, use **OriginalWorkloadRestore** as the restore mode. You must then choose the restore point, which could be a previous point-in-time or any of the previous restore points.
+
+As an example, let's choose the previous point-in-time *20-06-2022-09:02:41* to restore to. You can provide this restore point in the following formats: *dd-mm-yyyy* or *dd-mm-yyyy-hh:mm:ss*. To choose a valid point-in-time to restore, use the [az backup recoverypoint show-log-chain](/cli/azure/backup/recoverypoint#az-backup-recoverypoint-show-log-chain) command, which lists the intervals of unbroken log chain backups.
+
+```azurecli-interactive
+az backup recoveryconfig show --resource-group SQLResourceGroup \
+ --vault-name SQLVault \
+ --container-name VMAppContainer;Compute;SQLResourceGroup;testSQLVM \
+ --item-name sqldatabase;mssqlserver;master \
+ --restore-mode OriginalWorkloadRestore \
+ --log-point-in-time 20-06-2022-09:02:41 \
+ --output json
+```
+
+The response to the above query is a recovery config object that appears as:
+
+```output
+{
+ "alternate_directory_paths": null,
+ "container_id": null,
+ "container_uri": "VMAppContainer;compute;petronasinternaltest;sqlserver-11",
+ "database_name": null,
+ "filepath": null,
+ "item_type": "SQL",
+ "item_uri": "SQLDataBase;mssqlserver;msdb",
+ "log_point_in_time": "20-06-2022-09:02:41",
+ "recovery_mode": null,
+ "recovery_point_id": "DefaultRangeRecoveryPoint",
+ "restore_mode": "OriginalLocation",
+ "source_resource_id": "/subscriptions/62b829ee-7936-40c9-a1c9-47a93f9f3965/resourceGroups/petronasinternaltest/providers/Microsoft.Compute/virtualMachines/sqlserver-11",
+ "workload_type": "SQLDataBase"
+}
+```
+
+Now, to restore the database, run the [az backup restore restore-azurewl](/cli/azure/backup/restore#az-backup-restore-restore-azurewl) command. To use this command, save the above JSON output to a file named *recoveryconfig.json*.
+
+```azurecli-interactive
+az backup restore restore-azurewl --resource-group SQLResourceGroup \
+ --vault-name SQLVault \
+ --recovery-config recoveryconfig.json \
+ --output table
+```
+
+The output appears as:
+
+```output
+Name                                  Operation    Status      Item Name            Backup Management Type    Start Time UTC                    Duration
+------------------------------------  -----------  ----------  -------------------  ------------------------  --------------------------------  --------------
+1730ec49-166a-4bfd-99d5-93027c2d8480  Restore      InProgress  master [testSQLVM]   AzureWorkload             2022-06-21T04:04:11.161411+00:00  0:00:03.118076
+```
+
+The response provides the job name. You can use this job name to track the job status using the [az backup job show](/cli/azure/backup/job#az-backup-job-show) command.
+
+## Restore to a secondary region
+
+To restore a database to the secondary region, specify a target vault and server located in the secondary region in the restore configuration.
+
+```azurecli-interactive
+az backup recoveryconfig show --resource-group SQLResourceGroup \
+ --vault-name SQLVault \
+ --container-name VMAppContainer;compute;SQLResourceGroup;testSQLVM \
+ --item-name sqldatabase;mssqlserver;master \
+ --restore-mode AlternateWorkloadRestore \
+ --from-full-rp-name 293170069256531 \
+ --rp-name 293170069256531 \
+ --target-server-name targetSQLServer \
+ --target-container-name VMAppContainer;compute;SQLResourceGroup;targetSQLServer \
+ --target-item-name testdb_restore_1 \
+ --target-server-type SQLInstance \
+ --workload-type SQLDataBase \
+ --target-resource-group SQLResourceGroup \
+ --target-vault-name targetVault \
+ --backup-management-type AzureWorkload
+```
+
+The response is a recovery configuration object that appears as:
+
+```output
+ {
+ "container_id": "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/SQLResourceGroup/providers/Microsoft.RecoveryServices/vaults/targetVault/backupFabrics/Azure/protectionContainers/vmappcontainer;compute;SQLResourceGroup;targetSQLServer",
+ "container_uri": "VMAppContainer;compute;SQLResourceGroup;testSQLVM",
+ "database_name": "MSSQLSERVER/sqldatabase;mssqlserver;testdb_restore_1",
+ "filepath": null,
+ "item_type": "SQL",
+ "item_uri": "SQLDataBase;mssqlserver;master",
+ "log_point_in_time": null,
+ "recovery_mode": null,
+ "recovery_point_id": "932606668166874635",
+ "restore_mode": "AlternateLocation",
+ "source_resource_id": "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/SQLResourceGroup/providers/Microsoft.Compute/virtualMachines/testSQLVM",
+ "workload_type": "SQLDataBase",
+ "alternate_directory_paths": []
+}
+```
+
+Use this recovery configuration in the [az backup restore restore-azurewl](/cli/azure/backup/restore#az-backup-restore-restore-azurewl) command. Specify the `--use-secondary-region` flag to restore the database to the secondary region.
+
+```azurecli-interactive
+az backup restore restore-azurewl --resource-group SQLResourceGroup \
+ --vault-name SQLVault \
+ --recovery-config recoveryconfig.json \
+ --use-secondary-region \
+ --output table
+```
+
+The output appears as:
+
+```output
+Name                                  Operation           Status      Item Name            Backup Management Type    Start Time UTC                    Duration
+------------------------------------  ------------------  ----------  -------------------  ------------------------  --------------------------------  --------------
+0d863259-b0fb-4935-8736-802c6667200b  CrossRegionRestore  InProgress  master [testSQLVM]   AzureWorkload             2022-06-21T08:29:24.919138+00:00  0:00:12.372421
+```
+
+## Restore as files
+
+To restore the backup data as files instead of a database, use **RestoreAsFiles** as the restore mode. Then choose the restore point, which can be a previous point-in-time or any previous restore points. Once the files are dumped to a specified path, you can take these files to any SQL machine where you want to restore them as a database. Because you can move these files to any machine, you can now restore the data across subscriptions and regions.
+
+Here, choose the restore point to restore and the location to dump the backup files as `/sql/restoreasfiles` on the same SQL server. If you restore to a point-in-time instead, you can provide the restore point in one of the following formats: **dd-mm-yyyy** or **dd-mm-yyyy-hh:mm:ss**. To choose a valid point-in-time to restore, use the [az backup recoverypoint show-log-chain](/cli/azure/backup/recoverypoint#az-backup-recoverypoint-show-log-chain) command, which lists the intervals of unbroken log chain backups.
+
+With the above restore point name and the restore mode, create the recovery config object using the [az backup recoveryconfig show](/cli/azure/backup/recoveryconfig#az-backup-recoveryconfig-show) command. Check each of the remaining parameters in this command:
+
+* **--target-container-name**: The name of the container for a SQL server that's successfully registered to a Recovery Services vault and present in the same region as the database to be restored. Let's restore the database as files to the same SQL server that you've protected, named *testSQLVM*.
+* **--rp-name**: For a point-in-time restore, the restore point name is **DefaultRangeRecoveryPoint**.
+
+```azurecli-interactive
+az backup recoveryconfig show --resource-group SQLResourceGroup \
+ --vault-name SQLVault \
+ --container-name VMAppContainer;Compute;SQLResourceGroup;testSQLVM \
+ --item-name sqldatabase;mssqlserver;master \
+ --restore-mode RestoreAsFiles \
+ --rp-name 932606668166874635 \
+ --target-container-name VMAppContainer;Compute;SQLResourceGroup;testSQLVM \
+ --filepath /sql/restoreasfiles \
+ --output json
+```
+
+The response to the query above is a recovery config object that appears as:
+
+```output
+{
+ "alternate_directory_paths": null,
+ "container_id": "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/SQLResourceGroup/providers/Microsoft.RecoveryServices/vaults/SQLVault/backupFabrics/Azure/protectionContainers/VMAppContainer;Compute;SQLResourceGroup;testSQLVM",
+ "container_uri": "VMAppContainer;compute;SQLResourceGroup;testSQLVM",
+ "database_name": null,
+ "filepath": "/sql/restoreasfiles",
+ "item_type": "SQL",
+ "item_uri": "SQLDataBase;mssqlserver;master",
+ "log_point_in_time": null,
+ "recovery_mode": "FileRecovery",
+ "recovery_point_id": "932606668166874635",
+ "restore_mode": "AlternateLocation",
+ "source_resource_id": "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/SQLResourceGroup/providers/Microsoft.Compute/virtualMachines/testSQLVM",
+ "workload_type": "SQLDataBase"
+}
+```
+
+Now, to restore the database as files, run the [az backup restore restore-azurewl](/cli/azure/backup/restore#az-backup-restore-restore-azurewl) command. To use this command, save the JSON output above to a file named *recoveryconfig.json*.
+
+```azurecli-interactive
+az backup restore restore-azurewl --resource-group SQLResourceGroup \
+ --vault-name SQLVault \
+ --recovery-config recoveryconfig.json \
+ --output json
+```
+
+The output appears as:
+
+```output
+{
+ "eTag": null,
+ "id": "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/SQLResourceGroup/providers/Microsoft.RecoveryServices/vaults/SQLVault/backupJobs/e9cd9e73-e3a3-425a-86a9-8dd1c500ff56",
+ "location": null,
+ "name": "e9cd9e73-e3a3-425a-86a9-8dd1c500ff56",
+ "properties": {
+ "actionsInfo": [
+ "1"
+ ],
+ "activityId": "9e7c8ee4-f1ef-11ec-8a2c-3c52826c1a9a",
+ "backupManagementType": "AzureWorkload",
+ "duration": "0:00:04.304322",
+ "endTime": null,
+ "entityFriendlyName": "master [testSQLVM]",
+ "errorDetails": null,
+ "extendedInfo": {
+ "dynamicErrorMessage": null,
+ "propertyBag": {
+ "Job Type": "Restore as files"
+ },
+ "tasksList": [
+ {
+ "status": "InProgress",
+ "taskId": "Transfer data from vault"
+ }
+ ]
+ },
+ "isUserTriggered": true,
+ "jobType": "AzureWorkloadJob",
+ "operation": "Restore",
+ "startTime": "2022-06-22T05:53:32.951666+00:00",
+ "status": "InProgress",
+ "workloadType": "SQLDataBase"
+ },
+ "resourceGroup": "SQLResourceGroup",
+ "tags": null,
+ "type": "Microsoft.RecoveryServices/vaults/backupJobs"
+}
+```
+
+The response provides the job name. You can use this job name to track the job status using the [az backup job show](/cli/azure/backup/job#az-backup-job-show) command.
++
+## Next steps
+
+* Learn how to [manage SQL databases that are backed up using Azure CLI](backup-azure-sql-manage-cli.md).
+* Learn how to [restore an SQL database running in Azure VM using the Azure portal](./restore-sql-database-azure-vm.md).
backup Multi User Authorization Concept https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/multi-user-authorization-concept.md
Following is a diagrammatic representation for performing a critical operation o
Here is the flow of events in a typical scenario: 1. The Backup admin creates the Recovery Services vault.
-1. The Security admin creates the Resource Guard. The Resource Guard can be in a different subscription or a different tenant w.r.t the Recovery Services vault. It must be ensured that the Backup admin does not have Contributor permissions on the Resource Guard.
+1. The Security admin creates the Resource Guard. The Resource Guard can be in a different subscription or a different tenant with respect to the Recovery Services vault. It must be ensured that the Backup admin does not have Contributor permissions on the Resource Guard.
1. The Security admin grants the **Reader** role to the Backup Admin for the Resource Guard (or a relevant scope). The Backup admin requires the reader role to enable MUA on the vault. 1. The Backup admin now configures the Recovery Services vault to be protected by MUA via the Resource Guard. 1. Now, if the Backup admin wants to perform a critical operation on the vault, they need to request access to the Resource Guard. The Backup admin can contact the Security admin for details on gaining access to perform such operations. They can do this using Privileged Identity Management (PIM) or other processes as mandated by the organization.
RS vault and Resource Guard are **in different tenants.** </br> The Backup admin
## Next steps
-[Configure Multi-user authorization using Resource Guard](multi-user-authorization.md)
+[Configure Multi-user authorization using Resource Guard](multi-user-authorization.md)
cloud-services-extended-support Enable Rdp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services-extended-support/enable-rdp.md
The Azure portal uses the remote desktop extension to enable remote desktop even
2. Select **Add**. 3. Choose the roles to enable remote desktop for.
-4. Fill in the required fields for user name, password, expiry, and certificate (not required).
+4. Fill in the required fields for user name, password, and expiration.
+ > [!NOTE]
+ > The password for remote desktop must be between 8-123 characters long and must satisfy at least 3 of the following password complexity requirements: 1) Contains an uppercase character 2) Contains a lowercase character 3) Contains a numeric digit 4) Contains a special character 5) Control characters are not allowed
cognitive-services Speech Synthesis Markup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/speech-synthesis-markup.md
Pitch changes can be applied at the sentence level.
</voice> </speak> ```
-## say-as element
+
+## Adjust emphasis
+
+The optional `emphasis` element is used to add or remove word-level stress for the text. This element can only contain text and the following elements: `audio`, `break`, `emphasis`, `lang`, `phoneme`, `prosody`, `say-as`, `sub`, and `voice`.
+
+> [!NOTE]
+> The word-level emphasis tuning is only available for these neural voices: `en-US-GuyNeural`, `en-US-DavisNeural`, and `en-US-JaneNeural`.
+
+**Syntax**
+
+```xml
+<emphasis level="value"></emphasis>
+```
+
+**Attribute**
+
+| Attribute | Description | Required or optional |
+|--|-||
+| `level` | Indicates the strength of emphasis to be applied:<ul><li>`reduced`</li><li>`none`</li><li>`moderate`</li><li>`strong`</li></ul><br>When the `level` attribute is not specified, the default level is `moderate`. For details on each attribute, see [emphasis element](https://www.w3.org/TR/speech-synthesis11/#S3.2.2)| Optional|
+
+**Example**
+
+This SSML snippet demonstrates how the `emphasis` element is used to add a moderate level of emphasis to the word "meetings".
+
+```xml
+<speak version="1.0" xmlns="http://www.w3.org/2001/10/synthesis" xmlns:mstts="https://www.w3.org/2001/mstts" xml:lang="en-US">
+ <voice name="en-US-GuyNeural">
+ I can help you join your <emphasis level="moderate">meetings</emphasis> fast.
+ </voice>
+</speak>
+```
+
+## Add say-as element
The `say-as` element is optional. It indicates the content type, such as number or date, of the element's text. This element provides guidance to the speech synthesis engine about how to pronounce the text.
cognitive-services Autoscale https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/autoscale.md
Last updated 06/27/2022
-# Cognitive Services autoscale Feature
+# Cognitive Services autoscale feature
This article provides guidance for how customers can access higher rate limits on their Cognitive Service resources.
This feature is disabled by default for every new resource. Follow these instruc
Go to your resource's page in the Azure portal, and select the **Overview** tab on the left pane. Under the **Essentials** section, find the **Autoscale** line and select the link to view the **Autoscale Settings** pane and enable the feature. #### [Azure CLI](#tab/cli)
az resource update --namespace Microsoft.CognitiveServices --resource-type accou
-## Frequently Asked Questions
+## Frequently asked questions
### Does enabling the autoscale feature mean my resource will never be throttled again?
Yes, you can disable the autoscale feature through Azure portal or CLI and retur
Autoscale feature is available for the following
-* [Face](computer-vision/overview-identity.md)
* [Computer Vision](computer-vision/index.yml)
-* [Language](language-service/overview.md)
+* [Language](language-service/overview.md) (only available for sentiment analysis, key phrase extraction, named entity recognition, and text analytics for health)
### Can I test this feature using a free subscription?
cognitive-services Document Format Guidelines https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/question-answering/reference/document-format-guidelines.md
Any additional columns in the source file are ignored.
### Structured data format through import
-Importing a knowledge base replaces the content of the existing knowledge base. Import requires a structured .tsv file that contains data source information. This information helps group the question-answer pairs and attribute them to a particular data source. Question answer pairs have an optional metadata field that can be used to group question answer pairs into categories.
+Importing a knowledge base replaces the content of the existing knowledge base. Import requires a structured .tsv file that contains data source information. This information helps group the question-answer pairs and attribute them to a particular data source. Question answer pairs have an optional metadata field that can be used to group question answer pairs into categories. The import format needs to be similar to the exported knowledge base format.
-| Question | Answer | Source| Metadata (1 key: 1 value) |
-|--||-||
-| Question1 | Answer1 | Url1 | <code>Key1:Value1 &#124; Key2:Value2</code> |
-| Question2 | Answer2 | Editorial| `Key:Value` |
+| Question | Answer | Source| Metadata (1 key: 1 value) | QnaId |
+|--|--|--|--|--|
+| Question1 | Answer1 | Url1 | <code>Key1:Value1 &#124; Key2:Value2</code> | QnaId 1 |
+| Question2 | Answer2 | Editorial| `Key:Value` | QnaId 2 |
<a href="#formatting-considerations"></a>
confidential-computing How To Fortanix Confidential Computing Manager Node Agent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/confidential-computing/how-to-fortanix-confidential-computing-manager-node-agent.md
In this tutorial, you used Fortanix tools to convert your application image to r
To learn more about Azure confidential computing offerings, see [Azure confidential computing overview](overview.md).
-You can also learn how to complete similar tasks by using other third-party offerings on Azure, like [Anjuna](https://azuremarketplace.microsoft.com/marketplace/apps/anjuna-5229812.aee-az-v1) and [Scone](https://sconedocs.github.io).
+You can also learn how to complete similar tasks by using other third-party offerings on Azure, like [Anjuna](https://azuremarketplace.microsoft.com/marketplace/apps/anjuna1646713490052.anjuna_cc_saas?tab=Overview) and [Scone](https://sconedocs.github.io).
confidential-computing How To Fortanix Confidential Computing Manager https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/confidential-computing/how-to-fortanix-confidential-computing-manager.md
In this quickstart, you enrolled a node using an Azure managed app to Fortanix's
To learn more about Azure's confidential computing offerings, see [Azure confidential computing](overview.md).
-Learn how to complete similar tasks using other third-party offerings on Azure, like [Anjuna](https://azuremarketplace.microsoft.com/marketplace/apps/anjuna-5229812.aee-az-v1) and [Scone](https://sconedocs.github.io).
+Learn how to complete similar tasks using other third-party offerings on Azure, like [Anjuna](https://azuremarketplace.microsoft.com/marketplace/apps/anjuna1646713490052.anjuna_cc_saas?tab=Overview) and [Scone](https://sconedocs.github.io).
connectors Connectors Create Api Azureblobstorage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/connectors/connectors-create-api-azureblobstorage.md
To add a Blob trigger to a logic app workflow in single-tenant Azure Logic Apps,
| Check the root folder for changes to a specific blob. | **<*container-name*>/<*blob-name*>.<*blob-extension*>** | | Check the root folder for changes to any blobs with the same extension, for example, **.txt**. | **<*container-name*>/{name}.txt** <br><br>**Important**: Make sure that you use **{name}** as a literal. | | Check the root folder for changes to any blobs with names starting with a specific string, for example, **Sample-**. | **<*container-name*>/Sample-{name}** <br><br>**Important**: Make sure that you use **{name}** as a literal. |
- | Check a subfolder for a newly added blob. | **<*container-name*>/<*subfolder*>** |
+ | Check a subfolder for a newly added blob. | **<*container-name*>/<*subfolder*>/{blobname}.{blobextension}** <br><br>**Important**: Make sure that you use **{blobname}.{blobextension}** as a literal. |
| Check a subfolder for changes to a specific blob. | **<*container-name*>/<*subfolder*>/<*blob-name*>.<*blob-extension*>** | |||
cosmos-db Access Data Spring Data App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/cassandra/access-data-spring-data-app.md
The following procedure configures the test application.
``` > [!NOTE]
- > Although the usage described below is identical for both Java version 3 and version 4 samples above, the way in which they have been implemented in order to include custom retry and load balancing policies is different. We recommend reviewing the code to understand how to implement custom policies if you are making changes to an existing spring java application.
+ > Although the usage described below is identical for both Java version 3 and version 4 samples above, the way in which they have been implemented in order to include custom retry and load balancing policies is different. We recommend reviewing the code to understand how to implement custom policies if you are making changes to an existing Spring Java application.
1. Locate the *application.properties* file in the *resources* directory of the sample project, or create the file if it does not already exist.
For more information about using Azure with Java, see the [Azure for Java Develo
[COSMOSDB04]: media/access-data-spring-data-app/create-cosmos-db-04.png [COSMOSDB05]: media/access-data-spring-data-app/create-cosmos-db-05.png [COSMOSDB05-1]: media/access-data-spring-data-app/create-cosmos-db-05-1.png
-[COSMOSDB06]: media/access-data-spring-data-app/create-cosmos-db-06.png
+[COSMOSDB06]: media/access-data-spring-data-app/create-cosmos-db-06.png
cosmos-db Partitioning Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/partitioning-overview.md
Title: Partitioning and horizontal scaling in Azure Cosmos DB description: Learn about partitioning, logical, physical partitions in Azure Cosmos DB, best practices when choosing a partition key, and how to manage logical partitions--+++ Last updated 03/24/2022 - # Partitioning and horizontal scaling in Azure Cosmos DB
cosmos-db Create Sql Api Go https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/create-sql-api-go.md
[!INCLUDE[appliesto-sql-api](../includes/appliesto-sql-api.md)] > [!div class="op_single_selector"]
-> * [.NET V3](create-sql-api-dotnet.md)
-> * [.NET V4](create-sql-api-dotnet-V4.md)
-> * [Java SDK v4](create-sql-api-java.md)
-> * [Spring Data v3](create-sql-api-spring-data.md)
-> * [Spark v3 connector](create-sql-api-spark.md)
+>
+> * [.NET](quickstart-dotnet.md)
> * [Node.js](create-sql-api-nodejs.md)
+> * [Java](create-sql-api-java.md)
+> * [Spring Data](create-sql-api-spring-data.md)
> * [Python](create-sql-api-python.md)
+> * [Spark v3](create-sql-api-spark.md)
> * [Go](create-sql-api-go.md)
+>
In this quickstart, you'll build a sample Go application that uses the Azure SDK for Go to manage a Cosmos DB SQL API account.
cosmos-db Create Sql Api Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/create-sql-api-java.md
[!INCLUDE[appliesto-sql-api](../includes/appliesto-sql-api.md)] > [!div class="op_single_selector"]
-> * [.NET V3](create-sql-api-dotnet.md)
-> * [.NET V4](create-sql-api-dotnet-V4.md)
-> * [Java SDK v4](create-sql-api-java.md)
-> * [Spring Data v3](create-sql-api-spring-data.md)
-> * [Spark v3 connector](create-sql-api-spark.md)
+>
+> * [.NET](quickstart-dotnet.md)
> * [Node.js](create-sql-api-nodejs.md)
+> * [Java](create-sql-api-java.md)
+> * [Spring Data](create-sql-api-spring-data.md)
> * [Python](create-sql-api-python.md)
+> * [Spark v3](create-sql-api-spark.md)
> * [Go](create-sql-api-go.md)
+>
In this quickstart, you create and manage an Azure Cosmos DB SQL API account from the Azure portal, and by using a Java app cloned from GitHub. First, you create an Azure Cosmos DB SQL API account using the Azure portal, then create a Java app using the SQL Java SDK, and then add resources to your Cosmos DB account by using the Java application. Azure Cosmos DB is a multi-model database service that lets you quickly create and query document, table, key-value, and graph databases with global distribution and horizontal scale capabilities.
cosmos-db Create Sql Api Nodejs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/create-sql-api-nodejs.md
[!INCLUDE[appliesto-sql-api](../includes/appliesto-sql-api.md)] > [!div class="op_single_selector"]
-> - [.NET V3](create-sql-api-dotnet.md)
-> - [.NET V4](create-sql-api-dotnet-V4.md)
-> - [Java SDK v4](create-sql-api-java.md)
-> * [Spring Data v3](create-sql-api-spring-data.md)
-> * [Spark v3 connector](create-sql-api-spark.md)
-> - [Node.js](create-sql-api-nodejs.md)
-> - [Python](create-sql-api-python.md)
+>
+> * [.NET](quickstart-dotnet.md)
+> * [Node.js](create-sql-api-nodejs.md)
+> * [Java](create-sql-api-java.md)
+> * [Spring Data](create-sql-api-spring-data.md)
+> * [Python](create-sql-api-python.md)
+> * [Spark v3](create-sql-api-spark.md)
> * [Go](create-sql-api-go.md)-
+>
In this quickstart, you create and manage an Azure Cosmos DB SQL API account from the Azure portal, and by using a Node.js app cloned from GitHub. Azure Cosmos DB is a multi-model database service that lets you quickly create and query document, table, key-value, and graph databases with global distribution and horizontal scale capabilities. ## Walkthrough video
cosmos-db Create Sql Api Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/create-sql-api-python.md
[!INCLUDE[appliesto-sql-api](../includes/appliesto-sql-api.md)] > [!div class="op_single_selector"]
-> * [.NET V3](create-sql-api-dotnet.md)
-> * [.NET V4](create-sql-api-dotnet-V4.md)
-> * [Java SDK v4](create-sql-api-java.md)
-> * [Spring Data v3](create-sql-api-spring-data.md)
-> * [Spark v3 connector](create-sql-api-spark.md)
+>
+> * [.NET](quickstart-dotnet.md)
> * [Node.js](create-sql-api-nodejs.md)
+> * [Java](create-sql-api-java.md)
+> * [Spring Data](create-sql-api-spring-data.md)
> * [Python](create-sql-api-python.md)
+> * [Spark v3](create-sql-api-spark.md)
> * [Go](create-sql-api-go.md)
+>
In this quickstart, you create and manage an Azure Cosmos DB SQL API account from the Azure portal, and from Visual Studio Code with a Python app cloned from GitHub. Azure Cosmos DB is a multi-model database service that lets you quickly create and query document, table, key-value, and graph databases with global distribution and horizontal scale capabilities.
cosmos-db Create Sql Api Spark https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/create-sql-api-spark.md
[!INCLUDE[appliesto-sql-api](../includes/appliesto-sql-api.md)] > [!div class="op_single_selector"]
-> * [.NET V3](create-sql-api-dotnet.md)
-> * [.NET V4](create-sql-api-dotnet-V4.md)
-> * [Java SDK v4](create-sql-api-java.md)
-> * [Spring Data v3](create-sql-api-spring-data.md)
-> * [Spark 3 OLTP connector](create-sql-api-spark.md)
+>
+> * [.NET](quickstart-dotnet.md)
> * [Node.js](create-sql-api-nodejs.md)
+> * [Java](create-sql-api-java.md)
+> * [Spring Data](create-sql-api-spring-data.md)
> * [Python](create-sql-api-python.md)
+> * [Spark v3](create-sql-api-spark.md)
> * [Go](create-sql-api-go.md)
+>
This tutorial is a quick start guide to show how to use Cosmos DB Spark Connector to read from or write to Cosmos DB. Cosmos DB Spark Connector supports Spark 3.1.x and 3.2.x.
cosmos-db Create Sql Api Spring Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/create-sql-api-spring-data.md
[!INCLUDE[appliesto-sql-api](../includes/appliesto-sql-api.md)] > [!div class="op_single_selector"]
-> * [.NET V3](create-sql-api-dotnet.md)
-> * [.NET V4](create-sql-api-dotnet-V4.md)
-> * [Java SDK v4](create-sql-api-java.md)
-> * [Spring Data v3](create-sql-api-spring-data.md)
-> * [Spark v3 connector](create-sql-api-spark.md)
+>
+> * [.NET](quickstart-dotnet.md)
> * [Node.js](create-sql-api-nodejs.md)
+> * [Java](create-sql-api-java.md)
+> * [Spring Data](create-sql-api-spring-data.md)
> * [Python](create-sql-api-python.md)
+> * [Spark v3](create-sql-api-spark.md)
> * [Go](create-sql-api-go.md)
+>
In this quickstart, you create and manage an Azure Cosmos DB SQL API account from the Azure portal, and by using a Spring Data Azure Cosmos DB v3 app cloned from GitHub. First, you create an Azure Cosmos DB SQL API account using the Azure portal, then create a Spring Boot app using the Spring Data Azure Cosmos DB v3 connector, and then add resources to your Cosmos DB account by using the Spring Boot application. Azure Cosmos DB is a multi-model database service that lets you quickly create and query document, table, key-value, and graph databases with global distribution and horizontal scale capabilities.
cosmos-db Quickstart Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/quickstart-dotnet.md
> [!div class="op_single_selector"] > > * [.NET](quickstart-dotnet.md)
+> * [Node.js](create-sql-api-nodejs.md)
+> * [Java](create-sql-api-java.md)
+> * [Spring Data](create-sql-api-spring-data.md)
+> * [Python](create-sql-api-python.md)
+> * [Spark v3](create-sql-api-spark.md)
+> * [Go](create-sql-api-go.md)
> Get started with the Azure Cosmos DB client library for .NET to create databases, containers, and items within your account. Follow these steps to install the package and try out example code for basic tasks.
cosmos-db Session State And Caching Provider https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/session-state-and-caching-provider.md
Previously updated : 07/15/2021 Last updated : 07/06/2022 # Use Azure Cosmos DB as an ASP.NET session state and caching provider
public void Configure(IApplicationBuilder app, IWebHostEnvironment env)
} ```
-Where you specify the database and container you want the session state to be stored and optionally, create them if they don't exist.
+Here you specify the database and container where you want the session state to be stored and, optionally, create them if they don't exist by using the `CreateIfNotExists` attribute.
+
+> [!IMPORTANT]
+> If you provide an existing container instead of using `CreateIfNotExists`, make sure it has [time to live enabled](how-to-time-to-live.md).
You can customize your SDK client configuration by using the `CosmosClientBuilder` or if your application is already using a `CosmosClient` for other operations with Cosmos DB, you can also inject it into the provider:
After this, you can use ASP.NET Core sessions like with any other provider and u
## Distributed cache scenarios
-Given that the Cosmos DB provider implements the [IDistributedCache interface to act as a distributed cache provider](/aspnet/core/performance/caching/distributed?view=aspnetcore-5.0&preserve-view=true), it can also be used for any application that requires distributed cache, not just for web application that require a performant and distributed session state provider.
+Given that the Cosmos DB provider implements the [IDistributedCache interface to act as a distributed cache provider](/aspnet/core/performance/caching/distributed?view=aspnetcore-5.0&preserve-view=true), it can also be used for any application that requires distributed cache, not just for web applications that require a performant and distributed session state provider.
Distributed caches require data consistency to provide independent instances to be able to share that cached data. When using the Cosmos DB provider, you can:
cosmos-db Create Table Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/table/create-table-java.md
The connection string for your Cosmos DB account is considered an app secret and
## 4 - Include the azure-data-tables package
-To access the Cosmos DB Tables API from a java application, include the [azure-data-tables](https://search.maven.org/artifact/com.azure/azure-data-tables) package.
+To access the Cosmos DB Tables API from a Java application, include the [azure-data-tables](https://search.maven.org/artifact/com.azure/azure-data-tables) package.
```xml <dependency>
cosmos-db How To Create Account https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/table/how-to-create-account.md
+
+ Title: Create an Azure Cosmos DB Table API account
+description: Learn how to create a new Azure Cosmos DB Table API account
++++
+ms.devlang: csharp
+ Last updated : 07/06/2022+++
+# Create an Azure Cosmos DB Table API account
++
+An Azure Cosmos DB Table API account contains all of your Azure Cosmos DB resources: tables and items. The account provides a unique endpoint for various tools and SDKs to connect to Azure Cosmos DB and perform everyday operations. For more information about the resources in Azure Cosmos DB, see [Azure Cosmos DB resource model](../account-databases-containers-items.md).
+
+## Prerequisites
+
+* An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free).
+
+## Create an account
+
+Create a single Azure Cosmos DB account using the Table API.
+
+### [Azure CLI](#tab/azure-cli)
+
+1. Create shell variables for *accountName*, *resourceGroupName*, and *location*.
+
+ ```azurecli-interactive
+ # Variable for resource group name
+ resourceGroupName="msdocs-cosmos"
+
+ # Variable for location
+ location="westus"
+
+ # Variable for account name with a randomly generated suffix
+ let suffix=$RANDOM*$RANDOM
+ accountName="msdocs-$suffix"
+ ```
+
+1. If you haven't already, sign in to the Azure CLI using the [``az login``](/cli/azure/reference-index#az-login) command.
+
+1. Use the [``az group create``](/cli/azure/group#az-group-create) command to create a new resource group in your subscription.
+
+ ```azurecli-interactive
+ az group create \
+ --name $resourceGroupName \
+ --location $location
+ ```
+
+1. Use the [``az cosmosdb create``](/cli/azure/cosmosdb#az-cosmosdb-create) command to create a new Azure Cosmos DB Table API account with default settings.
+
+ ```azurecli-interactive
+ az cosmosdb create \
+ --resource-group $resourceGroupName \
+ --name $accountName \
+ --capabilities EnableTable \
+ --locations regionName=$location
+ ```
+
+### [PowerShell](#tab/azure-powershell)
+
+1. Create shell variables for *ACCOUNT_NAME*, *RESOURCE_GROUP_NAME*, and *LOCATION*.
+
+ ```azurepowershell-interactive
+ # Variable for resource group name
+ $RESOURCE_GROUP_NAME = "msdocs-cosmos"
+
+ # Variable for location
+ $LOCATION = "West US"
+
+ # Variable for account name with a randomly generated suffix
+ $SUFFIX = Get-Random
+ $ACCOUNT_NAME = "msdocs-$SUFFIX"
+ ```
+
+1. If you haven't already, sign in to Azure PowerShell using the [``Connect-AzAccount``](/powershell/module/az.accounts/connect-azaccount) cmdlet.
+
+1. Use the [``New-AzResourceGroup``](/powershell/module/az.resources/new-azresourcegroup) cmdlet to create a new resource group in your subscription.
+
+ ```azurepowershell-interactive
+ $parameters = @{
+ Name = $RESOURCE_GROUP_NAME
+ Location = $LOCATION
+ }
+ New-AzResourceGroup @parameters
+ ```
+
+1. Use the [``New-AzCosmosDBAccount``](/powershell/module/az.cosmosdb/new-azcosmosdbaccount) cmdlet to create a new Azure Cosmos DB Table API account with default settings.
+
+ ```azurepowershell-interactive
+ $parameters = @{
+ ResourceGroupName = $RESOURCE_GROUP_NAME
+ Name = $ACCOUNT_NAME
+ ApiKind = "Table"
+ Location = $LOCATION
+ }
+ New-AzCosmosDBAccount @parameters
+ ```
+++
+## Next steps
+
+In this guide, you learned how to create an Azure Cosmos DB Table API account. You can now import more data to your Azure Cosmos DB account.
+
+> [!div class="nextstepaction"]
+> [Import data into Azure Cosmos DB Table API](../import-data.md)
cosmos-db How To Dotnet Create Item https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/table/how-to-dotnet-create-item.md
+
+ Title: Create an item in Azure Cosmos DB Table API using .NET
+description: Learn how to create an item in your Azure Cosmos DB Table API account using the .NET SDK
++++
+ms.devlang: csharp
+ Last updated : 07/06/2022+++
+# Create an item in Azure Cosmos DB Table API using .NET
++
+An item in Azure Cosmos DB represents a specific entity stored within a table. In the Table API, an item consists of a set of key-value pairs uniquely identified by the composite of the row and partition keys.
+
+## Create a unique identifier for an item
+
+The unique identifier, programmatically known as the **row key**, is a distinct string that identifies an item within a table. Each item also includes a **partition key** value that is used to determine the logical partition for the item. Both keys are required when creating a new item within a table.
+
+Within the scope of a table, two items can't share both the same **row key** and **partition key**.
+
+## Create an item
+
+The [``TableEntity``](/dotnet/api/azure.data.tables.tableentity) class is a generic implementation of a dictionary that is uniquely designed to make it easy to create a new item from an arbitrary dictionary of key-value pairs.
+
+Use one of the following strategies to model items that you wish to create in a table:
+
+* [Create an instance of the ``TableEntity`` class](#use-a-built-in-class)
+* [Implement the ``ITableEntity`` interface](#implement-interface)
+
+### Use a built-in class
+
+The [``(string rowKey, string partitionKey)`` constructor](/dotnet/api/azure.data.tables.tableentity.-ctor#azure-data-tables-tableentity-ctor(system-string-system-string)) of the **TableEntity** class is a quick way to create an item with just the required properties. You can then use the [``Add``](/dotnet/api/azure.data.tables.tableentity.add) method to add extra key-value pairs to the item.
+
+For example, you can create a new instance of the **TableEntity** class by first specifying the **row** and **partition** keys in the constructor and then adding new key-value pairs to the dictionary:
++
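+The sample include that normally follows isn't reproduced in this digest. A minimal sketch, using hypothetical key values and property names:
+
+```csharp
+// Create the entity from the required keys (named arguments used for clarity),
+// then add extra key-value pairs. Assumes: using Azure.Data.Tables;
+TableEntity entity = new TableEntity(partitionKey: "gear-surf-surfboards", rowKey: "68719518388");
+entity.Add("Name", "Ocean Surfboard");
+entity.Add("Quantity", 14);
+```
+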
+The [``(IDictionary<string, object>)`` constructor](/dotnet/api/azure.data.tables.tableentity.-ctor#azure-data-tables-tableentity-ctor(system-collections-generic-idictionary((system-string-system-object)))) of the **TableEntity** class converts an existing dictionary into an item ready to be added to a table.
+
+For example, you can pass in a dictionary to a new instance of the **TableEntity** class:
++
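+A minimal sketch of this pattern, with hypothetical keys and values; the trailing call assumes an existing ``TableClient`` named ``tableClient``:
+
+```csharp
+// Build an item from an existing dictionary of key-value pairs.
+// Assumes .NET 6 implicit usings and: using Azure.Data.Tables;
+var values = new Dictionary<string, object>
+{
+    { "PartitionKey", "gear-surf-surfboards" },
+    { "RowKey", "68719518388" },
+    { "Name", "Ocean Surfboard" },
+    { "Quantity", 14 }
+};
+TableEntity entity = new TableEntity(values);
+
+// Persist the item server-side; tableClient is an assumed, previously created TableClient.
+await tableClient.AddEntityAsync(entity);
+```
+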
+The [``TableClient.AddEntityAsync<>``](/dotnet/api/azure.data.tables.tableclient.addentityasync#azure-data-tables-tableclient-addentityasync-1(-0-system-threading-cancellationtoken)) method takes in a parameter of type **TableEntity** and then creates a server-side item in the table.
+
+### Implement interface
+
+> [!NOTE]
+> The examples in this section assume that you have already defined a C# type to represent your data named **Product**:
+>
+> :::code language="csharp" source="~/azure-cosmos-db-table-dotnet-v12/251-create-item-itableentity/Product.cs" id="type":::
+>
+
+The [``TableClient.AddEntityAsync<>``](/dotnet/api/azure.data.tables.tableclient.addentityasync#azure-data-tables-tableclient-addentityasync-1(-0-system-threading-cancellationtoken)) method takes in a parameter of any type that implements the [``ITableEntity`` interface](/dotnet/api/azure.data.tables.itableentity). The interface already includes the required ``RowKey`` and ``PartitionKey`` properties.
+
+For example, you can create a new object that implements at least all of the required properties in the **ITableEntity** interface:
++
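+A minimal sketch of such a type; the **Product** name comes from the note above, while its ``Name`` and ``Quantity`` members are hypothetical, and ``Timestamp`` and ``ETag`` are required by the interface:
+
+```csharp
+using Azure;
+using Azure.Data.Tables;
+
+// Hypothetical Product type implementing the required ITableEntity members.
+public record Product : ITableEntity
+{
+    public string RowKey { get; set; } = default!;
+    public string PartitionKey { get; set; } = default!;
+    public string Name { get; set; } = default!;
+    public int Quantity { get; set; }
+    public ETag ETag { get; set; } = default!;
+    public DateTimeOffset? Timestamp { get; set; }
+}
+```
+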
+You can then pass this object to the ``AddEntityAsync<>`` method to create a server-side item:
++
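+A minimal sketch of that call, assuming an existing ``TableClient`` named ``tableClient`` and the hypothetical ``Product`` type shown above:
+
+```csharp
+// Create the item locally, then add it to the table server-side.
+Product newItem = new Product
+{
+    RowKey = "68719518388",
+    PartitionKey = "gear-surf-surfboards",
+    Name = "Ocean Surfboard",
+    Quantity = 14
+};
+
+await tableClient.AddEntityAsync<Product>(newItem);
+```
+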
+## Next steps
+
+Now that you've created various items, use the next guide to read an item.
+
+> [!div class="nextstepaction"]
+> [Read an item](how-to-dotnet-read-item.md)
cosmos-db How To Dotnet Create Table https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/table/how-to-dotnet-create-table.md
+
+ Title: Create a table in Azure Cosmos DB Table API using .NET
+description: Learn how to create a table in your Azure Cosmos DB Table API account using the .NET SDK
++++
+ms.devlang: csharp
+ Last updated : 07/06/2022+++
+# Create a table in Azure Cosmos DB Table API using .NET
++
+Tables in Azure Cosmos DB Table API are units of management for multiple items. Before you can create or manage items, you must first create a table.
+
+## Name a table
+
+In Azure Cosmos DB, a table is analogous to a table in a relational database.
+
+> [!NOTE]
+> With Table API accounts, when you create your first table, a default database is automatically created in your Azure Cosmos account.
+
+Here are some quick rules when naming a table:
+
+* Keep table names between 3 and 63 characters long.
+* Table names can only contain lowercase letters, numbers, or the dash (-) character.
+* Table names must start with a lowercase letter or number.
+
+## Create a table
+
+To create a table, call one of the following methods:
+
+* [``CreateAsync``](#create-a-table-asynchronously)
+* [``CreateIfNotExistsAsync``](#create-a-table-asynchronously-if-it-doesnt-already-exist)
+
+### Create a table asynchronously
+
+The following example creates a table asynchronously:
++
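+The sample include isn't reproduced in this digest; a minimal sketch, assuming an existing ``TableServiceClient`` named ``tableServiceClient`` and a hypothetical table name:
+
+```csharp
+// Get a reference to the table, then create it on the service.
+TableClient tableClient = tableServiceClient.GetTableClient("adventureworks");
+await tableClient.CreateAsync();
+```
+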
+The [``TableClient.CreateAsync``](/dotnet/api/azure.data.tables.tableclient.createasync) method will throw an exception if a table with the same name already exists.
+
+### Create a table asynchronously if it doesn't already exist
+
+The following example creates a table asynchronously only if it doesn't already exist on the account:
++
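+A minimal sketch, under the same assumptions as the previous example:
+
+```csharp
+// Create the table only if it doesn't already exist; safe to run repeatedly.
+TableClient tableClient = tableServiceClient.GetTableClient("adventureworks");
+await tableClient.CreateIfNotExistsAsync();
+```
+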
+The [``TableClient.CreateIfNotExistsAsync``](/dotnet/api/azure.data.tables.tableclient.createifnotexistsasync) method will only create a new table if it doesn't already exist. This method is useful for avoiding errors if you run the same code multiple times.
+
+## Next steps
+
+Now that you've created a table, use the next guide to create items.
+
+> [!div class="nextstepaction"]
+> [Create an item](how-to-dotnet-create-item.md)
cosmos-db How To Dotnet Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/table/how-to-dotnet-get-started.md
+
+ Title: Get started with Azure Cosmos DB Table API and .NET
+description: Get started developing a .NET application that works with Azure Cosmos DB Table API. This article helps you learn how to set up a project and configure access to an Azure Cosmos DB Table API endpoint.
++++
+ms.devlang: csharp
+ Last updated : 07/06/2022+++
+# Get started with Azure Cosmos DB Table API and .NET
++
+This article shows you how to connect to Azure Cosmos DB Table API using the .NET SDK. Once connected, you can perform operations on tables and items.
+
+[Package (NuGet)](https://www.nuget.org/packages/Azure.Data.Tables/) | [Samples](samples-dotnet.md) | [API reference](/dotnet/api/azure.data.tables) | [Library source code](https://github.com/Azure/azure-sdk-for-net/tree/main/sdk/tables/Azure.Data.Tables) | [Give Feedback](https://github.com/Azure/azure-sdk-for-net/issues) |
+
+## Prerequisites
+
+* An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free).
+* Azure Cosmos DB Table API account. [Create a Table API account](how-to-create-account.md).
+* [.NET 6.0 or later](https://dotnet.microsoft.com/download)
+* [Azure Command-Line Interface (CLI)](/cli/azure/) or [Azure PowerShell](/powershell/azure/)
+
+## Set up your project
+
+### Create the .NET console application
+
+Create a new .NET application by using the [``dotnet new``](/dotnet/core/tools/dotnet-new) command with the **console** template.
+
+```dotnetcli
+dotnet new console
+```
+
+Import the [Azure.Data.Tables](https://www.nuget.org/packages/Azure.Data.Tables) NuGet package using the [``dotnet add package``](/dotnet/core/tools/dotnet-add-package) command.
+
+```dotnetcli
+dotnet add package Azure.Data.Tables
+```
+
+Build the project with the [``dotnet build``](/dotnet/core/tools/dotnet-build) command.
+
+```dotnetcli
+dotnet build
+```
+
+## Connect to Azure Cosmos DB Table API
+
+To connect to the Table API of Azure Cosmos DB, create an instance of the [``TableServiceClient``](/dotnet/api/azure.data.tables.tableserviceclient) class. This class is the starting point to perform all operations against tables. There are two primary ways to connect to a Table API account using the **TableServiceClient** class:
+
+* [Connect with a Table API connection string](#connect-with-a-connection-string)
+* [Connect with Azure Active Directory](#connect-using-the-microsoft-identity-platform)
+
+### Connect with a connection string
+
+The most common constructor for **TableServiceClient** has a single parameter:
+
+| Parameter | Example value | Description |
+| | | |
+| ``connectionString`` | ``COSMOS_CONNECTION_STRING`` environment variable | Connection string to the Table API account |
+
+#### Retrieve your account connection string
+
+##### [Azure CLI](#tab/azure-cli)
+
+1. Use the [``az cosmosdb list``](/cli/azure/cosmosdb#az-cosmosdb-list) command to retrieve the name of the first Azure Cosmos DB account in your resource group and store it in the *accountName* shell variable.
+
+ ```azurecli-interactive
+ # Retrieve most recently created account name
+ accountName=$(
+ az cosmosdb list \
+ --resource-group $resourceGroupName \
+ --query "[0].name" \
+ --output tsv
+ )
+ ```
+
+1. Find the *PRIMARY CONNECTION STRING* from the list of connection strings for the account with the [``az cosmosdb keys list``](/cli/azure/cosmosdb/keys#az-cosmosdb-keys-list) command.
+
+ ```azurecli-interactive
+ az cosmosdb keys list \
+ --resource-group $resourceGroupName \
+ --name $accountName \
+ --type "connection-strings" \
+ --query "connectionStrings[?description == \`Primary Table Connection String\`] | [0].connectionString"
+ ```
+
+##### [PowerShell](#tab/azure-powershell)
+
+1. Use the [``Get-AzCosmosDBAccount``](/powershell/module/az.cosmosdb/get-azcosmosdbaccount) cmdlet to retrieve the name of the first Azure Cosmos DB account in your resource group and store it in the *ACCOUNT_NAME* shell variable.
+
+ ```azurepowershell-interactive
+ # Retrieve most recently created account name
+ $parameters = @{
+ ResourceGroupName = $RESOURCE_GROUP_NAME
+ }
+ $ACCOUNT_NAME = (
+ Get-AzCosmosDBAccount @parameters |
+ Select-Object -Property Name -First 1
+ ).Name
+ ```
+
+1. Find the *PRIMARY CONNECTION STRING* from the list of connection strings for the account with the [``Get-AzCosmosDBAccountKey``](/powershell/module/az.cosmosdb/get-azcosmosdbaccountkey) cmdlet.
+
+ ```azurepowershell-interactive
+ $parameters = @{
+ ResourceGroupName = $RESOURCE_GROUP_NAME
+ Name = $ACCOUNT_NAME
+ Type = "ConnectionStrings"
+ }
+ Get-AzCosmosDBAccountKey @parameters |
+ Select-Object -Property "Primary Table Connection String" -First 1
+ ```
+++
+To use the **PRIMARY CONNECTION STRING** value within your .NET code, persist it to a new environment variable on the local machine running the application.
+
+#### [Windows](#tab/windows)
+
+```powershell
+$env:COSMOS_CONNECTION_STRING = "<cosmos-account-PRIMARY-CONNECTION-STRING>"
+```
+
+#### [Linux / macOS](#tab/linux+macos)
+
+```bash
+export COSMOS_CONNECTION_STRING="<cosmos-account-PRIMARY-CONNECTION-STRING>"
+```
+++
+#### Create TableServiceClient with connection string
+
+Create a new instance of the **TableServiceClient** class with the ``COSMOS_CONNECTION_STRING`` environment variable as the only parameter.
++
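+The sample include isn't reproduced in this digest; a minimal sketch:
+
+```csharp
+using Azure.Data.Tables;
+
+// Read the connection string from the environment variable persisted earlier.
+TableServiceClient client = new TableServiceClient(
+    Environment.GetEnvironmentVariable("COSMOS_CONNECTION_STRING"));
+```
+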
+### Connect using the Microsoft Identity Platform
+
+To connect to your Table API account using the Microsoft Identity Platform and Azure AD, use a security principal. The exact type of principal will depend on where you host your application code. The table below serves as a quick reference guide.
+
+| Where the application runs | Security principal |
+|--|--|
+| Local machine (developing and testing) | User identity or service principal |
+| Azure | Managed identity |
+| Servers or clients outside of Azure | Service principal |
+
+#### Import Azure.Identity
+
+The **Azure.Identity** NuGet package contains core authentication functionality that is shared among all Azure SDK libraries.
+
+Import the [Azure.Identity](https://www.nuget.org/packages/Azure.Identity) NuGet package using the ``dotnet add package`` command.
+
+```dotnetcli
+dotnet add package Azure.Identity
+```
+
+Rebuild the project with the ``dotnet build`` command.
+
+```dotnetcli
+dotnet build
+```
+
+In your code editor, add using directives for ``Azure.Core`` and ``Azure.Identity`` namespaces.
++
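+For reference, the directives look like this:
+
+```csharp
+using Azure.Core;
+using Azure.Identity;
+```
+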
+#### Create TableServiceClient with default credential implementation
+
+If you're testing on a local machine, or your application will run on Azure services with direct support for managed identities, obtain an OAuth token by creating a [``DefaultAzureCredential``](/dotnet/api/azure.identity.defaultazurecredential) instance.
+
+For this example, we saved the instance in a variable of type [``TokenCredential``](/dotnet/api/azure.core.tokencredential) as that's a more generic type that's reusable across SDKs.
++
+Create a new instance of the **TableServiceClient** class with the ``COSMOS_ENDPOINT`` environment variable and the **TokenCredential** object as parameters.
++
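+A minimal sketch of both steps; the ``COSMOS_ENDPOINT`` variable is assumed to hold the account's endpoint URI:
+
+```csharp
+// Save the credential as the more generic TokenCredential so it's reusable across SDKs.
+TokenCredential credential = new DefaultAzureCredential();
+
+// Pass the account endpoint and the credential to the client.
+TableServiceClient client = new TableServiceClient(
+    new Uri(Environment.GetEnvironmentVariable("COSMOS_ENDPOINT")!),
+    credential);
+```
+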
+#### Create TableServiceClient with a custom credential implementation
+
+If you plan to deploy the application out of Azure, you can obtain an OAuth token by using other classes in the [Azure.Identity client library for .NET](/dotnet/api/overview/azure/identity-readme). These other classes also derive from the ``TokenCredential`` class.
+
+For this example, we create a [``ClientSecretCredential``](/dotnet/api/azure.identity.clientsecretcredential) instance by using client and tenant identifiers, along with a client secret.
++
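+A minimal sketch, reading the identifiers and secret from hypothetical environment variables:
+
+```csharp
+// ClientSecretCredential also derives from TokenCredential, so the rest of the code is unchanged.
+TokenCredential credential = new ClientSecretCredential(
+    tenantId: Environment.GetEnvironmentVariable("AAD_TENANT_ID"),
+    clientId: Environment.GetEnvironmentVariable("AAD_CLIENT_ID"),
+    clientSecret: Environment.GetEnvironmentVariable("AAD_CLIENT_SECRET"));
+```
+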
+You can obtain the client ID, tenant ID, and client secret when you register an application in Azure Active Directory (AD). For more information about registering Azure AD applications, see [Register an application with the Microsoft identity platform](../../active-directory/develop/quickstart-register-app.md).
+
+Create a new instance of the **TableServiceClient** class with the ``COSMOS_ENDPOINT`` environment variable and the **TokenCredential** object as parameters.
++
+## Build your application
+
+As you build your application, your code will primarily interact with three types of resources:
+
+* The Table API account, which is the unique top-level namespace for your Azure Cosmos DB data.
+
+* Tables, which contain a set of individual items in your account.
+
+* Items, which represent an individual item in your table.
+
+The following diagram shows the relationship between these resources.
+
+ Hierarchical diagram showing an Azure Cosmos DB account at the top. The account has two child table nodes. One of the table nodes includes two child items.
+
+Each type of resource is represented by one or more associated .NET classes or interfaces. Here's a list of the most common types:
+
+| Class | Description |
+|||
+| [``TableServiceClient``](/dotnet/api/azure.data.tables.tableserviceclient) | This client class provides a client-side logical representation for the Azure Cosmos DB service. The client object is used to configure and execute requests against the service. |
+| [``TableClient``](/dotnet/api/azure.data.tables.tableclient) | This client class is a reference to a table that may, or may not, exist in the service yet. The table is validated server-side when you attempt to access it or perform an operation against it. |
+| [``ITableEntity``](/dotnet/api/azure.data.tables.itableentity) | This interface is the base interface for any items that are created in the table or queried from the table. This interface includes all required properties for items in the Table API. |
+| [``TableEntity``](/dotnet/api/azure.data.tables.tableentity) | This class is a generic implementation of the ``ITableEntity`` interface as a dictionary of key-value pairs. |
+
+The following guides show you how to use each of these classes to build your application.
+
+| Guide | Description |
+|--||
+| [Create a table](how-to-dotnet-create-table.md) | Create tables |
+| [Create an item](how-to-dotnet-create-item.md) | Create items |
+| [Read an item](how-to-dotnet-read-item.md) | Read items |
+
+## See also
+
+* [Package (NuGet)](https://www.nuget.org/packages/Azure.Data.Tables/)
+* [Samples](samples-dotnet.md)
+* [API reference](/dotnet/api/azure.data.tables)
+* [Library source code](https://github.com/Azure/azure-sdk-for-net/tree/main/sdk/tables/Azure.Data.Tables)
+* [Give Feedback](https://github.com/Azure/azure-sdk-for-net/issues)
+
+## Next steps
+
+Now that you've connected to a Table API account, use the next guide to create and manage tables.
+
+> [!div class="nextstepaction"]
+> [Create a table in Azure Cosmos DB Table API using .NET](how-to-dotnet-create-table.md)
cosmos-db How To Dotnet Read Item https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/table/how-to-dotnet-read-item.md
+
+ Title: Read an item in Azure Cosmos DB Table API using .NET
+description: Learn how to read an item in your Azure Cosmos DB Table API account using the .NET SDK
++++
+ms.devlang: csharp
+ Last updated : 07/06/2022+++
+# Read an item in Azure Cosmos DB Table API using .NET
++
+An item in Azure Cosmos DB represents a specific entity stored within a table. In the Table API, an item consists of a set of key-value pairs uniquely identified by the composite of the row and partition keys.
+
+## Reading items using the composite key
+
+Every item in Azure Cosmos DB Table API has a unique identifier specified by the composite of the **row** and **partition** keys. These composite keys are stored as the ``RowKey`` and ``PartitionKey`` properties respectively. Within the scope of a table, two items can't share the same unique identifier composite.
+
+Azure Cosmos DB requires both the unique identifier and the partition key value of an item to perform a read of the item. Specifically, providing the composite key will perform a quick *point read* of that item with a predictable cost in request units (RUs).
+
+## Read an item
+
+To perform a point read of an item, use one of the following strategies:
+
+* [Return a ``TableEntity`` object using ``GetEntityAsync<>``](#read-an-item-using-a-built-in-class)
+* [Return an object of your own type using ``GetEntityAsync<>``](#read-an-item-using-your-own-type)
+
+### Read an item using a built-in class
+
+The following example point reads a single item asynchronously and returns the results deserialized into a dictionary using the built-in [``TableEntity``](/dotnet/api/azure.data.tables.tableentity) type:
++
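+A minimal sketch, assuming an existing ``TableClient`` named ``tableClient`` and hypothetical key values:
+
+```csharp
+// Point read: both the partition key and the row key are required.
+// Assumes: using Azure; using Azure.Data.Tables;
+Response<TableEntity> response = await tableClient.GetEntityAsync<TableEntity>(
+    partitionKey: "gear-surf-surfboards",
+    rowKey: "68719518388");
+
+TableEntity entity = response.Value;
+```
+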
+The [``TableClient.GetEntityAsync<TableEntity>``](/dotnet/api/azure.data.tables.tableclient.getentityasync) method reads an item and returns an object of type [``Response<TableEntity>``](/dotnet/api/azure.response-1). The **Response\<\>** type contains an implicit conversion operator to convert the object into a ``TableEntity`` object.
+
+### Read an item using your own type
+
+> [!NOTE]
+> The examples in this section assume that you have already defined a C# type to represent your data named **Product**:
+>
+> :::code language="csharp" source="~/azure-cosmos-db-table-dotnet-v12/276-read-item-itableentity/Product.cs" id="type":::
+>
+
+The following example point reads a single item asynchronously and returns a deserialized item using the provided generic type:
++
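+A minimal sketch, under the same assumptions as the previous example and using the hypothetical ``Product`` type from the note above; the implicit conversion described below turns the ``Response<Product>`` into a ``Product``:
+
+```csharp
+// The generic type must implement ITableEntity; the Response<Product> result
+// converts implicitly to Product.
+Product item = await tableClient.GetEntityAsync<Product>(
+    partitionKey: "gear-surf-surfboards",
+    rowKey: "68719518388");
+```
+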
+> [!IMPORTANT]
+> The generic type you use with the **TableClient.GetEntityAsync\<\>** method must implement the [``ITableEntity`` interface](/dotnet/api/azure.data.tables.itableentity).
+
+The [``TableClient.GetEntityAsync<>``](/dotnet/api/azure.data.tables.tableclient.getentityasync) method reads an item and returns an object of type [``Response<>``](/dotnet/api/azure.response-1). The **Response\<\>** type contains an implicit conversion operator to convert the object into the generic type. To learn more about implicit operators, see [user-defined conversion operators](/dotnet/csharp/language-reference/operators/user-defined-conversion-operators).
+
+## Next steps
+
+Now that you've read various items, try one of our tutorials on querying Azure Cosmos DB Table API data.
+
+> [!div class="nextstepaction"]
+> [Query Azure Cosmos DB by using the Table API](tutorial-query-table.md)
cosmos-db Samples Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/table/samples-dotnet.md
+
+ Title: Examples for Azure Cosmos DB Table API SDK for .NET
+description: Find .NET SDK examples on GitHub for common tasks using the Azure Cosmos DB Table API.
++++
+ms.devlang: csharp
+ Last updated : 07/06/2022+++
+# Examples for Azure Cosmos DB Table API SDK for .NET
++
+> [!div class="op_single_selector"]
+>
+> * [.NET](samples-dotnet.md)
+>
+
+The [cosmos-db-table-api-dotnet-samples](https://github.com/azure-samples/cosmos-db-table-api-dotnet-samples) GitHub repository includes multiple sample projects. These projects illustrate how to perform common operations on Azure Cosmos DB Table API resources.
+
+## Prerequisites
+
+* An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free).
+* Azure Cosmos DB Table API account. [Create a Table API account](how-to-create-account.md).
+* [.NET 6.0 or later](https://dotnet.microsoft.com/download)
+* [Azure Command-Line Interface (CLI)](/cli/azure/) or [Azure PowerShell](/powershell/azure/)
+
+## Samples
+
+The sample projects are all self-contained and are designed to be run individually without any dependencies between projects.
+
+### Client
+
+| Task | API reference |
+| : | : |
+| [Create a client with connection string](https://github.com/azure-samples/cosmos-db-table-api-dotnet-samples/blob/v12/101-client-connection-string/Program.cs#L11-L13) | [``TableServiceClient(string)``](/dotnet/api/azure.data.tables.tableserviceclient.-ctor#azure-data-tables-tableserviceclient-ctor(system-string)) |
+| [Create a client with ``DefaultAzureCredential``](https://github.com/azure-samples/cosmos-db-table-api-dotnet-samples/blob/v12/102-client-default-credential/Program.cs#L20-L23) | [``TableServiceClient(Uri, TokenCredential)``](/dotnet/api/azure.data.tables.tableserviceclient.-ctor#azure-data-tables-tableserviceclient-ctor(system-uri-azure-azuresascredential)) |
+| [Create a client with custom ``TokenCredential``](https://github.com/azure-samples/cosmos-db-table-api-dotnet-samples/blob/v12/103-client-secret-credential/Program.cs#L25-L28) | [``TableServiceClient(Uri, TokenCredential)``](/dotnet/api/azure.data.tables.tableserviceclient.-ctor#azure-data-tables-tableserviceclient-ctor(system-uri-azure-azuresascredential)) |
+
+### Tables
+
+| Task | API reference |
+| : | : |
+| [Create a table](https://github.com/azure-samples/cosmos-db-table-api-dotnet-samples/blob/v12/200-create-table/Program.cs#L18-L22) | [``TableClient.CreateIfNotExistsAsync``](/dotnet/api/azure.data.tables.tableclient.createifnotexistsasync) |
+
+### Items
+
+| Task | API reference |
+| : | : |
+| [Create an item using TableEntity](https://github.com/azure-samples/cosmos-db-table-api-dotnet-samples/blob/v12/250-create-item-tableentity/Program.cs#L25-L36) | [``TableClient.AddEntityAsync<>``](/dotnet/api/azure.data.tables.tableclient.addentityasync#azure-data-tables-tableclient-addentityasync-1(-0-system-threading-cancellationtoken)) |
+| [Create an item using ITableEntity](https://github.com/azure-samples/cosmos-db-table-api-dotnet-samples/blob/v12/251-create-item-itableentity/Program.cs#L25-L37) | [``TableClient.AddEntityAsync<>``](/dotnet/api/azure.data.tables.tableclient.addentityasync#azure-data-tables-tableclient-addentityasync-1(-0-system-threading-cancellationtoken)) |
+| [Point read an item](https://github.com/Azure-Samples/cosmos-db-table-api-dotnet-samples/blob/v12/276-read-item-itableentity/Program.cs#L42-L45) | [``TableClient.GetEntityAsync<>``](/dotnet/api/azure.data.tables.tableclient.getentityasync) |
+
+## Next steps
+
+Dive deeper into the SDK to read data and manage your Azure Cosmos DB Table API resources.
+
+> [!div class="nextstepaction"]
+> [Get started with Azure Cosmos DB Table API and .NET](how-to-dotnet-get-started.md)
databox Data Box Deploy Export Ordered https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox/data-box-deploy-export-ordered.md
Previously updated : 10/29/2021 Last updated : 06/16/2022 #Customer intent: As an IT admin, I need to be able to export data from Azure to another location, such as, another cloud provider or my location.
This sample XML file includes examples of each XML tag that is used to select bl
<!--AzureFileList selects individual files (FilePath) and multiple files (FilePathPrefix) in Azure File storage for export.--> <AzureFileList> <FilePath>/fileshare1/file.txt</FilePath> <!-- Exports /fileshare1/file.txt -->
- <FilePathPrefix>/fileshare1/</FilePath> <!-- Exports all directories and files in fileshare1 -->
+ <FilePathPrefix>/fileshare1/</FilePathPrefix> <!-- Exports all directories and files in fileshare1 -->
<FilePathPrefix>/fileshare</FilePathPrefix> <!-- Exports all directories and files in any fileshare with prefix: "fileshare" --> <FilePathPrefix>/fileshare2/contosowest</FilePathPrefix> <!-- Exports all directories and files in fileshare2 with prefix: "contosowest" --> </AzureFileList>
defender-for-cloud Alerts Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/alerts-reference.md
Microsoft Defender for Containers provides security alerts on the cluster level
<sup><a name="footnote1"></a>1</sup>: **Preview for non-AKS clusters**: This alert is generally available for AKS clusters, but it is in preview for other environments, such as Azure Arc, EKS and GKE.
-<sup><a name="footnote2"></a>2</sup>: **Limitations on GKE clusters**: GKE uses a Kuberenetes audit policy that doesn't support all alert types. As a result, this security alert, which is based on Kubernetes audit events, are not supported for GKE clusters.
+<sup><a name="footnote2"></a>2</sup>: **Limitations on GKE clusters**: GKE uses a Kubernetes audit policy that doesn't support all alert types. As a result, this security alert, which is based on Kubernetes audit events, is not supported for GKE clusters.
<sup><a name="footnote3"></a>3</sup>: This alert is supported on Windows nodes/containers.
defender-for-cloud Enable Data Collection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/enable-data-collection.md
To enable auto provisioning of the Log Analytics agent:
1. From Defender for Cloud's menu, open **Environment settings**. 1. Select the workspace to which you'll be connecting the agents.
- 1. Select **Enhanced security off** or **Enable all Microsoft Defender plans**.
+ 1. Set **Security posture management** to **On**, or select **Enable all** to turn on all Microsoft Defender plans.
1. From the **Windows security events** configuration, select the amount of raw event data to store: - **None** ΓÇô Disable security event storage. This is the default setting.
defender-for-cloud Onboard Management Group https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/onboard-management-group.md
Title: Onboard a management group to Microsoft Defender for Cloud description: Learn how to use a supplied Azure Policy definition to enable Microsoft Defender for Cloud for all the subscriptions in a management group. Previously updated : 04/25/2022 Last updated : 07/07/2022 # Enable Defender for Cloud on all subscriptions in a management group
You can use Azure Policy to enable Microsoft Defender for Cloud on all the Azure
To onboard a management group and all its subscriptions:
-1. As a user with **Security Admin** permissions, open Azure Policy and search for the definition `Enable Azure Security Center on your subscription`.
+1. As a user with **Security Admin** permissions, open Azure Policy and search for the definition `Enable Microsoft Defender for Cloud on your subscription`.
:::image type="content" source="./media/get-started/enable-microsoft-defender-for-cloud-policy.png" alt-text="Screenshot showing the Azure Policy definition Enable Defender for Cloud on your subscription." lightbox="media/get-started/enable-microsoft-defender-for-cloud-policy-extended.png":::
defender-for-cloud Other Threat Protections https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/other-threat-protections.md
Azure Application Gateway offers a web application firewall (WAF) that provides
Web applications are increasingly targeted by malicious attacks that exploit commonly known vulnerabilities. The Application Gateway WAF is based on Core Rule Set 3.0 or 2.2.9 from the Open Web Application Security Project. The WAF is updated automatically to protect against new vulnerabilities.
-If you have a license for Azure WAF, your WAF alerts are streamed to Defender for Cloud with no additional configuration needed. For more information on the alerts generated by WAF, see [Web application firewall CRS rule groups and rules](../web-application-firewall/ag/application-gateway-crs-rulegroups-rules.md?tabs=owasp31#crs911-31).
+If you have created a [WAF security solution](partner-integration.md#add-data-sources), your WAF alerts are streamed to Defender for Cloud with no additional configuration needed. For more information on the alerts generated by WAF, see [Web application firewall CRS rule groups and rules](../web-application-firewall/ag/application-gateway-crs-rulegroups-rules.md?tabs=owasp31#crs911-31).
### Display Azure DDoS Protection alerts in Defender for Cloud <a name="azure-ddos"></a>
defender-for-cloud Quickstart Enable Database Protections https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/quickstart-enable-database-protections.md
You can enable database protection on your subscription, or exclude specific dat
## Prerequisites -- You must have Subscription Owner access
+You must have:
+
+- [Subscription Owner](../role-based-access-control/built-in-roles.md#owner) access.
- An Azure account. If you don't already have an Azure account, you can [create your Azure free account today](https://azure.microsoft.com/free/). ## Enable database protection on your subscription
defender-for-cloud Supported Machines Endpoint Solutions Clouds Containers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/supported-machines-endpoint-solutions-clouds-containers.md
Title: Microsoft Defender for Containers feature availability description: Learn about the availability of Microsoft Defender for Cloud containers features according to OS, machine type, and cloud deployment. Previously updated : 06/29/2022 Last updated : 07/07/2022
The **tabs** below show the features that are available, by environment, for Mic
| Hardening | Control plane recommendations | ACR, AKS | GA | GA | Agentless | Free | Commercial clouds<br><br> National clouds: Azure Government, Azure China 21Vianet | | Hardening | Kubernetes data plane recommendations | AKS | GA | - | Azure Policy | Free | Commercial clouds<br><br> National clouds: Azure Government, Azure China 21Vianet | | Runtime protection| Threat detection (control plane)| AKS | GA | GA | Agentless | Defender for Containers | Commercial clouds<br><br> National clouds: Azure Government, Azure China 21Vianet |
-| Runtime protection| Threat detection (workload) | AKS | Preview | - | Defender profile | Defender for Containers | Commercial clouds |
+| Runtime protection| Threat detection (workload) | AKS | GA | - | Defender profile | Defender for Containers | Commercial clouds |
| Discovery and provisioning | Discovery of unprotected clusters | AKS | GA | GA | Agentless | Free | Commercial clouds<br><br> National clouds: Azure Government, Azure China 21Vianet | | Discovery and provisioning | Collection of control plane threat data | AKS | GA | GA | Agentless | Defender for Containers | Commercial clouds<br><br> National clouds: Azure Government, Azure China 21Vianet |
-| Discovery and provisioning | Auto provisioning of Defender profile | AKS | Preview | - | Agentless | Defender for Containers | Commercial clouds<br><br> National clouds: Azure Government, Azure China 21Vianet |
+| Discovery and provisioning | Auto provisioning of Defender profile | AKS | GA | - | Agentless | Defender for Containers | Commercial clouds<br><br> National clouds: Azure Government, Azure China 21Vianet |
| Discovery and provisioning | Auto provisioning of Azure policy add-on | AKS | GA | - | Agentless | Free | Commercial clouds<br><br> National clouds: Azure Government, Azure China 21Vianet | <sup><a name="footnote1"></a>1</sup> Specific features are in preview. The [Azure Preview Supplemental Terms](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
defender-for-cloud Supported Machines Endpoint Solutions Clouds Servers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/supported-machines-endpoint-solutions-clouds-servers.md
The **tabs** below show the features of Microsoft Defender for Cloud that are av
| [Virtual machine behavioral analytics (and security alerts)](alerts-reference.md) | Γ£ö | Γ£ö | | [Fileless security alerts](alerts-reference.md#alerts-windows) | Γ£ö | Γ£ö | | [Network-based security alerts](other-threat-protections.md#network-layer) | - | - |
-| [Just-in-time VM access](just-in-time-access-usage.md) | Γ£ö (Preview) | - |
+| [Just-in-time VM access](just-in-time-access-usage.md) | Γ£ö | - |
| [Integrated Qualys vulnerability scanner](deploy-vulnerability-assessment-vm.md#overview-of-the-integrated-vulnerability-scanner) | Γ£ö | Γ£ö | | [File integrity monitoring](file-integrity-monitoring-overview.md) | Γ£ö | Γ£ö | | [Adaptive application controls](adaptive-application-controls.md) | Γ£ö | Γ£ö |
defender-for-iot Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/release-notes.md
Title: What's new in Microsoft Defender for IoT description: This article lets you know what's new in the latest release of Defender for IoT. Previously updated : 07/05/2022 Last updated : 07/07/2022 # What's new in Microsoft Defender for IoT?
For more information, see the [Microsoft Security Development Lifecycle practice
| Version | Date released | End support date | |--|--|--| | 22.2.3 | 07/2022 | 4/2023 |
+| 22.1.6 | 06/2022 | 10/2023 |
| 22.1.5 | 06/2022 | 10/2023 | | 22.1.4 | 04/2022 | 10/2022 | | 22.1.3 | 03/2022 | 10/2022 |
For more information, see the [Microsoft Security Development Lifecycle practice
|Service area |Updates | ||| |**Enterprise IoT networks** | - [Enterprise IoT purchase experience and Defender for Endpoint integration in GA](#enterprise-iot-purchase-experience-and-defender-for-endpoint-integration-in-ga) |
-|**OT networks** |Sensor software version 22.2.3:<br><br>- [PCAP access from the Azure portal](#pcap-access-from-the-azure-portal-public-preview)<br>- [Bi-directional alert synch between sensors and the Azure portal](#bi-directional-alert-synch-between-sensors-and-the-azure-portal-public-preview)<br>- [Support diagnostic log enhancements](#support-diagnostic-log-enhancements-public-preview)<br>- [Improved security for uploading protocol plugins](#improved-security-for-uploading-protocol-plugins)<br><br>To update to version 22.2.3:<br>- From version 22.1.x, update directly to version 22.2.3<br>- From version 10.x, first update to version 21.1.6, and then update again to 22.2.3<br><br>For more information, see [Update Defender for IoT OT monitoring software](update-ot-software.md). |
+|**OT networks** |**Sensor software version 22.2.3**:<br><br>- [PCAP access from the Azure portal](#pcap-access-from-the-azure-portal-public-preview)<br>- [Bi-directional alert synch between sensors and the Azure portal](#bi-directional-alert-synch-between-sensors-and-the-azure-portal-public-preview)<br>- [Support diagnostic log enhancements](#support-diagnostic-log-enhancements-public-preview)<br>- [Improved security for uploading protocol plugins](#improved-security-for-uploading-protocol-plugins)<br><br>To update to version 22.2.3:<br>- From version 22.1.x, update directly to version 22.2.3<br>- From version 10.x, first update to version 21.1.6, and then update again to 22.2.3<br><br>For more information, see [Update Defender for IoT OT monitoring software](update-ot-software.md). |
|**Cloud-only features** | - [Microsoft Sentinel incident synch with Defender for IoT alerts](#microsoft-sentinel-incident-synch-with-defender-for-iot-alerts) | - ### Enterprise IoT purchase experience and Defender for Endpoint integration in GA Defender for IoTΓÇÖs new purchase experience and the Enterprise IoT integration with Microsoft Defender for Endpoint is now in General Availability (GA). With this update, we've made the following updates and improvements:
For more information, see:
- [Upload a diagnostics log for support](how-to-manage-sensors-on-the-cloud.md#upload-a-diagnostics-log-for-support-public-preview) + ### Improved security for uploading protocol plugins This version of the sensor provides an improved security for uploading proprietary plugins you've created using the Horizon SDK.
For more information, see:
## June 2022
-**Sensor software version**: 22.1.5
+- **Sensor software version 22.1.6**: Minor version with maintenance updates for internal sensor components
+
+- **Sensor software version 22.1.5**: Minor version to improve TI installation packages and software updates
-We've recently optimized and enhanced our documentation as follows:
+We've also recently optimized and enhanced our documentation as follows:
- [Updated appliance catalog for OT environments](#updated-appliance-catalog-for-ot-environments) - [Documentation reorganization for end-user organizations](#documentation-reorganization-for-end-user-organizations)
event-hubs Event Hubs Capture Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/event-hubs-capture-overview.md
You can configure Capture at the event hub creation time using the [Azure portal
> [!NOTE] > If you enable the Capture feature for an existing event hub, the feature captures events that arrive at the event hub **after** the feature is turned on. It doesn't capture events that existed in the event hub before the feature was turned on.
-## Exploring the captured files and working with Avro
-
-Event Hubs Capture creates files in Avro format, as specified on the configured time window. You can view these files in any tool such as [Azure Storage Explorer][Azure Storage Explorer]. You can download the files locally to work on them.
-
-The files produced by Event Hubs Capture have the following Avro schema:
-
-![Avro schema][3]
-
-An easy way to explore Avro files is by using the [Avro Tools][Avro Tools] jar from Apache. You can also use [Apache Drill][Apache Drill] for a lightweight SQL-driven experience or [Apache Spark][Apache Spark] to perform complex distributed processing on the ingested data.
-
-### Use Apache Drill
-
-[Apache Drill][Apache Drill] is an "open-source SQL query engine for Big Data exploration" that can query structured and semi-structured data wherever it is. The engine can run as a standalone node or as a huge cluster for great performance.
-
-A native support to Azure Blob storage is available, which makes it easy to query data in an Avro file, as described in the documentation:
-
-[Apache Drill: Azure Blob Storage Plugin][Apache Drill: Azure Blob Storage Plugin]
-
-To easily query captured files, you can create and execute a VM with Apache Drill enabled via a container to access Azure Blob storage. See the following sample: [Streaming at Scale with Event Hubs Capture](https://github.com/Azure-Samples/streaming-at-scale/tree/main/eventhubs-capture-databricks-delta).
-
-### Use Apache Spark
-
-[Apache Spark][Apache Spark] is a "unified analytics engine for large-scale data processing." It supports different languages, including SQL, and can easily access Azure Blob storage. There are a few options to run Apache Spark in Azure, and each provides easy access to Azure Blob storage:
--- [HDInsight: Address files in Azure storage][HDInsight: Address files in Azure storage]-- [Azure Databricks: Azure Blob storage][Azure Databricks: Azure Blob Storage]-- [Azure Kubernetes Service](../aks/spark-job.md) -
-### Use Avro Tools
-
-[Avro Tools][Avro Tools] are available as a jar package. After you download the jar file, you can see the schema of a specific Avro file by running the following command:
-
-```shell
-java -jar avro-tools-1.9.1.jar getschema <name of capture file>
-```
-
-This command returns
-
-```json
-{
-
- "type":"record",
- "name":"EventData",
- "namespace":"Microsoft.ServiceBus.Messaging",
- "fields":[
- {"name":"SequenceNumber","type":"long"},
- {"name":"Offset","type":"string"},
- {"name":"EnqueuedTimeUtc","type":"string"},
- {"name":"SystemProperties","type":{"type":"map","values":["long","double","string","bytes"]}},
- {"name":"Properties","type":{"type":"map","values":["long","double","string","bytes"]}},
- {"name":"Body","type":["null","bytes"]}
- ]
-}
-```
-
-You can also use Avro Tools to convert the file to JSON format and perform other processing.
-
-To perform more advanced processing, download and install Avro for your choice of platform. At the time of this writing, there are implementations available for C, C++, C\#, Java, NodeJS, Perl, PHP, Python, and Ruby.
-
-Apache Avro has complete Getting Started guides for [Java][Java] and [Python][Python]. You can also read the [Getting started with Event Hubs Capture](event-hubs-capture-python.md) article.
- ## How Event Hubs Capture is charged Event Hubs Capture is metered similarly to [throughput units](event-hubs-scalability.md#throughput-units) (standard tier) or [processing units](event-hubs-scalability.md#processing-units) (in premium tier): as an hourly charge. The charge is directly proportional to the number of throughput units or processing units purchased for the namespace. As throughput units or processing units are increased and decreased, Event Hubs Capture meters increase and decrease to provide matching performance. The meters occur in tandem. For pricing details, see [Event Hubs pricing](https://azure.microsoft.com/pricing/details/event-hubs/).
Event Hubs Capture is metered similarly to [throughput units](event-hubs-scalabi
Capture doesn't consume egress quota as it is billed separately. ## Integration with Event Grid - You can create an Azure Event Grid subscription with an Event Hubs namespace as its source. The following tutorial shows you how to create an Event Grid subscription with an event hub as a source and an Azure Functions app as a sink: [Process and migrate captured Event Hubs data to Azure Synapse Analytics using Event Grid and Azure Functions](store-captured-data-data-warehouse.md).
+## Explore captured files
+To learn how to explore captured Avro files, see [Explore captured Avro files](explore-captured-avro-files.md).
+ ## Next steps Event Hubs Capture is the easiest way to get data into Azure. Using Azure Data Lake, Azure Data Factory, and Azure HDInsight, you can perform batch processing and other analytics using familiar tools and platforms of your choosing, at any scale you need.
event-hubs Event Hubs Features https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/event-hubs-features.md
Published events are removed from an event hub based on a configurable, timed-ba
Event Hubs retains events for a configured retention time that applies across all partitions. Events are automatically removed when the retention period has been reached. If you specify a retention period of one day, the event will
-become unavailable exactly 24 hours after it has been accepted. You cannot
+become unavailable exactly 24 hours after it has been accepted. You can't
explicitly delete events. If you need to archive events beyond the allowed
Event Hubs enables granular control over event publishers through *publisher pol
//<my namespace>.servicebus.windows.net/<event hub name>/publishers/<my publisher name> ```
-You don't have to create publisher names ahead of time, but they must match the SAS token used when publishing an event, in order to ensure independent publisher identities. When using publisher policies, the **PartitionKey** value is set to the publisher name. To work properly, these values must match.
+You don't have to create publisher names ahead of time, but they must match the SAS token used when publishing an event, in order to ensure independent publisher identities. When you use publisher policies, the **PartitionKey** value needs to be set to the publisher name. To work properly, these values must match.
## Capture
The files produced by Event Hubs Capture have the following Avro schema:
:::image type="content" source="./media/event-hubs-capture-overview/event-hubs-capture3.png" alt-text="Image showing the structure of captured data":::
+> [!NOTE]
+> When you use the no-code editor in the Azure portal, you can capture streaming data from Event Hubs into an Azure Data Lake Storage Gen2 account in the **Parquet** format. For more information, see [How to: capture data from Event Hubs in Parquet format](../stream-analytics/capture-event-hub-data-parquet.md?toc=%2Fazure%2Fevent-hubs%2Ftoc.json) and [Tutorial: capture Event Hubs data in Parquet format and analyze with Azure Synapse Analytics](../stream-analytics/event-hubs-parquet-capture-tutorial.md?toc=%2Fazure%2Fevent-hubs%2Ftoc.json).
+ ## Partitions [!INCLUDE [event-hubs-partitions](./includes/event-hubs-partitions.md)] ## SAS tokens
-Event Hubs uses *Shared Access Signatures*, which are available at the namespace and event hub level. A SAS token is generated from a SAS key and is an SHA hash of a URL, encoded in a specific format. Using the name of the key (policy) and the token, Event Hubs can regenerate the hash and thus authenticate the sender. Normally, SAS tokens for event publishers are created with only **send** privileges on a specific event hub. This SAS token URL mechanism is the basis for publisher identification introduced in the publisher policy. For more information about working with SAS, see [Shared Access Signature Authentication with Service Bus](../service-bus-messaging/service-bus-sas.md).
+Event Hubs uses *Shared Access Signatures*, which are available at the namespace and event hub level. A SAS token is generated from a SAS key and is an SHA hash of a URL, encoded in a specific format. Event Hubs can regenerate the hash by using the name of the key (policy) and the token and thus authenticate the sender. Normally, SAS tokens for event publishers are created with only **send** privileges on a specific event hub. This SAS token URL mechanism is the basis for publisher identification introduced in the publisher policy. For more information about working with SAS, see [Shared Access Signature Authentication with Service Bus](../service-bus-messaging/service-bus-sas.md).
## Event consumers
-Any entity that reads event data from an event hub is an *event consumer*. All Event Hubs consumers connect via the AMQP 1.0 session and events are delivered through the session as they become available. The client does not need to poll for data availability.
+Any entity that reads event data from an event hub is an *event consumer*. All Event Hubs consumers connect via the AMQP 1.0 session and events are delivered through the session as they become available. The client doesn't need to poll for data availability.
### Consumer groups
event-hubs Explore Captured Avro Files https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/explore-captured-avro-files.md
+
+ Title: Exploring captured Avro files in Azure Event Hubs
+description: This article provides the schema of Avro files captured by Azure Event Hubs and a list of tools to explore them.
+ Last updated : 07/06/2022++
+# Exploring captured Avro files in Azure Event Hubs
+This article provides the schema for Avro files captured by Azure Event Hubs and a few tools to explore the files.
+
+## Schema
+The Avro files produced by Event Hubs Capture have the following Avro schema:
++
+## Azure Storage Explorer
+You can view captured files in any tool such as [Azure Storage Explorer][Azure Storage Explorer]. You can download files locally to work on them.
+
+An easy way to explore Avro files is by using the [Avro Tools][Avro Tools] jar from Apache. You can also use [Apache Drill][Apache Drill] for a lightweight SQL-driven experience or [Apache Spark][Apache Spark] to perform complex distributed processing on the ingested data.
+
+## Use Apache Drill
+[Apache Drill][Apache Drill] is an "open-source SQL query engine for Big Data exploration" that can query structured and semi-structured data wherever it is. The engine can run as a standalone node or as a huge cluster for great performance.
+
+Native support for Azure Blob storage is available, which makes it easy to query data in an Avro file, as described in the documentation:
+
+[Apache Drill: Azure Blob Storage Plugin][Apache Drill: Azure Blob Storage Plugin]
+
+To easily query captured files, you can create and execute a VM with Apache Drill enabled via a container to access Azure Blob storage. See the following sample: [Streaming at Scale with Event Hubs Capture](https://github.com/Azure-Samples/streaming-at-scale/tree/main/eventhubs-capture-databricks-delta).
+
+## Use Apache Spark
+[Apache Spark][Apache Spark] is a "unified analytics engine for large-scale data processing." It supports different languages, including SQL, and can easily access Azure Blob storage. There are a few options to run Apache Spark in Azure, and each provides easy access to Azure Blob storage:
+
+- [HDInsight: Address files in Azure storage][HDInsight: Address files in Azure storage]
+- [Azure Databricks: Azure Blob storage][Azure Databricks: Azure Blob Storage]
+- [Azure Kubernetes Service](../aks/spark-job.md)
+
+## Use Avro Tools
+
+[Avro Tools][Avro Tools] are available as a jar package. After you download the jar file, you can see the schema of a specific Avro file by running the following command:
+
+```shell
+java -jar avro-tools-1.9.1.jar getschema <name of capture file>
+```
+
+This command returns:
+
+```json
+{
+
+ "type":"record",
+ "name":"EventData",
+ "namespace":"Microsoft.ServiceBus.Messaging",
+ "fields":[
+ {"name":"SequenceNumber","type":"long"},
+ {"name":"Offset","type":"string"},
+ {"name":"EnqueuedTimeUtc","type":"string"},
+ {"name":"SystemProperties","type":{"type":"map","values":["long","double","string","bytes"]}},
+ {"name":"Properties","type":{"type":"map","values":["long","double","string","bytes"]}},
+ {"name":"Body","type":["null","bytes"]}
+ ]
+}
+```
+
+You can also use Avro Tools to convert the file to JSON format and perform other processing.
+
+To perform more advanced processing, download and install Avro for your choice of platform. At the time of this writing, there are implementations available for C, C++, C\#, Java, NodeJS, Perl, PHP, Python, and Ruby.
+
+Apache Avro has complete Getting Started guides for [Java][Java] and [Python][Python]. You can also read the [Getting started with Event Hubs Capture](event-hubs-capture-python.md) article.
+
+## Next steps
+Event Hubs Capture is the easiest way to get data into Azure. Using Azure Data Lake, Azure Data Factory, and Azure HDInsight, you can perform batch processing and other analytics using familiar tools and platforms of your choosing, at any scale you need. See the following articles to learn more about this feature.
+
+- [Event Hubs Capture overview](event-hubs-capture-overview.md)
+- [Use the Azure portal to enable Event Hubs Capture](event-hubs-capture-enable-through-portal.md)
+- [Use an Azure Resource Manager template to enable Event Hubs Capture](event-hubs-resource-manager-namespace-event-hub-enable-capture.md)
++
+[Apache Avro]: https://avro.apache.org/
+[Apache Drill]: https://drill.apache.org/
+[Apache Spark]: https://spark.apache.org/
+[support request]: https://portal.azure.com/?#blade/Microsoft_Azure_Support/HelpAndSupportBlade
+[Azure Storage Explorer]: https://github.com/microsoft/AzureStorageExplorer/releases
+[Avro Tools]: https://downloads.apache.org/avro/stable/java/
+[Java]: https://avro.apache.org/docs/current/gettingstartedjava.html
+[Python]: https://avro.apache.org/docs/current/gettingstartedpython.html
+[Event Hubs overview]: ./event-hubs-about.md
+[HDInsight: Address files in Azure storage]: ../hdinsight/hdinsight-hadoop-use-blob-storage.md
+[Azure Databricks: Azure Blob Storage]:https://docs.databricks.com/spark/latest/data-sources/azure/azure-storage.html
+[Apache Drill: Azure Blob Storage Plugin]:https://drill.apache.org/docs/azure-blob-storage-plugin/
+[Streaming at Scale: Event Hubs Capture]:https://github.com/yorek/streaming-at-scale/tree/master/event-hubs-capture
event-hubs Transport Layer Security Configure Minimum Version https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/transport-layer-security-configure-minimum-version.md
Title: Configure the minimum TLS version for an Event Hubs namespace using ARM
+ Title: Configure the minimum TLS version for an Event Hubs namespace
description: Configure an Azure Event Hubs namespace to use a minimum version of Transport Layer Security (TLS).
Previously updated : 04/25/2022 Last updated : 07/06/2022
-# Configure the minimum TLS version for an Event Hubs namespace using ARM (Preview)
+# Configure the minimum TLS version for an Event Hubs namespace (Preview)
-To configure the minimum TLS version for an Event Hubs namespace, set the `MinimumTlsVersion` version property. When you create an Event Hubs namespace with an Azure Resource Manager template, the `MinimumTlsVersion` property is set to 1.2 by default, unless explicitly set to another version.
+Azure Event Hubs namespaces permit clients to send and receive data with TLS 1.0 and above. To enforce stricter security measures, you can configure your Event Hubs namespace to require that clients send and receive data with a newer version of TLS. If an Event Hubs namespace requires a minimum version of TLS, then any requests made with an older version will fail. For conceptual information about this feature, see [Enforce a minimum required version of Transport Layer Security (TLS) for requests to an Event Hubs namespace](transport-layer-security-enforce-minimum-version.md).
-> [!NOTE]
-> Namespaces created using an api-version prior to 2022-01-01-preview will have 1.0 as the value for `MinimumTlsVersion`. This behavior was the prior default, and is still there for backwards compatibility.
+You can configure the minimum TLS version using the Azure portal or Azure Resource Manager (ARM) template.
+
+## Specify the minimum TLS version in the Azure portal
+You can specify the minimum TLS version when creating an Event Hubs namespace in the Azure portal on the **Advanced** tab.
++
+You can also specify the minimum TLS version for an existing namespace on the **Configuration** page.
+ ## Create a template to configure the minimum TLS version
-To configure the minimum TLS version for an Event Hubs namespace with a template, create a template with the `MinimumTlsVersion` property set to 1.0, 1.1, or 1.2. The following steps describe how to create a template in the Azure portal.
+To configure the minimum TLS version for an Event Hubs namespace with a template, create a template with the `MinimumTlsVersion` property set to 1.0, 1.1, or 1.2. When you create an Event Hubs namespace with an Azure Resource Manager template, the `MinimumTlsVersion` property is set to 1.2 by default, unless explicitly set to another version.
+
+> [!NOTE]
+> Namespaces created using an api-version prior to 2022-01-01-preview will have 1.0 as the value for `MinimumTlsVersion`. This behavior was the prior default, and is still there for backwards compatibility.
+
+The following steps describe how to create a template in the Azure portal.
1. In the Azure portal, choose **Create a resource**.
2. In **Search the Marketplace**, type **custom deployment**, and then press **ENTER**.
Configuring the minimum TLS version requires api-version 2022-01-01-preview or l
## Check the minimum required TLS version for a namespace
-To check the minimum required TLS version for your Event Hubs namespace, you can query the Azure Resource Manager API. You will need a Bearer token to query against the API, which you can retrieve using [ARMClient](https://github.com/projectkudu/ARMClient) by executing the following commands.
+To check the minimum required TLS version for your Event Hubs namespace, you can query the Azure Resource Manager API. You'll need a Bearer token to query against the API, which you can retrieve using the [ARMClient](https://github.com/projectkudu/ARMClient) app by executing the following commands.
```powershell .\ARMClient.exe login
The response should look something like the below, with the minimumTlsVersion se
To test that the minimum required TLS version for an Event Hubs namespace forbids calls made with an older version, you can configure a client to use an older version of TLS. For more information about configuring a client to use a specific version of TLS, see [Configure Transport Layer Security (TLS) for a client application](transport-layer-security-configure-client-version.md).
-When a client accesses an Event Hubs namespace using a TLS version that does not meet the minimum TLS version configured for the namespace, Azure Event Hubs returns error code 401 (Unauthorized) and a message indicating that the TLS version that was used is not permitted for making requests against this Event Hubs namespace.
+When a client accesses an Event Hubs namespace using a TLS version that doesn't meet the minimum TLS version configured for the namespace, Azure Event Hubs returns error code 401 (Unauthorized) and a message indicating that the TLS version that was used isn't permitted for making requests against this Event Hubs namespace.
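+
+As a rough sketch, one way to simulate an older client from Node.js is to cap the process-wide TLS version before opening the connection. The `@azure/event-hubs` package, the environment variable, and the event hub name below are placeholders; see the linked article for the supported client configuration guidance:
+
+```javascript
+// Sketch: force TLS 1.1 for outgoing connections, then attempt to send an event.
+// If the namespace requires TLS 1.2, the send is expected to fail.
+const tls = require('tls');
+tls.DEFAULT_MAX_VERSION = 'TLSv1.1';
+
+const { EventHubProducerClient } = require('@azure/event-hubs');
+
+async function main() {
+  // Placeholder connection string and event hub name.
+  const producer = new EventHubProducerClient(process.env.EVENTHUB_CONNECTION_STRING, 'myeventhub');
+  try {
+    await producer.sendBatch([{ body: 'tls-version-test' }]);
+    console.log('Send succeeded; the namespace accepted this TLS version.');
+  } catch (err) {
+    console.log(`Send failed: ${err.message}`);
+  } finally {
+    await producer.close();
+  }
+}
+
+main();
+```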
> [!NOTE]
> Due to limitations in the Confluent library, errors coming from an invalid TLS version won't surface when connecting through the Kafka protocol. Instead, a general exception will be shown.
When a client accesses an Event Hubs namespace using a TLS version that does not
## Next steps
-See the following documentation for more information.
+For more information, see the following articles.
- [Enforce a minimum required version of Transport Layer Security (TLS) for requests to an Event Hubs namespace](transport-layer-security-enforce-minimum-version.md) - [Configure Transport Layer Security (TLS) for an Event Hubs client application](transport-layer-security-configure-client-version.md)
expressroute Expressroute Locations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/expressroute-locations.md
The following table shows locations by service provider. If you want to view ava
| **[KT](https://cloud.kt.com/)** | Supported | Supported | Seoul, Seoul2 |
| **[Level 3 Communications](https://www.lumen.com/en-us/hybrid-it-cloud/cloud-connect.html)** |Supported |Supported | Amsterdam, Chicago, Dallas, London, Newport (Wales), Sao Paulo, Seattle, Silicon Valley, Singapore, Washington DC |
| **LG CNS** |Supported |Supported | Busan, Seoul |
-| **[Liquid Telecom](https://www.liquidtelecom.com/products-and-services/cloud.html)** |Supported |Supported | Cape Town, Johannesburg |
+| **[Liquid Telecom](https://liquidcloud.africa/connect/)** |Supported |Supported | Cape Town, Johannesburg |
| **[LGUplus](http://www.uplus.co.kr/)** |Supported |Supported | Seoul |
| **[Megaport](https://www.megaport.com/services/microsoft-expressroute/)** |Supported |Supported | Amsterdam, Atlanta, Auckland, Chicago, Dallas, Denver, Dubai2, Dublin, Frankfurt, Geneva, Hong Kong, Hong Kong2, Las Vegas, London, London2, Los Angeles, Madrid, Melbourne, Miami, Minneapolis, Montreal, Munich, New York, Osaka, Oslo, Paris, Perth, Phoenix, Quebec City, Queretaro (Mexico), San Antonio, Seattle, Silicon Valley, Singapore, Singapore2, Stavanger, Stockholm, Sydney, Sydney2, Tokyo, Tokyo2, Toronto, Vancouver, Washington DC, Washington DC2, Zurich |
| **[MTN](https://www.mtnbusiness.co.za/en/Cloud-Solutions/Pages/microsoft-express-route.aspx)** |Supported |Supported | London |
governance Definition Structure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/concepts/definition-structure.md
_common_ properties used by Azure Policy and in built-ins. Each `metadata` prope
## Parameters
-Parameters help simplify your policy management by reducing the number of policy definitions. Think
+Parameters help simplify your policy management by reducing the number of policy definitions. Think
of parameters like the fields on a form - `name`, `address`, `city`, `state`. These parameters always stay the same; however, their values change based on the individual filling out the form. Parameters work the same way when building policies. By including parameters in a policy definition,
A parameter has the following properties that are used in the policy definition:
the assignment scope. There's one role assignment per role definition in the policy (or per role definition in all of the policies in the initiative). The parameter value must be a valid resource or scope.-- `defaultValue`: (Optional) Sets the value of the parameter in an assignment if no value is given.
- Required when updating an existing policy definition that is assigned.
+- `defaultValue`: (Optional) Sets the value of the parameter in an assignment if no value is given. Required when updating an existing policy definition that is assigned. For object-type parameters, the value must match the appropriate schema.
- `allowedValues`: (Optional) Provides an array of values that the parameter accepts during
- assignment. Allowed value comparisons are case-sensitive.
+ assignment. Allowed value comparisons are case-sensitive. For object-type parameters, the values must match the appropriate schema.
+- `schema`: (Optional) Provides validation of parameter inputs during assignment using a self-defined JSON schema. This property is only supported for object-type parameters and follows the [Json.NET Schema](https://www.newtonsoft.com/jsonschema) 2019-09 implementation. You can learn more about using schemas at https://json-schema.org/ and test draft schemas at https://www.jsonschemavalidator.net/.
+
+### Sample parameters
+
+#### Example 1
As an example, you could define a policy definition to limit the locations where resources can be deployed. A parameter for that policy definition could be **allowedLocations**. This parameter would
be used by each assignment of the policy definition to limit the accepted values
} ```
+A sample input for this array-type parameter (without strongType) at assignment time might be `["westus", "eastus2"]`.
+
+#### Example 2
+
+In a more advanced scenario, you could define a policy that requires Kubernetes cluster pods to use specified labels. A parameter for that policy definition could be **labelSelector**, which would be used by each assignment of the policy definition to specify Kubernetes resources in question based on label keys and values:
+
+```json
+"parameters": {
+ "labelSelector": {
+ "type": "Object",
+ "metadata": {
+ "displayName": "Kubernetes label selector",
+ "description": "Label query to select Kubernetes resources for policy evaluation. An empty label selector matches all Kubernetes resources."
+ },
+ "defaultValue": {},
+ "schema": {
+ "description": "A label selector is a label query over a set of resources. The result of matchLabels and matchExpressions are ANDed. An empty label selector matches all resources.",
+ "type": "object",
+ "properties": {
+ "matchLabels": {
+ "description": "matchLabels is a map of {key,value} pairs.",
+ "type": "object",
+ "additionalProperties": {
+ "type": "string"
+ },
+ "minProperties": 1
+ },
+ "matchExpressions": {
+ "description": "matchExpressions is a list of values, a key, and an operator.",
+ "type": "array",
+ "items": {
+ "type": "object",
+ "properties": {
+ "key": {
+ "description": "key is the label key that the selector applies to.",
+ "type": "string"
+ },
+ "operator": {
+ "description": "operator represents a key's relationship to a set of values.",
+ "type": "string",
+ "enum": [
+ "In",
+ "NotIn",
+ "Exists",
+ "DoesNotExist"
+ ]
+ },
+ "values": {
+ "description": "values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty.",
+ "type": "array",
+ "items": {
+ "type": "string"
+ }
+ }
+ },
+ "required": [
+ "key",
+ "operator"
+ ],
+ "additionalProperties": false
+ },
+ "minItems": 1
+ }
+ },
+ "additionalProperties": false
+ }
+ }
+}
+```
+
+A sample input for this object-type parameter at assignment time would be in JSON format, validated by the specified schema, and might be:
+
+```json
+{
+ "matchLabels": {
+ "poolID": "abc123",
+ "nodeGroup": "Group1",
+ "region": "southcentralus"
+ },
+ "matchExpressions": [
+ {
+ "key": "name",
+ "operator": "In",
+ "values": ["payroll", "web"]
+ },
+ {
+ "key": "environment",
+ "operator": "NotIn",
+ "values": ["dev"]
+ }
+ ]
+}
+```
+ ### Using a parameter value In the policy rule, you reference parameters with the following `parameters` function syntax:
hdinsight Hdinsight Release Notes Archive https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-release-notes-archive.md
Fixed issues represent selected issues that were previously logged via Hortonwor
| BUG-96712 | [FLUME-3194](https://issues.apache.org/jira/browse/FLUME-3194) | upgrade derby to the latest (1.14.1.0) version |
| BUG-96713 | [FLUME-2678](https://issues.apache.org/jira/browse/FLUME-2678) | Upgrade xalan to 2.7.2 to take care of CVE-2014-0107 vulnerability |
| BUG-96714 | [FLUME-2050](https://issues.apache.org/jira/browse/FLUME-2050) | Upgrade to log4j2 (when GA) |
-| BUG-96737 | N/A | Use java io filesystem methods to access local files |
+| BUG-96737 | N/A | Use Java io filesystem methods to access local files |
| BUG-96925 | N/A | Upgrade Tomcat from 6.0.48 to 6.0.53 in Hadoop |
| BUG-96977 | [FLUME-3132](https://issues.apache.org/jira/browse/FLUME-3132) | Upgrade tomcat jasper library dependencies |
| BUG-97022 | [HADOOP-14799](https://issues.apache.org/jira/browse/HADOOP-14799), [HADOOP-14903](https://issues.apache.org/jira/browse/HADOOP-14903), [HADOOP-15265](https://issues.apache.org/jira/browse/HADOOP-15265) | Upgrading Nimbus-JOSE-JWT library with version above 4.39 |
hpc-cache Cache Usage Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hpc-cache/cache-usage-models.md
Title: Azure HPC Cache usage models description: Describes the different cache usage models and how to choose among them to set read-only or read/write caching and control other caching settings-+ Previously updated : 07/12/2021- Last updated : 06/29/2022+ <!-- filename is referenced from GUI in aka.ms/hpc-cache-usagemodel -->
This table summarizes the usage model differences:
If you have questions about the best usage model for your Azure HPC Cache workflow, talk to your Azure representative or open a support request for help.
+> [!TIP]
+> A utility is available to write specific individual files back to a storage target without writing the entire cache contents. Learn more about the flush_file.py script in [Customize file write-back in Azure HPC Cache](custom-flush-script.md).
+ ## Change usage models You can change usage models by editing the storage target, but some changes are not allowed because they create a small risk of file version conflict. You can't change **to** or **from** the model named **Read heavy, infrequent writes**. To change a storage target to this usage model, or to change it from this model to a different usage model, you have to delete the original storage target and create a new one.
-This restriction also applies to the usage model **Read heavy, checking the backing server every 3 hours**, which is less commonly used. Also, you can change between between the two "read heavy..." usage models, but not into or out of a different usage model style.
+This restriction also applies to the usage model **Read heavy, checking the backing server every 3 hours**, which is less commonly used. Also, you can change between the two "read heavy..." usage models, but not into or out of a different usage model style.
This restriction is needed because of the way different usage models handle Network Lock Manager (NLM) requests. Azure HPC Cache sits between clients and the back-end storage system. Usually, the cache passes NLM requests through to the back-end storage system, but in some situations, the cache itself acknowledges the NLM request and returns a value to the client. In Azure HPC Cache, this only happens when you use the usage models **Read heavy, infrequent writes** or **Read heavy, checking the backing server every 3 hours**, or with a standard blob storage target, which doesn't have configurable usage models.
hpc-cache Custom Flush Script https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hpc-cache/custom-flush-script.md
+
+ Title: Use a Python library to customize file write-back
+description: Advanced file write-back with Azure HPC Cache
+++ Last updated : 07/07/2022+++
+# Customize file write-back in Azure HPC Cache
+
+HPC Cache users can request that the cache write specific individual files to back-end storage on demand by using the flush_file.py utility. This utility is a separately downloaded software package that you install and use on Linux client machines.
+
+This feature is designed for situations where you want the changes on cached files to be made available as soon as possible to systems that don't mount the cache.
+
+For example, you might use Azure HPC Cache to scale your computing jobs in the cloud, but store your data set permanently in an on-premises data center. If compute tasks happen at the data center that depend on changes created with Azure HPC Cache, you can use this utility to "push" the output or changes generated by a cloud task back to the on-premises NAS storage. This lets the new files be used almost immediately by on-premises compute resources.
+
+## Choose between custom write-back and flush
+
+You can force data to be written back with the "storage target flush" option built in to Azure HPC Cache - but this approach might not be right for all situations.
+
+* Writing all of the modified files back to the storage system can take several minutes or even hours, depending on the quantity of data and the speed of the network link back to the on-premises system. Also, you can't choose only the files you've finished with to be written; files that are still actively being modified will be included in this calculation.
+
+* The cache might block serving some requests from that storage target during the flush process. This can delay processing if there are other compute clients using files that reside on the same storage target.
+
+* Triggering this action requires contributor access to the Azure Resource Manager, which end-users might not have.
+
+<!-- If you create a storage target for each export of your on-premises system, you might have multiple compute jobs in the cloud using different subsets of data from one storage target. In this situation, you have to wait for all the jobs to finish before you can force the changed files to be written back to the back-end storage without disrupting access to the compute clients working on other datasets from the same export. -->
+<!--
+have overlap in your compute jobs could have more than one compute job working on the data
+Your on-premises storage has a limited number of exports, so you will likely have more than one task operating on the data sets in one HPC Cache storage target. You can use the flush command to write all changes back to the long-term storage, but in this situation that would disrupt the other Azure-based jobs using that storage target. -->
+
+For example, you can have multiple parallel (but not overlapping) compute jobs that consume data residing on the same HPC Cache storage target. When one job completes, you want to immediately write that job's output from the cache to your long-term storage on the back end.
+
+You have three options:
+
+* Wait for the cached files to be automatically written back from the cache - but files might sit in the cache for more than an hour before they're completely written back. The timing depends on the write-back delay of your cache usage model, along with other factors such as network link performance and the size of the files. (Read [Understand cache usage models](cache-usage-models.md) to learn more about write-back delay.)
+
+* Immediately [flush the cached files for the entire storage target](manage-storage-targets.md#write-cached-files-to-the-storage-target) - but that would disrupt other compute jobs that are also using this storage target's data.
+
+* Use this customized write-back utility to send a special NFS request to the cache to write back only the specific files you want. This scenario doesn't disrupt access for other clients and can be triggered at any point in the computing task.
+
+## About the write-back utility
+
+The write-back utility has a script that you can use to specify individual files that will be written from the cache to the long-term storage system.
+
+The script takes an input stream of the files to write, plus the cache namespace path to your storage target export, and an HPC Cache mount IP address.
+
+The script uses an NFSv3 "commit" call with special arguments enabled. The Linux nfs-common client can't pass these arguments appropriately, so the flush_file.py utility uses an NFS client emulator in a Python library to communicate with the HPC Cache NFS service. The library includes everything needed, which bypasses any limitations that might exist in your compute client's Linux-kernel-based NFS client.
+
+To use this feature, you need to do the following:
+
+* Install the ``hpc-cache-nfsv3-client`` library from the [GitHub Microsoft HPC-Cache-NFSv3-client repository](<https://github.com/microsoft/hpc-cache-nfsv3-client>) on one or more compute clients. Prerequisite information and instructions are included in the repository's README file.
+
+* Use the included 'flush_file.py' script to tell the cache to write the exact files you need back to the long-term storage system.
+
+Learn more about installing and using the flush_file.py script in the [GitHub repository](<https://github.com/microsoft/hpc-cache-nfsv3-client#readme>).
hpc-cache Hpc Cache Manage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hpc-cache/hpc-cache-manage.md
description: How to manage and update Azure HPC Cache using the Azure portal or
Previously updated : 06/02/2022 Last updated : 06/29/2022
$
+> [!TIP]
+> If you need to write specific individual files back to a storage target without writing the entire cache contents, consider using the flush_file.py script contained in the HPC Cache NFSv3 client library distribution. Learn more in [Customize file write-back in Azure HPC Cache](custom-flush-script.md).
+ ## Upgrade cache software If a new software version is available, the **Upgrade** button becomes active. You also should see a message at the top of the page about updating software.
hpc-cache Manage Storage Targets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hpc-cache/manage-storage-targets.md
description: How to suspend, remove, force delete, and flush Azure HPC Cache sto
Previously updated : 05/29/2022 Last updated : 06/29/2022
You could use this option to make sure that the back-end storage is populated be
This option mainly applies to usage models that include write caching. Read [Understand cache usage models](cache-usage-models.md) to learn more about read and write caching.
+> [!TIP]
+> If you need to write specific individual files back to a storage target without writing its entire cache contents, consider the flush_file.py script contained in the HPC Cache NFSv3 client library distribution. Learn more in [Customize file write-back in Azure HPC Cache](custom-flush-script.md).
++ ### Suspend a storage target The suspend feature disables client access to a storage target, but doesn't permanently remove the storage target from your cache. You can use this option if you need to disable a back-end storage system for maintenance, repair, or replacement.
iot-central Howto Manage Users Roles https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-manage-users-roles.md
When you define a custom role, you choose the set of permissions that a user is
| Delete | View |
| Full Control | View, Update, Create, Delete |
+**Data explorer permissions**
+
+| Name | Dependencies |
+| - | -- |
+| View | None <br/> Other dependencies: View device groups, device templates, device instances |
+| Update | View <br/> Other dependencies: View device groups, device templates, device instances |
+| Create | View, Update <br/> Other dependencies: View device groups, device templates, device instances |
+| Delete | View <br/> Other dependencies: View device groups, device templates, device instances |
+| Full Control | View, Update, Create, Delete <br/> Other dependencies: View device groups, device templates, device instances |
**Branding, favicon, and colors permissions**

| Name | Dependencies |
iot-edge Module Edgeagent Edgehub https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/module-edgeagent-edgehub.md
The module twin for the IoT Edge hub is called `$edgeHub` and coordinates the co
| -- | -- | -- | | schemaVersion | Either "1.0" or "1.1". Version 1.1 was introduced with IoT Edge version 1.0.10, and is recommended. | Yes | | routes.{routeName} | A string representing an IoT Edge hub route. For more information, see [Declare routes](module-composition.md#declare-routes). | The `routes` element can be present but empty. |
-| storeAndForwardConfiguration.timeToLiveSecs | The time in seconds that IoT Edge hub keeps messages if disconnected from routing endpoints, whether IoT Hub or a local module. The value can be any positive integer. | Yes |
+| storeAndForwardConfiguration.timeToLiveSecs | The device time in seconds that IoT Edge hub keeps messages if disconnected from routing endpoints, whether IoT Hub or a local module. This time persists over any power offs or restarts. For more information, see [Offline capabilities](offline-capabilities.md#time-to-live). | Yes |
## EdgeHub reported properties
iot-hub Iot Hub Devguide Routing Query Syntax https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-devguide-routing-query-syntax.md
deviceClient.sendEvent(message, (err, res) => {
```

> [!NOTE]
-> This shows how to handle the encoding of the body in javascript. If you want to see a sample in C#, download the [Azure IoT C# Samples](https://github.com/Azure-Samples/azure-iot-samples-csharp/archive/main.zip). Unzip the master.zip file. The Visual Studio solution *SimulatedDevice*'s Program.cs file shows how to encode and submit messages to an IoT Hub. This is the same sample used for testing the message routing, as explained in the [Message Routing tutorial](tutorial-routing.md). At the bottom of Program.cs, it also has a method to read in one of the encoded files, decode it, and write it back out as ASCII so you can read it.
+> This shows how to handle the encoding of the body in JavaScript. If you want to see a sample in C#, download the [Azure IoT C# Samples](https://github.com/Azure-Samples/azure-iot-samples-csharp/archive/main.zip). Unzip the master.zip file. The Visual Studio solution *SimulatedDevice*'s Program.cs file shows how to encode and submit messages to an IoT Hub. This is the same sample used for testing the message routing, as explained in the [Message Routing tutorial](tutorial-routing.md). At the bottom of Program.cs, it also has a method to read in one of the encoded files, decode it, and write it back out as ASCII so you can read it.
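+
+For reference, a rough, self-contained version of that encoding step might look like the following sketch. The connection string variable, payload, and variable names are placeholders; `contentType` and `contentEncoding` are properties of the `Message` class in the Azure IoT device SDK for Node.js:
+
+```javascript
+// Sketch: mark the message body as UTF-8 JSON so routing queries can evaluate body properties.
+const { Client, Message } = require('azure-iot-device');
+const { Mqtt } = require('azure-iot-device-mqtt');
+
+const deviceClient = Client.fromConnectionString(process.env.IOTHUB_DEVICE_CONNECTION_STRING, Mqtt);
+
+const payload = { temperature: 21.5, humidity: 60 }; // placeholder telemetry
+const message = new Message(JSON.stringify(payload));
+message.contentType = 'application/json'; // lets routing parse the body
+message.contentEncoding = 'utf-8';        // lets routing decode the body
+
+deviceClient.sendEvent(message, (err) => {
+  if (err) console.error(`Send failed: ${err.message}`);
+  else console.log('Message sent.');
+});
+```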
### Query expressions
logic-apps Secure Single Tenant Workflow Virtual Network Private Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/secure-single-tenant-workflow-virtual-network-private-endpoint.md
To secure outbound traffic from your logic app, you can integrate your logic app
### Considerations for outbound traffic through VNet integration
+If your virtual network uses a network security group (NSG), user-defined route table (UDR), or a firewall, make sure that the virtual network allows outbound connections to [all managed connector IP addresses](/connectors/common/outbound-ip-addresses#azure-logic-apps) in the corresponding region. Otherwise, Azure-managed connectors won't work.
+ Setting up virtual network integration affects only outbound traffic. To secure inbound traffic, which continues to use the App Service shared endpoint, review [Set up inbound traffic through private endpoints](#set-up-inbound). For more information, review the following documentation:
machine-learning How To Troubleshoot Online Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-troubleshoot-online-endpoints.md
When you access online endpoints with REST requests, the returned status codes a
| 401 | Unauthorized | You don't have permission to do the requested action, such as score, or your token is expired. |
| 404 | Not found | Your URL isn't correct. |
| 408 | Request timeout | The model execution took longer than the timeout supplied in `request_timeout_ms` under `request_settings` of your model deployment config.|
-| 424 | Model Error | If your model container returns a non-200 response, Azure returns a 424. Check response headers `ms-azureml-model-error-statuscode` and `ms-azureml-model-error-reason` for more information. |
+| 424 | Model Error | If your model container returns a non-200 response, Azure returns a 424. Check the `Model Status Code` dimension under the `Requests Per Minute` metric on your endpoint's [Azure Monitor Metric Explorer](/azure/azure-monitor/essentials/metrics-getting-started). Or check response headers `ms-azureml-model-error-statuscode` and `ms-azureml-model-error-reason` for more information. |
| 429 | Rate-limiting | You attempted to send more than 100 requests per second to your endpoint. |
| 429 | Too many pending requests | Your model is getting more requests than it can handle. We allow 2 * `max_concurrent_requests_per_instance` * `instance_count` requests at any time. Additional requests are rejected. You can confirm these settings in your model deployment config under `request_settings` and `scale_settings`. If you are using auto-scaling, your model is getting requests faster than the system can scale up. With auto-scaling, you can try to resend requests with [exponential backoff](https://aka.ms/exponential-backoff). Doing so can give the system time to adjust. |
| 500 | Internal server error | Azure ML-provisioned infrastructure is failing. |
marketplace Marketplace Metering Service Apis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/marketplace-metering-service-apis.md
description: The usage event API allows you to emit usage events for SaaS offers
Previously updated : 06/30/2022 Last updated : 07/07/2022
The batch usage event API allows you to emit usage events for more than one purc
| `authorization` | A unique access token that identifies the ISV that is making this API call. The format is `Bearer <access_token>` when the token value is retrieved by the publisher as explained for <br> <ul> <li> SaaS in [Get the token with an HTTP POST](partner-center-portal/pc-saas-registration.md#get-the-token-with-an-http-post). </li> <li> Managed application in [Authentication strategies](./marketplace-metering-service-authentication.md). </li> </ul> |

>[!NOTE]
->In the request body, the resource identifier has different meanings for SaaS app and for Azure Managed app emitting custom meter. The resource identifier for SaaS App is `resourceID`. The resource identifier for Azure Application Managed Apps plans is `resourceUri`.
+>In the request body, the resource identifier has different meanings for SaaS app and for Azure Managed app emitting custom meter. The resource identifier for SaaS App is `resourceID`. The resource identifier for Azure Application Managed Apps plans is `resourceUri`. For more information on resource identifiers, see [Azure Marketplace Metered Billing- Picking the correct ID when submitting usage events](https://techcommunity.microsoft.com/t5/fasttrack-for-azure/azure-marketplace-metered-billing-picking-the-correct-id-when/ba-p/3542373).
For SaaS offers, the `resourceId` is the SaaS subscription ID. For more details on SaaS subscriptions, see [list subscriptions](partner-center-portal/pc-saas-fulfillment-subscription-api.md#get-list-of-all-subscriptions).
marketplace Commercial Marketplace Lead Management Instructions Azure Table https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/partner-center-portal/commercial-marketplace-lead-management-instructions-azure-table.md
If your customer relationship management (CRM) system isn't explicitly supported
:::image type="content" source="media/commercial-marketplace-lead-management-instructions-azure-table/azure-storage-keys.png" alt-text="Azure storage key.":::
-1. From your storage account pane, select **Tables**, and select **+ Table** to create a table. Enter a name for your table and select **OK**. Save this value because you'll need it if you want to configure a flow to receive email notifications when leads are received.
+1. (Optional) From your storage account pane, select **Tables**, and select **+ Table** to create a table. Enter a name for your table and select **OK**. Save this value because you'll need it if you want to configure a flow to receive email notifications when leads are received.
![Azure tables](./media/commercial-marketplace-lead-management-instructions-azure-table/azure-tables.png)
marketplace Pc Saas Fulfillment Subscription Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/partner-center-portal/pc-saas-fulfillment-subscription-api.md
Response body example:
``` Code: 404 Not Found.
-SubscriptionId is not found.
+`subscriptionId` is not found.
Code: 403 Forbidden. The authorization token is invalid, expired, or was not provided. The request may be attempting to access a SaaS subscription for an offer that's unsubscribed or published with a different Azure AD app ID from the one used to create the authorization token.
mysql Sample Scripts Java Connection Pooling https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/sample-scripts-java-connection-pooling.md
Title: Java samples to illustrate connection pooling
-description: This article lists java samples to illustrate connection pooling.
+description: This article lists Java samples to illustrate connection pooling.
postgresql Concepts High Availability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/concepts-high-availability.md
This model of high availability deployment enables Flexible server to be highly
Automatic backups are performed periodically from the primary database server, while the transaction logs are continuously archived to the backup storage from the standby replica. If the region supports availability zones, then backup data is stored on zone-redundant storage (ZRS). In regions that don't support availability zones, backup data is stored on locally redundant storage (LRS). :::image type="content" source="./media/business-continuity/concepts-same-zone-high-availability-architecture.png" alt-text="Same-zone high availability":::
->[!NOTE]
-> See the [HA limitation section](#high-availabilitylimitations) for a current restriction with same-zone HA deployment.
## Components and workflow
Flexible servers that are configured with high availability, log data is replica
## High availability - limitations
->[!NOTE]
-> New server creates with **Same-zone HA** are currently restricted when you choose the primary server's AZ. Workarounds are to (a) create your same-zone HA server without choosing the primary AZ, or (b) create as a single instance (non-HA) server and then enable same-zone HA after the server is created.
- * High availability is not supported with burstable compute tier. * High availability is supported only in regions where multiple zones are available. * Due to synchronous replication to the standby server, especially with zone-redundant HA, applications can experience elevated write and commit latency.
postgresql How To Manage High Availability Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/how-to-manage-high-availability-portal.md
This section provides details specifically for HA-related fields. You can follow
4. If you chose the Availability zone in step 2 and if you chose zone-redundant HA, then you can choose the standby zone. :::image type="content" source="./media/how-to-manage-high-availability-portal/choose-standby-availability-zone.png" alt-text="Screenshot of Standby AZ selection.":::
+
->[!NOTE]
-> See the [HA limitation section](concepts-high-availability.md#high-availabilitylimitations) for a current restriction with same-zone HA deployment.
-
-1. If you want to change the default compute and storage, click **Configure server**.
+5. If you want to change the default compute and storage, click **Configure server**.
:::image type="content" source="./media/how-to-manage-high-availability-portal/configure-server.png" alt-text="Screenshot of configure compute and storage screen.":::
-2. If high availability option is checked, the burstable tier will not be available to choose. You can choose either
+6. If the high availability option is checked, the burstable tier will not be available to choose. You can choose either
**General purpose** or **Memory Optimized** compute tiers. Then you can select **compute size** for your choice from the dropdown. :::image type="content" source="./media/how-to-manage-high-availability-portal/select-compute.png" alt-text="Compute tier selection screen.":::
-3. Select **storage size** in GiB using the sliding bar and select the **backup retention period** between 7 days and 35 days.
+7. Select **storage size** in GiB using the sliding bar and select the **backup retention period** between 7 days and 35 days.
:::image type="content" source="./media/how-to-manage-high-availability-portal/storage-backup.png" alt-text="Screenshot of Storage Backup.":::
-4. Click **Save**.
+8. Click **Save**.
## Enable high availability post server creation
postgresql Howto App Stacks Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/hyperscale/howto-app-stacks-java.md
Title: Java app to connect and query Hyperscale (Citus)
-description: Learn building a simple app on Hyperscale (Citus) using java
+description: Learn how to build a simple app on Hyperscale (Citus) using Java
purview Register Scan Power Bi Tenant https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-power-bi-tenant.md
Previously updated : 06/28/2022 Last updated : 07/07/2022
This article outlines how to register a Power BI tenant in a **same-tenant scena
|Public access with Azure IR |Allowed |Allowed |Azure Runtime | Microsoft Purview Managed Identity | [Review deployment checklist](#deployment-checklist) |
|Public access with Self-hosted IR |Allowed |Allowed |Self-hosted runtime |Delegated Authentication | [Review deployment checklist](#deployment-checklist) |
|Private access |Allowed |Denied |Self-hosted runtime |Delegated Authentication | [Review deployment checklist](#deployment-checklist) |
-|Private access |Denied |Allowed* |Self-hosted runtime |Delegated Authentication | [Review deployment checklist](#deployment-checklist) |
+|Private access |Denied |Allowed |Self-hosted runtime |Delegated Authentication | [Review deployment checklist](#deployment-checklist) |
|Private access |Denied |Denied |Self-hosted runtime |Delegated Authentication | [Review deployment checklist](#deployment-checklist) |
-\* Power BI tenant must have a private endpoint that is deployed in a Virtual Network accessible from the self-hosted integration runtime VM. For more information, see [private endpoint for Power BI tenant](/power-bi/enterprise/service-security-private-links).
- ### Known limitations - If Microsoft Purview or Power BI tenant is protected behind a private endpoint, Self-hosted runtime is the only option to scan.
Use any of the following deployment checklists during the setup or for troublesh
3. Microsoft Graph User.Read 3. Under **Authentication**, **Allow public client flows** is enabled. 2. Review network configuration and validate if:
- 1. A [private endpoint for Power BI tenant](/power-bi/enterprise/service-security-private-links) is deployed.
+ 1. A [private endpoint for Power BI tenant](/power-bi/enterprise/service-security-private-links) is deployed. (Optional)
2. All required [private endpoints for Microsoft Purview](/azure/purview/catalog-private-link-end-to-end) are deployed.
- 3. Network connectivity from Self-hosted runtime to Power BI tenant is enabled through private network.
+ 3. Network connectivity from Self-hosted runtime to Power BI tenant is enabled.
3. Network connectivity from Self-hosted runtime to Microsoft services is enabled through private network.
To create and run a new scan, do the following:
This scenario can be used when Microsoft Purview and Power BI tenant or both, are configured to use private endpoint and deny public access. Additionally, this option is also applicable if Microsoft Purview and Power BI tenant are configured to allow public access.
-> [!IMPORTANT]
-> Additional configuration may be required for your Power BI tenant and Microsoft Purview account, if you are planning to scan Power BI tenant through private network where either Microsoft Purview account, Power BI tenant or both are configured with private endpoint with public access denied.
->
-> For more information related to Power BI network, see [How to configure private endpoints for accessing Power BI](/power-bi/enterprise/service-security-private-links).
->
-> For more information about Microsoft Purview network settings, see [Use private endpoints for your Microsoft Purview account](catalog-private-link.md).
+For more information related to Power BI network, see [How to configure private endpoints for accessing Power BI](/power-bi/enterprise/service-security-private-links).
+
+For more information about Microsoft Purview network settings, see [Use private endpoints for your Microsoft Purview account](catalog-private-link.md).
To create and run a new scan, do the following:
service-bus-messaging Transport Layer Security Configure Minimum Version https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/transport-layer-security-configure-minimum-version.md
Last updated 06/06/2022
-# Configure the minimum TLS version for a Service Bus namespace using ARM (Preview)
+# Configure the minimum TLS version for a Service Bus namespace (Preview)
Azure Service Bus namespaces permit clients to send and receive data with TLS 1.0 and above. To enforce stricter security measures, you can configure your Service Bus namespace to require that clients send and receive data with a newer version of TLS. If a Service Bus namespace requires a minimum version of TLS, then any requests made with an older version will fail. For conceptual information about this feature, see [Enforce a minimum required version of Transport Layer Security (TLS) for requests to a Service Bus namespace](transport-layer-security-enforce-minimum-version.md).
service-health Resource Health Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-health/resource-health-overview.md
For VMSS, visit [Resource health state is "Degraded" in Azure Virtual Machine Sc
## History information

> [!NOTE]
-> You can query data up to 1 year using the QueryStartTime parameter of [Events](/rest/api/resourcehealth/events/list-by-subscription-id) REST API.
+> You can list current service health events in a subscription and query data up to one year old by using the QueryStartTime parameter of the [Events - List By Subscription Id](/rest/api/resourcehealth/events/list-by-subscription-id) REST API. The [Events - List By Single Resource](/rest/api/resourcehealth/events/list-by-single-resource) REST API currently doesn't support a QueryStartTime parameter, so you can't query data up to one year old when listing current service health events for a given resource.
You can access up to 30 days of history in the **Health history** section of Resource Health from Azure Portal.
spring-cloud Quickstart Deploy Apps Enterprise https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-cloud/quickstart-deploy-apps-enterprise.md
Use the following steps to provision an Azure Spring Apps service instance.
az spring app create \ --resource-group <resource-group-name> \
- --name catalog-service
+ --name catalog-service \
--service <Azure-Spring-Apps-service-instance-name> az spring app create \
static-web-apps Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/static-web-apps/configuration.md
To configure the API language runtime version, set the `apiRuntime` property in
| .NET 6.0 isolated | Windows | 4.x | `dotnet-isolated:6.0` |
| Node.js 12.x | Linux | 3.x | `node:12` |
| Node.js 14.x | Linux | 4.x | `node:14` |
-| Node.js 16.x (preview) | Linux | 4.x | `node:16` |
+| Node.js 16.x | Linux | 4.x | `node:16` |
| Python 3.8 | Linux | 3.x | `python:3.8` |
| Python 3.9 | Linux | 4.x | `python:3.9` |
storage Blob Inventory https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/blob-inventory.md
The `BlobInventoryPolicyCompleted` event is generated when the inventory run com
"policyRunStatus": "Succeeded", "policyRunStatusMessage": "Inventory run succeeded, refer manifest file for inventory details.", "policyRunId": "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx",
- "manifestBlobUrl": "https://testaccount.blob.core.windows.net/inventory-destination-container/2021/05/26/13-25-36/Rule_1/Rule_1.csv"
+ "manifestBlobUrl": "https://testaccount.blob.core.windows.net/inventory-destination-container/2021/05/26/13-25-36/Rule_1/Rule_1-manifest.json"
}, "dataVersion": "1.0", "metadataVersion": "1",
storage Storage Blob Javascript Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-javascript-get-started.md
Previously updated : 03/30/2022 Last updated : 07/06/2022
The [BlobServiceClient](/javascript/api/@azure/storage-blob/blobserviceclient) o
```javascript const { BlobServiceClient, StorageSharedKeyCredential } = require('@azure/storage-blob');
- // optional but suggested - connect with managed identity
+ // optional but recommended - connect with managed identity (Azure AD)
const { DefaultAzureCredential } = require('@azure/identity'); ```
+## Connect with Azure AD
+Azure Active Directory (Azure AD) provides the most secure connection by managing the connection identity ([**managed identity**](/azure/active-directory/managed-identities-azure-resources/overview)). This functionality allows you to develop code that doesn't require any secrets (keys or connection strings) stored in the code or environment. Managed identity requires [**setup**](assign-azure-role-data-access.md?tabs=portal) for any identities such as developer (personal) or cloud (hosting) environments. You need to complete the setup before using the code in this section.
+
+After you complete the setup, your Storage resource needs to have one or more of the following roles assigned to the identity resource you plan to connect with:
+
+* A [data access](../common/authorize-data-access.md) role - such as:
+ * **Storage Blob Data Reader**
+ * **Storage Blob Data Contributor**
+* A [resource](../common/authorization-resource-provider.md) role - such as:
+ * **Reader**
+ * **Contributor**
+
+To authorize with Azure AD, you'll need to use an Azure credential. Which type of credential you need depends on where your application runs. Use this table as a guide.
+
+| Where the application runs | Security principal | Guidance |
+|--|--|--|
+| Local machine (developing and testing) | User identity or service principal | [Use the Azure Identity library to get an access token for authorization](../common/identity-library-acquire-token.md) |
+| Azure | Managed identity | [Authorize access to blob data with managed identities for Azure resources](authorize-managed-identity.md) |
+| Servers or clients outside of Azure | Service principal | [Authorize access to blob or queue data from a native or web application](../common/storage-auth-aad-app.md?toc=%2Fazure%2Fstorage%2Fblobs%2Ftoc.json) |
+
+Create a [DefaultAzureCredential](/javascript/api/overview/azure/identity-readme#defaultazurecredential) instance. Use that object to create a [BlobServiceClient](/javascript/api/@azure/storage-blob/blobserviceclient).
+
+```javascript
+const { BlobServiceClient } = require('@azure/storage-blob');
+const { DefaultAzureCredential } = require('@azure/identity');
+require('dotenv').config()
+
+const accountName = process.env.AZURE_STORAGE_ACCOUNT_NAME;
+if (!accountName) throw Error('Azure Storage accountName not found');
+
+const blobServiceClient = new BlobServiceClient(
+ `https://${accountName}.blob.core.windows.net`,
+ new DefaultAzureCredential()
+);
+
+async function main(){
+
+ // this call requires Reader role on the identity
+ const serviceGetPropertiesResponse = await blobServiceClient.getProperties();
+ console.log(`${JSON.stringify(serviceGetPropertiesResponse)}`);
+
+}
+
+main()
+ .then(() => console.log(`done`))
+ .catch((ex) => console.log(`error: ${ex.message}`));
+```
+
+If you plan to deploy the application to servers and clients that run outside of Azure, you can obtain an OAuth token by using other classes in the [Azure Identity client library for JavaScript](/javascript/api/overview/azure/identity-readme) which derive from the [TokenCredential](/javascript/api/@azure/core-auth/tokencredential) class.
## Connect with an account name and key
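+
+A minimal sketch of this approach creates a `StorageSharedKeyCredential` from the account name and account key and passes it to the `BlobServiceClient`. The environment variable names (particularly `AZURE_STORAGE_ACCOUNT_KEY`) are placeholders you define yourself:
+
+```javascript
+// Sketch: connect with the storage account name and account key.
+const { BlobServiceClient, StorageSharedKeyCredential } = require('@azure/storage-blob');
+require('dotenv').config();
+
+const accountName = process.env.AZURE_STORAGE_ACCOUNT_NAME;
+const accountKey = process.env.AZURE_STORAGE_ACCOUNT_KEY;
+if (!accountName || !accountKey) throw Error('Azure Storage account name or key not found');
+
+const sharedKeyCredential = new StorageSharedKeyCredential(accountName, accountKey);
+
+const blobServiceClient = new BlobServiceClient(
+  `https://${accountName}.blob.core.windows.net`,
+  sharedKeyCredential
+);
+```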
To generate and manage SAS tokens, see any of these articles:
- [Create a service SAS for a container or blob](sas-service-create.md)
-## Object authorization with Azure AD
-
-To authorize with Azure AD, you'll need to use an Azure credential. Which type of credential you need depends on where your application runs. Use this table as a guide.
-
-| Where the application runs | Security principal | Guidance |
-|--|--||
-| Local machine (developing and testing) | User identity or service principal | [Use the Azure Identity library to get an access token for authorization](../common/identity-library-acquire-token.md) |
-| Azure | Managed identity | [Authorize access to blob data with managed identities for Azure resources](authorize-managed-identity.md) |
-| Servers or clients outside of Azure | Service principal | [Authorize access to blob or queue data from a native or web application](../common/storage-auth-aad-app.md?toc=%2Fazure%2Fstorage%2Fblobs%2Ftoc.json) |
-
-If you're testing on a local machine, or your application will run in Azure virtual machines (VMs), Functions apps, virtual machine scale sets, or in other Azure services, obtain an OAuth token by creating a [DefaultAzureCredential](/javascript/api/overview/azure/identity-readme#defaultazurecredential) instance. Use that object to create a [BlobServiceClient](/javascript/api/@azure/storage-blob/blobserviceclient).
-
-```javascript
-const { BlobServiceClient } = require('@azure/storage-blob');
-const { DefaultAzureCredential } = require('@azure/identity');
-require('dotenv').config()
-const accountName = process.env.AZURE_STORAGE_ACCOUNT_NAME;
-if (!accountName) throw Error('Azure Storage accountName not found');
-
-const blobServiceClient = new BlobServiceClient(
- `https://${accountName}.blob.core.windows.net`,
- new DefaultAzureCredential()
-);
-
-async function main(){
- const serviceGetPropertiesResponse = await blobServiceClient.getProperties();
- console.log(`${JSON.stringify(serviceGetPropertiesResponse)}`);
-}
-
-main()
- .then(() => console.log(`done`))
- .catch((ex) => console.log(`error: ${ex.message}`));
-```
-
-If you plan to deploy the application to servers and clients that run outside of Azure, you can obtain an OAuth token by using other classes in the [Azure Identity client library for JavaScript](/javascript/api/overview/azure/identity-readme) which derive from the [TokenCredential](/javascript/api/@azure/core-auth/tokencredential) class.
## Connect anonymously
The following guides show you how to use each of these clients to build your app
- [Samples](../common/storage-samples-javascript.md?toc=%2fazure%2fstorage%2fblobs%2ftoc.json#blob-samples) - [API reference](/javascript/api/@azure/storage-blob/) - [Library source code](https://github.com/Azure/azure-sdk-for-js/blob/main/sdk/storage/storage-blob)-- [Give Feedback](https://github.com/Azure/azure-sdk-for-js/issues)
+- [Give Feedback](https://github.com/Azure/azure-sdk-for-js/issues)
synapse-analytics Sql Database Synapse Link https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/synapse-link/sql-database-synapse-link.md
You can start or stop a link connection. When started, a link connection will st
You need to select compute core counts for each link connection to replicate your data. The core counts represent the compute power and it impacts your data replication latency and cost.
+You can also make a trade-off between cost and latency by selecting continuous or batch mode to replicate the data. When you select continuous mode, the runtime runs continuously so that any changes applied to your SQL DB or SQL Server are replicated to Synapse with low latency. When you select batch mode with a specified interval, the changes applied to your SQL DB or SQL Server are accumulated and replicated to Synapse in batches at that interval. This can save cost because you're only charged for the time the runtime is required to replicate your data. After each batch of data is replicated, the runtime is shut down automatically.
+ ## Monitoring You can monitor Azure Synapse Link for SQL at the link and table levels. For each link connection, you'll see the following status:
synapse-analytics Sql Server 2022 Synapse Link https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/synapse-link/sql-server-2022-synapse-link.md
You can start or stop a link connection. When started, a link connection will st
You need to select compute core counts for each link connection to replicate your data. The core counts represent the compute power and it impacts your data replication latency and cost.
+You can also make a trade-off between cost and latency by selecting continuous or batch mode to replicate the data. When you select continuous mode, the runtime runs continuously so that any changes applied to your SQL DB or SQL Server are replicated to Synapse with low latency. When you select batch mode with a specified interval, the changes applied to your SQL DB or SQL Server are accumulated and replicated to Synapse in batches at that interval. This can save cost because you're only charged for the time the runtime is required to replicate your data. After each batch of data is replicated, the runtime is shut down automatically.
+ ## Landing zone The landing zone is an interim staging store required for Azure Synapse Link for SQL Server 2022. First, the operational data is loaded from the SQL Server 2022 to the landing zone. Next, the data is copied from the landing zone to the Synapse dedicated SQL pool. You need to provide your own Azure Data Lake Storage Gen2 account to be used as a landing zone. It is not supported to use this landing zone for anything other than Azure Synapse Link for SQL.
virtual-desktop Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/whats-new.md
Title: What's new in Azure Virtual Desktop? - Azure
description: New features and product updates for Azure Virtual Desktop. Previously updated : 06/02/2022 Last updated : 07/07/2022
Azure Virtual Desktop updates regularly. This article is where you'll find out a
Make sure to check back here often to keep up with new updates.
+## June 2022
+
+Here's what changed in June 2022:
+
+### Australia metadata service in public preview
+
+The Azure Virtual Desktop metadata database located in Australia is now in public preview. This allows customers to store their Azure Virtual Desktop objects and metadata in a database located in our Australia geography, ensuring that the data will only reside within Australia. For more information, see [our blog post](https://techcommunity.microsoft.com/t5/azure-virtual-desktop-blog/announcing-the-public-preview-of-the-azure-virtual-desktop/ba-p/3483090).
+
+### Intune user configuration for Windows 11 Enterprise multi-session VMs in public preview
+
+Deploying Intune user configuration policies from Microsoft Endpoint Manager admin center to Windows 11 Enterprise multi-session virtual machines (VMs) on Azure Virtual Desktop is now in public preview. In this preview, you can configure the following:
+
+- User scope policies using the Settings catalog.
+- User certificates via Templates.
+- PowerShell scripts to run in the user context.
+
+For more information, see [our blog post](https://techcommunity.microsoft.com/t5/azure-virtual-desktop/public-preview-intune-user-configuration-for-windows-11-multi/m-p/3562093).
+
+### Teams media optimizations for macOS now generally available
+
+Teams media optimizations for redirecting audio and video during calls and meetings to a local macOS machine is now generally available. To use this feature, you'll need to update or install, at a minimum, version 10.7.7 of the Azure Virtual Desktop macOS client. Learn more at [Use Microsoft Teams on Azure Virtual Desktop](teams-on-avd.md) and [our blog post](https://techcommunity.microsoft.com/t5/azure-virtual-desktop-blog/microsoft-teams-media-optimizations-is-now-generally-available/ba-p/3563125).
+ ## May 2022 Here's what changed in May 2022:
The latest update for FSLogix 2201 includes fixes to Cloud Cache and container r
Here's what changed in April 2022:
-### Intune device configuration for Windows multisession now generally available
+### Intune device configuration for Windows multi-session now generally available
-Deploying Intune device configuration policies from Microsoft Endpoint Manager admin center to Windows multisession VMs on Azure Virtual Desktop is now generally available. Learn more at [Using Azure Virtual Desktop multi-session with Intune](/mem/intune/fundamentals/azure-virtual-desktop-multi-session) and [our blog post](https://techcommunity.microsoft.com/t5/azure-virtual-desktop-blog/intune-device-configuration-for-azure-virtual-desktop-multi/ba-p/3294444).
+Deploying Intune device configuration policies from Microsoft Endpoint Manager admin center to Windows multi-session VMs on Azure Virtual Desktop is now generally available. Learn more at [Using Azure Virtual Desktop multi-session with Intune](/mem/intune/fundamentals/azure-virtual-desktop-multi-session) and [our blog post](https://techcommunity.microsoft.com/t5/azure-virtual-desktop-blog/intune-device-configuration-for-azure-virtual-desktop-multi/ba-p/3294444).
### Scheduled Agent Updates public preview
Here's what changed in March 2022:
### Live Captions with Teams on Azure Virtual Desktop now generally available
-Accessibility has always been important to us, so we are pleased to announce that Teams for Azure Virtual Desktop now supports real-time captions. Learn how to use live captions at [Use live captions in a Teams meeting](https://support.microsoft.com/office/use-live-captions-in-a-teams-meeting-4be2d304-f675-4b57-8347-cbd000a21260). For more information, see [our blog post](https://techcommunity.microsoft.com/t5/azure-virtual-desktop-blog/microsoft-teams-live-captions-is-now-generally-available-on/ba-p/3264148).
+Accessibility has always been important to us, so we're pleased to announce that Teams for Azure Virtual Desktop now supports real-time captions. Learn how to use live captions at [Use live captions in a Teams meeting](https://support.microsoft.com/office/use-live-captions-in-a-teams-meeting-4be2d304-f675-4b57-8347-cbd000a21260). For more information, see [our blog post](https://techcommunity.microsoft.com/t5/azure-virtual-desktop-blog/microsoft-teams-live-captions-is-now-generally-available-on/ba-p/3264148).
### Multimedia redirection enhancements now in public preview
The start VM on connect feature is now generally available. This feature helps y
We recently announced a new pricing option for remote app streaming for using Azure Virtual Desktop to deliver apps as a service to your customers and business partners. For example, software vendors can use remote app streaming to deliver apps as a software as a service (SaaS) solution that's accessible to their customers. To learn more about remote app streaming, check out [our documentation](./remote-app-streaming/overview.md).
-From July 14th, 2021 to December 31st, 2021, we're giving customers who use remote app streaming a promotional offer that lets their business partners and customers access Azure Virtual Desktop for no charge. This offer only applies to external user access rights. Regular billing will resume on January 1st, 2022. In the meantime, you can continue to use your existing Windows license entitlements found in licenses like Microsoft 365 E3 or Windows E3. To learn more about this offer, see the [Azure Virtual Desktop pricing page](https://azure.microsoft.com/pricing/details/virtual-desktop/).
+From July 14, 2021 to December 31, 2021, we're giving customers who use remote app streaming a promotional offer that lets their business partners and customers access Azure Virtual Desktop for no charge. This offer only applies to external user access rights. Regular billing will resume on January 1, 2022. In the meantime, you can continue to use your existing Windows license entitlements found in licenses like Microsoft 365 E3 or Windows E3. To learn more about this offer, see the [Azure Virtual Desktop pricing page](https://azure.microsoft.com/pricing/details/virtual-desktop/).
### New Azure Virtual Desktop handbooks
We've expanded our Azure control plane presence to the United Arab Emirates (UAE
### Ending Internet Explorer 11 support
-On September 30th, 2021, the Azure Virtual Desktop web client will no longer support Internet Explorer 11. We recommend you start using the [Microsoft Edge](https://www.microsoft.com/edge?form=MY01R2&OCID=MY01R2&r=1) browser for your web client and remote sessions instead. For more information, see the announcement in [this blog post](https://techcommunity.microsoft.com/t5/windows-virtual-desktop/windows-virtual-desktop-web-client-to-end-support-for-internet/m-p/2369007).
+On September 30, 2021, the Azure Virtual Desktop web client will no longer support Internet Explorer 11. We recommend you start using the [Microsoft Edge](https://www.microsoft.com/edge?form=MY01R2&OCID=MY01R2&r=1) browser for your web client and remote sessions instead. For more information, see the announcement in [this blog post](https://techcommunity.microsoft.com/t5/windows-virtual-desktop/windows-virtual-desktop-web-client-to-end-support-for-internet/m-p/2369007).
### Microsoft Endpoint Manager public preview
To learn more about new features, check out [this blog post](https://techcommuni
### Autoscaling tool update
-The latest version of the autoscaling tool that was in preview is now generally available. This tool uses an Azure automation account and the Azure Logic App to automatically shut down and restart session host VMs within a host pool, reducing infrastructure costs. Learn more at [Scale session hosts using Azure Automation](set-up-scaling-script.md).
+The latest version of the autoscaling tool that was in preview is now generally available. This tool uses an Azure Automation account and the Azure Logic App to automatically shut down and restart session host VMs within a host pool, reducing infrastructure costs. Learn more at [Scale session hosts using Azure Automation](set-up-scaling-script.md).
### Azure portal
Here's what this change does for you:
### PowerShell support
-We've added new AzWvd cmdlets to the Azure PowerShell Az Module with this update. This new module is supported in PowerShell Core, which runs on .NET Core.
+We've added new AzWvd cmdlets to the Azure Az PowerShell module with this update. This new module is supported in PowerShell Core, which runs on .NET Core.
To install the module, follow the instructions in [Set up the PowerShell module for Azure Virtual Desktop](powershell-module.md).
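For example, assuming the AzWvd cmdlets ship in the `Az.DesktopVirtualization` module, installing the module and listing those cmdlets might look like the following sketch (see the linked article for the authoritative steps):

```powershell
# Install the module that contains the AzWvd cmdlets, then list them
Install-Module -Name Az.DesktopVirtualization -Scope CurrentUser
Get-Command -Module Az.DesktopVirtualization -Name Get-AzWvd*
```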
virtual-machines Linux Vm Connect https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux-vm-connect.md
# Connect to a Linux VM
-In Azure there are multiple ways to connect to a Linux virtual machine. The most common practice for connecting to a Linux VM is using the Secure Shell Protocol (SSH). This is done via any standard SSH aware client commonly found in Linux; on Windows you can use [Windows Sub System for Linux](/windows/wsl/about) or any local terminal. You can also use [Azure Cloud Shell](../cloud-shell/overview.md) from any browser.
-
+In Azure, there are multiple ways to connect to a Linux virtual machine. The most common practice for connecting to a Linux VM is using the Secure Shell Protocol (SSH), via any standard SSH client available on Linux and Windows. You can also use [Azure Cloud Shell](../cloud-shell/overview.md) from any browser.
+
This document describes how to connect, via SSH, to a VM that has a public IP. If you need to connect to a VM without a public IP, see [Azure Bastion Service](../bastion/bastion-overview.md). ## Prerequisites
This document describes how to connect, via SSH, to a VM that has a public IP. I
## Connect to the VM
-Once the above prerequisites are met, you're ready to connect to your VM. Open your SSH client of choice.
---- If you're using Linux or macOS, the SSH client is usually terminal or shell.-- For a Windows machine this might be [WSL](/windows/wsl/about), or any local terminal like [PowerShell](/powershell/scripting/overview). If you do not have an SSH client you can [install WSL](/windows/wsl/install), or consider using [Azure Cloud Shell](../cloud-shell/overview.md).
+Once the above prerequisites are met, you are ready to connect to your VM. Open your SSH client of choice. The SSH client command is typically included in Linux, macOS, and Windows. If you are using Windows 7 or older, where Win32 OpenSSH is not included by default, consider installing [WSL](/windows/wsl/about) or using [Azure Cloud Shell](../cloud-shell/overview.md) from the browser.
> [!NOTE] > The following examples assume the SSH key is in the key.pem format. If you used CLI or Azure PowerShell to download your keys, they may be in the id_rsa format.
Once the above prerequisites are met, you're ready to connect to your VM. Open y
3. Success! You should now be connected to your VM. If you're unable to connect using the correct method above, see [Troubleshoot SSH connections](/troubleshoot/azure/virtual-machines/troubleshoot-ssh-connection).
-## [Windows 10 Command Line (cmd.exe, PowerShell etc.)](#tab/Windows)
+## [Windows command line (cmd.exe, PowerShell etc.)](#tab/Windows)
### SSH with a new key pair 1. Locate your private SSH key. 2. Run the SSH command with the following syntax: `ssh -i PATH_TO_PRIVATE_KEY USERNAME@EXTERNAL_IP` For example, if `azureuser` is the username you created and `20.51.230.13` is the public IP address of your VM, type:
- ```bash
+ ```powershell
    ssh -i .\Downloads\myKey.pem azureuser@20.51.230.13 ``` 3. Validate the returned fingerprint. If you've never connected to this VM before, you'll be asked to verify the host's fingerprint. It's tempting to simply accept the fingerprint presented; however, this exposes you to a possible person-in-the-middle attack. You should always validate the host's fingerprint. You only need to do this the first time you connect from a client. To obtain the host fingerprint via the portal, use the Run Command feature to execute the command:
- ```bash
- ssh-keygen -lf /etc/ssh/ssh_host_ecdsa_key.pub | awk '{print $2}'
+ ```azurepowershell-interactive
+ Invoke-AzVMRunCommand -ResourceGroupName 'myResourceGroup' -VMName 'myVM' -CommandId 'RunPowerShellScript' -ScriptString 'ssh-keygen -lf /etc/ssh/ssh_host_ecdsa_key.pub | awk ''{print $2}'''
```
-4. Success! You should now be connected to your VM. If you're unable to connect, see [Troubleshoot SSH connections](/troubleshoot/azure/virtual-machines/troubleshoot-ssh-connection).
+
+4. Success! You should now be connected to your VM. If you are unable to connect, see [Troubleshoot SSH connections](/troubleshoot/azure/virtual-machines/troubleshoot-ssh-connection).
### Password authentication
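If your VM was created with password authentication instead of an SSH key pair, you can connect by supplying only the user name and letting the SSH client prompt for the password. A minimal sketch with placeholder values (`azureuser` and `20.51.230.13` are examples):

```powershell
ssh azureuser@20.51.230.13
```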
Once the above prerequisites are met, you're ready to connect to your VM. Open y
## Next steps To learn how to transfer files to an existing Linux VM, see [Use SCP to move files to and from a Linux VM](./linux/copy-files-to-linux-vm-using-scp.md).-
virtual-machines Add Disk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/add-disk.md
In this example, we are using the nano editor, so when you are done editing the
> [!NOTE] > Removing a data disk later without editing fstab could cause the VM to fail to boot. Most distributions provide the *nofail* and/or *nobootwait* fstab options. These options allow a system to boot even if the disk fails to mount at boot time. Consult your distribution's documentation for more information on these parameters. >
-> The *nofail* option ensures that the VM starts even if the filesystem is corrupt or the disk does not exist at boot time. Without this option, you may encounter behavior as described in [Cannot SSH to Linux VM due to FSTAB errors](/archive/blogs/linuxonazure/cannot-ssh-to-linux-vm-after-adding-data-disk-to-etcfstab-and-rebooting)
+> The *nofail* option ensures that the VM starts even if the filesystem is corrupt or the disk does not exist at boot time. Without this option, you may encounter behavior as described in [Cannot SSH to Linux VM due to FSTAB errors](/troubleshoot/azure/virtual-machines/linux-virtual-machine-cannot-start-fstab-errors)
> > The Azure VM Serial Console can be used for console access to your VM if modifying fstab has resulted in a boot failure. More details are available in the [Serial Console documentation](/troubleshoot/azure/virtual-machines/serial-console-linux).
There are two ways to enable TRIM support in your Linux VM. As usual, consult yo
## Next steps * To ensure your Linux VM is configured correctly, review the [Optimize your Linux machine performance](/previous-versions/azure/virtual-machines/linux/optimization) recommendations.
-* Expand your storage capacity by adding additional disks and [configure RAID](/previous-versions/azure/virtual-machines/linux/configure-raid) for additional performance.
+* Expand your storage capacity by adding additional disks and [configure RAID](/previous-versions/azure/virtual-machines/linux/configure-raid) for additional performance.
virtual-machines Connect Rdp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/windows/connect-rdp.md
+
+ Title: Connect using Remote Desktop to an Azure VM running Windows
+description: Learn how to connect using Remote Desktop and sign on to a Windows VM using the Azure portal and the Resource Manager deployment model.
++++ Last updated : 02/24/2022++++
+# How to connect using Remote Desktop and sign on to an Azure virtual machine running Windows
+
+**Applies to:** :heavy_check_mark: Windows VMs :heavy_check_mark: Flexible scale sets
+
+You can create a remote desktop connection to a virtual machine (VM) running Windows in Azure.
+
+To connect to a Windows VM from a Mac, you will need to install an RDP client for Mac such as [Microsoft Remote Desktop](https://aka.ms/rdmac).
+
+## Prerequisites
+- To connect to a Windows virtual machine via RDP, you need TCP connectivity to the machine on the port where the Remote Desktop service is listening (3389 by default). You can validate that an appropriate port is open for RDP by using the troubleshooter or by checking manually in your VM settings. To check whether the TCP port is open (assuming the default):
+
+ 1. On the page for the VM, select **Networking** from the left menu.
+ 2. On the **Networking** page, check to see if there's a rule that allows TCP on port 3389 from the IP address of the computer you're using to connect to the VM. If the rule exists, you can move to the next section.
+ 3. If there isn't a rule, add one by selecting **Add inbound port rule**.
+ 4. From the **Service** dropdown, select **RDP**.
+ 5. Edit **Priority** and **Source** if necessary.
+ 6. For **Name**, type *Port_3389*.
+ 7. When finished, select **Add**.
+ 8. You should now have an RDP rule in the table of inbound port rules.
+
+- Your VM must have a public IP address. To check if your VM has a public IP address, select **Overview** from the left menu and look at the **Networking** section. If you see an IP address next to **Public IP address**, then your VM has a public IP. To learn more about adding a public IP address to an existing VM, see [Associate a public IP address to a virtual machine](../../virtual-network/ip-services/associate-public-ip-address-vm.md)
+
+- Verify your VM is running. On the Overview tab, in the essentials section, verify the status of the VM is Running. To start the VM, select **Start** at the top of the page.
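+
+As a quick alternative to the portal checks above, you can verify these prerequisites from Azure PowerShell. The following is a minimal sketch; `myResourceGroup`, `myNsg`, and `myVM` are placeholder names:
+
+```azurepowershell
+# Look for an inbound rule that allows TCP 3389 on the VM's network security group
+Get-AzNetworkSecurityGroup -ResourceGroupName 'myResourceGroup' -Name 'myNsg' |
+    Get-AzNetworkSecurityRuleConfig |
+    Where-Object { $_.Direction -eq 'Inbound' -and $_.DestinationPortRange -contains '3389' }
+
+# Confirm the VM has a public IP address
+Get-AzPublicIpAddress -ResourceGroupName 'myResourceGroup' | Select-Object Name, IpAddress
+
+# Confirm the VM is running (look for 'PowerState/running')
+(Get-AzVM -ResourceGroupName 'myResourceGroup' -Name 'myVM' -Status).Statuses |
+    Where-Object Code -like 'PowerState*'
+```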
+## Connect to the virtual machine
+
+1. Go to the [Azure portal](https://portal.azure.com/) to connect to a VM. Search for and select **Virtual machines**.
+2. Select the virtual machine from the list.
+3. At the beginning of the virtual machine page, select **Connect**.
+4. On the **Connect to virtual machine** page, select **RDP**, and then select the appropriate **IP address** and **Port number**. In most cases, the default IP address and port should be used. Select **Download RDP File**. If the VM has a just-in-time policy set, you first need to select the **Request access** button to request access before you can download the RDP file. For more information about the just-in-time policy, see [Manage virtual machine access using the just in time policy](../../security-center/security-center-just-in-time.md).
+5. Open the downloaded RDP file and select **Connect** when prompted. You will get a warning that the `.rdp` file is from an unknown publisher. This is expected. In the **Remote Desktop Connection** window, select **Connect** to continue.
+
+ ![Screenshot of a warning about an unknown publisher.](./media/connect-logon/rdp-warn.png)
+6. In the **Windows Security** window, select **More choices** and then **Use a different account**. Enter the credentials for an account on the virtual machine and then select **OK**.
+
+ **Local account**: This is usually the local account user name and password that you specified when you created the virtual machine. In this case, the domain is the name of the virtual machine and it is entered as *vmname*&#92;*username*.
+
+ **Domain joined VM**: If the VM belongs to a domain, enter the user name in the format *Domain*&#92;*Username*. The account also needs to either be in the Administrators group or have been granted remote access privileges to the VM.
+
+ **Domain controller**: If the VM is a domain controller, enter the user name and password of a domain administrator account for that domain.
+7. Select **Yes** to verify the identity of the virtual machine and finish logging on.
+
+ ![Screenshot showing a message about verifying the identity of the VM.](./media/connect-logon/cert-warning.png)
++
+ > [!TIP]
+ > If the **Connect** button in the portal is grayed-out and you are not connected to Azure via an [Express Route](../../expressroute/expressroute-introduction.md) or [Site-to-Site VPN](../../vpn-gateway/tutorial-site-to-site-portal.md) connection, you will need to create and assign your VM a public IP address before you can use RDP. For more information, see [Public IP addresses in Azure](../../virtual-network/ip-services/public-ip-addresses.md).
+ >
+ >
+
+## Connect to the virtual machine using PowerShell
+
+
+
+If you're using PowerShell and have the Azure PowerShell module installed, you can also connect using the `Get-AzRemoteDesktopFile` cmdlet, as shown below.
+
+This example will immediately launch the RDP connection, taking you through similar prompts as above.
+
+```powershell
+Get-AzRemoteDesktopFile -ResourceGroupName "RgName" -Name "VmName" -Launch
+```
+
+You may also save the RDP file for future use.
+
+```powershell
+Get-AzRemoteDesktopFile -ResourceGroupName "RgName" -Name "VmName" -LocalPath "C:\Path\to\folder"
+```
+
+## Next steps
+If you have difficulty connecting, see [Troubleshoot Remote Desktop connections](/troubleshoot/azure/virtual-machines/troubleshoot-rdp-connection?toc=%2fazure%2fvirtual-machines%2fwindows%2ftoc.json).
virtual-machines Connect Ssh https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/windows/connect-ssh.md
+
+ Title: Connect using SSH to an Azure VM running Windows
+description: Learn how to connect using Secure Shell and sign on to a Windows VM.
++++ Last updated : 06/29/2022++++
+# How to connect using Secure Shell (SSH) and sign on to an Azure virtual machine running Windows
+
+**Applies to:** :heavy_check_mark: Windows VMs :heavy_check_mark: Flexible scale sets
+
+The [Win32 OpenSSH](https://github.com/PowerShell/Win32-OpenSSH) project makes remote connectivity
+using Secure Shell ubiquitous by providing native support in Windows. The capability is provided in
+Windows Server version 2019 and later, and can be added to older versions of Windows using a virtual
+machine (VM) extension.
+
+The examples below use variables. You can set variables in your environment as follows.
+
+| Shell | Example
+|-|-
+| Bash/ZSH | myResourceGroup='resGroup10'
+| PowerShell | $myResourceGroup='resGroup10'
+
+## Install SSH
+
+First, you'll need to enable SSH on your Windows machine.
+
+**Windows Server 2019 and newer**
+
+Following the Windows Server documentation page
+[Get started with OpenSSH](/windows-server/administration/openssh/openssh_install_firstuse),
+run the command `Add-WindowsCapability -Online -Name OpenSSH.Server~~~~0.0.1.0`
+to enable the built-in capability, start the service, and open the Windows Firewall port.
+
+You can use the Azure RunCommand extension to complete this task.
+
+# [Azure CLI](#tab/azurecli)
+
+```azurecli-interactive
+az vm run-command invoke -g $myResourceGroup -n $myVM --command-id RunPowerShellScript --scripts "Add-WindowsCapability -Online -Name OpenSSH.Server~~~~0.0.1.0"
+```
+
+# [Azure PowerShell](#tab/azurepowershell-interactive)
+
+```azurepowershell-interactive
+Invoke-AzVMRunCommand -ResourceGroupName $myResourceGroup -VMName $myVM -CommandId 'RunPowerShellScript' -ScriptString "Add-WindowsCapability -Online -Name OpenSSH.Server~~~~0.0.1.0"
+```
+
+# [ARM template](#tab/json)
+
+```json
+{
+ "type": "Microsoft.Compute/virtualMachines/runCommands",
+ "apiVersion": "2022-03-01",
+ "name": "[concat(parameters('VMName'), '/RunPowerShellScript')]",
+ "location": "[parameters('location')]",
+ "properties": {
+ "source": {
+ "script": "Add-WindowsCapability -Online -Name OpenSSH.Server~~~~0.0.1.0",
+ },
+ }
+}
+```
+
+# [Bicep](#tab/bicep)
+
+```bicep
+resource runPowerShellScript 'Microsoft.Compute/virtualMachines/runCommands@2022-03-01' = {
+ name: 'RunPowerShellScript'
+ location: resourceGroup().location
+ parent: virtualMachine
+ properties: {
+ source: {
+ script: 'Add-WindowsCapability -Online -Name OpenSSH.Server~~~~0.0.1.0'
+ }
+ }
+}
+```
++++
+**Windows Server 2016 and older**
+
+- Deploy the SSH extension for Windows. The extension provides an automated installation of the
+ Win32 OpenSSH solution, similar to enabling the capability in newer versions of Windows. Use the
+ following examples to deploy the extension.
+
+# [Azure CLI](#tab/azurecli)
+
+```azurecli-interactive
+az vm extension set --resource-group $myResourceGroup --vm-name $myVM --name WindowsOpenSSH --publisher Microsoft.Azure.OpenSSH --version 3.0
+```
+
+# [Azure PowerShell](#tab/azurepowershell-interactive)
+
+```azurepowershell-interactive
+Set-AzVMExtension -ResourceGroupName $myResourceGroup -VMName $myVM -Name 'OpenSSH' -Publisher 'Microsoft.Azure.OpenSSH' -Type 'WindowsOpenSSH' -TypeHandlerVersion '3.0'
+```
+
+# [ARM template](#tab/json)
+
+```json
+{
+ "type": "Microsoft.Compute/virtualMachines/extensions",
+ "name": "[concat(parameters('VMName'), '/WindowsOpenSSH')]",
+ "apiVersion": "2020-12-01",
+ "location": "[parameters('location')]",
+ "properties": {
+ "publisher": "Microsoft.Azure.OpenSSH",
+ "type": "WindowsOpenSSH",
+ "typeHandlerVersion": "3.0",
+ }
+}
+```
+
+# [Bicep](#tab/bicep)
+
+```bicep
+resource windowsOpenSSHExtension 'Microsoft.Compute/virtualMachines/extensions@2020-12-01' = {
+ parent: virtualMachine
+ name: 'WindowsOpenSSH'
+ location: resourceGroup().location
+ properties: {
+ publisher: 'Microsoft.Azure.OpenSSH'
+ type: 'WindowsOpenSSH'
+ typeHandlerVersion: '3.0'
+ }
+}
+```
+++
+## Open TCP port
+
+Ensure the appropriate port (by default, TCP 22) is open to allow connectivity to the VM.
++
+# [Azure CLI](#tab/azurecli)
+
+```azurecli-interactive
+az network nsg rule create -g $myResourceGroup --nsg-name $myNSG -n allow-SSH --priority 1000 --source-address-prefixes 208.130.28.4/32 --destination-port-ranges 22 --protocol TCP
+```
+
+# [Azure PowerShell](#tab/azurepowershell-interactive)
+
+```azurepowershell-interactive
+Get-AzNetworkSecurityGroup -Name $MyNSG -ResourceGroupName $myResourceGroup | Add-AzNetworkSecurityRuleConfig -Name allow-SSH -access Allow -Direction Inbound -Priority 1000 -SourceAddressPrefix 208.130.28.4/32 -SourcePortRange '*' -DestinationAddressPrefix '*' -DestinationPortRange 22 -Protocol TCP | Set-AzNetworkSecurityGroup
+```
+
+# [ARM template](#tab/json)
+
+```json
+{
+ "type": "Microsoft.Network/networkSecurityGroups/securityRules",
+ "apiVersion": "2021-08-01",
+ "name": "allow-SSH",
+ "properties": {
+ "access": "Allow",
+ "destinationAddressPrefix": "*",
+ "destinationPortRange": "22",
+ "direction": "Inbound",
+ "priority": "1000",
+ "protocol": "TCP",
+ "sourceAddressPrefix": "208.130.28.4/32",
+ "sourcePortRange": "*"
+ }
+}
+```
+
+# [Bicep](#tab/bicep)
+
+```bicep
+resource allowSSH 'Microsoft.Network/networkSecurityGroups/securityRules@2021-08-01' = {
+ name: 'allowSSH'
+ parent: MyNSGSymbolicName
+ properties: {
+ access: 'Allow'
+ destinationAddressPrefix: '*'
+ destinationPortRanges: [
+ '22'
+ ]
+ direction: 'Inbound'
+ priority: 1000
+ protocol: 'TCP'
+ sourceAddressPrefix: '208.130.28.4/32'
+ sourcePortRange: '*'
+ }
+}
+```
+++
+- Your VM must have a public IP address. To check if your VM has a public IP address, select
+ **Overview** from the left menu and look at the **Networking** section. If you see an IP address
+ next to **Public IP address**, then your VM has a public IP. To learn more about adding a public IP
+ address to an existing VM, see
+ [Associate a public IP address to a virtual machine](../../virtual-network/ip-services/associate-public-ip-address-vm.md)
+
+- Verify your VM is running. On the Overview tab, in the essentials section, verify the status of
+ the VM is Running. To start the VM, select **Start** at the top of the page.
+
+## Authentication
+
+You can authenticate to Windows machines using either a username and password or SSH keys. Azure doesn't support provisioning public keys to Windows machines automatically, but you can copy the key using the RunCommand extension.
+++
+### Copy a public key using the RunCommand extension
+
+The RunCommand extension provides an easy solution to copying a public key into Windows machines
+and making sure the file has correct permissions.
+
+# [Azure CLI](#tab/azurecli)
+
+```azurecli-interactive
+az vm run-command invoke -g $myResourceGroup -n $myVM --command-id RunPowerShellScript --scripts "MYPUBLICKEY | Add-Content 'C:\ProgramData\ssh\administrators_authorized_keys';icacls.exe 'C:\ProgramData\ssh\administrators_authorized_keys' /inheritance:r /grant 'Administrators:F' /grant 'SYSTEM:F'"
+```
+
+# [Azure PowerShell](#tab/azurepowershell-interactive)
+
+```azurepowershell-interactive
+Invoke-AzVMRunCommand -ResourceGroupName $myResourceGroup -VMName $myVM -CommandId 'RunPowerShellScript' -ScriptString "MYPUBLICKEY | Add-Content 'C:\ProgramData\ssh\administrators_authorized_keys';icacls.exe 'C:\ProgramData\ssh\administrators_authorized_keys' /inheritance:r /grant 'Administrators:F' /grant 'SYSTEM:F'"
+```
+
+# [ARM template](#tab/json)
+
+```json
+{
+ "type": "Microsoft.Compute/virtualMachines/runCommands",
+ "apiVersion": "2022-03-01",
+ "name": "[concat(parameters('VMName'), '/RunPowerShellScript')]",
+ "location": "[parameters('location')]",
+ "properties": {
+ "source": {
+ "script": "MYPUBLICKEY | Add-Content 'C:\\ProgramData\\ssh\\administrators_authorized_keys';icacls.exe 'C:\\ProgramData\\ssh\\administrators_authorized_keys' /inheritance:r /grant 'Administrators:F' /grant 'SYSTEM:F'",
+ },
+ }
+}
+```
+
+# [Bicep](#tab/bicep)
+
+```bicep
+resource runPowerShellScript 'Microsoft.Compute/virtualMachines/runCommands@2022-03-01' = {
+ name: 'RunPowerShellScript'
+ location: resourceGroup().location
+ parent: virtualMachine
+ properties: {
+ source: {
+ script: "MYPUBLICKEY | Add-Content 'C:\ProgramData\ssh\administrators_authorized_keys';icacls.exe 'C:\ProgramData\ssh\administrators_authorized_keys' /inheritance:r /grant 'Administrators:F' /grant 'SYSTEM:F'"
+ }
+ }
+}
+```
+++
+## Connect using Az CLI
+
+Connect to Windows machines using the `az ssh` commands.
+
+```azurecli-interactive
+az ssh vm -g $myResourceGroup -n $myVM --local-user $myUsername
+```
+
+It's also possible to create a network tunnel for specific TCP ports through the SSH connection. A good use case for this is Remote Desktop, which defaults to port 3389.
+
+```azurecli-interactive
+az ssh vm -g $myResourceGroup -n $myVM --local-user $myUsername -- -L 3389:localhost:3389
+```
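+
+With that tunnel open, you can point a Remote Desktop client at the forwarded local port from another terminal. A minimal sketch using the built-in Windows client, `mstsc`, where the port matches the tunnel above:
+
+```powershell
+# Connect the local Remote Desktop client through the SSH tunnel
+mstsc /v:localhost:3389
+```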
+
+### Connect from Azure portal
+
+1. Go to the [Azure portal](https://portal.azure.com/) to connect to a VM. Search for and select **Virtual machines**.
+2. Select the virtual machine from the list.
+3. Select **Connect** from the left menu.
+4. Select the **SSH** tab. If the VM has a just-in-time policy set, you first need to select the **Request access** button to request access before you can connect. For more information about the just-in-time policy, see [Manage virtual machine access using the just in time policy](../../security-center/security-center-just-in-time.md).
++
+## Next steps
+To learn how to transfer files to an existing VM, see [Use SCP to move files to and from a Linux VM](../linux/copy-files-to-linux-vm-using-scp.md). The same steps also work for Windows machines.
virtual-machines Connect Winrm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/windows/connect-winrm.md
+
+ Title: Connect using WinRM to an Azure VM running Windows
+description: Set up WinRM access for use with an Azure virtual machine created in the Resource Manager deployment model.
+++ Last updated : 3/25/2022++++
+# Setting up WinRM access for Virtual Machines in Azure Resource Manager
+**Applies to:** :heavy_check_mark: Windows VMs :heavy_check_mark: Flexible scale sets
++
+Here are the steps you need to take to set up a VM with WinRM connectivity:
+
+1. Create a Key Vault
+2. Create a self-signed certificate
+3. Upload your self-signed certificate to Key Vault
+4. Get the URL for your self-signed certificate in the Key Vault
+5. Reference your self-signed certificate's URL while creating a VM
+++
+## Step 1: Create a Key Vault
+You can use the following command to create the Key Vault:
+
+```azurepowershell
+New-AzKeyVault -VaultName "<vault-name>" -ResourceGroupName "<rg-name>" -Location "<vault-location>" -EnabledForDeployment -EnabledForTemplateDeployment
+```
+
+## Step 2: Create a self-signed certificate
+You can create a self-signed certificate using this PowerShell script:
+
+```azurepowershell
+$certificateName = "somename"
+
+$thumbprint = (New-SelfSignedCertificate -DnsName $certificateName -CertStoreLocation Cert:\CurrentUser\My -KeySpec KeyExchange).Thumbprint
+
+$cert = (Get-ChildItem -Path cert:\CurrentUser\My\$thumbprint)
+
+$password = Read-Host -Prompt "Please enter the certificate password." -AsSecureString
+
+Export-PfxCertificate -Cert $cert -FilePath ".\$certificateName.pfx" -Password $password
+```
+
+## Step 3: Upload your self-signed certificate to the Key Vault
+Before you upload the certificate to the Key Vault created in step 1, convert it into a format that the Microsoft.Compute resource provider understands. The following PowerShell script does that for you:
+
+```azurepowershell
+$fileName = "<Path to the .pfx file>"
+$fileContentBytes = Get-Content $fileName -Encoding Byte
+$fileContentEncoded = [System.Convert]::ToBase64String($fileContentBytes)
+[System.Collections.HashTable]$TableForJSON = @{
+ "data" = $fileContentEncoded;
+ "dataType" = "pfx";
+ "password" = "<password>";
+}
+[System.String]$jsonObject = $TableForJSON | ConvertTo-Json
+$encoding = [System.Text.Encoding]::UTF8
+$jsonEncoded = [System.Convert]::ToBase64String($encoding.GetBytes($jsonObject))
+$secret = ConvertTo-SecureString -String $jsonEncoded -AsPlainText -Force
+Set-AzKeyVaultSecret -VaultName "<vault name>" -Name "<secret name>" -SecretValue $secret
+```
+
+## Step 4: Get the URL for your self-signed certificate in the Key Vault
+The Microsoft.Compute resource provider needs a URL to the secret inside the Key Vault while provisioning the VM. This enables the Microsoft.Compute resource provider to download the secret and create the equivalent certificate on the VM.
+
+> [!NOTE]
+> The URL of the secret needs to include the version as well. An example URL looks like this:
+> https:\//contosovault.vault.azure.net:443/secrets/contososecret/01h9db0df2cd4300a20ence585a6s7ve
+
+#### Templates
+You can get the secret's URL in the template using the following code:
+
+```json
+"certificateUrl": "[reference(resourceId(resourceGroup().name, 'Microsoft.KeyVault/vaults/secrets', '<vault-name>', '<secret-name>'), '2015-06-01').secretUriWithVersion]"
+```
+
+#### PowerShell
+You can get this URL using the following PowerShell command:
+
+```azurepowershell
+$secretURL = (Get-AzKeyVaultSecret -VaultName "<vault name>" -Name "<secret name>").Id
+```
+
+## Step 5: Reference your self-signed certificate's URL while creating a VM
+#### Azure Resource Manager Templates
+When you create a VM through templates, the certificate is referenced in the secrets section and the winRM section, as shown below:
+
+```json
+"osProfile": {
+ ...
+ "secrets": [
+ {
+ "sourceVault": {
+ "id": "<resource id of the Key Vault containing the secret>"
+ },
+ "vaultCertificates": [
+ {
+ "certificateUrl": "<URL for the certificate you got in Step 4>",
+ "certificateStore": "<Name of the certificate store on the VM>"
+ }
+ ]
+ }
+ ],
+ "windowsConfiguration": {
+ ...
+ "winRM": {
+ "listeners": [
+ {
+ "protocol": "http"
+ },
+ {
+ "protocol": "https",
+ "certificateUrl": "<URL for the certificate you got in Step 4>"
+ }
+ ]
+ },
+ ...
+ }
+ },
+```
+
+A sample template for the above can be found at [vm-winrm-keyvault-windows](https://azure.microsoft.com/resources/templates/vm-winrm-keyvault-windows/).
+
+Source code for this template can be found on [GitHub](https://github.com/Azure/azure-quickstart-templates/tree/master/demos/vm-winrm-keyvault-windows)
+
+#### PowerShell
+```azurepowershell
+$vm = New-AzVMConfig -VMName "<VM name>" -VMSize "<VM Size>"
+$credential = Get-Credential
+$secretURL = (Get-AzKeyVaultSecret -VaultName "<vault name>" -Name "<secret name>").Id
+$vm = Set-AzVMOperatingSystem -VM $vm -Windows -ComputerName "<Computer Name>" -Credential $credential -WinRMHttp -WinRMHttps -ProvisionVMAgent -WinRMCertificateUrl $secretURL
+$sourceVaultId = (Get-AzKeyVault -ResourceGroupName "<Resource Group name>" -VaultName "<Vault Name>").ResourceId
+$CertificateStore = "My"
+$vm = Add-AzVMSecret -VM $vm -SourceVaultId $sourceVaultId -CertificateStore $CertificateStore -CertificateUrl $secretURL
+```
+
+## Step 6: Connecting to the VM
+Before you can connect to the VM, you'll need to make sure your machine is configured for WinRM remote management. Start PowerShell as an administrator and run the following command to make sure you're set up:
+
+```azurepowershell
+Enable-PSRemoting -Force
+```
+
+> [!NOTE]
+> You might need to make sure the WinRM service is running if the above does not work. You can do that using `Get-Service WinRM`
+>
+>
+
+Once the setup is done, you can connect to the VM using the following command:
+
+```azurepowershell
+$cred = Get-Credential   # prompt for the VM's user name and password
+Enter-PSSession -ConnectionUri https://<public-ip-dns-of-the-vm>:5986 -Credential $cred -SessionOption (New-PSSessionOption -SkipCACheck -SkipCNCheck -SkipRevocationCheck) -Authentication Negotiate
+```
virtual-network-manager Concept Use Cases https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network-manager/concept-use-cases.md
+
+ Title: Common use cases for Azure Virtual Network Manager
+description: This article covers common use cases for customers using Azure Virtual Network Manager (AVNM).
+++ Last updated : 07/06/2022++
+# Customer Intent: As a network admin, I need to know when I should use Azure Virtual Network Manager for managing virtual networks across my organization in a scalable, flexible, and secure manner with minimal administrative overhead.
++
+# Common use cases for Azure Virtual Network Manager
+
+Learn about use cases for Azure Virtual Network Manager, including managing connectivity of virtual networks and securing network traffic.
+
+> [!IMPORTANT]
+> Azure Virtual Network Manager is currently in public preview.
+> This preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
+> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
++
+## Creating topology and connectivity
+Connectivity configuration allows you to create different network topologies based on your network needs. You create a connectivity configuration by adding new or existing virtual networks into [network groups](concept-network-groups.md) and creating a topology that meets your needs. The connectivity configuration offers three topology options: mesh, hub and spoke, or hub and spoke with direct connectivity between spoke virtual networks.
+
+### Mesh topology
+When a mesh topology is deployed, all virtual networks have direct connectivity with each other. They don't need to go through other hops on the network to communicate. Mesh topology is useful when all the virtual networks need to communicate directly with each other.
+
+### Hub and spoke topology
+Hub and spoke topology is recommended when you're deploying central infrastructure services in a hub virtual network that are shared by spoke virtual networks. This topology can be more efficient than having these common components in all spoke virtual networks.
+
+### Hub and spoke topology with direct connectivity between spoke virtual networks
+This topology combines the two above topologies. It's recommended when you have common central infrastructure in the hub, and you want direct communication between all spokes. Direct connectivity helps you reduce the latency caused by extra network hops when going through a hub.
+
+### Maintaining topology
+AVNM automatically maintains the desired topology you defined in the connectivity configuration when changes are made to your infrastructure. For example, when you add a new spoke to the topology, AVNM can handle the changes necessary to create the connectivity to the spoke and its virtual networks.
++
+## Security
+
+With Azure Virtual Network Manager, you create [security admin rules](concept-security-admins.md) to enforce security policies across virtual networks in your organization. Security admin rules take precedence over rules defined by network security groups, and they're applied first when analyzing traffic as seen in the following diagram:
+Common uses include:
+
+- Create standard rules that must be applied and enforced on all existing VNets and newly created VNets.
+- Create security rules that can't be modified and enforce company/organizational level rules.
+- Enforce security protection to prevent users from opening high-risk ports.
+- Create default rules for everyone in the company/organization so that administrators can prevent security threats caused by NSG misconfiguration or by forgetting to apply necessary NSGs.
+- Create security boundaries using security admin rules as an administrator and let the owners of the virtual networks configure their NSGs so the NSGs won't break company policies.
+- Force-allow traffic to and from critical services so that other users can't accidentally block necessary traffic, such as program updates.
+
+For a walk-through of use cases, see [Securing Your Virtual Networks with Azure Virtual Network Manager - Microsoft Tech Community](https://techcommunity.microsoft.com/t5/azure-networking-blog/securing-your-virtual-networks-with-azure-virtual-network/ba-p/3353366).
+
+## Next steps
+- Create an [Azure Virtual Network Manager](create-virtual-network-manager-portal.md) instance using the Azure portal.
+- Learn more about [network groups](concept-network-groups.md) in Azure Virtual Network Manager.
+- Learn what you can do with a [connectivity configuration](concept-connectivity-configuration.md).
+- Learn more about [security admin configurations](concept-security-admins.md).
+
web-application-firewall Waf Front Door Drs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/web-application-firewall/afds/waf-front-door-drs.md
Front Door.
|944110|Possible Session Fixation Attack: Setting Cookie Values in HTML| |944120|Remote Command Execution: Java serialization (CVE-2015-5842)| |944130|Suspicious Java class detected|
-|944200|Magic bytes Detected, probable java serialization in use|
-|944210|Magic bytes Detected Base64 Encoded, probable java serialization in use|
+|944200|Magic bytes Detected, probable Java serialization in use|
+|944210|Magic bytes Detected Base64 Encoded, probable Java serialization in use|
|944240|Remote Command Execution: Java serialization and Log4j vulnerability ([CVE-2021-44228](https://www.cve.org/CVERecord?id=CVE-2021-44228), [CVE-2021-45046](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-45046))| |944250|Remote Command Execution: Suspicious Java method detected|