Updates from: 07/30/2021 03:05:42
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-b2c Configure Authentication Sample Angular Spa App https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/configure-authentication-sample-angular-spa-app.md
+
+ Title: Configure authentication in a sample Angular SPA application using Azure Active Directory B2C
+description: Using Azure Active Directory B2C to sign in and sign up users in an Angular SPA application.
+ Last updated: 07/29/2021
+# Configure authentication in a sample Angular single-page application using Azure Active Directory B2C
+
+This article uses a sample Angular single-page application (SPA) to illustrate how to add Azure Active Directory B2C (Azure AD B2C) authentication to your Angular apps.
+
+## Overview
+
+OpenID Connect (OIDC) is an authentication protocol built on OAuth 2.0 that you can use to securely sign a user in to an application. This Angular sample uses [MSAL Angular](https://github.com/AzureAD/microsoft-authentication-library-for-js/tree/dev/lib/msal-angular) and the [MSAL Browser](https://github.com/AzureAD/microsoft-authentication-library-for-js/tree/dev/lib/msal-browser). MSAL is a Microsoft-provided library that simplifies adding authentication and authorization support to Angular SPA apps.
+
+### Sign in flow
+
+The sign-in flow involves the following steps:
+
+1. The user navigates to the app and selects **Sign-in**.
+1. The app initiates an authentication request, and redirects the user to Azure AD B2C.
+1. The user [signs up or signs in](add-sign-up-and-sign-in-policy.md), [resets the password](add-password-reset-policy.md), or signs in with a [social account](add-identity-provider.md).
+1. Upon successful sign-in, Azure AD B2C returns an authorization code to the app. The app takes the following actions:
+ 1. Exchanges the authorization code for an ID token, an access token, and a refresh token.
+ 1. Reads the ID token claims.
+ 1. Stores the access token and refresh token in an in-memory cache for later use. The access token allows the user to call protected resources, such as a web API. The refresh token is used to acquire a new access token.
+
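+For illustration, here's a minimal sketch of step 4 with the MSAL Browser library. It isn't part of the sample; the placeholder values are assumptions, and the sample app in this article wires this up for you through MSAL Angular.
+
+```typescript
+import { PublicClientApplication, InteractionRequiredAuthError } from "@azure/msal-browser";
+
+// Placeholder values; use the settings from your own app registration.
+const msalInstance = new PublicClientApplication({
+  auth: {
+    clientId: "<your-MyApp-application-ID>",
+    authority: "https://<your-tenant-name>.b2clogin.com/<your-tenant-name>.onmicrosoft.com/<your-sign-up-sign-in-policy>",
+    knownAuthorities: ["<your-tenant-name>.b2clogin.com"],
+  },
+});
+
+// Returns an access token for the given scopes. MSAL serves the token from its
+// in-memory cache, and uses the cached refresh token to renew it when it expires.
+async function getAccessToken(scopes: string[]): Promise<string> {
+  const account = msalInstance.getAllAccounts()[0];
+  if (!account) {
+    throw new Error("No signed-in account; start an interactive sign-in first.");
+  }
+  try {
+    const result = await msalInstance.acquireTokenSilent({ scopes, account });
+    return result.accessToken;
+  } catch (error) {
+    if (error instanceof InteractionRequiredAuthError) {
+      // Silent renewal failed; fall back to an interactive request.
+      await msalInstance.acquireTokenRedirect({ scopes });
+    }
+    throw error;
+  }
+}
+```
+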
+### App registration overview
+
+To enable your app to sign in with Azure AD B2C and call a web API, you must register two applications in the Azure AD B2C directory.
+
+- The **Single page application** (Angular) registration enables your app to sign in with Azure AD B2C. During app registration, you specify the *Redirect URI*. The redirect URI is the endpoint to which the user is redirected after they authenticate with Azure AD B2C. The app registration process generates an *Application ID*, also known as the *client ID*, that uniquely identifies your app. For example, **App ID: 1**.
+
+- The **web API** registration enables your app to call a protected web API. The registration exposes the web API permissions (scopes). The app registration process generates an *Application ID* that uniquely identifies your web API. For example, **App ID: 2**. Grant your app (App ID: 1) permissions to the web API scopes (App ID: 2).
+
+The following diagram describes the app registrations and the application architecture.
+
+![Diagram describes a SPA app with web API, registrations and tokens.](./media/configure-authentication-sample-angular-spa-app/spa-app-with-api-architecture.png)
+
+### Call to a web API
++
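+After sign-in, the app acquires an access token for the web API scopes and attaches it as a bearer token in the `Authorization` header of each request. In the Angular sample, the MSAL interceptor does this automatically for the endpoints listed in its protected resource map. As a minimal manual sketch (the endpoint matches the to-do list API used later in this article):
+
+```typescript
+// Assumes you already acquired an access token for the web API scopes.
+async function callTodoListApi(accessToken: string): Promise<unknown> {
+  const response = await fetch("http://localhost:5000/api/todolist", {
+    headers: {
+      // The web API validates this bearer token before returning data.
+      Authorization: `Bearer ${accessToken}`,
+    },
+  });
+  if (!response.ok) {
+    throw new Error(`Web API call failed with status ${response.status}`);
+  }
+  return response.json();
+}
+```
+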
+### Sign out flow
++
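+When the user selects **Sign out**, the app clears its token cache and redirects the user to the Azure AD B2C sign-out endpoint to end the session. A minimal sketch with MSAL Browser, assuming the `msalConfig` object defined later in this article and a local post-sign-out URI:
+
+```typescript
+import { PublicClientApplication } from "@azure/msal-browser";
+import { msalConfig } from "./auth-config";
+
+const msalInstance = new PublicClientApplication(msalConfig);
+
+async function signOut(): Promise<void> {
+  // Clears the cached tokens and redirects to the B2C sign-out endpoint.
+  // The post-logout URI must be registered on the app registration.
+  await msalInstance.logoutRedirect({ postLogoutRedirectUri: "http://localhost:4200" });
+}
+```
+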
+## Prerequisites
+
+A computer that's running:
+
+* [Visual Studio Code](https://code.visualstudio.com/), or another code editor
+* [Node.js runtime](https://nodejs.org/en/download/) and [npm](https://docs.npmjs.com/downloading-and-installing-node-js-and-npm)
+* [Angular CLI](https://angular.io/cli)
+
+## Step 1: Configure your user flow
++
+## Step 2: Register your Angular SPA and API
+
+In this step, you create the Angular SPA app and the web API application registrations, and specify the scopes of your web API.
+
+### 2.1 Register the web API application
++
+### 2.2 Configure scopes
++
+### 2.3 Register the Angular app
+
+Follow these steps to create the Angular app registration:
+
+1. Sign in to the [Azure portal](https://portal.azure.com).
+1. Select the **Directory + Subscription** icon in the portal toolbar, and then select the directory that contains your Azure AD B2C tenant.
+1. In the Azure portal, search for and select **Azure AD B2C**.
+1. Select **App registrations**, and then select **New registration**.
+1. Enter a **Name** for the application. For example, *MyApp*.
+1. Under **Supported account types**, select **Accounts in any identity provider or organizational directory (for authenticating users with user flows)**.
+1. Under **Redirect URI**, select **Single-page application (SPA)**, and then enter `http://localhost:4200` in the URL text box.
+1. Under **Permissions**, select the **Grant admin consent to openid and offline access permissions** check box.
+1. Select **Register**.
+1. Record the **Application (client) ID** for use in a later step when you configure the web application.
+ ![Screenshot showing how to get the Angular application ID.](./media/configure-authentication-sample-angular-spa-app/get-azure-ad-b2c-app-id.png)
+
+### 2.5 Grant permissions
++
+## Step 3: Get the Angular sample code
+
+This sample demonstrates how an Angular single-page application can use Azure AD B2C for user sign-up and sign-in. Then the app acquires an access token and calls a protected web API.
+
+[Download a zip file](https://github.com/Azure-Samples/ms-identity-javascript-angular-tutorial/archive/refs/heads/main.zip) or clone the sample from the [GitHub repo](https://github.com/Azure-Samples/ms-identity-javascript-angular-tutorial/):
+
+```console
+git clone https://github.com/Azure-Samples/ms-identity-javascript-angular-tutorial.git
+```
+
+### 3.1 Configure the Angular sample
+
+Now that you've obtained the SPA app sample, update the code with your Azure AD B2C and web API values. In the sample folder, under the `src/app` folder, open the `auth-config.ts` file, and update the following keys with their corresponding values:
++
+|Section |Key |Value |
+|---|---|---|
+| b2cPolicies | names |The user flow or custom policy you created in [step 1](#step-1-configure-your-user-flow). |
+| b2cPolicies | authorities | Replace `your-tenant-name` with your Azure AD B2C [tenant name](tenant-management.md#get-your-tenant-name). For example, `contoso.onmicrosoft.com`. Then, replace the policy name with the user flow or custom policy you created in [step 1](#step-1-configure-your-user-flow). For example, `https://<your-tenant-name>.b2clogin.com/<your-tenant-name>.onmicrosoft.com/<your-sign-in-sign-up-policy>`. |
+| b2cPolicies | authorityDomain|Your Azure AD B2C [tenant name](tenant-management.md#get-your-tenant-name). For example, `contoso.onmicrosoft.com`. |
+| Configuration | clientId | The Angular application ID from [step 2.3](#23-register-the-angular-app). |
+| protectedResources| endpoint| The URL of the web API, `http://localhost:5000/api/todolist`. |
+| protectedResources| scopes| The web API scopes you created in [step 2.2](#22-configure-scopes). For example, `b2cScopes: ["https://<your-tenant-name>.onmicrosoft.com/tasks-api/tasks.read"]`. |
+
+Your resulting *src/app/auth-config.ts* code should look similar to the following sample:
+
+```typescript
+export const b2cPolicies = {
+ names: {
+ signUpSignIn: "b2c_1_susi_reset_v2",
+ editProfile: "b2c_1_edit_profile_v2"
+ },
+ authorities: {
+ signUpSignIn: {
+ authority: "https://your-tenant-name.b2clogin.com/your-tenant-name.onmicrosoft.com/b2c_1_susi_reset_v2",
+ },
+ editProfile: {
+ authority: "https://your-tenant-name.b2clogin.com/your-tenant-name.onmicrosoft.com/b2c_1_edit_profile_v2"
+ }
+ },
+ authorityDomain: "your-tenant-name.b2clogin.com"
+ };
+
+
+export const msalConfig: Configuration = {
+ auth: {
+ clientId: '<your-MyApp-application-ID>',
+ authority: b2cPolicies.authorities.signUpSignIn.authority,
+ knownAuthorities: [b2cPolicies.authorityDomain],
+ redirectUri: '/',
+ },
+ // More configuration here
+ }
+
+export const protectedResources = {
+ todoListApi: {
+ endpoint: "http://localhost:5000/api/todolist",
+ scopes: ["https://your-tenant-name.onmicrosoft.com/api/tasks.read"],
+ },
+}
+```
+
+## Step 4: Get the web API sample code
+
+Now that the web API is registered and you've defined its scopes, configure the web API code to work with your Azure AD B2C tenant.
+
+[Download a \*.zip archive](https://github.com/Azure-Samples/active-directory-b2c-javascript-nodejs-webapi/archive/master.zip), or clone the sample web API project from GitHub. You can also browse directly to the [Azure-Samples/active-directory-b2c-javascript-nodejs-webapi](https://github.com/Azure-Samples/active-directory-b2c-javascript-nodejs-webapi) project on GitHub.
+
+```console
+git clone https://github.com/Azure-Samples/active-directory-b2c-javascript-nodejs-webapi.git
+```
+
+### 4.1 Configure the web API
+
+In the sample folder, open the *config.json* file. This file contains information about your Azure AD B2C identity provider. The web API app uses this information to validate the access token the web app passes as a bearer token. Update the following properties of the app settings:
+
+|Section |Key |Value |
+|---|---|---|
+|credentials|tenantName| The first part of your Azure AD B2C [tenant name](tenant-management.md#get-your-tenant-name). For example, `contoso`.|
+|credentials|clientID| The web API application ID from step [2.1](#21-register-the-web-api-application). In the [diagram above](#app-registration-overview), it's the application with *App ID: 2*.|
+|credentials| issuer| (Optional) The token issuer `iss` claim value. Azure AD B2C by default returns the token in the following format: `https://<your-tenant-name>.b2clogin.com/<your-tenant-ID>/v2.0/`. Replace the `<your-tenant-name>` with the first part of your Azure AD B2C [tenant name](tenant-management.md#get-your-tenant-name). Replace the `<your-tenant-ID>` with your [Azure AD B2C tenant ID](tenant-management.md#get-your-tenant-id). |
+|policies|policyName|The user flow or custom policy you created in [step 1](#step-1-configure-your-user-flow). If your application uses multiple user flows or custom policies, specify only one. For example, the sign-up or sign-in user flow.|
+| resource| scope | The scopes of your web API application registration from [step 2.5](#25-grant-permissions). |
+
+Your final configuration file should look like the following JSON:
+
+```json
+{
+ "credentials": {
+ "tenantName": "<your-tenant-namee>",
+ "clientID": "<your-webapi-application-ID>",
+ "issuer": "https://<your-tenant-name>.b2clogin.com/<your-tenant-ID>/v2.0/"
+ },
+ "policies": {
+ "policyName": "b2c_1_susi"
+ },
+ "resource": {
+ "scope": ["tasks.read"]
+ },
+ // More settings here
+}
+```
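+
+The web API uses these values to locate your policy's OpenID Connect metadata document, which contains the token-signing keys used to validate the access token. As an illustration (this helper is not part of the sample), the metadata URL is typically derived from the configuration like this:
+
+```typescript
+// Illustrative sketch: derive the B2C OpenID Connect metadata URL from config.json values.
+interface B2cConfig {
+  credentials: { tenantName: string; clientID: string };
+  policies: { policyName: string };
+}
+
+function metadataUrl(config: B2cConfig): string {
+  const { tenantName } = config.credentials;
+  const { policyName } = config.policies;
+  // Azure AD B2C publishes a per-policy metadata document at this well-known location.
+  return `https://${tenantName}.b2clogin.com/${tenantName}.onmicrosoft.com/${policyName}/v2.0/.well-known/openid-configuration`;
+}
+```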
+
+## Step 5: Run the Angular SPA and web API
+
+You're now ready to test the Angular app's scoped access to the web API. In this step, run both the web API and the sample Angular application on your local machine. Then, sign in to the Angular application, and select the **TodoList** button to start a request to the protected API.
+
+### Run the web API
+
+1. Open a console window and change to the directory containing the web API sample. For example:
+
+ ```console
+ cd active-directory-b2c-javascript-nodejs-webapi
+ ```
+
+1. Run the following commands:
+
+ ```console
+ npm install && npm update
+ node index.js
+ ```
+
+ The console window displays the port number where the application is hosted.
+
+ ```console
+ Listening on port 5000...
+ ```
+
+### Run the Angular application
+
+1. Open another console window and change to the directory containing the Angular sample. For example:
+
+ ```console
+ cd ms-identity-javascript-angular-tutorial-main/3-Authorization-II/2-call-api-b2c/SPA
+ ```
+
+1. Run the following commands:
+
+ ```console
+ npm install && npm update
+ npm start
+ ```
+
+ The console window displays the port number where the application is hosted.
+
+ ```console
+ Listening on port 4200...
+ ```
+
+1. Navigate to `http://localhost:4200` in your browser to view the application.
+1. Select **Login**.
+
+ ![Screenshot showing the Angular sample app with the login link.](./media/configure-authentication-sample-angular-spa-app/sample-app-sign-in.png)
+
+1. Complete the sign-up or sign-in process.
+1. Upon successful login, you should see your profile. From the menu, select **TodoList**.
+
+ ![Screenshot showing the Angular sample app with the user profile, and the call to the to do list.](./media/configure-authentication-sample-angular-spa-app/sample-app-result.png)
+
+1. **Add** new items to the list, or **edit** and **delete** existing items.
+
+ ![Screenshot showing the Angular sample app's call to the to do list.](./media/configure-authentication-sample-angular-spa-app/sample-app-calls-web-api.png)
+
+## Deploy your application
+
+In a production application, the app registration redirect URI is typically a publicly accessible endpoint where your app is running, like `https://contoso.com`.
+
+You can add and modify redirect URIs in your registered applications at any time. The following restrictions apply to redirect URIs:
+
+* The reply URL must begin with the scheme `https`.
+* The reply URL is case-sensitive. Its case must match the case of the URL path of your running application.
+
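+A common way to satisfy these restrictions in Angular is to keep the redirect URI in the environment files and reference it from *auth-config.ts*. A minimal sketch; the production URL is a placeholder:
+
+```typescript
+// src/environments/environment.prod.ts (sketch)
+export const environment = {
+  production: true,
+  // Must exactly match a redirect URI registered for your app, including case.
+  redirectUri: "https://contoso.com",
+};
+```
+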
+## Next steps
+
+* Learn more [about the code sample](https://github.com/Azure-Samples/ms-identity-javascript-angular-tutorial/)
+* [Enable authentication in your own Angular application](enable-authentication-angular-spa-app.md)
+* Configure [authentication options in your Angular application](enable-authentication-angular-spa-app-options.md)
+* [Enable authentication in your own web API](enable-authentication-web-api.md)
active-directory-b2c Configure Authentication Sample Ios App https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/configure-authentication-sample-ios-app.md
+
+ Title: Configure authentication in a sample iOS Swift application using Azure Active Directory B2C
+description: Using Azure Active Directory B2C to sign in and sign up users in an iOS Swift application.
+ Last updated: 07/29/2021
+# Configure authentication in a sample iOS Swift application using Azure Active Directory B2C
+
+This article uses a sample [iOS Swift](https://developer.apple.com/swift/) application to illustrate how to add Azure Active Directory B2C (Azure AD B2C) authentication to your mobile apps.
+
+## Overview
+
+OpenID Connect (OIDC) is an authentication protocol built on OAuth 2.0 that you can use to securely sign a user in to an application. This mobile app sample uses the [MSAL](../active-directory/develop/msal-overview.md) library with the OpenID Connect authorization code (PKCE) flow. The MSAL library is a Microsoft-provided library that simplifies adding authentication and authorization support to mobile apps.
+
+The sign-in flow involves the following steps:
+
+1. The user opens the app and selects **sign-in**.
+1. The app opens the mobile device's system browser, and starts an authentication request to Azure AD B2C.
+1. The user [signs up or signs in](add-sign-up-and-sign-in-policy.md), [resets the password](add-password-reset-policy.md), or signs in with a [social account](add-identity-provider.md).
+1. Upon successful sign-in, Azure AD B2C returns an authorization code to the app.
+1. The app takes the following actions:
+ 1. Exchanges the authorization code for an ID token, access token and refresh token.
+ 1. Reads the ID token claims.
+ 1. Stores the tokens to an in-memory cache for later use.
+
+### App registration overview
+
+To enable your app to sign in with Azure AD B2C and call a web API, register two applications in the Azure AD B2C directory.
+
+- The **mobile application** registration enables your app to sign in with Azure AD B2C. During app registration, specify the *Redirect URI*. The redirect URI is the endpoint to which Azure AD B2C redirects the user after they authenticate. The app registration process generates an *Application ID*, also known as the *client ID*, that uniquely identifies your mobile app. For example, **App ID: 1**.
+
+- The **web API** registration enables your app to call a protected web API. The registration exposes the web API permissions (scopes). The app registration process generates an *Application ID* that uniquely identifies your web API. For example, **App ID: 2**. Grant your mobile app (App ID: 1) permissions to the web API scopes (App ID: 2).
++
+The following diagram describes the app registrations and the application architecture.
+
+![Diagram describes a mobile app with web API, registrations and tokens.](./media/configure-authentication-sample-ios-app/mobile-app-with-api-architecture.png)
+
+### Call to a web API
++
+### Sign-out
++
+## Prerequisites
+
+A computer that's running:
+
+- [Xcode](https://developer.apple.com/xcode/) 13 or later.
+- [CocoaPods](https://cocoapods.org/) dependency manager for Swift and Objective-C Cocoa projects.
++
+## Step 1: Configure your user flow
++
+## Step 2: Register mobile applications
+
+In this step, you create the mobile app and the web API application registrations, and specify the scopes of your web API.
+
+### 2.1 Register the web API app
++
+### 2.2 Configure web API app scopes
+++
+### 2.3 Register the mobile app
+
+Follow these steps to create the mobile app registration:
+
+1. Select **App registrations**, and then select **New registration**.
+1. Enter a **Name** for the application. For example, *iOS-app1*.
+1. Under **Supported account types**, select **Accounts in any identity provider or organizational directory (for authenticating users with user flows)**.
+1. Under **Redirect URI**, select **Public client/native (mobile & desktop)**, and then enter: `msauth.com.microsoft.identitysample.MSALiOS://auth`.
+1. Select **Register**.
+1. After the app registration is completed, select **Overview**.
+1. Record the **Application (client) ID** for use in a later step when you configure the mobile application.
+ ![Screenshot showing how to get the mobile application ID.](./media/configure-authentication-sample-ios-app/get-azure-ad-b2c-app-id.png)
++
+### 2.4 Grant the mobile app permissions for the web API
++
+## Step 3: Configure the sample web API
+
+This sample acquires an access token with the relevant scopes, which the mobile app can use to call a web API. To call a web API from code, follow these steps:
+
+1. Use an existing web API, or create a new one. For more information, see [Enable authentication in your own web API using Azure AD B2C](enable-authentication-web-api.md).
+1. Change the sample code to [call a web API](enable-authentication-ios-app.md#call-a-web-api).
+
+After you configure the web API, copy the URI of the web API endpoint. You will use the web API endpoint in the next steps.
+
+> [!TIP]
+> If you don't have a web API, you can still run this sample. In this case, the app returns the access token but won't be able to call the web API.
+
+## Step 4: Get the iOS mobile app sample
+
+1. [Download the zip file](https://github.com/Azure-Samples/active-directory-b2c-ios-swift-native-msal/archive/refs/heads/vNext.zip), or clone the sample mobile application from the [GitHub repo](https://github.com/Azure-Samples/active-directory-b2c-ios-swift-native-msal).
+
+ ```bash
+ git clone --branch vNext https://github.com/Azure-Samples/active-directory-b2c-ios-swift-native-msal.git
+ ```
+
+1. Use [CocoaPods](https://cocoapods.org/) to install the MSAL library. In a terminal window, navigate to the project root folder. This folder contains the `Podfile`. Run the following command:
+
+ ```bash
+ pod install
+ ```
+
+1. Open the `MSALiOS.xcworkspace` workspace in Xcode.
+++
+## Step 5: Configure the sample mobile app
+
+Open the `ViewController.swift` file. The `ViewController` class members contain information about your Azure AD B2C identity provider. The mobile app uses this information to establish a trust relationship with Azure AD B2C, sign the user in and out, acquire tokens, and validate them.
+
+Update the following members:
+
+|Key |Value |
+|||
+|kTenantName| Your full Azure AD B2C [tenant name](tenant-management.md#get-your-tenant-name). For example, `contoso.onmicrosoft.com`.|
+|kAuthorityHostName|Your Azure AD B2C authority host name: the first part of your tenant name followed by `.b2clogin.com`. For example, `contoso.b2clogin.com`.|
+|kClientID|The mobile application ID from [step 2.3](#23-register-the-mobile-app).|
+|kRedirectUri|The mobile application redirect URI from [step 2.3](#23-register-the-mobile-app), `msauth.com.microsoft.identitysample.MSALiOS://auth`.|
+|kSignupOrSigninPolicy| The sign-up or sign-in user flow or custom policy you created in [step 1](#step-1-configure-your-user-flow).|
+|kEditProfilePolicy|The edit profile user flow or custom policy you created in [step 1](#step-1-configure-your-user-flow).|
+|kGraphURI| (Optional) The web API endpoint you created in [Step 3](#step-3-configure-the-sample-web-api). For example, `https://contoso.azurewebsites.net/hello`.|
+| kScopes | The web API scopes you created in [step 2.4](#24-grant-the-mobile-app-permissions-for-the-web-api).|
+++
+## Step 6: Run and test the mobile app
+
+1. Build and run the project with a [simulator or a connected iOS device](https://developer.apple.com/documentation/xcode/running-your-app-in-the-simulator-or-on-a-device).
+
+1. Select **Sign In**. Then sign up or sign in with your Azure AD B2C local or social account.
+
+ ![Screenshot demonstrates how to start the sign-in flow.](./media/configure-authentication-sample-ios-app/sign-in.png)
+
+1. After successful authentication, you'll see your display name in the navigation bar.
+
+ ![Screenshot showing the Azure AD B2C access token and user ID.](./media/configure-authentication-sample-ios-app/post-sign-in.png)
+
+## Next steps
+
+* Learn how to [Enable authentication in your own iOS application](enable-authentication-ios-app.md)
+* [Configure authentication options in an iOS application](enable-authentication-ios-app-options.md)
+* [Enable authentication in your own web API](enable-authentication-web-api.md)
active-directory-b2c Enable Authentication Angular Spa App Options https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/enable-authentication-angular-spa-app-options.md
+
+ Title: Enable Angular application options using Azure Active Directory B2C
+description: Ways to customize and enhance Azure Active Directory B2C authentication in an Angular application.
+ Last updated: 07/29/2021
+# Configure authentication options in an Angular application using Azure Active Directory B2C
+
+This article describes ways you can customize and enhance the Azure Active Directory B2C (Azure AD B2C) authentication experience for your Angular application. Before you start, familiarize yourself with the following articles: [Configure authentication in an Angular SPA application](configure-authentication-sample-angular-spa-app.md) and [Enable authentication in your own Angular SPA application](enable-authentication-angular-spa-app.md).
++
+## Single-page application sign-in and sign-out behavior
++
+You can configure your single-page application to sign in users with MSAL.js in two ways:
+
+- **Pop-up window** - The authentication happens in a pop-up window, and the state of the application is preserved. Use this approach if you don't want users to move away from your application page during authentication. Note that there are [known issues with pop-up windows on Internet Explorer](https://github.com/AzureAD/microsoft-authentication-library-for-js/blob/dev/lib/msal-browser/docs/internet-explorer.md#popups).
+  - To sign in with pop-up windows, in the *src/app/app.component.ts* class, use the `loginPopup` method.
+  - In the *src/app/app.module.ts* class, set the `interactionType` attribute to `InteractionType.Popup`.
+  - To sign out with pop-up windows, in the *src/app/app.component.ts* class, use the `logoutPopup` method. You can also configure `logoutPopup` to redirect the main window to a different page, such as the home page or sign-in page, after sign-out completes, by passing `mainWindowRedirectUri` as part of the request.
+- **Redirect** - The user is redirected to Azure AD B2C to complete the authentication flow. Use this approach if users have browser constraints or policies where pop-up windows are disabled.
+  - To sign in with redirection, in the *src/app/app.component.ts* class, use the `loginRedirect` method.
+  - In the *src/app/app.module.ts* class, set the `interactionType` attribute to `InteractionType.Redirect`.
+  - To sign out with redirection, in the *src/app/app.component.ts* class, use the `logoutRedirect` method. Configure the URI to which it should redirect after sign-out by setting `postLogoutRedirectUri`. This URI should be registered as a redirect URI in your application registration.
+
+The following sample demonstrates how to sign in and sign out:
+
+#### [Popup](#tab/popup)
++
+```typescript
+//src/app/app.component.ts
+login() {
+ if (this.msalGuardConfig.authRequest){
+ this.authService.loginPopup({...this.msalGuardConfig.authRequest} as PopupRequest);
+ } else {
+ this.authService.loginPopup();
+ }
+}
+
+logout() {
+ this.authService.logoutPopup({
+ mainWindowRedirectUri: '/',
+ });
+}
+```
+
+#### [Redirect](#tab/redirect)
+
+```typescript
+//src/app/app.component.ts
+login() {
+ if (this.msalGuardConfig.authRequest){
+ this.authService.loginRedirect({...this.msalGuardConfig.authRequest} as RedirectRequest);
+ } else {
+ this.authService.loginRedirect();
+ }
+}
+
+logout() {
+ this.authService.logoutRedirect({
+ postLogoutRedirectUri: 'http://localhost:4200'
+ });
+}
+```
+++
+The MSAL Angular library has three sign-in flows: interactive sign-in (where a user selects the sign-in button), MSAL Guard, and MSAL Interceptor. The MSAL Guard and MSAL Interceptor configurations take effect when a user tries to access a protected resource without a valid access token. In such cases, the MSAL library forces the user to sign in. The following samples demonstrate how to configure MSAL Guard and MSAL Interceptor for sign-in with a pop-up window or redirection.
+
+#### [Popup](#tab/popup)
+
+```typescript
+// src/app/app.module.ts
+MsalModule.forRoot(new PublicClientApplication(msalConfig),
+ {
+ interactionType: InteractionType.Popup,
+ authRequest: {
+ scopes: protectedResources.todoListApi.scopes,
+ }
+ },
+ {
+ interactionType: InteractionType.Popup,
+ protectedResourceMap: new Map([
+ [protectedResources.todoListApi.endpoint, protectedResources.todoListApi.scopes]
+ ])
+ })
+```
+
+#### [Redirect](#tab/redirect)
+
+```typescript
+// src/app/app.module.ts
+MsalModule.forRoot(new PublicClientApplication(msalConfig),
+ {
+ interactionType: InteractionType.Redirect,
+ authRequest: {
+ scopes: protectedResources.todoListApi.scopes,
+ }
+ },
+ {
+ interactionType: InteractionType.Redirect,
+ protectedResourceMap: new Map([
+ [protectedResources.todoListApi.endpoint, protectedResources.todoListApi.scopes]
+ ])
+ })
+```
+
+
+
+## Prepopulate the sign-in name
+
+During a sign-in user journey, your app might target a specific user. To prepopulate the user's sign-in name:
+1. If you use a custom policy, add the required input claim as described in [Set up direct sign-in](direct-signin.md#prepopulate-the-sign-in-name).
+1. Create or use an existing `PopupRequest` or `RedirectRequest` MSAL configuration object.
+1. Set the `loginHint` attribute with the corresponding login hint. For example, `bob@contoso.com`.
+
+The following code snippets demonstrate how to pass the login hint parameter:
+
+#### [Popup](#tab/popup)
+
+```typescript
+// src/app/app.component.ts
+let authRequestConfig: PopupRequest;
+
+if (this.msalGuardConfig.authRequest) {
+ authRequestConfig = { ...this.msalGuardConfig.authRequest } as PopupRequest
+}
+
+authRequestConfig.loginHint = "bob@contoso.com"
+
+this.authService.loginPopup(authRequestConfig);
+
+// src/app/app.module.ts
+MsalModule.forRoot(new PublicClientApplication(msalConfig),
+ {
+ interactionType: InteractionType.Popup,
+ authRequest: {
+ scopes: protectedResources.todoListApi.scopes,
+ loginHint: "bob@contoso.com"
+ }
+ },
+```
+
+#### [Redirect](#tab/redirect)
+
+```typescript
+// src/app/app.component.ts
+let authRequestConfig: RedirectRequest;
+
+if (this.msalGuardConfig.authRequest) {
+ authRequestConfig = { ...this.msalGuardConfig.authRequest } as RedirectRequest
+}
+
+authRequestConfig.loginHint = "bob@contoso.com"
+
+this.authService.loginRedirect(authRequestConfig);
+
+// src/app/app.module.ts
+MsalModule.forRoot(new PublicClientApplication(msalConfig),
+ {
+ interactionType: InteractionType.Redirect,
+ authRequest: {
+ scopes: protectedResources.todoListApi.scopes,
+ loginHint: "bob@contoso.com"
+ }
+ },
+```
+
+
+
+## Preselect an identity provider
+
+If you configured the sign-in journey for your application to include social accounts, such as Facebook, LinkedIn, or Google, you can pass a hint so that users go directly to the external identity provider's sign-in page. To preselect an identity provider:
+1. Check the domain name of your external identity provider. For more information, see [Redirect sign-in to a social provider](direct-signin.md#redirect-sign-in-to-a-social-provider).
+1. Create or use an existing `PopupRequest` or `RedirectRequest` MSAL configuration object.
+1. Set the `domainHint` attribute with the corresponding domain hint. For example, `facebook.com`.
+
+The following code snippets demonstrate how to pass the domain hint parameter:
+
+#### [Popup](#tab/popup)
+
+```typescript
+// src/app/app.component.ts
+let authRequestConfig: PopupRequest;
+
+if (this.msalGuardConfig.authRequest) {
+ authRequestConfig = { ...this.msalGuardConfig.authRequest } as PopupRequest
+}
+
+authRequestConfig.domainHint = "facebook.com";
+
+this.authService.loginPopup(authRequestConfig);
+
+// src/app/app.module.ts
+MsalModule.forRoot(new PublicClientApplication(msalConfig),
+ {
+ interactionType: InteractionType.Popup,
+ authRequest: {
+ scopes: protectedResources.todoListApi.scopes,
+ domainHint: "facebook.com"
+ }
+ },
+```
+
+#### [Redirect](#tab/redirect)
+
+```typescript
+// src/app/app.component.ts
+let authRequestConfig: RedirectRequest;
+
+if (this.msalGuardConfig.authRequest) {
+ authRequestConfig = { ...this.msalGuardConfig.authRequest } as RedirectRequest
+}
+
+authRequestConfig.domainHint = "facebook.com";
+
+this.authService.loginRedirect(authRequestConfig);
+
+// src/app/app.module.ts
+MsalModule.forRoot(new PublicClientApplication(msalConfig),
+ {
+ interactionType: InteractionType.Redirect,
+ authRequest: {
+ scopes: protectedResources.todoListApi.scopes,
+ domainHint: "facebook.com"
+ }
+ },
+```
+
+
+
+## Specify the UI language
+
+To set the language of the Azure AD B2C authentication pages, pass the `ui_locales` query parameter:
+1. [Configure Language customization](language-customization.md).
+1. Create or use an existing `PopupRequest` or `RedirectRequest` MSAL configuration object with `extraQueryParameters` attributes.
+1. Add the `ui_locales` parameter with the corresponding language code to the `extraQueryParameters` attributes. For example, `es-es`.
+
+The following code snippets demonstrate how to pass the UI language parameter:
+
+#### [Popup](#tab/popup)
+
+```typescript
+// src/app/app.component.ts
+let authRequestConfig: PopupRequest;
+
+if (this.msalGuardConfig.authRequest) {
+ authRequestConfig = { ...this.msalGuardConfig.authRequest } as PopupRequest
+}
+
+authRequestConfig.extraQueryParameters = {"ui_locales" : "es-es"};
+
+this.authService.loginPopup(authRequestConfig);
+
+// src/app/app.module.ts
+MsalModule.forRoot(new PublicClientApplication(msalConfig),
+ {
+ interactionType: InteractionType.Popup,
+ authRequest: {
+ scopes: protectedResources.todoListApi.scopes,
+ extraQueryParameters: {"ui_locales" : "es-es"}
+ }
+ },
+```
+
+#### [Redirect](#tab/redirect)
+
+```typescript
+// src/app/app.component.ts
+let authRequestConfig: RedirectRequest;
+
+if (this.msalGuardConfig.authRequest) {
+ authRequestConfig = { ...this.msalGuardConfig.authRequest } as RedirectRequest
+}
+
+authRequestConfig.extraQueryParameters = {"ui_locales" : "es-es"};
+
+this.authService.loginRedirect(authRequestConfig);
+
+// src/app/app.module.ts
+MsalModule.forRoot(new PublicClientApplication(msalConfig),
+ {
+ interactionType: InteractionType.Redirect,
+ authRequest: {
+ scopes: protectedResources.todoListApi.scopes,
+ extraQueryParameters: {"ui_locales" : "es-es"}
+ }
+ },
+```
+
+
+
+
+## Pass a custom query string parameter
+
+With custom policies, you can pass a custom query string parameter, for example, to change the page content dynamically. To pass a custom query string parameter:
+1. Configure the [ContentDefinitionParameters](customize-ui-with-html.md#configure-dynamic-custom-page-content-uri) element.
+1. Create or use an existing `PopupRequest` or `RedirectRequest` MSAL configuration object with `extraQueryParameters` attributes.
+1. Add the custom query string parameter, such as `campaignId`. Set the parameter value. For example, `germany-promotion`.
+
+The following code snippets demonstrate how to pass a custom query string parameter:
+
+#### [Popup](#tab/popup)
+
+```typescript
+// src/app/app.component.ts
+let authRequestConfig: PopupRequest;
+
+if (this.msalGuardConfig.authRequest) {
+ authRequestConfig = { ...this.msalGuardConfig.authRequest } as PopupRequest
+}
+
+authRequestConfig.extraQueryParameters = {"campaignId": 'germany-promotion'}
+
+this.authService.loginPopup(authRequestConfig);
+
+// src/app/app.module.ts
+MsalModule.forRoot(new PublicClientApplication(msalConfig),
+ {
+ interactionType: InteractionType.Popup,
+ authRequest: {
+ scopes: protectedResources.todoListApi.scopes,
+ extraQueryParameters: {"campaignId" : "germany-promotion"}
+ }
+ },
+```
+
+#### [Redirect](#tab/redirect)
+
+```typescript
+// src/app/app.component.ts
+let authRequestConfig: RedirectRequest;
+
+if (this.msalGuardConfig.authRequest) {
+ authRequestConfig = { ...this.msalGuardConfig.authRequest } as RedirectRequest
+}
+
+authRequestConfig.extraQueryParameters = {"campaignId": 'germany-promotion'}
+
+this.authService.loginRedirect(authRequestConfig);
+
+// src/app/app.module.ts
+MsalModule.forRoot(new PublicClientApplication(msalConfig),
+ {
+ interactionType: InteractionType.Redirect,
+ authRequest: {
+ scopes: protectedResources.todoListApi.scopes,
+ extraQueryParameters: {"campaignId" : "germany-promotion"}
+ }
+ },
+```
+
+## Pass an ID token hint
+
+A relying party application can send an inbound JSON Web Token (JWT) as part of the OAuth2 authorization request. To pass an ID token hint:
+1. In your custom policy, define an [ID token hint technical profile](id-token-hint.md).
+1. Create or use an existing `PopupRequest` or `RedirectRequest` MSAL configuration object with `extraQueryParameters` attributes.
+1. Add the `id_token_hint` parameter with the corresponding variable that stores the ID token.
+
+The following code snippets demonstrate how to pass an ID token hint:
+
+#### [Popup](#tab/popup)
+
+```typescript
+// src/app/app.component.ts
+let authRequestConfig: PopupRequest;
+
+if (this.msalGuardConfig.authRequest) {
+ authRequestConfig = { ...this.msalGuardConfig.authRequest } as PopupRequest
+}
+
+authRequestConfig.extraQueryParameters = {"id_token_hint": idToken};
+
+this.authService.loginPopup(authRequestConfig);
+
+// src/app/app.module.ts
+MsalModule.forRoot(new PublicClientApplication(msalConfig),
+ {
+ interactionType: InteractionType.Popup,
+ authRequest: {
+ scopes: protectedResources.todoListApi.scopes,
+ extraQueryParameters: {"id_token_hint" : idToken}
+ }
+ },
+```
+
+#### [Redirect](#tab/redirect)
+
+```typescript
+// src/app/app.component.ts
+let authRequestConfig: RedirectRequest;
+
+if (this.msalGuardConfig.authRequest) {
+ authRequestConfig = { ...this.msalGuardConfig.authRequest } as RedirectRequest
+}
+
+authRequestConfig.extraQueryParameters = {"id_token_hint": idToken};
+
+this.authService.loginRedirect(authRequestConfig);
+
+// src/app/app.module.ts
+MsalModule.forRoot(new PublicClientApplication(msalConfig),
+ {
+ interactionType: InteractionType.Redirect,
+ authRequest: {
+ scopes: protectedResources.todoListApi.scopes,
+ extraQueryParameters: {"id_token_hint" : idToken}
+ }
+ },
+```
+
+## Use a custom domain
+To use your custom domain and your tenant ID in the authentication URL, follow the guidance in [Enable custom domains](custom-domain.md). Open the *src/app/auth-config.ts* MSAL configuration object and change **authorities** and **knownAuthorities** to use your custom domain name and tenant ID.
+
+The following JavaScript shows the MSAL configuration object before the change:
+
+```typescript
+const msalConfig = {
+ auth: {
+ ...
+ authority: "https://fabrikamb2c.b2clogin.com/fabrikamb2c.onmicrosoft.com/B2C_1_susi",
+ knownAuthorities: ["fabrikamb2c.b2clogin.com"],
+ ...
+ },
+ ...
+}
+```
+
+The following JavaScript shows the MSAL configuration object after the change:
+
+```typescript
+const msalConfig = {
+ auth: {
+ ...
+ authority: "https://custom.domain.com/00000000-0000-0000-0000-000000000000/B2C_1_susi",
+ knownAuthorities: ["custom.domain.com"],
+ ...
+ },
+ ...
+}
+```
+
+## Configure logging
+To configure Angular [logging](https://github.com/AzureAD/microsoft-authentication-library-for-js/blob/dev/lib/msal-angular/docs/v2-docs/logging.md), in *src/app/auth-config.ts*, configure the following keys:
+
+- `loggerCallback` is the logger callback function.
+- `logLevel` lets you specify the level of logging you want. Possible values: `Error`, `Warning`, `Info`, and `Verbose`.
+- `piiLoggingEnabled` enables the logging of personal data. Possible values: `true` or `false`.
+
+The following code snippet demonstrates how to configure MSAL logging:
+
+```typescript
+export const msalConfig: Configuration = {
+ ...
+ system: {
+ loggerOptions: {
+ loggerCallback: (logLevel, message, containsPii) => {
+ console.log(message);
+ },
+ logLevel: LogLevel.Verbose,
+ piiLoggingEnabled: false
+ }
+ }
+ ...
+}
+```
+
+## Next steps
+
+- Learn more: [MSAL.js configuration options](https://github.com/AzureAD/microsoft-authentication-library-for-js/blob/dev/lib/msal-browser/docs/configuration.md)
active-directory-b2c Enable Authentication Angular Spa App https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/enable-authentication-angular-spa-app.md
+
+ Title: Enable authentication in an Angular application using Azure Active Directory B2C building blocks
+description: The building blocks of Azure Active Directory B2C to sign in and sign up users in an Angular application.
+ Last updated: 07/29/2021
+# Enable authentication in your own Angular application using Azure Active Directory B2C
+
+This article shows you how to add Azure Active Directory B2C (Azure AD B2C) authentication to your own Angular Single Page Application (SPA). Learn how to integrate an Angular application with [MSAL for Angular](https://github.com/AzureAD/microsoft-authentication-library-for-js/tree/master/lib/msal-angular) authentication library.
+
+Use this article with [Configure authentication in a sample Angular SPA application](./configure-authentication-sample-angular-spa-app.md), substituting the sample Angular app with your own Angular app. After completing the steps in this article, your application will accept sign-ins via Azure AD B2C.
+
+## Prerequisites
+
+Review the prerequisites and integration steps in the [Configure authentication in a sample Angular SPA application](configure-authentication-sample-angular-spa-app.md) article.
+
+## Create an Angular app project
+
+You can use an existing Angular app project, or create a new one. To create a new project, run the following commands.
+
+The following commands:
+
+1. Install the [Angular CLI](https://angular.io/cli) using the npm package manager.
+1. [Create an Angular workspace](https://angular.io/cli/new) with a routing module. The app name is `msal-angular-tutorial`; you can change it to any valid Angular app name, such as `contoso-car-service`.
+1. Change to the app directory folder.
+
+```console
+npm install -g @angular/cli
+ng new msal-angular-tutorial --routing=true --style=css --strict=false
+cd msal-angular-tutorial
+```
+
+## Install the dependencies
+
+To install the [MSAL Browser](https://github.com/AzureAD/microsoft-authentication-library-for-js/tree/master/lib/msal-browser) and [MSAL Angular](https://github.com/AzureAD/microsoft-authentication-library-for-js/tree/master/lib/msal-angular) libraries in your application, in your command shell, run the following command:
+
+```console
+npm install @azure/msal-browser @azure/msal-angular
+```
+
+Install the [Angular Material component library](https://material.angular.io/) (optional, for UI).
+
+```console
+npm install @angular/material @angular/cdk
+```
+
+## Add the authentication components
+
+The sample code is made up of the following components:
+
+|Component |Type |Description |
+||||
+| auth-config.ts| Constants | A configuration file that contains information about your Azure AD B2C identity provider and the web API service. The Angular app uses this information to establish a trust relationship with Azure AD B2C, sign the user in and out, acquire tokens, and validate them. |
+| app.module.ts| [Angular module](https://angular.io/guide/architecture-modules)| Describes how the application parts fit together. This is the root module that is used to bootstrap and launch the application. In this walkthrough, you add some components to the *app.module.ts* module, and initiate the MSAL library with the MSAL config object. |
+| app-routing.module.ts | [Angular routing module](https://angular.io/tutorial/toh-pt5) | Enables navigation by interpreting a browser URL and loading the corresponding component. In this walkthrough, you add some components to the routing module, and protect components with [MSAL guard](https://github.com/AzureAD/microsoft-authentication-library-for-js/blob/dev/lib/msal-angular/docs/v2-docs/msal-guard.md). Only authorized users can access the protected components. |
+| app.component.* | [Angular component](https://angular.io/guide/architecture-components) | The `ng new` command created an Angular project with a root component. In this walkthrough, you change the app component to host the top navigation bar. The navigation bar contains various buttons, including sign-in and sign-out. The *app.component.ts* class handles the sign-in and sign-out events. |
+| home.component.* | [Angular component](https://angular.io/guide/architecture-components)|In this walkthrough, you add the *home* component to render the anonymous access home page. This component demonstrates how to check whether a user has signed in. |
+| profile.component.* | [Angular component](https://angular.io/guide/architecture-components) | In this walkthrough, you add the *profile* component to learn how to read the ID token claims. |
+| webapi.component.* | [Angular component](https://angular.io/guide/architecture-components)| In this walkthrough, you add the *webapi* component to learn how to call a web API. |
+++
+To add these components to your app, run the following Angular CLI commands. Each `generate component` command:
+
+1. Creates a folder for the component. The folder contains the TypeScript, HTML, CSS, and test files.
+1. Updates the `app.module.ts` and the `app-routing.module.ts` files with references to the new component.
+
+```console
+ng generate component home
+ng generate component profile
+ng generate component webapi
+```
+
+## Add the app settings
+
+Azure AD B2C identity provider and web API settings are stored in the *auth-config.ts* file. In your *src/app* folder, create a file named *auth-config.ts* containing the following code. Then change the settings as described in [3.1 Configure the Angular sample](configure-authentication-sample-angular-spa-app.md#31-configure-the-angular-sample).
+
+```typescript
+import { LogLevel, Configuration, BrowserCacheLocation } from '@azure/msal-browser';
+
+const isIE = window.navigator.userAgent.indexOf("MSIE ") > -1 || window.navigator.userAgent.indexOf("Trident/") > -1;
+
+export const b2cPolicies = {
+ names: {
+ signUpSignIn: "b2c_1_susi_reset_v2",
+ editProfile: "b2c_1_edit_profile_v2"
+ },
+ authorities: {
+ signUpSignIn: {
+ authority: "https://your-tenant-name.b2clogin.com/your-tenant-name.onmicrosoft.com/b2c_1_susi_reset_v2",
+ },
+ editProfile: {
+ authority: "https://your-tenant-name.b2clogin.com/your-tenant-name.onmicrosoft.com/b2c_1_edit_profile_v2"
+ }
+ },
+ authorityDomain: "your-tenant-name.b2clogin.com"
+ };
+
+
+export const msalConfig: Configuration = {
+ auth: {
+ clientId: '<your-MyApp-application-ID>',
+ authority: b2cPolicies.authorities.signUpSignIn.authority,
+ knownAuthorities: [b2cPolicies.authorityDomain],
+ redirectUri: '/',
+ },
+ cache: {
+ cacheLocation: BrowserCacheLocation.LocalStorage,
+ storeAuthStateInCookie: isIE,
+ },
+ system: {
+ loggerOptions: {
+ loggerCallback: (logLevel, message, containsPii) => {
+ console.log(message);
+ },
+ logLevel: LogLevel.Verbose,
+ piiLoggingEnabled: false
+ }
+ }
+ }
+
+export const protectedResources = {
+ todoListApi: {
+ endpoint: "http://localhost:5000/api/todolist",
+ scopes: ["https://your-tenant-name.onmicrosoft.com/api/tasks.read"],
+ },
+}
+export const loginRequest = {
+ scopes: []
+};
+```
+
+## Initiate the authentication libraries
+
+Public client applications are not trusted to safely keep application secrets and therefore don't have client secrets. In the *src/app* folder, open the *app.module.ts*, and make the following changes:
+
+1. Import MSAL and MSAL browser libraries.
+1. Import the Azure AD B2C configuration module.
+1. Import the `HttpClientModule`. The HTTP client is used to call web APIs.
+1. Import the Angular HTTP interceptor. MSAL uses the interceptor to inject the bearer token to the HTTP authorization header.
+1. Add the essential Angular materials.
+1. Instantiate MSAL using the multiple account public client application object. The MSAL initialization includes passing:
+ 1. The *auth-config.ts* configuration object.
+ 1. The routing guard configuration object.
+ 1. The MSAL interceptor configuration object. The interceptor class automatically acquires tokens for outgoing requests that use the Angular [HttpClient](https://angular.io/api/common/http/HttpClient) to known protected resources.
+1. Configure the `HTTP_INTERCEPTORS`, and `MsalGuard` [Angular providers](https://angular.io/guide/providers).
+1. Add the `MsalRedirectComponent` to the [Angular bootstrap](https://angular.io/guide/bootstrapping).
+
+In the *src/app* folder, edit *app.module.ts* and make the following modifications shown in the code snippet below. The changes are flagged with *Changes start here*, and *Changes end here*. After the changes, your code should look like the following code snippet.
+
+```typescript
+import { NgModule } from '@angular/core';
+import { BrowserModule } from '@angular/platform-browser';
+
+import { AppRoutingModule } from './app-routing.module';
+import { AppComponent } from './app.component';
+
+/* Changes start here. */
+// Import MSAL and MSAL browser libraries.
+import { MsalGuard, MsalInterceptor, MsalModule, MsalRedirectComponent } from '@azure/msal-angular';
+import { InteractionType, PublicClientApplication } from '@azure/msal-browser';
+
+// Import the Azure AD B2C configuration
+import { msalConfig, protectedResources } from './auth-config';
+
+// Import the Angular HTTP interceptor.
+import { HttpClientModule, HTTP_INTERCEPTORS } from '@angular/common/http';
+import { ProfileComponent } from './profile/profile.component';
+import { HomeComponent } from './home/home.component';
+import { WebapiComponent } from './webapi/webapi.component';
+
+// Add the essential Angular materials.
+import { MatButtonModule } from '@angular/material/button';
+import { MatToolbarModule } from '@angular/material/toolbar';
+import { MatListModule } from '@angular/material/list';
+import { MatTableModule } from '@angular/material/table';
+/* Changes end here. */
+
+@NgModule({
+ declarations: [
+ AppComponent,
+ ProfileComponent,
+ HomeComponent,
+ WebapiComponent
+ ],
+ imports: [
+ BrowserModule,
+ AppRoutingModule,
+ /* Changes start here. */
+ // Import the following Angular materials.
+ MatButtonModule,
+ MatToolbarModule,
+ MatListModule,
+ MatTableModule,
+ // Import the HTTP client.
+ HttpClientModule,
+
+ // Initiate the MSAL library with the MSAL config object
+ MsalModule.forRoot(new PublicClientApplication(msalConfig),
+ {
+ // The routing guard configuration.
+ interactionType: InteractionType.Redirect,
+ authRequest: {
+ scopes: protectedResources.todoListApi.scopes
+ }
+ },
+ {
+ // MSAL interceptor configuration.
+ // The protected resource mapping maps your web API with the corresponding app scopes. If your code needs to call another web API, add the URI mapping here.
+ interactionType: InteractionType.Redirect,
+ protectedResourceMap: new Map([
+ [protectedResources.todoListApi.endpoint, protectedResources.todoListApi.scopes]
+ ])
+ })
+ /* Changes end here. */
+ ],
+ providers: [
+ /* Changes start here. */
+ {
+ provide: HTTP_INTERCEPTORS,
+ useClass: MsalInterceptor,
+ multi: true
+ },
+ MsalGuard
+ /* Changes end here. */
+ ],
+ bootstrap: [
+ AppComponent,
+ /* Changes start here. */
+ MsalRedirectComponent
+ /* Changes end here. */
+ ]
+})
+export class AppModule { }
+```
+
+## Configure routes
+
+In this section, configure the routes for your Angular application. When a user selects a link on the page to navigate within your single-page application, or types a URL in the address bar, the routes map the URL to an Angular component. The Angular routing [canActivate](https://angular.io/api/router/CanActivate) interface uses the MSAL Guard to check whether the user is signed in. If the user isn't signed in, MSAL redirects the user to Azure AD B2C to authenticate.
+
+In the *src/app* folder, edit *app-routing.module.ts* and make the modifications shown in the code snippet below. The changes are flagged with *Changes start here* and *Changes end here*.
+
+After the changes, your code should look like the following code snippet.
+
+```typescript
+import { NgModule } from '@angular/core';
+import { RouterModule, Routes } from '@angular/router';
+import { MsalGuard } from '@azure/msal-angular';
+import { HomeComponent } from './home/home.component';
+import { ProfileComponent } from './profile/profile.component';
+import { WebapiComponent } from './webapi/webapi.component';
+
+const routes: Routes = [
+ /* Changes start here. */
+ {
+ path: 'profile',
+ component: ProfileComponent,
+ // The profile component is protected with MSAL guard.
+ canActivate: [MsalGuard]
+ },
+ {
+ path: 'webapi',
+ component: WebapiComponent,
+ // The webapi component is protected with MSAL guard.
+ canActivate: [MsalGuard]
+ },
+ {
+ // The home component allows anonymous access
+ path: '',
+ component: HomeComponent
+ }
+ /* Changes end here. */
+];
++
+@NgModule({
+ /* Changes start here. */
+ // Replace the following line with the next one
+ //imports: [RouterModule.forRoot(routes)],
+ imports: [RouterModule.forRoot(routes, {
+ initialNavigation:'enabled'
+ })],
+ /* Changes end here. */
+ exports: [RouterModule]
+})
+export class AppRoutingModule { }
+```
+
+## Add the sign-in and sign-out buttons
+
+In this section, you add the sign-in and sign-out buttons to the *app* component. In the *src/app* folder, open the *app.component.ts* file, and make the following changes:
+
+1. Import the required components.
+1. Change the class to implement the [OnInit](https://angular.io/api/core/OnInit) interface. The `OnInit` method subscribes to the [MSAL MsalBroadcastService](https://github.com/AzureAD/microsoft-authentication-library-for-js/blob/dev/lib/msal-angular/docs/v2-docs/events.md) `inProgress$` observable event. Use this event to know the status of user interactions, particularly to check that interactions are completed. Before interacting with the MSAL account object, check that the `InteractionStatus` property returns `InteractionStatus.None`. The `subscribe` event calls the `setLoginDisplay` method to check if the user is authenticated.
+1. Add class variables.
+1. Add the `login` method that initiates the authorization flow.
+1. Add the `logout` method that signs out the user.
+1. Add the `setLoginDisplay` method that checks if the user is authenticated.
+1. Add the [ngOnDestroy](https://angular.io/api/core/OnDestroy) method to clean up the `inProgress$` subscribe event.
+
+After the changes, your code should look like the following code snippet:
+
+```typescript
+import { Component, OnInit, Inject } from '@angular/core';
+import { MsalService, MsalBroadcastService, MSAL_GUARD_CONFIG, MsalGuardConfiguration } from '@azure/msal-angular';
+import { InteractionStatus, RedirectRequest } from '@azure/msal-browser';
+import { Subject } from 'rxjs';
+import { filter, takeUntil } from 'rxjs/operators';
+
+@Component({
+ selector: 'app-root',
+ templateUrl: './app.component.html',
+ styleUrls: ['./app.component.css']
+})
+
+/* Changes start here. */
+export class AppComponent implements OnInit{
+ title = 'msal-angular-tutorial';
+ loginDisplay = false;
+ private readonly _destroying$ = new Subject<void>();
+
+ constructor(@Inject(MSAL_GUARD_CONFIG) private msalGuardConfig: MsalGuardConfiguration, private broadcastService: MsalBroadcastService, private authService: MsalService) { }
+
+ ngOnInit() {
+
+ this.broadcastService.inProgress$
+ .pipe(
+ filter((status: InteractionStatus) => status === InteractionStatus.None),
+ takeUntil(this._destroying$)
+ )
+ .subscribe(() => {
+ this.setLoginDisplay();
+ })
+ }
+
+ login() {
+ if (this.msalGuardConfig.authRequest){
+ this.authService.loginRedirect({...this.msalGuardConfig.authRequest} as RedirectRequest);
+ } else {
+ this.authService.loginRedirect();
+ }
+ }
+
+ logout() {
+ this.authService.logoutRedirect({
+ postLogoutRedirectUri: 'http://localhost:4200'
+ });
+ }
+
+ setLoginDisplay() {
+ this.loginDisplay = this.authService.instance.getAllAccounts().length > 0;
+ }
+
+ ngOnDestroy(): void {
+ this._destroying$.next(undefined);
+ this._destroying$.complete();
+ }
+ /* Changes end here. */
+}
+```
+
+In the *src/app* folder, edit *app.component.html*, and make the following changes:
+
+1. Add a link to the profile and web API components.
+1. Add the login button, with its click event set to the `login()` method. This button appears only if the `loginDisplay` class variable is `false`.
+1. Add the logout button, with its click event set to the `logout()` method. This button appears only if the `loginDisplay` class variable is `true`.
+1. Add a [router-outlet](https://angular.io/api/router/RouterOutlet) element.
+
+After the changes, your code should look like the following code snippet.
+
+```html
+<mat-toolbar color="primary">
+ <a class="title" href="/">{{ title }}</a>
+
+ <div class="toolbar-spacer"></div>
+
+ <a mat-button [routerLink]="['profile']">Profile</a>
+ <a mat-button [routerLink]="['webapi']">Web API</a>
+
+ <button mat-raised-button *ngIf="!loginDisplay" (click)="login()">Login</button>
+ <button mat-raised-button *ngIf="loginDisplay" (click)="logout()">Logout</button>
+
+</mat-toolbar>
+<div class="container">
+ <router-outlet></router-outlet>
+</div>
+```
+
+Optionally, update the *app.component.css* file with the following CSS snippet.
+
+```css
+.toolbar-spacer {
+ flex: 1 1 auto;
+ }
+
+ a.title {
+ color: white;
+ }
+```
+
+## Handle the app redirects
+
+When using redirects with MSAL, it is mandatory to add the [app-redirect](https://github.com/AzureAD/microsoft-authentication-library-for-js/blob/dev/lib/msal-angular/docs/v2-docs/redirects.md) directive to *index.html*. In the *src* folder, edit *index.html*.
+
+After the changes, your code should look like the following code snippet.
+
+```html
+<!doctype html>
+<html lang="en">
+<head>
+ <meta charset="utf-8">
+ <title>MsalAngularTutorial</title>
+ <base href="/">
+ <meta name="viewport" content="width=device-width, initial-scale=1">
+ <link rel="icon" type="image/x-icon" href="favicon.ico">
+</head>
+<body>
+ <app-root></app-root>
+ <!-- Changes start here -->
+ <app-redirect></app-redirect>
+ <!-- Changes end here -->
+</body>
+</html>
+```
+
+## Set app CSS (Optional)
+
+In the */src* folder, update the *styles.css* file with the following CSS snippet.
+
+```css
+@import '~@angular/material/prebuilt-themes/deeppurple-amber.css';
+
+html, body { height: 100%; }
+body { margin: 0; font-family: Roboto, "Helvetica Neue", sans-serif; }
+.container { margin: 1%; }
+```
+
+> [!TIP]
+> At this point you can run your app and test the sign-in experience. To run your application, see the [Run the Angular application](#run-the-angular-application) section.
+
+## Check if a user is authenticated
+
+The `home.component` demonstrates how to check whether a user is authenticated. In the *src/app/home* folder, update *home.component.ts* with the following code snippet.
++
+The code:
+
+1. Subscribes to the [MSAL MsalBroadcastService](https://github.com/AzureAD/microsoft-authentication-library-for-js/blob/dev/lib/msal-angular/docs/v2-docs/events.md) `msalSubject$` and `inProgress$` observable events.
+1. On `msalSubject$` events, writes the authentication result to the browser console.
+1. On `inProgress$` events, checks whether a user is authenticated. When the user is signed in, `getAllAccounts()` returns one or more account objects.
++
+```typescript
+import { Component, OnInit } from '@angular/core';
+import { MsalBroadcastService, MsalService } from '@azure/msal-angular';
+import { EventMessage, EventType, InteractionStatus } from '@azure/msal-browser';
+import { filter } from 'rxjs/operators';
+
+@Component({
+ selector: 'app-home',
+ templateUrl: './home.component.html',
+ styleUrls: ['./home.component.css']
+})
+export class HomeComponent implements OnInit {
+ loginDisplay = false;
+
+ constructor(private authService: MsalService, private msalBroadcastService: MsalBroadcastService) { }
+
+ ngOnInit(): void {
+ this.msalBroadcastService.msalSubject$
+ .pipe(
+ filter((msg: EventMessage) => msg.eventType === EventType.LOGIN_SUCCESS),
+ )
+ .subscribe((result: EventMessage) => {
+ console.log(result);
+ });
+
+ this.msalBroadcastService.inProgress$
+ .pipe(
+ filter((status: InteractionStatus) => status === InteractionStatus.None)
+ )
+ .subscribe(() => {
+ this.setLoginDisplay();
+ })
+ }
+
+ setLoginDisplay() {
+ this.loginDisplay = this.authService.instance.getAllAccounts().length > 0;
+ }
+}
+```
+
+In the *src/app/home* folder, update *home.component.html* with the following HTML snippet. The [*ngIf](https://angular.io/api/common/NgIf) directive checks the `loginDisplay` class variable to show or hide the welcome messages.
+
+```html
+<div *ngIf="!loginDisplay">
+ <p>Please sign-in to see your profile information.</p>
+</div>
+
+<div *ngIf="loginDisplay">
+ <p>Login successful!</p>
+ <p>Request your profile information by clicking Profile above.</p>
+</div>
+```
+
+## Read the ID token claims
+
+The `profile.component` demonstrates how to access the user's ID token claims. In the *src/app/profile* folder, update *profile.component.ts* with the following code snippet.
+
+The code:
+
+1. Imports the required components.
+1. Subscribes to the [MSAL MsalBroadcastService](https://github.com/AzureAD/microsoft-authentication-library-for-js/blob/dev/lib/msal-angular/docs/v2-docs/events.md) `inProgress$` observable event. The event handler loads the account and reads the ID token claims.
+1. The `checkAndSetActiveAccount` method checks and sets the active account. Doing so is common when the app interacts with multiple Azure AD B2C user flows or custom policies.
+1. The `getClaims` method gets the ID token claims from the active MSAL account object, and then adds them to the `dataSource` array. The array is rendered to the user by the component's template binding.
+
+```typescript
+import { Component, OnDestroy, OnInit } from '@angular/core';
+import { MsalBroadcastService, MsalService } from '@azure/msal-angular';
+import { InteractionStatus } from '@azure/msal-browser';
+import { Subject } from 'rxjs';
+import { filter, takeUntil } from 'rxjs/operators';
+
+@Component({
+ selector: 'app-profile',
+ templateUrl: './profile.component.html',
+ styleUrls: ['./profile.component.css']
+})
+
+export class ProfileComponent implements OnInit, OnDestroy {
+ displayedColumns: string[] = ['claim', 'value'];
+ dataSource: Claim[] = [];
+ private readonly _destroying$ = new Subject<void>();
+
+ constructor(private authService: MsalService, private msalBroadcastService: MsalBroadcastService) { }
+
+ ngOnInit(): void {
+
+ this.msalBroadcastService.inProgress$
+ .pipe(
+ filter((status: InteractionStatus) => status === InteractionStatus.None || status === InteractionStatus.HandleRedirect),
+ takeUntil(this._destroying$)
+ )
+ .subscribe(() => {
+ this.checkAndSetActiveAccount();
+ this.getClaims(this.authService.instance.getActiveAccount()?.idTokenClaims)
+ })
+ }
+
+ checkAndSetActiveAccount() {
+
+ let activeAccount = this.authService.instance.getActiveAccount();
+
+ if (!activeAccount && this.authService.instance.getAllAccounts().length > 0) {
+ let accounts = this.authService.instance.getAllAccounts();
+ this.authService.instance.setActiveAccount(accounts[0]);
+ }
+ }
+
+ getClaims(claims: any) {
+
+   // Guard against a missing claims object (for example, before an account is set).
+   if (!claims) {
+     return;
+   }
+
+   let list: Claim[] = new Array<Claim>();
+
+   // Map each claim name and its index to a Claim object for the table binding.
+   Object.keys(claims).forEach(function (key, index) {
+
+     let c = new Claim();
+     c.id = index;
+     c.claim = key;
+     c.value = claims[key];
+     list.push(c);
+   });
+   this.dataSource = list;
+ }
+
+ ngOnDestroy(): void {
+ this._destroying$.next(undefined);
+ this._destroying$.complete();
+ }
+}
+
+export class Claim {
+ id: number = 0;
+ claim: string = '';
+ value: string = '';
+}
+```
+
+In the *src/app/profile* folder, update the *profile.component.html* with the following HTML snippet.
+
+```html
+<h1>ID token claims:</h1>
+
+<table mat-table [dataSource]="dataSource" class="mat-elevation-z8">
+
+ <!-- Claim Column -->
+ <ng-container matColumnDef="claim">
+ <th mat-header-cell *matHeaderCellDef> Claim </th>
+ <td mat-cell *matCellDef="let element"> {{element.claim}} </td>
+ </ng-container>
+
+ <!-- Value Column -->
+ <ng-container matColumnDef="value">
+ <th mat-header-cell *matHeaderCellDef> Value </th>
+ <td mat-cell *matCellDef="let element"> {{element.value}} </td>
+ </ng-container>
+
+ <tr mat-header-row *matHeaderRowDef="displayedColumns"></tr>
+ <tr mat-row *matRowDef="let row; columns: displayedColumns;"></tr>
+</table>
+```
+
+## Call a web API
+
+To call a [token-based authorization web API](enable-authentication-web-api.md), the app needs a valid access token. The [MsalInterceptor](https://github.com/AzureAD/microsoft-authentication-library-for-js/blob/dev/lib/msal-angular/docs/v2-docs/msal-interceptor.md) provider automatically acquires tokens for outgoing Angular [HttpClient](https://angular.io/api/common/http/HttpClient) requests to known protected resources.
+
+> [!IMPORTANT]
+> The MSAL initialization method (in the *app.module.ts* class) maps protected resources, such as web APIs, to their required app scopes by using the `protectedResourceMap` object. If your code needs to call another web API, add the web API URI, the web API HTTP method, and the corresponding scopes to the `protectedResourceMap` object. For more information, see the [Protected Resource Map](https://github.com/AzureAD/microsoft-authentication-library-for-js/blob/master/lib/msal-angular/docs/v2-docs/msal-interceptor.md#protected-resource-map) documentation.
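+
+For illustration, a minimal sketch of a `protectedResourceMap` entry follows. The endpoint and scope values are placeholders, not values from this sample:
+
+```typescript
+// Illustrative only: map a web API endpoint to the scopes the interceptor should request.
+const protectedResourceMap = new Map<string, Array<string>>();
+protectedResourceMap.set(
+  'https://contoso.azurewebsites.net/api/todolist',      // hypothetical web API URI
+  ['https://contoso.onmicrosoft.com/api/tasks.read']     // hypothetical scope for that API
+);
+```
+
+Any request whose URL starts with a mapped entry is intercepted, and a token with the listed scopes is attached automatically.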
++
+When the [HttpClient](https://angular.io/api/common/http/HttpClient) object calls a web API, the [MsalInterceptor](https://github.com/AzureAD/microsoft-authentication-library-for-js/blob/dev/lib/msal-angular/docs/v2-docs/msal-interceptor.md) provider takes the following steps:
+
+1. Acquires an access token with the required permissions (scopes) for the web API endpoint.
+1. Passes the access token as a bearer token in the authorization header of the HTTP request using this format:
+
+```http
+Authorization: Bearer <access-token>
+```
+
+The `webapi.component` demonstrates how to call a web API. In the *src/app/webapi* folder, update *webapi.component.ts* with the following code snippet.
+
+The following code:
+
+1. Uses the Angular [HttpClient](https://angular.io/guide/http) to call the web API.
+1. Reads the web API URI from the `auth-config` file's `protectedResources.todoListApi.endpoint` property. Based on the web API URI, the MSAL interceptor acquires an access token with the corresponding scopes.
+1. Gets the profile from the web API, and sets the `profile` class variable.
+
+```typescript
+import { Component, OnInit } from '@angular/core';
+import { HttpClient } from '@angular/common/http';
+import { protectedResources } from '../auth-config';
+
+type ProfileType = {
+ name?: string
+};
+
+@Component({
+ selector: 'app-webapi',
+ templateUrl: './webapi.component.html',
+ styleUrls: ['./webapi.component.css']
+})
+export class WebapiComponent implements OnInit {
+ todoListEndpoint: string = protectedResources.todoListApi.endpoint;
+ profile!: ProfileType;
+
+ constructor(
+ private http: HttpClient
+ ) { }
+
+ ngOnInit() {
+ this.getProfile();
+ }
+
+ getProfile() {
+   // The MSAL interceptor attaches the access token to this outgoing request.
+   this.http.get<ProfileType>(this.todoListEndpoint)
+     .subscribe(profile => {
+       this.profile = profile;
+     });
+ }
+}
+```
+
+In the *src/app/webapi* folder, update *webapi.component.html* with the following HTML snippet. The component's template renders the `name` returned by the web API. At the bottom of the page, the template renders the web API address.
+
+```html
+<h1>The web API returns:</h1>
+<div>
+ <p><strong>Name: </strong> {{profile?.name}}</p>
+</div>
+
+<div class="footer-text">
+ Web API: {{todoListEndpoint}}
+</div>
+```
+
+Optionally, update the *webapi.component.css* file with the following CSS snippet.
+
+```css
+.footer-text {
+ position: absolute;
+ bottom: 50px;
+ color: gray;
+}
+```
+
+## Run the Angular application
++
+Run the following command:
+
+```console
+npm start
+```
+
+The console window displays the port number where the application is hosted.
+
+```console
+Listening on port 4200...
+```
+
+> [!TIP]
+> As an alternative to running the `npm start` command, you can use the [VS Code debugger](https://code.visualstudio.com/docs/editor/debugging). VS Code's built-in debugger helps accelerate your edit, compile, and debug loop.
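+
+For example, a minimal *launch.json* sketch for debugging the app in Chrome might look like the following. The values shown are illustrative, not part of this sample:
+
+```json
+{
+  "version": "0.2.0",
+  "configurations": [
+    {
+      "type": "pwa-chrome",
+      "request": "launch",
+      "name": "Launch Chrome against localhost",
+      "url": "http://localhost:4200",
+      "webRoot": "${workspaceFolder}"
+    }
+  ]
+}
+```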
+
+Navigate to `http://localhost:4200` in your browser to view the application.
++
+## Next steps
+
+* Configure [Authentication options in your own Angular application using Azure AD B2C](enable-authentication-angular-spa-app-options.md)
+* [Enable authentication in your own web API](enable-authentication-web-api.md)
active-directory-b2c Enable Authentication Ios App Options https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/enable-authentication-ios-app-options.md
+
+ Title: Enable iOS Swift mobile application options using Azure Active Directory B2C
+description: Learn how to enable and customize iOS Swift mobile application options by using Azure Active Directory B2C.
++++++ Last updated : 07/29/2021+++++
+# Configure authentication options in an iOS Swift application using Azure Active Directory B2C
+
+This article describes ways you can customize and enhance the Azure Active Directory B2C (Azure AD B2C) authentication experience for your iOS Swift application. Before you start, familiarize yourself with the following articles: [Configure authentication in a sample iOS Swift application](configure-authentication-sample-ios-app.md) and [Enable authentication in your own iOS Swift app using Azure Active Directory B2C](enable-authentication-ios-app.md).
++
+To use a custom domain and your tenant ID in the authentication URL:
+
+1. Follow the guidance in [Enable custom domains](custom-domain.md).
+1. Update the `kAuthorityHostName` class member with your custom domain.
+1. Update the `kTenantName` class member with your [tenant ID](tenant-management.md#get-your-tenant-id).
+
+The following Swift code shows the app settings before the change:
+
+```swift
+let kTenantName = "contoso.onmicrosoft.com"
+let kAuthorityHostName = "contoso.b2clogin.com"
+```
+
+The following Swift code shows the app settings after the change:
+
+```swift
+let kTenantName = "00000000-0000-0000-0000-000000000000"
+let kAuthorityHostName = "login.contoso.com"
+```
++
+1. If you're using a custom policy, add the required input claim as described in [Set up direct sign-in](direct-signin.md#prepopulate-the-sign-in-name).
+1. Find your MSAL configuration object and set the `loginHint` property with the login hint.
+
+```swift
+let parameters = MSALInteractiveTokenParameters(scopes: kScopes, webviewParameters: self.webViewParamaters!)
+parameters.promptType = .selectAccount
+parameters.authority = authority
+parameters.loginHint = "bob@contoso.com"
+// More settings here
+
+applicationContext.acquireToken(with: parameters) { (result, error) in
+...
+```
++
+1. Check the domain name of your external identity provider. For more information, see [Redirect sign-in to a social provider](direct-signin.md#redirect-sign-in-to-a-social-provider).
+1. Create or use an existing dictionary to store extra query parameters.
+1. Add the `domain_hint` parameter with the corresponding domain name to the dictionary. For example, `facebook.com`.
+1. Pass the extra query parameters dictionary to the `MSALInteractiveTokenParameters` object's `extraQueryParameters` attribute.
+
+```swift
+let extraQueryParameters: [String: String] = ["domain_hint": "facebook.com"]
+
+let parameters = MSALInteractiveTokenParameters(scopes: kScopes, webviewParameters: self.webViewParamaters!)
+parameters.promptType = .selectAccount
+parameters.authority = authority
+parameters.extraQueryParameters = extraQueryParameters
+// More settings here
+
+applicationContext.acquireToken(with: parameters) { (result, error) in
+...
+```
++
+1. [Configure Language customization](language-customization.md).
+1. Create or use an existing dictionary to store extra query parameters.
+1. Add the `ui_locales` parameter with the corresponding language code to the dictionary. For example, `en-us`.
+1. Pass the extra query parameters dictionary to the `MSALInteractiveTokenParameters` object's `extraQueryParameters` attribute.
+
+```swift
+let extraQueryParameters: [String: String] = ["ui_locales": "en-us"]
+
+let parameters = MSALInteractiveTokenParameters(scopes: kScopes, webviewParameters: self.webViewParamaters!)
+parameters.promptType = .selectAccount
+parameters.authority = authority
+parameters.extraQueryParameters = extraQueryParameters
+// More settings here
+
+applicationContext.acquireToken(with: parameters) { (result, error) in
+...
+```
++
+1. Configure the [ContentDefinitionParameters](customize-ui-with-html.md#configure-dynamic-custom-page-content-uri) element.
+1. Create or use an existing dictionary to store extra query parameters.
+1. Add the custom query string parameter, such as `campaignId`, to the dictionary, and set its value. For example, `germany-promotion`.
+1. Pass the extra query parameters dictionary to the `MSALInteractiveTokenParameters` object's `extraQueryParameters` attribute.
+
+```swift
+let extraQueryParameters: [String: String] = ["campaignId": "germany-promotion"]
+
+let parameters = MSALInteractiveTokenParameters(scopes: kScopes, webviewParameters: self.webViewParamaters!)
+parameters.promptType = .selectAccount
+parameters.authority = authority
+parameters.extraQueryParameters = extraQueryParameters
+// More settings here
+
+applicationContext.acquireToken(with: parameters) { (result, error) in
+...
+```
+++
+1. In your custom policy, define an [ID token hint technical profile](id-token-hint.md).
+1. In your code, generate or acquire an ID token, and set the token to a variable. For example, `idToken`.
+1. Create or use an existing dictionary to store extra query parameters.
+1. Add the `id_token_hint` parameter with the variable that stores the ID token to the dictionary.
+1. Pass the extra query parameters dictionary to the `MSALInteractiveTokenParameters` object's `extraQueryParameters` attribute.
+
+```swift
+let extraQueryParameters: [String: String] = ["id_token_hint": idToken]
+
+let parameters = MSALInteractiveTokenParameters(scopes: kScopes, webviewParameters: self.webViewParamaters!)
+parameters.promptType = .selectAccount
+parameters.authority = authority
+parameters.extraQueryParameters = extraQueryParameters
+// More settings here
+
+applicationContext.acquireToken(with: parameters) { (result, error) in
+...
+```
+++
+The MSAL Logger should be set as early as possible in the app launch sequence, before any MSAL requests are made. Configure MSAL [logging](../active-directory/develop/msal-logging-ios.md) in the *AppDelegate.swift* `application` method.
+
+The following code snippet demonstrates how to configure MSAL logging:
+
+```swift
+func application(_ application: UIApplication, didFinishLaunchingWithOptions launchOptions: [UIApplication.LaunchOptionsKey: Any]?) -> Bool {
+
+ MSALGlobalConfig.loggerConfig.logLevel = .verbose
+ MSALGlobalConfig.loggerConfig.setLogCallback { (logLevel, message, containsPII) in
+
+ // If PiiLoggingEnabled is set to YES, this block can potentially contain sensitive information (personally identifiable information), but not all messages contain it.
+ // containsPII == YES indicates whether a particular message contains PII.
+ // You might want to capture PII only in debug builds, or only if you take the necessary actions to handle PII properly, according to the legal requirements of your region.
+ if let displayableMessage = message {
+ if (!containsPII) {
+ #if DEBUG
+ // NB! This sample uses print just for testing purposes
+ // You should only ever log to NSLog in debug mode to prevent leaking potentially sensitive information
+ print(displayableMessage)
+ #endif
+ }
+ }
+ }
+ return true
+ }
+```
+
+## Embedded webview experience
+
+Web browsers are required for interactive authentication. By default, the MSAL library uses the system webview. During sign-in, the MSAL library presents the iOS system webview with the Azure AD B2C user interface.
+
+For more information, see the [Customize browsers and WebViews for iOS/macOS](../active-directory/develop/customize-webviews.md) article.
+
+Depending on your requirements, you can use the embedded webview. There are visual and single sign-on behavior differences between the embedded webview and the system webview in MSAL.
+
+![Screenshot showing the difference between the system webview experience and the embedded webview experience.](./media/enable-authentication-ios-app-options/system-web-browser-vs-embedded-view.png)
+
+> [!IMPORTANT]
+> It's recommended that you use the platform default, which is typically the system browser. The system browser is better at remembering users who have signed in before. Some identity providers, such as Google, don't support an embedded webview experience.
+
+To change this behavior, set the `webviewType` attribute of the `MSALWebviewParameters` object to `wkWebView`. The following example demonstrates how to change the webview type to the embedded view.
+
+```swift
+func initWebViewParams() {
+ self.webViewParamaters = MSALWebviewParameters(authPresentationViewController: self)
+
+ // Use embedded view experience
+ self.webViewParamaters?.webviewType = .wkWebView
+}
+```
+
+## Next steps
+
+- Learn more: [MSAL for iOS Swift configuration options](https://github.com/AzureAD/microsoft-authentication-library-for-objc/wiki)
active-directory-b2c Enable Authentication Ios App https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/enable-authentication-ios-app.md
+
+ Title: Enable authentication in an iOS Swift app - Azure AD B2C
+description: Enable authentication in an iOS Swift application using Azure Active Directory B2C building blocks. Learn how to use Azure AD B2C to sign in and sign up users in an iOS Swift application.
++++++ Last updated : 07/29/2021+++++
+# Enable authentication in your own iOS Swift application using Azure Active Directory B2C
+
+This article shows you how to add Azure Active Directory B2C (Azure AD B2C) authentication to your own iOS Swift mobile application. Learn how to integrate an iOS Swift application with the [MSAL for iOS](https://github.com/AzureAD/microsoft-authentication-library-for-objc) authentication library.
+
+Use this article with [Configure authentication in a sample iOS Swift application](./configure-authentication-sample-ios-app.md), substituting the sample iOS Swift app with your own iOS Swift app. After completing the steps in this article, your application will accept sign-ins via Azure AD B2C.
+
+## Prerequisites
+
+Review the prerequisites and integration steps in the [Configure authentication in a sample iOS Swift application](configure-authentication-sample-ios-app.md) article.
+
+## Create an iOS Swift app project
+
+If you don't already have an iOS Swift application, follow these steps to set up a new project.
+
+1. Open [Xcode](https://developer.apple.com/xcode/) and select **File** > **New** > **Project**.
+1. For iOS apps, select **iOS** > **App** and select **Next**.
+1. In **Choose options for your new project**, provide the following:
+ 1. **Product name**, such as `MSALiOS`.
+ 1. **Organization identifier**, such as `contoso.com`.
+ 1. For the **Interface**, select **Storyboard**.
+ 1. For the **Life cycle**, select **UIKit App Delegate**.
+ 1. For the **Language**, select **Swift**.
+1. Select **Next**.
+1. Select a folder to create your app and select **Create**.
++
+## Install the MSAL library
+
+1. Use [CocoaPods](https://cocoapods.org/) to install the MSAL library. In the same folder as your project's `.xcodeproj` file, create an empty file called `podfile` if one doesn't already exist. Then add the following code to the `podfile` file:
+
+ ```
+ use_frameworks!
+
+ target '<your-target-here>' do
+ pod 'MSAL'
+ end
+ ```
+
+1. Replace `<your-target-here>` with the name of your project. For example, `MSALiOS`. For more information, see [Podfile Syntax Reference](https://guides.cocoapods.org/syntax/podfile.html#podfile).
+1. In a terminal window, navigate to the folder that contains the `podfile` file. Run `pod install` to install the MSAL library.
+1. After you run the `pod install` command, a `<your project name>.xcworkspace` file is created. To reload the project in Xcode, close Xcode and open `<your project name>.xcworkspace`.
+
+## Set the app URL scheme
+
+When a user authenticates, Azure AD B2C sends an authorization code to the app by using the redirect URI configured on the Azure AD B2C application registration.
+
+The MSAL default redirect URI format is `msauth.[Your_Bundle_Id]://auth`. For example, `msauth.com.microsoft.identitysample.MSALiOS://auth`, where `msauth.com.microsoft.identitysample.MSALiOS` is the URL scheme.
+
+In this step, register your URL scheme using the `CFBundleURLSchemes` array. Your application listens on the URL scheme for the callback from Azure AD B2C.
+
+In Xcode, open the [Info.plist file](https://developer.apple.com/library/archive/documentation/General/Reference/InfoPlistKeyReference/Introduction/Introduction.html) as a source code file. Add the following XML snippet inside of the `<dict>` section.
+
+```xml
+<key>CFBundleURLTypes</key>
+<array>
+ <dict>
+ <key>CFBundleURLSchemes</key>
+ <array>
+ <string>msauth.com.microsoft.identitysample.MSALiOS</string>
+ </array>
+ </dict>
+</array>
+<key>LSApplicationQueriesSchemes</key>
+<array>
+ <string>msauthv2</string>
+ <string>msauthv3</string>
+</array>
+```
+
+## Add the authentication code
+
+The [sample code](configure-authentication-sample-ios-app.md#step-4-get-the-ios-mobile-app-sample) is made up of a `UIViewController` class. The class:
+
+- Defines the structure for a user interface.
+- Contains information about your Azure AD B2C identity provider. The app uses this information to establish a trust relationship with Azure AD B2C.
+- Contains the authentication code that authenticates users, acquires tokens, and validates them.
+
+Choose a `UIViewController` where the users will authenticate, and merge the code [provided here](https://github.com/Azure-Samples/active-directory-b2c-ios-swift-native-msal/blob/vNext/MSALiOS/ViewController.swift) into that `UIViewController`.
+
+## Configure your iOS Swift app
+
+After you [add the authentication code](#add-the-authentication-code), configure your iOS Swift app with your Azure AD B2C settings. Azure AD B2C identity provider settings are configured in the `UIViewController` class chosen in the previous section.
+
+Follow the guidance in [Configure the sample mobile app](configure-authentication-sample-ios-app.md#step-5-configure-the-sample-mobile-app).
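+
+For reference, the following sketch shows the kind of settings you configure. The constant names follow the sample's conventions used elsewhere in this article, and every value is a placeholder for your own configuration:
+
+```swift
+// Illustrative placeholders only; substitute your own tenant, app registration, and user flow values.
+let kTenantName = "contoso.onmicrosoft.com"                              // Azure AD B2C tenant name
+let kAuthorityHostName = "contoso.b2clogin.com"                          // authority host name
+let kClientID = "00000000-0000-0000-0000-000000000000"                   // Application (client) ID
+let kRedirectUri = "msauth.com.microsoft.identitysample.MSALiOS://auth"  // redirect URI
+let kSignupOrSigninPolicy = "B2C_1_signupsignin"                         // sign-up/sign-in user flow
+```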
+
+## Run and test the mobile app
+
+1. Build and run the project with a [simulator or a connected iOS device](https://developer.apple.com/documentation/xcode/running-your-app-in-the-simulator-or-on-a-device).
+1. Select **Sign In**. Then sign up or sign in with your Azure AD B2C local or social account.
+1. After successful authentication, you'll see your display name in the navigation bar.
+
+## How it works
+
+This section describes the code building blocks that enable authentication for your iOS Swift app. It lists the `UIViewController` methods and explains how to customize your code.
+
+### Instantiate a public client application
+
+Public client applications are not trusted to safely keep application secrets, and they don't have client secrets. In [viewDidLoad](https://developer.apple.com/documentation/uikit/uiviewcontroller/1621495-viewdidload), instantiate MSAL using a public client application object.
+
+The following code snippet demonstrates how to initialize the MSAL library with a `MSALPublicClientApplicationConfig` configuration object.
+
+The configuration object provides information about your Azure AD B2C environment, for example, the client ID, redirect URI, and authority to build authentication requests to Azure AD B2C. For information about the configuration object, see [Configure the sample mobile app](configure-authentication-sample-ios-app.md#step-5-configure-the-sample-mobile-app).
+
+```swift
+do {
+
+    // Build the authorities for the sign-up/sign-in and profile-edit user flows.
+    let signinPolicyAuthority = try self.getAuthority(forPolicy: self.kSignupOrSigninPolicy)
+    let editProfileAuthority = try self.getAuthority(forPolicy: self.kEditProfilePolicy)
+
+    // Configure the public client application with the client ID, redirect URI, and known authorities.
+    let pcaConfig = MSALPublicClientApplicationConfig(clientId: kClientID, redirectUri: kRedirectUri, authority: signinPolicyAuthority)
+    pcaConfig.knownAuthorities = [signinPolicyAuthority, editProfileAuthority]
+
+    self.applicationContext = try MSALPublicClientApplication(configuration: pcaConfig)
+    self.initWebViewParams()
+
+} catch {
+    self.updateLoggingText(text: "Unable to create application \(error)")
+}
+```
+
+The `initWebViewParams` method configures the [interactive authentication](../active-directory/develop/customize-webviews.md) experience.
+
+The following Swift code snippet initializes the `webViewParamaters` class member with the system webview. For more information, see the [Customize browsers and WebViews for iOS/macOS](../active-directory/develop/customize-webviews.md) article.
+
+```swift
+func initWebViewParams() {
+ self.webViewParamaters = MSALWebviewParameters(authPresentationViewController: self)
+ self.webViewParamaters?.webviewType = .default
+}
+```
+
+### Interactive authorization request
+
+An interactive authorization request is a flow where the user is prompted to sign up or sign in by using the system webview. When the user selects the **Sign In** button, the `authorizationButton` method is called.
+
+The `authorizationButton` method prepares the `MSALInteractiveTokenParameters` object with relevant data about the authorization request. The `acquireToken` method uses the `MSALInteractiveTokenParameters` to authenticate the user using the system webview.
+
+The following code snippet demonstrates how to start the interactive authorization request.
+
+```swift
+let parameters = MSALInteractiveTokenParameters(scopes: kScopes, webviewParameters: self.webViewParamaters!)
+parameters.promptType = .selectAccount
+parameters.authority = authority
+
+applicationContext.acquireToken(with: parameters) { (result, error) in
+
+    // On error code
+    guard let result = result else {
+        self.updateLoggingText(text: "Could not acquire token: \(error ?? "No error information" as! Error)")
+        return
+    }
+
+    // On success code
+    self.accessToken = result.accessToken
+    self.updateLoggingText(text: "Access token is \(self.accessToken ?? "Empty")")
+}
+```
+
+Once the user finishes the authorization flow (successfully or unsuccessfully), the result is returned to the [closure](https://docs.swift.org/swift-book/LanguageGuide/Closures.html) of the `acquireToken` method.
+
+The `acquireToken` method returns the `result` and `error` objects. Use this closure to:
+
+- Update the mobile app UI with information after the authentication is completed.
+- Call a web API service with an access token.
+- Handle authentication errors, for example, when a user cancels the sign-in flow.
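+
+For example, a minimal sketch of detecting a canceled sign-in inside the closure might look like the following. The error-handling branch is illustrative and assumes the `MSALError` enum values provided by MSAL:
+
+```swift
+// Illustrative only: detect that the user dismissed the sign-in UI.
+if let error = error as NSError?,
+   error.domain == MSALErrorDomain,
+   error.code == MSALError.userCanceled.rawValue {
+    self.updateLoggingText(text: "The user canceled the sign-in flow.")
+    return
+}
+```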
+
+### Call a web API
+
+To call a [token-based authorization web API](enable-authentication-web-api.md), the app needs a valid access token. The app takes the following steps:
+
+1. Acquires an access token with the required permissions (scopes) for the web API endpoint.
+1. Passes the access token as a bearer token in the authorization header of the HTTP request using this format:
+
+```http
+Authorization: Bearer <access-token>
+```
+
+When users [authenticate interactively](#interactive-authorization-request), the app gets an access token in the `acquireToken` closure. For subsequent web API calls, use the acquire token silent (`acquireTokenSilent`) method as described in this section.
+
+The `acquireTokenSilent` method takes the following steps:
+
+1. Attempts to fetch an access token with the requested scopes from the token cache. If the token is present and not expired, the token is returned.
+1. If the token isn't present in the token cache or has expired, the MSAL library attempts to use the refresh token to acquire a new access token.
+1. If the refresh token doesn't exist or has expired, an exception is returned. In this case, you should prompt the user to [sign in interactively](#interactive-authorization-request).
+
+The following code snippet demonstrates how to acquire an access token:
+
+```swift
+do {
+
+    // Get the authority by using the sign-up or sign-in user flow
+    let authority = try self.getAuthority(forPolicy: self.kSignupOrSigninPolicy)
+
+    // Get the current account from the application context
+    guard let thisAccount = try self.getAccountByPolicy(withAccounts: applicationContext.allAccounts(), policy: kSignupOrSigninPolicy) else {
+        self.updateLoggingText(text: "There is no account available!")
+        return
+    }
+
+    // Configure the acquire token silent parameters
+    let parameters = MSALSilentTokenParameters(scopes: kScopes, account: thisAccount)
+    parameters.authority = authority
+    parameters.loginHint = "username"
+
+    // Acquire the token silently
+    self.applicationContext.acquireTokenSilent(with: parameters) { (result, error) in
+        if let error = error {
+
+            let nsError = error as NSError
+
+            // interactionRequired means we need to ask the user to sign in. This usually happens
+            // when the user's refresh token is expired, or when the user has changed their password,
+            // among other possible reasons.
+            if (nsError.domain == MSALErrorDomain) {
+
+                if (nsError.code == MSALError.interactionRequired.rawValue) {
+
+                    // Start an interactive authorization request.
+                    // Notice we supply the account here. This ensures we acquire a token for the same account
+                    // we originally authenticated with.
+
+                    ...
+                }
+            }
+
+            self.updateLoggingText(text: "Could not acquire token: \(error)")
+            return
+        }
+
+        guard let result = result else {
+
+            self.updateLoggingText(text: "Could not acquire token: No result returned")
+            return
+        }
+
+        // On success, set the access token to the accessToken class member.
+        // The callGraphAPI method uses the access token to call a web API.
+        self.accessToken = result.accessToken
+        ...
+    }
+} catch {
+    self.updateLoggingText(text: "Unable to construct parameters before calling acquire token \(error)")
+}
+```
+
+The `callGraphAPI` method retrieves the access token and calls the web API.
+
+```swift
+@objc func callGraphAPI(_ sender: UIButton) {
+ guard let accessToken = self.accessToken else {
+ self.updateLoggingText(text: "Operation failed because could not find an access token!")
+ return
+ }
+
+ let sessionConfig = URLSessionConfiguration.default
+ sessionConfig.timeoutIntervalForRequest = 30
+ let url = URL(string: self.kGraphURI)
+ var request = URLRequest(url: url!)
+ request.setValue("Bearer \(accessToken)", forHTTPHeaderField: "Authorization")
+ let urlSession = URLSession(configuration: sessionConfig, delegate: self, delegateQueue: OperationQueue.main)
+
+ self.updateLoggingText(text: "Calling the API....")
+
+ urlSession.dataTask(with: request) { data, response, error in
+ guard let validData = data else {
+ self.updateLoggingText(text: "Could not call API: \(error ?? "No error informarion" as! Error)")
+ return
+ }
+
+ let result = try? JSONSerialization.jsonObject(with: validData, options: [])
+
+ guard let validResult = result as? [String: Any] else {
+ self.updateLoggingText(text: "Nothing returned from API")
+ return
+ }
+
+ self.updateLoggingText(text: "API response: \(validResult.debugDescription)")
+ }.resume()
+}
+```
+
+### Sign-out
+
+Signing out with MSAL removes all known information about a user from the application. Use the sign-out method to sign out users and update the UI. For example, hide protected UI elements and the sign-out button, and show the sign-in button.
+
+The following code snippet demonstrates how to sign out a user:
+
+```swift
+@objc func signoutButton(_ sender: UIButton) {
+    do {
+
+        let thisAccount = try self.getAccountByPolicy(withAccounts: applicationContext.allAccounts(), policy: kSignupOrSigninPolicy)
+
+        if let accountToRemove = thisAccount {
+            // Remove the account from the MSAL cache to sign the user out.
+            try applicationContext.remove(accountToRemove)
+        } else {
+            self.updateLoggingText(text: "There is no account to sign out!")
+        }
+
+        ...
+
+    } catch {
+        self.updateLoggingText(text: "Received error signing out: \(error)")
+    }
+}
+```
+
+## Next steps
+
+* [Configure authentication options in an iOS Swift application](enable-authentication-ios-app-options.md)
+* [Enable authentication in your own web API](enable-authentication-web-api.md)
active-directory-b2c Microsoft Graph Operations https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/microsoft-graph-operations.md
After you've obtained the code sample, configure it for your environment and the
The application displays a list of commands you can execute. For example, get all users, get a single user, delete a user, update a user's password, and bulk import.
+> [!NOTE]
+> For the application to update user account passwords, you'll need to [grant the user administrator role](microsoft-graph-get-started.md#optional-grant-user-administrator-role) to the application.
+
### Code discussion

The sample code uses the [Microsoft Graph SDK](/graph/sdks/sdks-overview), which is designed to simplify building high-quality, efficient, and resilient applications that access Microsoft Graph.
public static async Task ListUsers(GraphServiceClient graphClient)
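A minimal sketch of how such a `ListUsers` method might be implemented with the Microsoft Graph .NET SDK follows. It's illustrative, not the sample's exact code, and assumes `Microsoft.Graph`, `System`, and `System.Threading.Tasks` usings:

```csharp
// Illustrative sketch: list users with the Microsoft Graph .NET SDK.
public static async Task ListUsers(GraphServiceClient graphClient)
{
    // Request a page of users, selecting only the properties we display.
    var users = await graphClient.Users
        .Request()
        .Select(u => new { u.Id, u.DisplayName })
        .GetAsync();

    foreach (var user in users)
    {
        Console.WriteLine($"{user.Id}: {user.DisplayName}");
    }
}
```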
<!-- LINK --> [graph-objectIdentity]: /graph/api/resources/objectidentity
-[graph-user]: (https://docs.microsoft.com/graph/api/resources/user)
+[graph-user]: https://docs.microsoft.com/graph/api/resources/user
active-directory-b2c Signin Appauth Android https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/signin-appauth-android.md
- Title: Acquire a token in an Android application-
-description: How to create an Android app that uses AppAuth with Azure Active Directory B2C to manage user identities and authenticate users.
------- Previously updated : 05/12/2020----
-# Sign-in using an Android application in Azure Active Directory B2C
-
-The Microsoft identity platform uses open standards such as OAuth2 and OpenID Connect. These standards allow you to use any library you want to integrate with Azure Active Directory B2C. Walkthroughs like this one demonstrate how to configure third-party libraries to connect to the Microsoft identity platform. Most libraries that implement [the RFC6749 OAuth2 spec](https://tools.ietf.org/html/rfc6749) can connect to the Microsoft identity platform.
-
-> [!WARNING]
-> Microsoft does not provide fixes for 3rd party libraries and has not done a review of those libraries. This sample is using a 3rd party library called AppAuth that has been tested for compatibility in basic scenarios with the Azure AD B2C. Issues and feature requests should be directed to the library's open-source project. Please see [this article](../active-directory/develop/reference-v2-libraries.md) for more information.
->
->
-
-If you're new to OAuth2 or OpenID Connect, much of this sample configuration may not make much sense to you. We recommend you look at a brief [overview of the protocol we've documented here](protocols-overview.md).
-
-## Get an Azure AD B2C directory
-
-Before you can use Azure AD B2C, you must create a directory, or tenant. A directory is a container for all of your users, apps, groups, and more. If you don't have one already, [create a B2C directory](tutorial-create-tenant.md) before you continue.
-
-## Create an application
-
-Next, register an application in your Azure AD B2C tenant. This gives Azure AD the information it needs to communicate securely with your app.
--
-Record the **Application (client) ID** for use in a later step.
-
-Also record your custom redirect URI for use in a later step. For example, `com.onmicrosoft.contosob2c.exampleapp://oauth/redirect`.
-
-## Create your user flows
-
-In Azure AD B2C, every user experience is defined by a [user flow](user-flow-overview.md), which is a set of policies that control the behavior of Azure AD. This application requires a sign-in and sign-up user flow. When you create the user flow, be sure to:
-
-* Choose the **Display name** as a sign-up attribute in your user flow.
-* Choose the **Display name** and **Object ID** application claims in every user flow. You can choose other claims as well.
-* Copy the **Name** of each user flow after you create it. It should have the prefix `b2c_1_`. You'll need the user flow name later.
-
-After you have created your user flows, you're ready to build your app.
-
-## Download the sample code
-
-We have provided a working sample that uses AppAuth with Azure AD B2C [on GitHub](https://github.com/Azure-Samples/active-directory-android-native-appauth-b2c). You can download the code and run it. You can quickly get started with your own app using your own Azure AD B2C configuration by following the instructions in the [README.md](https://github.com/Azure-Samples/active-directory-android-native-appauth-b2c/blob/master/README.md).
-
-The sample is a modification of the sample provided by [AppAuth](https://openid.github.io/AppAuth-Android/). Please visit their page to learn more about AppAuth and its features.
-
-## Modifying your app to use Azure AD B2C with AppAuth
-
-> [!NOTE]
-> AppAuth supports Android API 16 (Jellybean) and above. We recommend using API 23 and above.
->
-
-### Configuration
-
-You can configure communication with Azure AD B2C by either specifying the discovery URI or by specifying both the authorization endpoint and token endpoint URIs. In either case, you will need the following information:
-
-* Tenant ID (e.g. contoso.onmicrosoft.com)
-* User flow name (e.g. B2C\_1\_SignUpIn)
-
-If you choose to automatically discover the authorization and token endpoint URIs, you will need to fetch information from the discovery URI. The discovery URI can be generated by replacing the `<tenant-id>` and the `<policy-name>` in the following URL:
-
-```java
-String mDiscoveryURI = "https://<tenant-name>.b2clogin.com/<tenant-id>/<policy-name>/v2.0/.well-known/openid-configuration";
-```
-
-You can then acquire the authorization and token endpoint URIs and create an AuthorizationServiceConfiguration object by running the following:
-
-```java
-final Uri issuerUri = Uri.parse(mDiscoveryURI);
-AuthorizationServiceConfiguration config;
-
-AuthorizationServiceConfiguration.fetchFromIssuer(
- issuerUri,
- new RetrieveConfigurationCallback() {
- @Override public void onFetchConfigurationCompleted(
- @Nullable AuthorizationServiceConfiguration serviceConfiguration,
- @Nullable AuthorizationException ex) {
- if (ex != null) {
- Log.w(TAG, "Failed to retrieve configuration for " + issuerUri, ex);
- } else {
- // service configuration retrieved, proceed to authorization...
- }
- }
- });
-```
-
-Instead of using discovery to obtain the authorization and token endpoint URIs, you can also specify them explicitly by replacing the `<tenant-id>` and the `<policy-name>` in the URLs below:
-
-```java
-String mAuthEndpoint = "https://<tenant-name>.b2clogin.com/<tenant-id>/<policy-name>/oauth2/v2.0/authorize";
-
-String mTokenEndpoint = "https://<tenant-name>.b2clogin.com/<tenant-id>/<policy-name>/oauth2/v2.0/token";
-```
-
-Run the following code to create your AuthorizationServiceConfiguration object:
-
-```java
-AuthorizationServiceConfiguration config =
- new AuthorizationServiceConfiguration(name, mAuthEndpoint, mTokenEndpoint);
-
-// perform the auth request...
-```
-
-### Authorizing
-
-After configuring or retrieving an authorization service configuration, an authorization request can be constructed. To create the request, you will need the following information:
-
-* Client ID (APPLICATION ID) that you recorded earlier. For example, `00000000-0000-0000-0000-000000000000`.
-* Custom Redirect URI that you recorded earlier. For example, `com.onmicrosoft.contosob2c.exampleapp://oauth/redirect`.
-
-Both items should have been saved when you were [registering your app](#create-an-application).
-
-```java
-AuthorizationRequest req = new AuthorizationRequest.Builder(
- config,
- clientId,
- ResponseTypeValues.CODE,
- redirectUri)
- .build();
-```
-
-Please refer to the [AppAuth guide](https://openid.github.io/AppAuth-Android/) on how to complete the rest of the process. If you need to quickly get started with a working app, check out [our sample](https://github.com/Azure-Samples/active-directory-android-native-appauth-b2c). Follow the steps in the [README.md](https://github.com/Azure-Samples/active-directory-android-native-appauth-b2c/blob/master/README.md) to enter your own Azure AD B2C configuration.
active-directory-b2c Signin Appauth Ios https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/signin-appauth-ios.md
- Title: Use AppAuth in an iOS application-
-description: How to create an iOS app that uses AppAuth with Azure Active Directory B2C to manage user identities and authenticate users.
------- Previously updated : 11/30/2018----
-# Azure AD B2C: Sign-in using an iOS application
-
-The Microsoft identity platform uses open standards such as OAuth2 and OpenID Connect. Using an open standard protocol offers more developer choice when selecting a library to integrate with our services. We've provided this walkthrough and others like it to aid developers with writing applications that connect to the Microsoft Identity platform. Most libraries that implement [the RFC6749 OAuth2 spec](https://tools.ietf.org/html/rfc6749) are able to connect to the Microsoft Identity platform.
-
-> [!WARNING]
-> Microsoft does not provide fixes for third-party libraries and has not done a review of those libraries. This sample is using a third-party library called AppAuth that has been tested for compatibility in basic scenarios with the Azure AD B2C. Issues and feature requests should be directed to the library's open-source project. For more information, see [this article](../active-directory/develop/reference-v2-libraries.md).
->
->
-
-If you're new to OAuth2 or OpenID Connect, much of this sample configuration may not make much sense to you. We recommend you look at a brief [overview of the protocol we've documented here](protocols-overview.md).
-
-## Get an Azure AD B2C directory
-Before you can use Azure AD B2C, you must create a directory, or tenant. A directory is a container for all your users, apps, groups, and more. If you don't have one already, [create a B2C directory](tutorial-create-tenant.md) before you continue.
-
-## Create an application
-
-Next, register an application in your Azure AD B2C tenant. This gives Azure AD the information it needs to communicate securely with your app.
--
-Record the **Application (client) ID** for use in a later step.
-
-Also record your custom redirect URI for use in a later step. For example, `com.onmicrosoft.contosob2c.exampleapp://oauth/redirect`.
-
-## Create your user flows
-In Azure AD B2C, every user experience is defined by a [user flow](user-flow-overview.md). This application contains one identity experience: a combined sign-in and sign-up. When you create the user flow, be sure to:
-
-* Under **Sign-up attributes**, select the attribute **Display name**. You can select other attributes as well.
-* Under **Application claims**, select the claims **Display name** and **User's Object ID**. You can select other claims as well.
-* Copy the **Name** of each user flow after you create it. Your user flow name is prefixed with `b2c_1_` when you save the user flow. You need the user flow name later.
-
-After you have created your user flows, you're ready to build your app.
-
-## Download the sample code
-We have provided a working sample that uses AppAuth with Azure AD B2C [on GitHub](https://github.com/Azure-Samples/active-directory-ios-native-appauth-b2c). You can download the code and run it. To use your own Azure AD B2C tenant, follow the instructions in the [README.md](https://github.com/Azure-Samples/active-directory-ios-native-appauth-b2c/blob/master/README.md).
-
-This sample was created by following the README instructions by the [iOS AppAuth project on GitHub](https://github.com/openid/AppAuth-iOS). For more details on how the sample and the library work, reference the AppAuth README on GitHub.
-
-## Modifying your app to use Azure AD B2C with AppAuth
-
-> [!NOTE]
-> AppAuth supports iOS 7 and above. However, to support social logins on Google, SFSafariViewController is needed, which requires iOS 9 or higher.
->
-
-### Configuration
-
-You can configure communication with Azure AD B2C by specifying both the authorization endpoint and token endpoint URIs. To generate these URIs, you need the following information:
-* Tenant ID (for example, contoso.onmicrosoft.com)
-* User flow name (for example, B2C\_1\_SignUpIn)
-
-The token endpoint URI can be generated by replacing the Tenant\_ID and the Policy\_Name in the following URL:
-
-```objc
-static NSString *const tokenEndpoint = @"https://<Tenant_name>.b2clogin.com/te/<Tenant_ID>/<Policy_Name>/oauth2/v2.0/token";
-```
-
-The authorization endpoint URI can be generated by replacing the Tenant\_ID and the Policy\_Name in the following URL:
-
-```objc
-static NSString *const authorizationEndpoint = @"https://<Tenant_name>.b2clogin.com/te/<Tenant_ID>/<Policy_Name>/oauth2/v2.0/authorize";
-```
-
-Run the following code to create your AuthorizationServiceConfiguration object:
-
-```objc
-OIDServiceConfiguration *configuration =
- [[OIDServiceConfiguration alloc] initWithAuthorizationEndpoint:authorizationEndpoint tokenEndpoint:tokenEndpoint];
-// now we are ready to perform the auth request...
-```
-
-### Authorizing
-
-After configuring or retrieving an authorization service configuration, an authorization request can be constructed. To create the request, you need the following information:
-
-* Client ID (APPLICATION ID) that you recorded earlier. For example, `00000000-0000-0000-0000-000000000000`.
-* Custom Redirect URI that you recorded earlier. For example, `com.onmicrosoft.contosob2c.exampleapp://oauth/redirect`.
-
-Both items should have been saved when you were [registering your app](#create-an-application).
-
-```objc
-OIDAuthorizationRequest *request =
- [[OIDAuthorizationRequest alloc] initWithConfiguration:configuration
- clientId:kClientId
- scopes:@[OIDScopeOpenID, OIDScopeProfile]
- redirectURL:[NSURL URLWithString:kRedirectUri]
- responseType:OIDResponseTypeCode
- additionalParameters:nil];
-
-AppDelegate *appDelegate = (AppDelegate *)[UIApplication sharedApplication].delegate;
-appDelegate.currentAuthorizationFlow =
- [OIDAuthState authStateByPresentingAuthorizationRequest:request
- presentingViewController:self
- callback:^(OIDAuthState *_Nullable authState, NSError *_Nullable error) {
- if (authState) {
- NSLog(@"Got authorization tokens. Access token: %@", authState.lastTokenResponse.accessToken);
- [self setAuthState:authState];
- } else {
- NSLog(@"Authorization error: %@", [error localizedDescription]);
- [self setAuthState:nil];
- }
- }];
-```
-
-To set up your application to handle the redirect to the URI with the custom scheme, you need to update the list of 'URL Schemes' in your Info.pList:
-* Open Info.pList.
-* Hover over a row like 'Bundle OS Type Code' and click the \+ symbol.
-* Rename the new row 'URL types'.
-* Click the arrow to the left of 'URL types' to open the tree.
-* Click the arrow to the left of 'Item 0' to open the tree.
-* Rename first item underneath Item 0 to 'URL Schemes'.
-* Click the arrow to the left of 'URL Schemes' to open the tree.
-* In the 'Value' column, there is a blank field to the left of 'Item 0' underneath 'URL Schemes'. Set the value to your application's unique scheme. The value must match the scheme used in redirectURL when creating the OIDAuthorizationRequest object. In the sample, the scheme 'com.onmicrosoft.fabrikamb2c.exampleapp' is used.
-
-Refer to the [AppAuth guide](https://openid.github.io/AppAuth-iOS/) on how to complete the rest of the process. If you need to quickly get started with a working app, check out [the sample](https://github.com/Azure-Samples/active-directory-ios-native-appauth-b2c). Follow the steps in the [README.md](https://github.com/Azure-Samples/active-directory-ios-native-appauth-b2c/blob/master/README.md) to enter your own Azure AD B2C configuration.
-
-We are always open to feedback and suggestions! If you have any difficulties with this article, or have recommendations for improving this content, we would appreciate your feedback at the bottom of the page. For feature requests, add them to [UserVoice](https://feedback.azure.com/forums/169401-azure-active-directory/category/160596-b2c).
active-directory Functions For Customizing Application Data https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/app-provisioning/functions-for-customizing-application-data.md
Previously updated : 07/23/2021 Last updated : 07/29/2021
The syntax for Expressions for Attribute Mappings is reminiscent of Visual Basic
## List of Functions
-[Append](#append) &nbsp;&nbsp;&nbsp;&nbsp; [BitAnd](#bitand) &nbsp;&nbsp;&nbsp;&nbsp; [CBool](#cbool) &nbsp;&nbsp;&nbsp;&nbsp; [CDate](#cdate) &nbsp;&nbsp;&nbsp;&nbsp; [Coalesce](#coalesce) &nbsp;&nbsp;&nbsp;&nbsp; [ConvertToBase64](#converttobase64) &nbsp;&nbsp;&nbsp;&nbsp; [ConvertToUTF8Hex](#converttoutf8hex) &nbsp;&nbsp;&nbsp;&nbsp; [Count](#count) &nbsp;&nbsp;&nbsp;&nbsp; [CStr](#cstr) &nbsp;&nbsp;&nbsp;&nbsp; [DateAdd](#dateadd) &nbsp;&nbsp;&nbsp;&nbsp; [DateFromNum](#datefromnum) &nbsp;[FormatDateTime](#formatdatetime) &nbsp;&nbsp;&nbsp;&nbsp; [Guid](#guid) &nbsp;&nbsp;&nbsp;&nbsp; [IgnoreFlowIfNullOrEmpty](#ignoreflowifnullorempty) &nbsp;&nbsp;&nbsp;&nbsp;[IIF](#iif) &nbsp;&nbsp;&nbsp;&nbsp;[InStr](#instr) &nbsp;&nbsp;&nbsp;&nbsp; [IsNull](#isnull) &nbsp;&nbsp;&nbsp;&nbsp; [IsNullOrEmpty](#isnullorempty) &nbsp;&nbsp;&nbsp;&nbsp; [IsPresent](#ispresent) &nbsp;&nbsp;&nbsp;&nbsp; [IsString](#isstring) &nbsp;&nbsp;&nbsp;&nbsp; [Item](#item) &nbsp;&nbsp;&nbsp;&nbsp; [Join](#join) &nbsp;&nbsp;&nbsp;&nbsp; [Left](#left) &nbsp;&nbsp;&nbsp;&nbsp; [Mid](#mid) &nbsp;&nbsp;&nbsp;&nbsp; [NormalizeDiacritics](#normalizediacritics) &nbsp;&nbsp; &nbsp;&nbsp; [Not](#not) &nbsp;&nbsp;&nbsp;&nbsp; [Now](#now) &nbsp;&nbsp;&nbsp;&nbsp; [NumFromDate](#numfromdate) &nbsp;&nbsp;&nbsp;&nbsp; [RemoveDuplicates](#removeduplicates) &nbsp;&nbsp;&nbsp;&nbsp; [Replace](#replace) &nbsp;&nbsp;&nbsp;&nbsp; [SelectUniqueValue](#selectuniquevalue)&nbsp;&nbsp;&nbsp;&nbsp; [SingleAppRoleAssignment](#singleapproleassignment)&nbsp;&nbsp;&nbsp;&nbsp; [Split](#split)&nbsp;&nbsp;&nbsp;&nbsp;[StripSpaces](#stripspaces) &nbsp;&nbsp;&nbsp;&nbsp; [Switch](#switch)&nbsp;&nbsp;&nbsp;&nbsp; [ToLower](#tolower)&nbsp;&nbsp;&nbsp;&nbsp; [ToUpper](#toupper)&nbsp;&nbsp;&nbsp;&nbsp; [Word](#word)
+[Append](#append) &nbsp;&nbsp;&nbsp;&nbsp; [AppRoleAssignmentsComplex](#approleassignmentscomplex) &nbsp;&nbsp;&nbsp;&nbsp; [BitAnd](#bitand) &nbsp;&nbsp;&nbsp;&nbsp; [CBool](#cbool) &nbsp;&nbsp;&nbsp;&nbsp; [CDate](#cdate) &nbsp;&nbsp;&nbsp;&nbsp; [Coalesce](#coalesce) &nbsp;&nbsp;&nbsp;&nbsp; [ConvertToBase64](#converttobase64) &nbsp;&nbsp;&nbsp;&nbsp; [ConvertToUTF8Hex](#converttoutf8hex) &nbsp;&nbsp;&nbsp;&nbsp; [Count](#count) &nbsp;&nbsp;&nbsp;&nbsp; [CStr](#cstr) &nbsp;&nbsp;&nbsp;&nbsp; [DateAdd](#dateadd) &nbsp;&nbsp;&nbsp;&nbsp; [DateFromNum](#datefromnum) &nbsp;[FormatDateTime](#formatdatetime) &nbsp;&nbsp;&nbsp;&nbsp; [Guid](#guid) &nbsp;&nbsp;&nbsp;&nbsp; [IgnoreFlowIfNullOrEmpty](#ignoreflowifnullorempty) &nbsp;&nbsp;&nbsp;&nbsp;[IIF](#iif) &nbsp;&nbsp;&nbsp;&nbsp;[InStr](#instr) &nbsp;&nbsp;&nbsp;&nbsp; [IsNull](#isnull) &nbsp;&nbsp;&nbsp;&nbsp; [IsNullOrEmpty](#isnullorempty) &nbsp;&nbsp;&nbsp;&nbsp; [IsPresent](#ispresent) &nbsp;&nbsp;&nbsp;&nbsp; [IsString](#isstring) &nbsp;&nbsp;&nbsp;&nbsp; [Item](#item) &nbsp;&nbsp;&nbsp;&nbsp; [Join](#join) &nbsp;&nbsp;&nbsp;&nbsp; [Left](#left) &nbsp;&nbsp;&nbsp;&nbsp; [Mid](#mid) &nbsp;&nbsp;&nbsp;&nbsp; [NormalizeDiacritics](#normalizediacritics) &nbsp;&nbsp; &nbsp;&nbsp; [Not](#not) &nbsp;&nbsp;&nbsp;&nbsp; [Now](#now) &nbsp;&nbsp;&nbsp;&nbsp; [NumFromDate](#numfromdate) &nbsp;&nbsp;&nbsp;&nbsp; [RemoveDuplicates](#removeduplicates) &nbsp;&nbsp;&nbsp;&nbsp; [Replace](#replace) &nbsp;&nbsp;&nbsp;&nbsp; [SelectUniqueValue](#selectuniquevalue)&nbsp;&nbsp;&nbsp;&nbsp; [SingleAppRoleAssignment](#singleapproleassignment)&nbsp;&nbsp;&nbsp;&nbsp; [Split](#split)&nbsp;&nbsp;&nbsp;&nbsp;[StripSpaces](#stripspaces) &nbsp;&nbsp;&nbsp;&nbsp; [Switch](#switch)&nbsp;&nbsp;&nbsp;&nbsp; [ToLower](#tolower)&nbsp;&nbsp;&nbsp;&nbsp; [ToUpper](#toupper)&nbsp;&nbsp;&nbsp;&nbsp; [Word](#word)
### Append
Example: If you are using a Salesforce Sandbox, you might need to append an addi
* **INPUT**: (userPrincipalName): "John.Doe@contoso.com" * **OUTPUT**: "John.Doe@contoso.com.test" +
+### AppRoleAssignmentsComplex
+
+**Function:**
+AppRoleAssignmentsComplex([appRoleAssignments])
+
+**Description:**
+Used to provision multiple roles for a user. For detailed usage, see [Tutorial - Customize user provisioning attribute-mappings for SaaS applications in Azure Active Directory](customize-application-attributes.md#provisioning-a-role-to-a-scim-app).
+
+**Parameters:**
+
+| Name | Required/ Repeating | Type | Notes |
+| | | | |
+| **[appRoleAssignments]** |Required |String |The **[appRoleAssignments]** object. |
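+
+For illustration, the following hedged example shows how the function might behave in an attribute mapping; the role names are hypothetical:
+
+`AppRoleAssignmentsComplex([appRoleAssignments])`
+
+* **INPUT** (appRoleAssignments): the user's app role assignments, for example `Admin` and `User`
+* **OUTPUT**: both assigned roles flow to the target application as a multi-valued roles attribute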
### BitAnd
active-directory Concept Registration Mfa Sspr Combined https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/authentication/concept-registration-mfa-sspr-combined.md
Previously updated : 07/23/2021 Last updated : 07/29/2021
Combined registration supports the following authentication methods and actions:
Users can set one of the following options as the default Multi-Factor Authentication method:
-- Microsoft Authenticator – notification.
-- Authenticator app or hardware token – code.
-- Phone call.
-- Text message.
+- Microsoft Authenticator – push notification
+- Authenticator app or hardware token – code
+- Phone call
+- Text message
-As we continue to add more authentication methods to Azure AD, those methods are available in combined registration.
+Third-party authenticator apps don't provide push notifications. As we continue to add more authentication methods to Azure AD, those methods become available in combined registration.
## Combined registration modes
For example, a user sets Microsoft Authenticator app push notification as the pr
This user is also configured with the SMS/Text option on a resource tenant. If this user removes SMS/Text as one of the authentication options on their home tenant, they get confused when the resource tenant asks them to respond to an SMS/Text message. - To switch the directory in the Azure portal, click the user account name in the upper right corner and click **Switch directory**.
active-directory Howto Mfaserver Dir Radius https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/authentication/howto-mfaserver-dir-radius.md
Previously updated : 11/21/2019 Last updated : 07/29/2021
To configure the RADIUS client, use the following guidelines:
* Configure your appliance/server to authenticate via RADIUS to the Azure Multi-Factor Authentication Server's IP address, which acts as the RADIUS server. * Use the same shared secret that was configured earlier.
-* Configure the RADIUS timeout to 30-60 seconds so that there is time to validate the user's credentials, perform two-step verification, receive their response, and then respond to the RADIUS access request.
+* Configure the RADIUS timeout to 60 seconds so that there is time to validate the user's credentials, perform two-step verification, receive their response, and then respond to the RADIUS access request.
## Next steps
active-directory Reference Aadsts Error Codes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/reference-aadsts-error-codes.md
Previously updated : 07/01/2021 Last updated : 07/28/2021
For example, if you received the error code "AADSTS50058" then do a search in [h
| AADSTS50048 | SubjectMismatchesIssuer - Subject mismatches Issuer claim in the client assertion. Contact the tenant admin. | | AADSTS50049 | NoSuchInstanceForDiscovery - Unknown or invalid instance. | | AADSTS50050 | MalformedDiscoveryRequest - The request is malformed. |
-| AADSTS50053 | IdsLocked - The account is locked because the user tried to sign in too many times with an incorrect user ID or password. The user is blocked due to repeated sign-in attempts. See [Remediate risks and unblock users](../identity-protection/howto-identity-protection-remediate-unblock.md). |
+| AADSTS50053 | This error can have two different causes: <br><ul><li>IdsLocked - The account is locked because the user tried to sign in too many times with an incorrect user ID or password. The user is blocked due to repeated sign-in attempts. See [Remediate risks and unblock users](../identity-protection/howto-identity-protection-remediate-unblock.md).</li><li>Or, sign-in was blocked because it came from an IP address with malicious activity.</li></ul> <br>To determine which failure reason caused this error, sign in to the [Azure portal](https://portal.azure.com). Navigate to your Azure AD tenant and then **Monitoring** -> **Sign-ins**. Find the failed user sign-in with **Sign-in error code** 50053 and check the **Failure reason**.|
| AADSTS50055 | InvalidPasswordExpiredPassword - The password is expired. The user's password is expired, and therefore their login or session was ended. They will be offered the opportunity to reset it, or may ask an admin to reset it via [Reset a user's password using Azure Active Directory](../fundamentals/active-directory-users-reset-password-azure-portal.md). | | AADSTS50056 | Invalid or null password: password does not exist in the directory for this user. The user should be asked to enter their password again. | | AADSTS50057 | UserDisabled - The user account is disabled. The user object in Active Directory backing this account has been disabled. An admin can re-enable this account [through Powershell](/powershell/module/activedirectory/enable-adaccount) |
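If you'd rather query failed sign-ins with error code 50053 than browse the portal blade, here is a hedged sketch using the Microsoft Graph sign-in logs API (it assumes the `auditLogs/signIns` endpoint and an account permitted to read audit logs):

```HTTP
GET https://graph.microsoft.com/v1.0/auditLogs/signIns?$filter=status/errorCode eq 50053
```

Each returned record carries a `status` object whose `failureReason` field distinguishes the account-lockout case from the malicious-IP case.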
active-directory Plan Device Deployment https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/devices/plan-device-deployment.md
BYOD and corporate owned mobile device are registered by users installing the Co
* [Windows 10](/mem/intune/user-help/enroll-windows-10-device)
+* [macOS](/mem/intune/user-help/enroll-your-device-in-intune-macos-cp)
+ If registering your devices is the best option for your organization, see the following resources: * This overview of [Azure AD registered devices](concept-azure-ad-register.md).
active-directory Security Operations Applications https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/fundamentals/security-operations-applications.md
After setting up Azure Key Vault, be sure to [enable logging](../../key-vault/ge
| End-user consent to application| Low| Azure AD Audit logs| Activity: Consent to application / ConsentContext.IsAdminConsent = false| Look for: <li>high profile or highly privileged accounts.<li> app requests high-risk permissions<li>apps with suspicious names, for example generic, misspelled, etc. |
-The act of consenting to an application is not in itself malicious. However, investigate new end-user consent grants looking for suspicious applications. You can [restrict user consent operations](/security/fundamentals/steps-secure-identity).
+The act of consenting to an application is not in itself malicious. However, investigate new end-user consent grants looking for suspicious applications. You can [restrict user consent operations](/azure/security/fundamentals/steps-secure-identity).
For more information on consent operations, see the following resources:
See these security operations guide articles:
[Security operations for devices](security-operations-devices.md)
-[Security operations for infrastructure](security-operations-infrastructure.md)
+[Security operations for infrastructure](security-operations-infrastructure.md)
active-directory Migrate From Federation To Cloud Authentication https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/hybrid/migrate-from-federation-to-cloud-authentication.md
You can move SaaS applications that are currently federated with ADFS to Azure A
For more information, see – - [Moving application authentication from Active Directory Federation Services to Azure Active Directory](/manage-apps/migrate-adfs-apps-to-azure) and-- [AD FS to Azure AD application migration playbook for developers](/samples/azure-samples/ms-identity-dotnet-adfs-to-aad)
+- [AD FS to Azure AD application migration playbook for developers](/samples/azure-samples/ms-identity-adfs-to-aad/ms-identity-dotnet-adfs-to-aad)
### Remove relying party trust
active-directory Role Definitions List https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/roles/role-definitions-list.md
Previously updated : 03/07/2021 Last updated : 07/23/2021 -+
A role definition is a collection of permissions that can be performed, such as
This article describes how to list the Azure AD built-in and custom roles along with their permissions.
-## List all roles
+## Prerequisites
+
+- AzureADPreview module when using PowerShell
+- Admin consent when using Graph explorer for Microsoft Graph API
+
+For more information, see [Prerequisites to use PowerShell or Graph Explorer](prerequisites.md).
+
+## Azure portal
1. Sign in to the [Azure AD admin center](https://aad.portal.azure.com) and select **Azure Active Directory**.
This article describes how to list the Azure AD built-in and custom roles along
The page includes links to relevant documentation to help guide you through managing roles.
- ![Screenshot that shows the "Global Administrator - Description" page.](./media/role-definitions-list/role-description.png)
+ ![Screenshot that shows the "Global Administrator - Description" page.](./media/role-definitions-list/role-description-updated.png)
+
+## PowerShell
+
+Follow these steps to list Azure AD roles using PowerShell.
+
+1. Open a PowerShell window and use [Import-Module](/powershell/module/microsoft.powershell.core/import-module) to import the AzureADPreview module. For more information, see [Prerequisites to use PowerShell or Graph Explorer](prerequisites.md).
+
+ ```powershell
+ Import-Module -Name AzureADPreview -Force
+ ```
+
+2. In a PowerShell window, use [Connect-AzureAD](/powershell/module/azuread/connect-azuread) to sign in to your tenant.
+
+ ```powershell
+ Connect-AzureAD
+ ```
+3. Use [Get-AzureADMSRoleDefinition](/powershell/module/azuread/get-azureadmsroledefinition) to get all roles.
+
+ ```powershell
+ Get-AzureADMSRoleDefinition
+ ```
+
+4. To view the list of permissions of a role, use the following cmdlet.
+
+ ```powershell
+ # Do this to avoid truncation of the list of permissions
+ $FormatEnumerationLimit = -1
+
+ (Get-AzureADMSRoleDefinition -Filter "displayName eq 'Conditional Access Administrator'").RolePermissions | Format-list
+ ```
+
+## Microsoft Graph API
+
+Follow these instructions to list Azure AD roles using the Microsoft Graph API in [Graph Explorer](https://aka.ms/ge).
+
+1. Sign in to the [Graph Explorer](https://aka.ms/ge).
+2. Select **GET** as the HTTP method from the dropdown.
+3. Set the API version to **beta**.
+4. Add the following query to use the [List roleDefinitions](/graph/api/rbacapplication-list-roledefinitions) API.
+
+ ```HTTP
+ GET https://graph.microsoft.com/beta/roleManagement/directory/roleDefinitions
+ ```
+
+5. Select **Run query** to list the roles.
+6. To view permissions of a role, use the following API.
+
+ ```HTTP
+ GET https://graph.microsoft.com/beta/roleManagement/directory/roleDefinitions?$filter=DisplayName eq 'Conditional Access Administrator'&$select=rolePermissions
+ ```
## Next steps
-* Feel free to share with us on the [Azure AD administrative roles forum](https://feedback.azure.com/forums/169401-azure-active-directory?category_id=166032).
-* For more about role permissions, see [Azure AD built-in roles](permissions-reference.md).
-* For default user permissions, see a [comparison of default guest and member user permissions](../fundamentals/users-default-permissions.md).
+* [List Azure AD role assignments](view-assignments.md).
+* [Assign Azure AD roles to users](manage-roles-portal.md).
+* [Azure AD built-in roles](permissions-reference.md).
active-directory Cirrus Identity Bridge For Azure Ad Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/cirrus-identity-bridge-for-azure-ad-tutorial.md
+
+ Title: 'Tutorial: Azure Active Directory single sign-on (SSO) integration with Cirrus Identity Bridge for Azure AD | Microsoft Docs'
+description: Learn how to configure single sign-on between Azure Active Directory and Cirrus Identity Bridge for Azure AD.
++++++++ Last updated : 07/23/2021++++
+# Tutorial: Azure Active Directory single sign-on (SSO) integration with Cirrus Identity Bridge for Azure AD
+
+In this tutorial, you'll learn how to integrate Cirrus Identity Bridge for Azure AD with Azure Active Directory (Azure AD). When you integrate Cirrus Identity Bridge for Azure AD with Azure AD, you can:
+
+* Control in Azure AD who has access to Cirrus Identity Bridge for Azure AD.
+* Enable your users to be automatically signed-in to Cirrus Identity Bridge for Azure AD with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
+
+## Prerequisites
+
+To get started, you need the following items:
+
+* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+* Cirrus Identity Bridge for Azure AD single sign-on (SSO) enabled subscription.
+
+## Scenario description
+
+In this tutorial, you configure and test Azure AD SSO in a test environment.
+
+* Cirrus Identity Bridge for Azure AD supports **SP** initiated SSO.
+
+## Add Cirrus Identity Bridge for Azure AD from the gallery
+
+To configure the integration of Cirrus Identity Bridge for Azure AD into Azure AD, you need to add Cirrus Identity Bridge for Azure AD from the gallery to your list of managed SaaS apps.
+
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
+1. On the left navigation pane, select the **Azure Active Directory** service.
+1. Navigate to **Enterprise Applications** and then select **All Applications**.
+1. To add a new application, select **New application**.
+1. In the **Add from the gallery** section, type **Cirrus Identity Bridge for Azure AD** in the search box.
+1. Select **Cirrus Identity Bridge for Azure AD** from the results panel and then add the app. Wait a few seconds while the app is added to your tenant.
+
+## Configure and test Azure AD SSO for Cirrus Identity Bridge for Azure AD
+
+Configure and test Azure AD SSO with Cirrus Identity Bridge for Azure AD using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in Cirrus Identity Bridge for Azure AD.
+
+To configure and test Azure AD SSO with Cirrus Identity Bridge for Azure AD, perform the following steps:
+
+1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
+ 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
+ 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
+1. **[Configure Cirrus Identity Bridge for Azure AD SSO](#configure-cirrus-identity-bridge-for-azure-ad-sso)** - to configure the single sign-on settings on application side.
+ 1. **[Create Cirrus Identity Bridge for Azure AD test user](#create-cirrus-identity-bridge-for-azure-ad-test-user)** - to have a counterpart of B.Simon in Cirrus Identity Bridge for Azure AD that is linked to the Azure AD representation of user.
+1. **[Test SSO](#test-sso)** - to verify whether the configuration works.
+
+## Configure Azure AD SSO
+
+Follow these steps to enable Azure AD SSO in the Azure portal.
+
+1. In the Azure portal, on the **Cirrus Identity Bridge for Azure AD** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
+
+ ![Edit Basic SAML Configuration](common/edit-urls.png)
+
+1. On the **Basic SAML Configuration** section, perform the following steps:
+
+ a. In the **Identifier (Entity ID)** text box, type a URL using the following pattern:
+ `https://<SUBDOMAIN>.cirrusidentity.com/bridge`
+
+ b. In the **Sign on URL** text box, type a value using the following pattern:
+ `<CUSTOMER_LOGIN_URL>`
+
+ > [!NOTE]
+ > These values are not real. Update these values with the actual Identifier and Sign on URL. Contact [Cirrus Identity Bridge for Azure AD Client support team](https://www.cirrusidentity.com/resources/service-desk) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
+
+1. The Cirrus Identity Bridge for Azure AD application expects the SAML assertions in a specific format, which requires you to add custom attribute mappings to your SAML token attributes configuration. The following screenshot shows the list of default attributes.
+
+ ![image](common/default-attributes.png)
+
+1. In addition to the above, the Cirrus Identity Bridge for Azure AD application expects a few more attributes to be passed back in the SAML response, as shown below. These attributes are also prepopulated, but you can review them per your requirements.
+
+ | Name | Source attribute|
+ | --- | --- |
+ | displayname | user.displayname |
+
+1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, click the copy button to copy the **App Federation Metadata Url** and save it on your computer.
+
+ ![The Certificate download link](common/copy-metadataurl.png)
+
+### Create an Azure AD test user
+
+In this section, you'll create a test user in the Azure portal called B.Simon.
+
+1. From the left pane in the Azure portal, select **Azure Active Directory**, select **Users**, and then select **All users**.
+1. Select **New user** at the top of the screen.
+1. In the **User** properties, follow these steps:
+ 1. In the **Name** field, enter `B.Simon`.
+ 1. In the **User name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`.
+ 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box.
+ 1. Click **Create**.
+
+### Assign the Azure AD test user
+
+In this section, you'll enable B.Simon to use Azure single sign-on by granting access to Cirrus Identity Bridge for Azure AD.
+
+1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
+1. In the applications list, select **Cirrus Identity Bridge for Azure AD**.
+1. In the app's overview page, find the **Manage** section and select **Users and groups**.
+1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
+1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
+1. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you see the "Default Access" role selected.
+1. In the **Add Assignment** dialog, click the **Assign** button.
+
+## Configure Cirrus Identity Bridge for Azure AD SSO
+
+To configure single sign-on on the **Cirrus Identity Bridge for Azure AD** side, you need to send the **App Federation Metadata Url** to the [Cirrus Identity Bridge for Azure AD support team](https://www.cirrusidentity.com/resources/service-desk). They use it to configure the SAML SSO connection properly on both sides.
+
+### Create Cirrus Identity Bridge for Azure AD test user
+
+In this section, you create a user called Britta Simon in Cirrus Identity Bridge for Azure AD. Work with the [Cirrus Identity Bridge for Azure AD support team](https://www.cirrusidentity.com/resources/service-desk) to add the users to the Cirrus Identity Bridge for Azure AD platform. Users must be created and activated before you use single sign-on.
+
+## Test SSO
+
+In this section, you test your Azure AD single sign-on configuration with the following options.
+
+* Click **Test this application** in the Azure portal. This will redirect you to the Cirrus Identity Bridge for Azure AD Sign-on URL, where you can initiate the login flow.
+
+* Go to the Cirrus Identity Bridge for Azure AD Sign-on URL directly and initiate the login flow from there.
+
+* You can use Microsoft My Apps. When you click the Cirrus Identity Bridge for Azure AD tile in My Apps, you're redirected to the Cirrus Identity Bridge for Azure AD Sign-on URL. For more information about My Apps, see [Introduction to My Apps](../user-help/my-apps-portal-end-user-access.md).
+
+## Next steps
+
+Once you configure Cirrus Identity Bridge for Azure AD, you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
active-directory Github Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/github-provisioning-tutorial.md
Title: 'Tutorial: User provisioning for GitHub - Azure AD'
-description: Learn how to configure Azure Active Directory to automatically provision and de-provision user accounts to GitHub.
+description: Learn how to configure Azure Active Directory to automatically provision and de-provision user organization membership in GitHub Enterprise Cloud.
# Tutorial: Configure GitHub for automatic user provisioning
-The objective of this tutorial is to show you the steps you need to perform in GitHub and Azure AD to automatically provision and de-provision user accounts from Azure AD to GitHub.
+The objective of this tutorial is to show you the steps you need to perform in GitHub and Azure AD to automate provisioning of GitHub Enterprise Cloud organization membership.
> [!NOTE] > The Azure AD provisioning integration relies on the [GitHub SCIM API](https://developer.github.com/v3/scim/), which is available to [GitHub Enterprise Cloud](https://help.github.com/articles/github-s-products/#github-enterprise) customers on the [GitHub Enterprise billing plan](https://help.github.com/articles/github-s-billing-plans/#billing-plans-for-organizations).
Azure Active Directory uses a concept called "assignments" to determine which us
Before configuring and enabling the provisioning service, you need to decide what users and/or groups in Azure AD represent the users who need access to your GitHub app. Once decided, you can assign these users to your GitHub app by following the instructions here:
-[Assign a user or group to an enterprise app](../manage-apps/assign-user-or-group-access-portal.md)
+For more information, see [Assign a user or group to an enterprise app](../manage-apps/assign-user-or-group-access-portal.md).
### Important tips for assigning users to GitHub
-* It is recommended that a single Azure AD user is assigned to GitHub to test the provisioning configuration. Additional users and/or groups may be assigned later.
+* We recommend that you assign a single Azure AD user to GitHub to test the provisioning configuration. Additional users and/or groups may be assigned later.
* When assigning a user to GitHub, you must select either the **User** role, or another valid application-specific role (if available) in the assignment dialog. The **Default Access** role does not work for provisioning, and these users are skipped. ## Configuring user provisioning to GitHub
-This section guides you through connecting your Azure AD to GitHub's user account provisioning API, and configuring the provisioning service to create, update, and disable assigned user accounts in GitHub based on user and group assignment in Azure AD.
+This section guides you through connecting your Azure AD to GitHub's SCIM provisioning API to automate provisioning of GitHub organization membership. This integration, which leverages an [OAuth app](https://docs.github.com/en/free-pro-team@latest/github/authenticating-to-github/authorizing-oauth-apps#oauth-apps-and-organizations), automatically adds, manages, and removes members' access to a GitHub Enterprise Cloud organization based on user and group assignment in Azure AD. When users are [provisioned to a GitHub organization via SCIM](https://docs.github.com/en/free-pro-team@latest/rest/reference/scim#provision-and-invite-a-scim-user), an email invitation is sent to the user's email address.
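Under the hood, the provisioning service calls GitHub's SCIM endpoint for the organization. A hedged sketch of the kind of request it issues (modeled on the documented provision-and-invite operation; the organization name and user details are placeholders):

```HTTP
POST https://api.github.com/scim/v2/organizations/<Organization_name>/Users
Content-Type: application/json

{
  "schemas": ["urn:ietf:params:scim:schemas:core:2.0:User"],
  "userName": "b.simon@contoso.com",
  "name": { "givenName": "B", "familyName": "Simon" },
  "emails": [{ "value": "b.simon@contoso.com", "type": "work", "primary": true }]
}
```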
### Configure automatic user account provisioning to GitHub in Azure AD
-1. In the [Azure portal](https://portal.azure.com), browse to the **Azure Active Directory > Enterprise Apps > All applications** section.
+1. In the [Azure portal](https://portal.azure.com), browse to the **Azure Active Directory > Enterprise Apps > All applications** section.
2. If you have already configured GitHub for single sign-on, search for your instance of GitHub using the search field. Otherwise, select **Add** and search for **GitHub** in the application gallery. Select GitHub from the search results, and add it to your list of applications.
This section guides you through connecting your Azure AD to GitHub's user accoun
4. Set the **Provisioning Mode** to **Automatic**.
- ![GitHub Provisioning](./media/github-provisioning-tutorial/GitHub1.png)
+ ![GitHub Provisioning](./media/github-provisioning-tutorial/github1.png)
5. Under the **Admin Credentials** section, click **Authorize**. This operation opens a GitHub authorization dialog in a new browser window. Note that you need to ensure you are approved to authorize access. Follow the directions described [here](https://help.github.com/github/setting-up-and-managing-organizations-and-teams/approving-oauth-apps-for-your-organization). 6. In the new window, sign into GitHub using your Admin account. In the resulting authorization dialog, select the GitHub team that you want to enable provisioning for, and then select **Authorize**. Once completed, return to the Azure portal to complete the provisioning configuration.
- ![Screenshot shows the sign-in page for GitHub.](./media/github-provisioning-tutorial/GitHub2.png)
+ ![Screenshot shows the sign-in page for GitHub.](./media/github-provisioning-tutorial/github2.png)
7. In the Azure portal, input the **Tenant URL** and click **Test Connection** to ensure Azure AD can connect to your GitHub app. If the connection fails, ensure your GitHub account has Admin permissions and the **Tenant URL** is entered correctly, then try the "Authorize" step again (you can construct the **Tenant URL** using the pattern `https://api.github.com/scim/v2/organizations/<Organization_name>`; you can find your organizations under your GitHub account: **Settings** > **Organizations**).
- ![Screenshot shows Organizations page in GitHub.](./media/github-provisioning-tutorial/GitHub3.png)
+ ![Screenshot shows Organizations page in GitHub.](./media/github-provisioning-tutorial/github3.png)
8. Enter the email address of a person or group who should receive provisioning error notifications in the **Notification Email** field, and check the checkbox "Send an email notification when a failure occurs."
This section guides you through connecting your Azure AD to GitHub's user accoun
10. Under the Mappings section, select **Synchronize Azure Active Directory Users to GitHub**.
-11. In the **Attribute Mappings** section, review the user attributes that are synchronized from Azure AD to GitHub. The attributes selected as **Matching** properties are used to match the user accounts in GitHub for update operations. Select the Save button to commit any changes.
+11. In the **Attribute Mappings** section, review the user attributes that are synchronized from Azure AD to GitHub. The attributes selected as **Matching** properties are used to match the user accounts in GitHub for update operations. Do not enable the **Matching precedence** setting for the other default attributes in the **Provisioning** section because errors might occur. Select **Save** to commit any changes.
-12. To enable the Azure AD provisioning service for GitHub, change the **Provisioning Status** to **On** in the **Settings** section
+12. To enable the Azure AD provisioning service for GitHub, change the **Provisioning Status** to **On** in the **Settings** section.
13. Click **Save**.
-This operation starts the initial synchronization of any users and/or groups assigned to GitHub in the Users and Groups section. The initial sync takes longer to perform than subsequent syncs, which occur approximately every 40 minutes as long as the service is running. You can use the **Synchronization Details** section to monitor progress and follow links to provisioning activity logs, which describe all actions performed by the provisioning service.
+This operation starts the initial synchronization of any users and/or groups assigned to GitHub in the Users and Groups section. The initial sync takes longer to perform than subsequent syncs, which occur approximately every 40 minutes as long as the service is running. You can use the **Synchronization Details** section to monitor progress and follow links to provisioning activity logs, which describe all actions performed by the provisioning service.
For more information on how to read the Azure AD provisioning logs, see [Reporting on automatic user account provisioning](../app-provisioning/check-status-user-account-provisioning.md).
active-directory Plan Issuance Solution https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/verifiable-credentials/plan-issuance-solution.md
As part of your design considerations focused on security, we recommend the foll
* Create a dedicated Key Vault for VC issuance. Limit Azure Key Vault permissions to the Azure AD Verifiable Credentials issuance service and the issuance service frontend website service principal.
- * Treat Azure Key Vault as a highly privileged system - Azure Key Vault issues credentials to customers. We recommend that no human identities have standing permissions over the Azure Key Vault service. Administrators should have only just I time access to Key Vault. For more best practices for Azure Key Vault usage, refer to [Azure Security Baseline for Key Vault](https://docs.microsoft.com/security/benchmark/azure/baselines/key-vault-security-baseline).
+ * Treat Azure Key Vault as a highly privileged system - Azure Key Vault issues credentials to customers. We recommend that no human identities have standing permissions over the Azure Key Vault service. Administrators should have only just-in-time access to Key Vault. For more best practices for Azure Key Vault usage, refer to [Azure Security Baseline for Key Vault](/security/benchmark/azure/baselines/key-vault-security-baseline).
* For service principal that represents the issuance frontend website:
For security logging and monitoring, we recommend the following:
* Enable logging of your Azure Storage account to monitor and send alert for configuration changes. More information can be found at [Monitoring Azure Blob Storage](../../storage/blobs/monitor-blob-storage.md).
-* Archive logs in a security information and event management (SIEM) systems, such as [Azure Sentinel](https://azure.microsoft.com/services/azure-sentinel.md) for long-term retention.
+* Archive logs in a security information and event management (SIEM) system, such as [Azure Sentinel](https://azure.microsoft.com/services/azure-sentinel), for long-term retention.
* Mitigate spoofing risks by using the following:
For security logging and monitoring, we recommend the following:
* Mitigate distributed denial of service (DDOS) and Key Vault resource exhaustion risks. Every request that triggers a VC issuance request generates Key Vault signing operations that accrue towards service limits. We recommend protecting traffic by incorporating authentication or captcha before generating issuance requests.
-For guidance on managing your Azure environment, we recommend you review [Azure Security Benchmark](https://docs.microsoft.com/security/benchmark/azure/) and [Securing Azure environments with Azure Active Directory](https://aka.ms/AzureADSecuredAzure). These guides provide best practices for managing the underlying Azure resources, including Azure Key Vault, Azure Storage, websites, and other Azure-related services and capabilities.
+For guidance on managing your Azure environment, we recommend you review [Azure Security Benchmark](/security/benchmark/azure/) and [Securing Azure environments with Azure Active Directory](https://aka.ms/AzureADSecuredAzure). These guides provide best practices for managing the underlying Azure resources, including Azure Key Vault, Azure Storage, websites, and other Azure-related services and capabilities.
## Additional considerations
aks Azure Ad Integration Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/azure-ad-integration-cli.md
description: Learn how to use the Azure CLI to create and Azure Active Directory
Previously updated : 07/20/2020- Last updated : 07/29/2021+ # Integrate Azure Active Directory with Azure Kubernetes Service using the Azure CLI (legacy)
+> [!WARNING]
> **The feature described in this document, Azure AD Integration (legacy), will be deprecated on February 29th, 2024.**
+>
> AKS has a new improved [AKS-managed Azure AD][managed-aad] experience that doesn't require you to manage server or client applications. If you want to migrate, follow the instructions [here][managed-aad-migrate].
+ Azure Kubernetes Service (AKS) can be configured to use Azure Active Directory (AD) for user authentication. In this configuration, you can log into an AKS cluster using an Azure AD authentication token. Cluster operators can also configure Kubernetes role-based access control (Kubernetes RBAC) based on a user's identity or directory group membership. This article shows you how to create the required Azure AD components, then deploy an Azure AD-enabled cluster and create a basic Kubernetes role in the AKS cluster. For the complete sample script used in this article, see [Azure CLI samples - AKS integration with Azure AD][complete-script].
-> [!Important]
-> AKS has a new improved [AKS-managed Azure AD][managed-aad] experience that doesn't require you to manage server or client application. If you want to migrate follow the instructions [here][managed-aad-migrate].
- ## The following limitations apply: - Azure AD can only be enabled on Kubernetes RBAC-enabled clusters.
error: You must be logged in to the server (Unauthorized)
* You defined the appropriate object ID or UPN, depending on if the user account is in the same Azure AD tenant or not. * The user is not a member of more than 200 groups. * Secret defined in the application registration for server matches the value configured using `--aad-server-app-secret`
+* Be sure that only one version of kubectl is installed on your machine at a time. Conflicting versions can cause issues during authorization. To install the latest version, use [az aks install-cli][az-aks-install-cli].
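For example, to check which client version is on the path and install the Azure-managed kubectl (a quick sketch; both commands are standard tooling):

```console
kubectl version --client
az aks install-cli
```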
## Next steps
For best practices on identity and resource control, see [Best practices for aut
<!-- LINKS - internal --> [az-aks-create]: /cli/azure/aks#az_aks_create [az-aks-get-credentials]: /cli/azure/aks#az_aks_get_credentials
+[az-aks-install-cli]: /cli/azure/aks#az_aks_install_cli
[az-group-create]: /cli/azure/group#az_group_create [open-id-connect]: ../active-directory/develop/v2-protocols-oidc.md [az-ad-user-show]: /cli/azure/ad/user#az_ad_user_show
aks Troubleshooting https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/troubleshooting.md
There might be various reasons for the pod being stuck in that mode. You might l
* The pod itself, by using `kubectl describe pod <pod-name>`. * The logs, by using `kubectl logs <pod-name>`.
-For more information on how to troubleshoot pod problems, see [Debug applications](https://kubernetes.io/docs/tasks/debug-application-cluster/debug-application/#debugging-pods).
+For more information about how to troubleshoot pod problems, see [Debugging Pods](https://kubernetes.io/docs/tasks/debug-application-cluster/debug-application/#debugging-pods) in the Kubernetes documentation.
## I'm receiving `TCP timeouts` when using `kubectl` or other third-party tools connecting to the API server

AKS has HA control planes that scale vertically according to the number of cores to ensure its Service Level Objectives (SLOs) and Service Level Agreements (SLAs). If you're experiencing connections timing out, check the following:
aks Upgrade Cluster https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/upgrade-cluster.md
AKS accepts both integer values and a percentage value for max surge. An integer
During an upgrade, the max surge value can be a minimum of 1 and a maximum value equal to the number of nodes in your node pool. You can set larger values, but the maximum number of nodes used for max surge won't be higher than the number of nodes in the pool at the time of upgrade. > [!Important]
-> The max surge setting on a node pool is permanent. Subsequent Kubernetes upgrades or node version upgrades will use this setting. You may change the max surge value for your node pools at any time. For production node pools, we recommend a max-surge setting of 33%.
+> The max surge setting on a node pool is persistent. Subsequent Kubernetes upgrades or node version upgrades will use this setting. You may change the max surge value for your node pools at any time. For production node pools, we recommend a max-surge setting of 33%.
Use the following commands to set max surge values for new or existing node pools.
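For instance, a sketch using placeholder resource names (the `--max-surge` flag is assumed on both `az aks nodepool add` and `az aks nodepool update`):

```azurecli
# Set max surge when creating a new node pool
az aks nodepool add --resource-group myResourceGroup --cluster-name myAKSCluster \
    --name mynodepool --max-surge 33%

# Change max surge on an existing node pool
az aks nodepool update --resource-group myResourceGroup --cluster-name myAKSCluster \
    --name mynodepool --max-surge 33%
```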
aks Use Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/use-managed-identity.md
A custom control plane identity enables access to be granted to the existing ide
You must have the Azure CLI, version 2.15.1 or later installed. ### Limitations
-* Azure Government isn't currently supported.
-* Azure China 21Vianet isn't currently supported.
+* USDOD Central, USDOD East, USGov Iowa in Azure Government aren't currently supported.
If you don't have a managed identity yet, create one by using the [az identity CLI][az-identity-create], as in the sketch below.
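A minimal sketch (the identity and resource group names are placeholders; the resulting resource ID is what you would later pass to `az aks create --assign-identity`):

```azurecli
az identity create --name myIdentity --resource-group myResourceGroup
```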
api-management Api Management Using With Vnet https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/api-management/api-management-using-with-vnet.md
Common misconfiguration issues that can occur while deploying API Management ser
* For reference, see the [ports table](#required-ports) and network requirements. > [!IMPORTANT]
- > If you plan to use a Custom DNS server(s) for the VNET, set it up **before** deploying an API Management service into it. Otherwise, you'll need to update the API Management service each time you change the DNS Server(s) by running the [Apply Network Configuration Operation](/rest/api/apimanagement/2019-12-01/apimanagementservice/applynetworkconfigurationupdates).
+ > If you plan to use a Custom DNS server(s) for the VNET, set it up **before** deploying an API Management service into it. Otherwise, you'll need to update the API Management service each time you change the DNS Server(s) by running the [Apply Network Configuration Operation](/rest/api/apimanagement/2020-12-01/api-management-service/apply-network-configuration-updates).
* **Ports required for API Management:** You can control inbound and outbound traffic into the subnet in which API Management is deployed by using [network security groups][network security groups]. If any of the following ports are unavailable, API Management may not operate properly and may become inaccessible. Blocked ports are another common misconfiguration issue when using API Management with a VNET.
app-service Configure Language Java https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/configure-language-java.md
Azure App Service for Linux supports out of the box tuning and customization thr
To set allocated memory or other JVM runtime options, create an [app setting](configure-common.md#configure-app-settings) named `JAVA_OPTS` with the options. App Service passes this setting as an environment variable to the Java runtime when it starts.
-In the Azure portal, under **Application Settings** for the web app, create a new app setting named `JAVA_OPTS` that includes the additional settings, such as `-Xms512m -Xmx1204m`.
+In the Azure portal, under **Application Settings** for the web app, create a new app setting named `JAVA_OPTS` for Java SE or `CATALINA_OPTS` for Tomcat that includes the additional settings, such as `-Xms512m -Xmx1204m`.
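If you prefer scripting over the portal, a hedged equivalent using the Azure CLI (the resource group and app names are placeholders):

```azurecli
az webapp config appsettings set --resource-group myResourceGroup --name my-java-app \
    --settings JAVA_OPTS="-Xms512m -Xmx1204m"
```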
To configure the app setting from the Maven plugin, add setting/value tags in the Azure plugin section. The following example sets a specific minimum and maximum Java heap size:
app-service Integrate With Application Gateway https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/environment/integrate-with-application-gateway.md
ms.assetid: a6a74f17-bb57-40dd-8113-a20b50ba3050 Previously updated : 03/03/2018 Last updated : 07/26/2021
The [App Service Environment](./intro.md) is a deployment of Azure App Service in the subnet of a customer's Azure virtual network. It can be deployed with a public or private endpoint for app access. The deployment of the App Service Environment with a private endpoint (that is, an internal load balancer) is called an ILB App Service Environment.
-Web application firewalls help secure your web applications by inspecting inbound web traffic to block SQL injections, Cross-Site Scripting, malware uploads & application DDoS and other attacks. It also inspects the responses from the back-end web servers for Data Loss Prevention (DLP). You can get a WAF device from the Azure marketplace or you can use the [Azure Application Gateway][appgw].
+Web application firewalls help secure your web applications by inspecting inbound web traffic to block SQL injection, cross-site scripting, malware uploads, application DDoS, and other attacks. You can get a WAF device from the Azure marketplace or you can use the [Azure Application Gateway][appgw].
The Azure Application Gateway is a virtual appliance that provides layer 7 load balancing, TLS/SSL offloading, and web application firewall (WAF) protection. It can listen on a public IP address and route traffic to your application endpoint. The following information describes how to integrate a WAF-configured application gateway with an app in an ILB App Service Environment.
After setup is completed and you have allowed a short amount of time for your DN
<!--LINKS--> [appgw]: ../../application-gateway/overview.md [custom-domain]: ../app-service-web-tutorial-custom-domain.md
-[ilbase]: ./create-ilb-ase.md
+[ilbase]: ./create-ilb-ase.md
application-gateway Certificates For Backend Authentication https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/application-gateway/certificates-for-backend-authentication.md
From your TLS/SSL certificate, export the public key .cer file (not the private
6. Click **Finish** to export the certificate.
- ![Screenshot shows the Certificate Export Wizard after you complete the file export.](./media/certificates-for-backend-authentication/finish.png)
+ ![Screenshot shows the Certificate Export Wizard after you complete the file export.](./media/certificates-for-backend-authentication/finish-screen.png)
7. Your certificate is successfully exported.
application-gateway Multiple Site Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/application-gateway/multiple-site-overview.md
Similarly, you can host multiple subdomains of the same parent domain on the sam
While using multi-site listeners, to ensure that the client traffic is routed to the accurate backend, it is important to have the request routing rules be present in the correct order. For example, if you have 2 listeners with associated Host name as `*.contoso.com` and `shop.contoso.com` respectively, the listener with the `shop.contoso.com` Host name would have to be processed before the listener with `*.contoso.com`. If the listener with `*.contoso.com` is processed first, then no client traffic would be received by the more specific `shop.contoso.com` listener.
-This ordering can be established by providing a 'Priority' field value to the request routing rules associated with the listeners. You can specify an integer value from 1 to 2000 with 1 being the highest priority and 20000 being the lowest priority. In case the incoming client traffic matches with multiple listeners, the request routing rule with highest priority will be used for serving the request.
+This ordering can be established by providing a 'Priority' field value to the request routing rules associated with the listeners. You can specify an integer value from 1 to 20000 with 1 being the highest priority and 20000 being the lowest priority. In case the incoming client traffic matches with multiple listeners, the request routing rule with highest priority will be used for serving the request.
The priority field only impacts the order of evaluation of request routing rules; it will not change the order of evaluation of path-based rules within a `PathBasedRouting` request routing rule.
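As a sketch, the priority can be supplied when creating the request routing rules with the Azure CLI (assuming the `--priority` parameter on `az network application-gateway rule create`; the gateway, rule, and listener names are placeholders):

```azurecli
# The more specific shop.contoso.com listener's rule gets the higher priority (lower number)
az network application-gateway rule create --resource-group myResourceGroup \
    --gateway-name myAppGateway --name shopRule --http-listener shopListener \
    --rule-type Basic --priority 100

az network application-gateway rule create --resource-group myResourceGroup \
    --gateway-name myAppGateway --name wildcardRule --http-listener wildcardListener \
    --rule-type Basic --priority 200
```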
application-gateway Mutual Authentication Certificate Management https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/application-gateway/mutual-authentication-certificate-management.md
The following steps help you export the .pem or .cer file for your certificate:
6. Click **Finish** to export the certificate. > [!div class="mx-imgBorder"]
- > ![Screenshot shows the Certificate Export Wizard after you complete the file export.](./media/certificates-for-backend-authentication/finish.png)
+ > ![Screenshot shows the Certificate Export Wizard after you complete the file export.](./media/certificates-for-backend-authentication/finish-screen.png)
7. Your certificate is successfully exported.
avere-vfxt Disaster Recovery https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/avere-vfxt/disaster-recovery.md
To access the backup container from an Avere vFXT for Azure cluster, follow this
* For more information about customizing settings for Avere vFXT for Azure, read [Cluster tuning](avere-vfxt-tuning.md). * Learn more about disaster recovery and building resilient applications in Azure:
- * [Azure resiliency technical guidance](/azure/architecture/framework/resiliency/overview)
+ * [Azure resiliency technical guidance](/azure/architecture/reliability/architect)
* [Recover from a region-wide service disruption](/azure/architecture/resiliency/recovery-loss-azure-region) * [Disaster recovery and high availability for Azure applications](/azure/architecture/framework/resiliency/backup-and-recovery)
- <!-- can't find these in the source tree to use relative links -->
+ <!-- can't find these in the source tree to use relative links -->
azure-app-configuration Quickstart Javascript https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-app-configuration/quickstart-javascript.md
+
+ Title: Quickstart for using Azure App Configuration with JavaScript apps | Microsoft Docs
+description: In this quickstart, create a Node.js app with Azure App Configuration to centralize storage and management of application settings separate from your code.
+++++ Last updated : 07/12/2021++
+#Customer intent: As a JavaScript developer, I want to manage all my app settings in one place.
+
+# Quickstart: Create a JavaScript app with Azure App Configuration
+
+In this quickstart, you will use Azure App Configuration to centralize storage and management of application settings using the [App Configuration client library for JavaScript](https://github.com/Azure/azure-sdk-for-js/blob/main/sdk/appconfiguration/app-configuration/README.md).
+
+## Prerequisites
+
+- Azure subscription - [create one for free](https://azure.microsoft.com/free/)
+- [LTS versions of Node.js](https://nodejs.org/en/about/releases/). For information about installing Node.js either directly on Windows or using the Windows Subsystem for Linux (WSL), see [Get started with Node.js](/windows/dev-environment/javascript/nodejs-overview)
+
+## Create an App Configuration store
++
+7. Select **Configuration Explorer** > **Create** > **Key-value** to add the following key-value pairs:
+
+ | Key | Value |
+ |---|---|
+ | TestApp:Settings:Message | Data from Azure App Configuration |
+
+ Leave **Label** and **Content Type** empty for now.
+
+8. Select **Apply**.
+
+## Setting up the Node.js app
+
+1. Create a new directory for the project named *app-configuration-quickstart*.
+
+ ```console
+ mkdir app-configuration-quickstart
+ ```
+
+1. Switch to the newly created *app-configuration-quickstart* directory.
+
+ ```console
+ cd app-configuration-quickstart
+ ```
+
+1. Install the Azure App Configuration client library by using the `npm install` command.
+
+ ```console
+ npm install @azure/app-configuration
+ ```
+
+1. Create a new file called *app.js* in the *app-configuration-quickstart* directory and add the following code:
+
+ ```javascript
+ const appConfig = require("@azure/app-configuration");
+ ```
+
+## Configure your connection string
+
+1. Set an environment variable named **AZURE_APP_CONFIG_CONNECTION_STRING**, and set it to the connection string of your App Configuration store. At the command line, run the following command:
+
+ ### [PowerShell](#tab/azure-powershell)
+
+ ```azurepowershell
+ $Env:AZURE_APP_CONFIG_CONNECTION_STRING = "connection-string-of-your-app-configuration-store"
+ ```
+
+ ### [Command line](#tab/command-line)
+
+ ```cmd
+ setx AZURE_APP_CONFIG_CONNECTION_STRING "connection-string-of-your-app-configuration-store"
+ ```
+
+ ### [macOS](#tab/macOS)
+ ```console
+ export AZURE_APP_CONFIG_CONNECTION_STRING='connection-string-of-your-app-configuration-store'
+ ```
+
+
+
+2. Restart the command prompt to allow the change to take effect. Print out the value of the environment variable to validate that it is set properly.
+
+## Connect to an App Configuration store
+
+The following code snippet creates an instance of **AppConfigurationClient** using the connection string stored in your environment variables.
+
+```javascript
+const connection_string = process.env.AZURE_APP_CONFIG_CONNECTION_STRING;
+const client = new appConfig.AppConfigurationClient(connection_string);
+```
+
+## Get a configuration setting
+
+The following code snippet retrieves a configuration setting by `key` name. The key shown in this example was created in the previous steps of this article.
+
+```javascript
+async function run() {
+
+ let retrievedSetting = await client.getConfigurationSetting({
+ key: "TestApp:Settings:Message"
+ });
+
+ console.log("Retrieved value:", retrievedSetting.value);
+}
+
+run().catch((err) => console.log("ERROR:", err));
+```
+
+## Build and run the app locally
+
+1. Run the following command to run the Node.js app:
+
+ ```powershell
+ node app.js
+ ```
+1. You should see the following output at the command prompt:
+
+ ```powershell
+ Retrieved value: Data from Azure App Configuration
+ ```
+## Clean up resources
+++
+## Next steps
+
+In this quickstart, you created a new App Configuration store and learned how to access key-values from a Node.js app.
+
+For additional code samples, visit:
+
+> [!div class="nextstepaction"]
+> [Azure App Configuration client library samples](https://github.com/Azure/azure-sdk-for-js/tree/main/sdk/appconfiguration/app-configuration/samples)
azure-app-configuration Quickstart Python https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-app-configuration/quickstart-python.md
Key: TestApp:Settings:NewSetting, Value: Value has been updated!
## Next steps
-In this quickstart, you created a new App Configuration store and learnt how to access key-values from a Python app.
+In this quickstart, you created a new App Configuration store and learned how to access key-values from a Python app.
For additional code samples, visit:
azure-arc Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/data/release-notes.md
The `azdata arc dc export` command is no longer functional. Use `az arcdata dc e
#### Required property: `infrastructure`
-The `infrastructure` property is a new required property when deploying a data controller. Adjust your yaml files, azdata/az scripts, and ARM templates to account for specifying this property value. Allowed values are `alibaba`, `aws`, `azure`, `gpc`, `onpremises`, `other`.
+The `infrastructure` property is a new required property when deploying a data controller. Adjust your yaml files, azdata/az scripts, and ARM templates to account for specifying this property value. Allowed values are `alibaba`, `aws`, `azure`, `gcp`, `onpremises`, `other`.
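For example, when creating a data controller with the Azure CLI, the value is supplied through the `--infrastructure` parameter. A hedged sketch (parameter names assume the `arcdata` CLI extension; the controller name, namespace, and profile are placeholders):

```azurecli
az arcdata dc create --name arc-dc --k8s-namespace arc \
    --connectivity-mode indirect --profile-name azure-arc-aks-default-storage \
    --infrastructure azure --use-k8s
```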
#### Kibana login
azure-arc Supported Versions Postgres Hyperscale https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/data/supported-versions-postgres-hyperscale.md
To learn more, read about each version on the official Postgres site:
At creation time, you have the possibility to indicate what version to create by passing the _--engine-version_ parameter. If you do not indicate a version information, by default, a server group of Postgres version 12 will be created.
-## How do be notified when other versions are available?
+## How can I be notified when other versions are available?
Come back and read this article. It will be updated as appropriate. You can also list the kinds of custom resource definitions (CRD) in the Arc Data Controller in your Kubernetes cluster. Run the following command: ```console
azure-government Documentation Government Impact Level 5 https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-government/documentation-government-impact-level-5.md
Previously updated : 07/26/2021 Last updated : 07/28/2021 #Customer intent: As a DoD mission owner, I want to know how to implement a workload at Impact Level 5 in Microsoft Azure Government. + # Isolation guidelines for Impact Level 5 workloads Azure Government supports applications that use Impact Level 5 (IL5) data in all available regions. IL5 requirements are defined in the [US Department of Defense (DoD) Cloud Computing Security Requirements Guide (SRG)](https://dl.dod.cyber.mil/wp-content/uploads/cloud/SRG/https://docsupdatetracker.net/index.html#3INFORMATIONSECURITYOBJECTIVES/IMPACTLEVELS). IL5 workloads have a higher degree of impact to the DoD and must be secured to a higher standard. When you deploy these workloads on Azure Government, you can meet their isolation requirements in various ways. The guidance in this document addresses configurations and settings needed to meet the IL5 isolation requirements. We'll update this document as we enable new isolation options and the Defense Information Systems Agency (DISA) authorizes new services for IL5 data.
For Analytics services availability in Azure Government, see [Products available
For Compute services availability in Azure Government, see [Products available by region](https://azure.microsoft.com/global-infrastructure/services/?products=spring-cloud,azure-vmware,cloud-services,batch,app-service,service-fabric,functions,virtual-machine-scale-sets,virtual-machines&regions=non-regional,usgov-non-regional,us-dod-central,us-dod-east,usgov-arizona,usgov-texas,usgov-virginia). For a list of services in scope for DoD IL5 PA, see [Azure Government services by audit scope](./compliance/azure-services-in-fedramp-auditscope.md#azure-government-services-by-audit-scope). Guidance below is provided only for IL5 PA services that require extra configuration to support IL5 workloads.
-### [Azure Functions](https://azure.microsoft.com/services/functions/)
--- To accommodate proper network and workload isolation, deploy your Azure functions on App Service plans configured to use the Isolated SKU. For more information, see the [App Service plan documentation](../app-service/overview-hosting-plans.md).- ### [Batch](https://azure.microsoft.com/services/batch/) - Enable user subscription mode, which will require a Key Vault instance for proper encryption and key storage. For more information, see the documentation on [batch account configurations](../batch/batch-account-create-portal.md).
For Databases services availability in Azure Government, see [Products available
For Integration services availability in Azure Government, see [Products available by region](https://azure.microsoft.com/global-infrastructure/services/?products=event-grid,api-management,service-bus,logic-apps&regions=non-regional,usgov-non-regional,us-dod-central,us-dod-east,usgov-arizona,usgov-texas,usgov-virginia). For a list of services in scope for DoD IL5 PA, see [Azure Government services by audit scope](./compliance/azure-services-in-fedramp-auditscope.md#azure-government-services-by-audit-scope). Guidance below is provided only for IL5 PA services that require extra configuration to support IL5 workloads.
-### [Azure Logic Apps](https://azure.microsoft.com/services/logic-apps/)
--- Azure Logic Apps supports Impact Level 5 workloads in Azure Government. To meet these requirements, Logic Apps supports the capability for you to create and run workflows in an environment with dedicated resources so that you can avoid sharing computing resources with other tenants. For more information, see [Secure access and data in Azure Logic Apps: Isolation guidance](../logic-apps/logic-apps-securing-a-logic-app.md#isolation-logic-apps).- ### [Service Bus](https://azure.microsoft.com/services/service-bus/) - Configure encryption of data at rest in Azure Service Bus by [using customer-managed keys in Azure Key Vault](../service-bus-messaging/configure-customer-managed-key.md).
For Internet of Things services availability in Azure Government, see [Products
## Management and governance
-For Management and governance services availability in Azure Government, see [Products available by region](https://azure.microsoft.com/en-us/global-infrastructure/services/?products=azure-automanage,resource-mover,azure-portal,azure-lighthouse,cloud-shell,managed-applications,azure-policy,monitor,automation,scheduler,site-recovery,cost-management,backup,blueprints,advisor&regions=non-regional,usgov-non-regional,us-dod-central,us-dod-east,usgov-arizona,usgov-texas,usgov-virginia). For a list of services in scope for DoD IL5 PA, see [Azure Government services by audit scope](./compliance/azure-services-in-fedramp-auditscope.md#azure-government-services-by-audit-scope).
+For Management and governance services availability in Azure Government, see [Products available by region](https://azure.microsoft.com/global-infrastructure/services/?products=azure-automanage,resource-mover,azure-portal,azure-lighthouse,cloud-shell,managed-applications,azure-policy,monitor,automation,scheduler,site-recovery,cost-management,backup,blueprints,advisor&regions=non-regional,usgov-non-regional,us-dod-central,us-dod-east,usgov-arizona,usgov-texas,usgov-virginia). For a list of services in scope for DoD IL5 PA, see [Azure Government services by audit scope](./compliance/azure-services-in-fedramp-auditscope.md#azure-government-services-by-audit-scope).
### [Automation](https://azure.microsoft.com/services/automation/)
For Management and governance services availability in Azure Government, see [Pr
- By default, all data and saved queries are encrypted at rest using Microsoft-managed keys. Configure encryption at rest of your data in Azure Monitor [using customer-managed keys in Azure Key Vault](../azure-monitor/logs/customer-managed-keys.md). > [!IMPORTANT]
-> See additional guidance below for **[Log Analytics]**, which is a feature of Azure Monitor.
+> See additional guidance below for **Log Analytics**, which is a feature of Azure Monitor.
#### [Log Analytics](../azure-monitor/logs/data-platform-logs.md)
azure-monitor Worker Service https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/app/worker-service.md
Specific instructions for each type of application are described in the following
## .NET Core 3.0 worker service application
-Full example is shared [here](https://github.com/microsoft/ApplicationInsights-dotnet/tree/develop/examples/WorkerServiceSDK/WorkerServiceSampleWithApplicationInsights)
+A full example is shared [here](https://github.com/microsoft/ApplicationInsights-dotnet/tree/develop/examples/WorkerService).
1. Download and install [.NET Core 3.0](https://dotnet.microsoft.com/download/dotnet-core/3.0)
2. Create a new Worker Service project, either by using the Visual Studio new project template or the command line `dotnet new worker`
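After these steps, the host wiring is all that remains. The following is a minimal sketch of the generated `Program.cs` with Application Insights registered; the `Worker` class name comes from the template, and the instrumentation key is assumed to come from configuration (for example, the `APPINSIGHTS_INSTRUMENTATIONKEY` environment variable discussed below):

```csharp
using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.Hosting;

public class Program
{
    public static void Main(string[] args) => CreateHostBuilder(args).Build().Run();

    public static IHostBuilder CreateHostBuilder(string[] args) =>
        Host.CreateDefaultBuilder(args)
            .ConfigureServices((hostContext, services) =>
            {
                // Registers TelemetryClient and the auto-collection modules
                // for non-HTTP applications such as worker services.
                services.AddApplicationInsightsTelemetryWorkerService();

                // "Worker" is the background service class generated by the
                // `dotnet new worker` template.
                services.AddHostedService<Worker>();
            });
}
```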
Typically, `APPINSIGHTS_INSTRUMENTATIONKEY` specifies the instrumentation key fo
[This](/aspnet/core/fundamentals/host/hosted-services?tabs=visual-studio) document describes how to create background tasks in an ASP.NET Core 2.1/2.2 application.
-Full example is shared [here](https://github.com/microsoft/ApplicationInsights-dotnet/tree/develop/examples/WorkerServiceSDK/BackgroundTasksWithHostedService)
+A full example is shared [here](https://github.com/microsoft/ApplicationInsights-dotnet/tree/develop/examples/BackgroundTasksWithHostedService).
1. Install the [Microsoft.ApplicationInsights.WorkerService](https://www.nuget.org/packages/Microsoft.ApplicationInsights.WorkerService) package to the application.
2. Add `services.AddApplicationInsightsTelemetryWorkerService();` to the `ConfigureServices()` method, as in this example:
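The original snippet is truncated in this digest; a minimal sketch of that step in an ASP.NET Core 2.x `Startup` class, assuming the `TimedHostedService` described next, might look like this:

```csharp
public void ConfigureServices(IServiceCollection services)
{
    // Enables Application Insights telemetry for a non-HTTP workload
    // hosted inside the ASP.NET Core application.
    services.AddApplicationInsightsTelemetryWorkerService();

    // Registers the background task whose logic runs on a timer.
    services.AddHostedService<TimedHostedService>();
}
```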
Following is the code for `TimedHostedService` where the background task logic r
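That code is also truncated in this digest; a hedged sketch of a timer-driven hosted service that tracks each run as an operation (the class name, operation name, and five-second interval are illustrative) could be:

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.ApplicationInsights;
using Microsoft.ApplicationInsights.DataContracts;
using Microsoft.Extensions.Hosting;

public class TimedHostedService : IHostedService, IDisposable
{
    private readonly TelemetryClient _telemetryClient;
    private Timer _timer;

    public TimedHostedService(TelemetryClient telemetryClient)
    {
        // TelemetryClient is registered by AddApplicationInsightsTelemetryWorkerService().
        _telemetryClient = telemetryClient;
    }

    public Task StartAsync(CancellationToken cancellationToken)
    {
        // Run the background task every five seconds (interval is illustrative).
        _timer = new Timer(DoWork, null, TimeSpan.Zero, TimeSpan.FromSeconds(5));
        return Task.CompletedTask;
    }

    private void DoWork(object state)
    {
        // StartOperation correlates all telemetry emitted inside this block.
        using (_telemetryClient.StartOperation<RequestTelemetry>("TimedBackgroundWork"))
        {
            _telemetryClient.TrackTrace("Background task ran.");
        }
    }

    public Task StopAsync(CancellationToken cancellationToken)
    {
        _timer?.Change(Timeout.Infinite, 0);
        return Task.CompletedTask;
    }

    public void Dispose() => _timer?.Dispose();
}
```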
As mentioned at the beginning of this article, the new package can be used to enable Application Insights telemetry even from a regular console application. This package targets [`NetStandard2.0`](/dotnet/standard/net-standard), and hence can be used for console apps in .NET Core 2.0 or higher and .NET Framework 4.7.2 or higher.
-Full example is shared [here](https://github.com/microsoft/ApplicationInsights-dotnet/tree/develop/examples/WorkerServiceSDK/ConsoleAppWithApplicationInsights)
+A full example is shared [here](https://github.com/microsoft/ApplicationInsights-dotnet/tree/develop/examples/ConsoleApp).
1. Install the [Microsoft.ApplicationInsights.WorkerService](https://www.nuget.org/packages/Microsoft.ApplicationInsights.WorkerService) package to the application.
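For a console app there is no host builder; a minimal sketch under that assumption builds the service collection by hand and flushes before exit (the trace message is a placeholder):

```csharp
using Microsoft.ApplicationInsights;
using Microsoft.Extensions.DependencyInjection;

class Program
{
    static void Main()
    {
        // Register Application Insights for a non-HTTP application.
        var services = new ServiceCollection();
        services.AddApplicationInsightsTelemetryWorkerService();

        using (var provider = services.BuildServiceProvider())
        {
            var telemetryClient = provider.GetRequiredService<TelemetryClient>();
            telemetryClient.TrackTrace("Console app started.");

            // Telemetry is batched; flush explicitly before the process exits.
            telemetryClient.Flush();
        }
    }
}
```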
azure-monitor Metrics Supported https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/essentials/metrics-supported.md
description: List of metrics available for each resource type with Azure Monitor
Previously updated : 07/06/2021 Last updated : 07/19/2021
The Azure Monitor Agent replaces the Azure Diagnostics extension and Log Analyti
|total-requests|Yes|total-requests|Count|Average|Total number of requests in the lifetime of the process|Deployment, AppName, Pod| |working-set|Yes|working-set|Count|Average|Amount of working set used by the process (MB)|Deployment, AppName, Pod| + ## Microsoft.Automation/automationAccounts |Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions|
The Azure Monitor Agent replaces the Azure Diagnostics extension and Log Analyti
|UnusableNodeCount|No|Unusable Node Count|Count|Total|Number of unusable nodes|No Dimensions| |WaitingForStartTaskNodeCount|No|Waiting For Start Task Node Count|Count|Total|Number of nodes waiting for the Start Task to complete|No Dimensions| - ## Microsoft.BatchAI/workspaces |Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions|
The Azure Monitor Agent replaces the Azure Diagnostics extension and Log Analyti
|Unusable Cores|Yes|Unusable Cores|Count|Average|Number of unusable cores|Scenario, ClusterName| |Unusable Nodes|Yes|Unusable Nodes|Count|Average|Number of unusable nodes|Scenario, ClusterName| - ## microsoft.bing/accounts |Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions|
The Azure Monitor Agent replaces the Azure Diagnostics extension and Log Analyti
|HyperVVirtualProcessorUtilization|Yes|Edge Compute - Percentage CPU|Percent|Average|Percent CPU Usage|InstanceName| |NICReadThroughput|Yes|Read Throughput (Network)|BytesPerSecond|Average|The read throughput of the network interface on the device in the reporting period for all volumes in the gateway.|InstanceName| |NICWriteThroughput|Yes|Write Throughput (Network)|BytesPerSecond|Average|The write throughput of the network interface on the device in the reporting period for all volumes in the gateway.|InstanceName|
-|TotalCapacity|Yes|Total Capacity|Bytes|Average|Total Capacity|No Dimensions|
+|TotalCapacity|Yes|Total Capacity|Bytes|Average|The total capacity of the device in bytes during the reporting period.|No Dimensions|
+ ## Microsoft.DataCollaboration/workspaces
The Azure Monitor Agent replaces the Azure Diagnostics extension and Log Analyti
|ProposalCount|Yes|Created Proposals|Count|Maximum|Number of created proposals|ProposalName| |ScriptCount|Yes|Created Scripts|Count|Maximum|Number of created scripts|ScriptName| + ## Microsoft.DataFactory/datafactories |Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions|
The Azure Monitor Agent replaces the Azure Diagnostics extension and Log Analyti
|d2c.endpoints.latency.serviceBusQueues|Yes|Routing: message latency for Service Bus Queue|Milliseconds|Average|The average latency (milliseconds) between message ingress to IoT Hub and telemetry message ingress into a Service Bus queue endpoint.|No Dimensions| |d2c.endpoints.latency.serviceBusTopics|Yes|Routing: message latency for Service Bus Topic|Milliseconds|Average|The average latency (milliseconds) between message ingress to IoT Hub and telemetry message ingress into a Service Bus topic endpoint.|No Dimensions| |d2c.endpoints.latency.storage|Yes|Routing: message latency for storage|Milliseconds|Average|The average latency (milliseconds) between message ingress to IoT Hub and telemetry message ingress into a storage endpoint.|No Dimensions|
-|d2c.telemetry.egress.dropped|Yes|Routing: telemetry messages dropped |Count|Total|The number of times messages were dropped by IoT Hub routing due to dead endpoints. This value does not count messages delivered to fallback route as dropped messages are not delivered there.|No Dimensions|
+|d2c.telemetry.egress.dropped|Yes|Routing: telemetry messages dropped |Count|Total|The number of times messages were dropped by IoT Hub routing due to dead endpoints. This value does not count messages delivered to fallback route as dropped messages are not delivered there.|No Dimensions|
|d2c.telemetry.egress.fallback|Yes|Routing: messages delivered to fallback|Count|Total|The number of times IoT Hub routing delivered messages to the endpoint associated with the fallback route.|No Dimensions| |d2c.telemetry.egress.invalid|Yes|Routing: telemetry messages incompatible|Count|Total|The number of times IoT Hub routing failed to deliver messages due to an incompatibility with the endpoint. This value does not include retries.|No Dimensions|
-|d2c.telemetry.egress.orphaned|Yes|Routing: telemetry messages orphaned |Count|Total|The number of times messages were orphaned by IoT Hub routing because they didn't match any routing rules (including the fallback rule). |No Dimensions|
+|d2c.telemetry.egress.orphaned|Yes|Routing: telemetry messages orphaned |Count|Total|The number of times messages were orphaned by IoT Hub routing because they didn't match any routing rules (including the fallback rule). |No Dimensions|
|d2c.telemetry.egress.success|Yes|Routing: telemetry messages delivered|Count|Total|The number of times messages were successfully delivered to all endpoints using IoT Hub routing. If a message is routed to multiple endpoints, this value increases by one for each successful delivery. If a message is delivered to the same endpoint multiple times, this value increases by one for each successful delivery.|No Dimensions| |d2c.telemetry.ingress.allProtocol|Yes|Telemetry message send attempts|Count|Total|Number of device-to-cloud telemetry messages attempted to be sent to your IoT hub|No Dimensions| |d2c.telemetry.ingress.sendThrottle|Yes|Number of throttling errors|Count|Total|Number of throttling errors due to device throughput throttles|No Dimensions|
The Azure Monitor Agent replaces the Azure Diagnostics extension and Log Analyti
|IoTConnectorMeasurementIngestionLatencyMs|Yes|Average Group Stage Latency|Milliseconds|Average|The time period between when the IoT Connector received the device data and when the data is processed by the FHIR conversion stage.|Operation, ConnectorName| |IoTConnectorNormalizedEvent|Yes|Number of Normalized Messages|Count|Sum|The total number of mapped normalized values outputted from the normalization stage of the Azure IoT Connector for FHIR.|Operation, ConnectorName| |IoTConnectorTotalErrors|Yes|Total Error Count|Count|Sum|The total number of errors logged by the Azure IoT Connector for FHIR|Name, Operation, ErrorType, ErrorSeverity, ConnectorName|
+|ServiceApiErrors|Yes|Service Errors|Count|Sum|The total number of internal server errors generated by the service.|Protocol, Authentication, Operation, ResourceType, StatusCode, StatusCodeClass, StatusCodeText|
+|ServiceApiLatency|Yes|Service Latency|Milliseconds|Average|The response latency of the service.|Protocol, Authentication, Operation, ResourceType, StatusCode, StatusCodeClass, StatusCodeText|
+|ServiceApiRequests|Yes|Service Requests|Count|Sum|The total number of requests received by the service.|Protocol, Authentication, Operation, ResourceType, StatusCode, StatusCodeClass, StatusCodeText|
|TotalErrors|Yes|Total Errors|Count|Sum|The total number of internal server errors encountered by the service.|Protocol, StatusCode, StatusCodeClass, StatusCodeText| |TotalLatency|Yes|Total Latency|Milliseconds|Average|The response latency of the service.|Protocol| |TotalRequests|Yes|Total Requests|Count|Sum|The total number of requests received by the service.|Protocol|
The Azure Monitor Agent replaces the Azure Diagnostics extension and Log Analyti
|BlobsProcessed|Yes|Blobs Processed|Count|Total|Number of blobs processed by a component.|Database, ComponentType, ComponentName| |BlobsReceived|Yes|Blobs Received|Count|Total|Number of blobs received from input stream by a component.|Database, ComponentType, ComponentName| |CacheUtilization|Yes|Cache utilization|Percent|Average|Utilization level in the cluster scope|No Dimensions|
+|CacheUtilizationFactor|Yes|Cache utilization factor|Percent|Average|Percentage difference between the current number of instances and the optimal number of instances (per cache utilization)|No Dimensions|
|ContinuousExportMaxLatenessMinutes|Yes|Continuous Export Max Lateness|Count|Maximum|The lateness (in minutes) reported by the continuous export jobs in the cluster|No Dimensions| |ContinuousExportNumOfRecordsExported|Yes|Continuous export – num of exported records|Count|Total|Number of records exported, fired for every storage artifact written during the export operation|ContinuousExportName, Database| |ContinuousExportPendingCount|Yes|Continuous Export Pending Count|Count|Maximum|The number of pending continuous export jobs ready for execution|No Dimensions|
The Azure Monitor Agent replaces the Azure Diagnostics extension and Log Analyti
|EventsReceived|Yes|Events Received|Count|Total|Number of events received by data connection.|ComponentType, ComponentName| |ExportUtilization|Yes|Export Utilization|Percent|Maximum|Export utilization|No Dimensions| |IngestionLatencyInSeconds|Yes|Ingestion Latency|Seconds|Average|Latency of data ingested, from the time the data was received in the cluster until it's ready for query. The ingestion latency period depends on the ingestion scenario.|No Dimensions|
-|IngestionResult|Yes|Ingestion result|Count|Total|Number of ingestion operations|IngestionResultDetails|
+|IngestionResult|Yes|Ingestion result|Count|Total|Total number of sources that either failed or succeeded to be ingested. Splitting the metric by status, you can get detailed information about the status of the ingestion operations.|IngestionResultDetails, FailureKind|
|IngestionUtilization|Yes|Ingestion utilization|Percent|Average|Ratio of used ingestion slots in the cluster|No Dimensions| |IngestionVolumeInMB|Yes|Ingestion Volume|Bytes|Total|Overall volume of ingested data to the cluster|Database| |InstanceCount|Yes|Instance Count|Count|Average|Total instance count|No Dimensions|
The Azure Monitor Agent replaces the Azure Diagnostics extension and Log Analyti
|TotalNumberOfExtents|Yes|Total number of extents|Count|Total|Total number of data extents|No Dimensions| |TotalNumberOfThrottledCommands|Yes|Total number of throttled commands|Count|Total|Total number of throttled commands|CommandType| |TotalNumberOfThrottledQueries|Yes|Total number of throttled queries|Count|Maximum|Total number of throttled queries|No Dimensions|
+|WeakConsistencyLatency|Yes|Weak consistency latency|Seconds|Average|The max latency between the previous metadata sync and the next one (in DB/node scope)|Database, RoleInstance|
## Microsoft.Logic/IntegrationServiceEnvironments
The Azure Monitor Agent replaces the Azure Diagnostics extension and Log Analyti
|Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions| ||||||||
-|QueryVolume|No|Query Volume|Count|Total|Number of queries served for a DNS zone|No Dimensions|
+|QueryVolume|Yes|Query Volume|Count|Total|Number of queries served for a DNS zone|No Dimensions|
|RecordSetCapacityUtilization|No|Record Set Capacity Utilization|Percent|Maximum|Percent of Record Set capacity utilized by a DNS zone|No Dimensions|
-|RecordSetCount|No|Record Set Count|Count|Maximum|Number of Record Sets in a DNS zone|No Dimensions|
+|RecordSetCount|Yes|Record Set Count|Count|Maximum|Number of Record Sets in a DNS zone|No Dimensions|
## Microsoft.Network/expressRouteCircuits
The Azure Monitor Agent replaces the Azure Diagnostics extension and Log Analyti
|Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions| ||||||||
-|AllocatedSnatPorts|No|Allocated SNAT Ports|Count|Average|Total number of SNAT ports allocated within time period|FrontendIPAddress, BackendIPAddress, ProtocolType, |
+|AllocatedSnatPorts|No|Allocated SNAT Ports|Count|Average|Total number of SNAT ports allocated within time period|FrontendIPAddress, BackendIPAddress, ProtocolType, IsAwaitingRemoval|
|ByteCount|Yes|Byte Count|Bytes|Total|Total number of Bytes transmitted within time period|FrontendIPAddress, FrontendPort, Direction| |DipAvailability|Yes|Health Probe Status|Count|Average|Average Load Balancer health probe status per time duration|ProtocolType, BackendPort, FrontendIPAddress, FrontendPort, BackendIPAddress| |PacketCount|Yes|Packet Count|Count|Total|Total number of Packets transmitted within time period|FrontendIPAddress, FrontendPort, Direction| |SnatConnectionCount|Yes|SNAT Connection Count|Count|Total|Total number of new SNAT connections created within time period|FrontendIPAddress, BackendIPAddress, ConnectionState| |SYNCount|Yes|SYN Count|Count|Total|Total number of SYN Packets transmitted within time period|FrontendIPAddress, FrontendPort, Direction|
-|UsedSnatPorts|No|Used SNAT Ports|Count|Average|Total number of SNAT ports used within time period|FrontendIPAddress, BackendIPAddress, ProtocolType, |
+|UsedSnatPorts|No|Used SNAT Ports|Count|Average|Total number of SNAT ports used within time period|FrontendIPAddress, BackendIPAddress, ProtocolType, IsAwaitingRemoval|
|VipAvailability|Yes|Data Path Availability|Count|Average|Average Load Balancer data path availability per time duration|FrontendIPAddress, FrontendPort|
The Azure Monitor Agent replaces the Azure Diagnostics extension and Log Analyti
|Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions| |||||||| |P2SBandwidth|Yes|Gateway P2S Bandwidth|BytesPerSecond|Average|Point-to-site bandwidth of a gateway in bytes per second|Instance|
-|P2SConnectionCount|Yes|P2S Connection Count|BytesPerSecond|Average|Point-to-site connection count of a gateway|Protocol, Instance|
+|P2SConnectionCount|Yes|P2S Connection Count|Count|Total|Point-to-site connection count of a gateway|Protocol, Instance|
+|UserVpnRouteCount|No|User Vpn Route Count|Count|Total|Count of P2S User Vpn routes learned by gateway|RouteType, Instance|
## Microsoft.Network/privateDnsZones
The Azure Monitor Agent replaces the Azure Diagnostics extension and Log Analyti
|ExpressRouteGatewayFrequencyOfRoutesChanged|No|Frequency of Routes change (Preview)|Count|Total|Frequency of Routes change in ExpressRoute Gateway|roleInstance| |ExpressRouteGatewayNumberOfVmInVnet|No|Number of VMs in the Virtual Network (Preview)|Count|Maximum|Number of VMs in the Virtual Network|roleInstance| |ExpressRouteGatewayPacketsPerSecond|No|Packets per second|CountPerSecond|Average|Packet count of ExpressRoute Gateway|roleInstance|
+|MmsaCount|Yes|Tunnel MMSA Count|Count|Total|MMSA Count|ConnectionName, RemoteIP, Instance|
|P2SBandwidth|Yes|Gateway P2S Bandwidth|BytesPerSecond|Average|Point-to-site bandwidth of a gateway in bytes per second|Instance|
-|P2SConnectionCount|Yes|P2S Connection Count|BytesPerSecond|Average|Point-to-site connection count of a gateway|Protocol, Instance|
+|P2SConnectionCount|Yes|P2S Connection Count|Count|Total|Point-to-site connection count of a gateway|Protocol, Instance|
+|QmsaCount|Yes|Tunnel QMSA Count|Count|Total|QMSA Count|ConnectionName, RemoteIP, Instance|
|TunnelAverageBandwidth|Yes|Tunnel Bandwidth|BytesPerSecond|Average|Average bandwidth of a tunnel in bytes per second|ConnectionName, RemoteIP, Instance| |TunnelEgressBytes|Yes|Tunnel Egress Bytes|Bytes|Total|Outgoing bytes of a tunnel|ConnectionName, RemoteIP, Instance| |TunnelEgressPacketDropCount|Yes|Tunnel Egress Packet Drop Count|Count|Total|Count of outgoing packets dropped by tunnel|ConnectionName, RemoteIP, Instance|
The Azure Monitor Agent replaces the Azure Diagnostics extension and Log Analyti
|Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions| |||||||| |AverageBandwidth|Yes|Gateway S2S Bandwidth|BytesPerSecond|Average|Site-to-site bandwidth of a gateway in bytes per second|Instance|
+|BgpPeerStatus|No|BGP Peer Status|Count|Average|Status of BGP peer|BgpPeerAddress, Instance|
+|BgpRoutesAdvertised|Yes|BGP Routes Advertised|Count|Total|Count of Bgp Routes Advertised through tunnel|BgpPeerAddress, Instance|
+|BgpRoutesLearned|Yes|BGP Routes Learned|Count|Total|Count of Bgp Routes Learned through tunnel|BgpPeerAddress, Instance|
+|MmsaCount|Yes|Tunnel MMSA Count|Count|Total|MMSA Count|ConnectionName, RemoteIP, Instance|
+|QmsaCount|Yes|Tunnel QMSA Count|Count|Total|QMSA Count|ConnectionName, RemoteIP, Instance|
|TunnelAverageBandwidth|Yes|Tunnel Bandwidth|BytesPerSecond|Average|Average bandwidth of a tunnel in bytes per second|ConnectionName, RemoteIP, Instance| |TunnelEgressBytes|Yes|Tunnel Egress Bytes|Bytes|Total|Outgoing bytes of a tunnel|ConnectionName, RemoteIP, Instance|
+|TunnelEgressPacketDropCount|Yes|Tunnel Egress Packet Drop Count|Count|Total|Count of outgoing packets dropped by tunnel|ConnectionName, RemoteIP, Instance|
|TunnelEgressPacketDropTSMismatch|Yes|Tunnel Egress TS Mismatch Packet Drop|Count|Total|Outgoing packet drop count from traffic selector mismatch of a tunnel|ConnectionName, RemoteIP, Instance| |TunnelEgressPackets|Yes|Tunnel Egress Packets|Count|Total|Outgoing packet count of a tunnel|ConnectionName, RemoteIP, Instance| |TunnelIngressBytes|Yes|Tunnel Ingress Bytes|Bytes|Total|Incoming bytes of a tunnel|ConnectionName, RemoteIP, Instance|
+|TunnelIngressPacketDropCount|Yes|Tunnel Ingress Packet Drop Count|Count|Total|Count of incoming packets dropped by tunnel|ConnectionName, RemoteIP, Instance|
|TunnelIngressPacketDropTSMismatch|Yes|Tunnel Ingress TS Mismatch Packet Drop|Count|Total|Incoming packet drop count from traffic selector mismatch of a tunnel|ConnectionName, RemoteIP, Instance| |TunnelIngressPackets|Yes|Tunnel Ingress Packets|Count|Total|Incoming packet count of a tunnel|ConnectionName, RemoteIP, Instance| |TunnelNatAllocations|No|Tunnel NAT Allocations|Count|Total|Count of allocations for a NAT rule on a tunnel|NatRule, ConnectionName, RemoteIP, Instance|
The Azure Monitor Agent replaces the Azure Diagnostics extension and Log Analyti
|TunnelNatedPackets|No|Tunnel NATed Packets|Count|Total|Number of packets that were NATed on a tunnel by a NAT rule|NatRule, ConnectionName, RemoteIP, Instance| |TunnelNatFlowCount|No|Tunnel NAT Flows|Count|Total|Number of NAT flows on a tunnel by flow type and NAT rule|NatRule, FlowType, ConnectionName, RemoteIP, Instance| |TunnelNatPacketDrop|No|Tunnel NAT Packet Drops|Count|Total|Number of NATed packets on a tunnel that dropped by drop type and NAT rule|NatRule, DropType, ConnectionName, RemoteIP, Instance|
+|TunnelPeakPackets|Yes|Tunnel Peak PPS|Count|Maximum|Tunnel Peak Packets Per Second|ConnectionName, RemoteIP, Instance|
|TunnelReverseNatedBytes|No|Tunnel Reverse NATed Bytes|Bytes|Total|Number of bytes that were reverse NATed on a tunnel by a NAT rule|NatRule, ConnectionName, RemoteIP, Instance| |TunnelReverseNatedPackets|No|Tunnel Reverse NATed Packets|Count|Total|Number of packets on a tunnel that were reverse NATed by a NAT rule|NatRule, ConnectionName, RemoteIP, Instance|
+|TunnelTotalFlowCount|Yes|Tunnel Total Flow Count|Count|Total|Total flow count on a tunnel|ConnectionName, RemoteIP, Instance|
+|VnetAddressPrefixCount|Yes|VNet Address Prefix Count|Count|Total|Count of Vnet address prefixes behind gateway|Instance|
## Microsoft.NotificationHubs/Namespaces/NotificationHubs
The Azure Monitor Agent replaces the Azure Diagnostics extension and Log Analyti
|Heartbeat|Yes|Heartbeat|Count|Total|Heartbeat|Computer, OSType, Version, SourceComputerId| |Update|Yes|Update|Count|Average|Update|Computer, Product, Classification, UpdateState, Optional, Approved| + ## Microsoft.Peering/peerings |Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions|
The Azure Monitor Agent replaces the Azure Diagnostics extension and Log Analyti
|||||||| |PrefixLatency|Yes|Prefix Latency|Milliseconds|Average|Median prefix latency|PrefixName| + ## Microsoft.PowerBIDedicated/capacities |Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions| ||||||||
+|CleanerCurrentPrice|Yes|Memory: Cleaner Current Price|Count|Average|Current price of memory, $/byte/time, normalized to 1000.|No Dimensions|
+|CleanerMemoryNonshrinkable|Yes|Memory: Cleaner Memory nonshrinkable|Bytes|Average|Amount of memory, in bytes, not subject to purging by the background cleaner.|No Dimensions|
+|CleanerMemoryShrinkable|Yes|Memory: Cleaner Memory shrinkable|Bytes|Average|Amount of memory, in bytes, subject to purging by the background cleaner.|No Dimensions|
+|CommandPoolBusyThreads|Yes|Threads: Command pool busy threads|Count|Average|Number of busy threads in the command thread pool.|No Dimensions|
+|CommandPoolIdleThreads|Yes|Threads: Command pool idle threads|Count|Average|Number of idle threads in the command thread pool.|No Dimensions|
+|CommandPoolJobQueueLength|Yes|Command Pool Job Queue Length|Count|Average|Number of jobs in the queue of the command thread pool.|No Dimensions|
|cpu_metric|Yes|CPU (Gen2)|Percent|Average|CPU Utilization. Supported only for Power BI Embedded Generation 2 resources.|No Dimensions| |cpu_workload_metric|Yes|CPU Per Workload (Gen2)|Percent|Average|CPU Utilization Per Workload. Supported only for Power BI Embedded Generation 2 resources.|Workload|
+|CurrentConnections|Yes|Connection: Current connections|Count|Average|Current number of client connections established.|No Dimensions|
+|CurrentUserSessions|Yes|Current User Sessions|Count|Average|Current number of user sessions established.|No Dimensions|
+|LongParsingBusyThreads|Yes|Threads: Long parsing busy threads|Count|Average|Number of busy threads in the long parsing thread pool.|No Dimensions|
+|LongParsingIdleThreads|Yes|Threads: Long parsing idle threads|Count|Average|Number of idle threads in the long parsing thread pool.|No Dimensions|
+|LongParsingJobQueueLength|Yes|Threads: Long parsing job queue length|Count|Average|Number of jobs in the queue of the long parsing thread pool.|No Dimensions|
|memory_metric|Yes|Memory (Gen1)|Bytes|Average|Memory. Range 0-3 GB for A1, 0-5 GB for A2, 0-10 GB for A3, 0-25 GB for A4, 0-50 GB for A5 and 0-100 GB for A6. Supported only for Power BI Embedded Generation 1 resources.|No Dimensions| |memory_thrashing_metric|Yes|Memory Thrashing (Datasets) (Gen1)|Percent|Average|Average memory thrashing. Supported only for Power BI Embedded Generation 1 resources.|No Dimensions|
+|MemoryLimitHard|Yes|Memory: Memory Limit Hard|Bytes|Average|Hard memory limit, from configuration file.|No Dimensions|
+|MemoryLimitHigh|Yes|Memory: Memory Limit High|Bytes|Average|High memory limit, from configuration file.|No Dimensions|
+|MemoryLimitLow|Yes|Memory: Memory Limit Low|Bytes|Average|Low memory limit, from configuration file.|No Dimensions|
+|MemoryLimitVertiPaq|Yes|Memory: Memory Limit VertiPaq|Bytes|Average|In-memory limit, from configuration file.|No Dimensions|
+|MemoryUsage|Yes|Memory: Memory Usage|Bytes|Average|Memory usage of the server process as used in calculating cleaner memory price. Equal to counter Process\PrivateBytes plus the size of memory-mapped data, ignoring any memory which was mapped or allocated by the xVelocity in-memory analytics engine (VertiPaq) in excess of the xVelocity engine Memory Limit.|No Dimensions|
|overload_metric|Yes|Overload (Gen2)|Count|Average|Resource Overload, 1 if resource is overloaded, otherwise 0. Supported only for Power BI Embedded Generation 2 resources.|No Dimensions|
+|ProcessingPoolBusyIOJobThreads|Yes|Threads: Processing pool busy I/O job threads|Count|Average|Number of threads running I/O jobs in the processing thread pool.|No Dimensions|
+|ProcessingPoolBusyNonIOThreads|Yes|Threads: Processing pool busy non-I/O threads|Count|Average|Number of threads running non-I/O jobs in the processing thread pool.|No Dimensions|
+|ProcessingPoolIdleIOJobThreads|Yes|Threads: Processing pool idle I/O job threads|Count|Average|Number of idle threads for I/O jobs in the processing thread pool.|No Dimensions|
+|ProcessingPoolIdleNonIOThreads|Yes|Threads: Processing pool idle non-I/O threads|Count|Average|Number of idle threads in the processing thread pool dedicated to non-I/O jobs.|No Dimensions|
+|ProcessingPoolIOJobQueueLength|Yes|Threads: Processing pool I/O job queue length|Count|Average|Number of I/O jobs in the queue of the processing thread pool.|No Dimensions|
+|ProcessingPoolJobQueueLength|Yes|Processing Pool Job Queue Length|Count|Average|Number of non-I/O jobs in the queue of the processing thread pool.|No Dimensions|
|qpu_high_utilization_metric|Yes|QPU High Utilization (Gen1)|Count|Total|QPU High Utilization In Last Minute, 1 For High QPU Utilization, Otherwise 0. Supported only for Power BI Embedded Generation 1 resources.|No Dimensions|
+|qpu_metric|Yes|QPU (Gen1)|Count|Average|QPU. Range for A1 is 0-20, A2 is 0-40, A3 is 0-40, A4 is 0-80, A5 is 0-160, A6 is 0-320. Supported only for Power BI Embedded Generation 1 resources.|No Dimensions|
|QueryDuration|Yes|Query Duration (Datasets) (Gen1)|Milliseconds|Average|DAX Query duration in last interval. Supported only for Power BI Embedded Generation 1 resources.|No Dimensions|
+|QueryPoolBusyThreads|Yes|Query Pool Busy Threads|Count|Average|Number of busy threads in the query thread pool.|No Dimensions|
+|QueryPoolIdleThreads|Yes|Threads: Query pool idle threads|Count|Average|Number of idle threads for I/O jobs in the processing thread pool.|No Dimensions|
|QueryPoolJobQueueLength|Yes|Query Pool Job Queue Length (Datasets) (Gen1)|Count|Average|Number of jobs in the queue of the query thread pool. Supported only for Power BI Embedded Generation 1 resources.|No Dimensions|
+|Quota|Yes|Memory: Quota|Bytes|Average|Current memory quota, in bytes. Memory quota is also known as a memory grant or memory reservation.|No Dimensions|
+|QuotaBlocked|Yes|Memory: Quota Blocked|Count|Average|Current number of quota requests that are blocked until other memory quotas are freed.|No Dimensions|
+|RowsConvertedPerSec|Yes|Processing: Rows converted per sec|CountPerSecond|Average|Rate of rows converted during processing.|No Dimensions|
+|RowsReadPerSec|Yes|Processing: Rows read per sec|CountPerSecond|Average|Rate of rows read from all relational databases.|No Dimensions|
+|RowsWrittenPerSec|Yes|Processing: Rows written per sec|CountPerSecond|Average|Rate of rows written during processing.|No Dimensions|
+|ShortParsingBusyThreads|Yes|Threads: Short parsing busy threads|Count|Average|Number of busy threads in the short parsing thread pool.|No Dimensions|
+|ShortParsingIdleThreads|Yes|Threads: Short parsing idle threads|Count|Average|Number of idle threads in the short parsing thread pool.|No Dimensions|
+|ShortParsingJobQueueLength|Yes|Threads: Short parsing job queue length|Count|Average|Number of jobs in the queue of the short parsing thread pool.|No Dimensions|
+|SuccessfullConnectionsPerSec|Yes|Successful Connections Per Sec|CountPerSecond|Average|Rate of successful connection completions.|No Dimensions|
+|TotalConnectionFailures|Yes|Total Connection Failures|Count|Average|Total failed connection attempts.|No Dimensions|
+|TotalConnectionRequests|Yes|Total Connection Requests|Count|Average|Total connection requests. These are arrivals.|No Dimensions|
+|VertiPaqNonpaged|Yes|Memory: VertiPaq Nonpaged|Bytes|Average|Bytes of memory locked in the working set for use by the in-memory engine.|No Dimensions|
+|VertiPaqPaged|Yes|Memory: VertiPaq Paged|Bytes|Average|Bytes of paged memory in use for in-memory data.|No Dimensions|
+|workload_memory_metric|Yes|Memory Per Workload (Gen1)|Bytes|Average|Memory Per Workload. Supported only for Power BI Embedded Generation 1 resources.|Workload|
+|workload_qpu_metric|Yes|QPU Per Workload (Gen1)|Count|Average|QPU Per Workload. Range for A1 is 0-20, A2 is 0-40, A3 is 0-40, A4 is 0-80, A5 is 0-160, A6 is 0-320. Supported only for Power BI Embedded Generation 1 resources.|Workload|
## microsoft.purview/accounts
The Azure Monitor Agent replaces the Azure Diagnostics extension and Log Analyti
|WSXNS|No|Memory Usage (Deprecated)|Percent|Maximum|Service bus premium namespace memory usage metric. This metric is deprecated. Please use the Memory Usage (NamespaceMemoryUsage) metric instead.|Replica|
+## Microsoft.ServiceFabricMesh/applications
+
+|Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions|
+||||||||
+|ActualCpu|No|ActualCpu|Count|Average|Actual CPU usage in milli cores|ApplicationName, ServiceName, CodePackageName, ServiceReplicaName|
+|ActualMemory|No|ActualMemory|Bytes|Average|Actual memory usage in MB|ApplicationName, ServiceName, CodePackageName, ServiceReplicaName|
+|AllocatedCpu|No|AllocatedCpu|Count|Average|Cpu allocated to this container in milli cores|ApplicationName, ServiceName, CodePackageName, ServiceReplicaName|
+|AllocatedMemory|No|AllocatedMemory|Bytes|Average|Memory allocated to this container in MB|ApplicationName, ServiceName, CodePackageName, ServiceReplicaName|
+|ApplicationStatus|No|ApplicationStatus|Count|Average|Status of Service Fabric Mesh application|ApplicationName, Status|
+|ContainerStatus|No|ContainerStatus|Count|Average|Status of the container in Service Fabric Mesh application|ApplicationName, ServiceName, CodePackageName, ServiceReplicaName, Status|
+|CpuUtilization|No|CpuUtilization|Percent|Average|Utilization of CPU for this container as percentage of AllocatedCpu|ApplicationName, ServiceName, CodePackageName, ServiceReplicaName|
+|MemoryUtilization|No|MemoryUtilization|Percent|Average|Utilization of memory for this container as percentage of AllocatedMemory|ApplicationName, ServiceName, CodePackageName, ServiceReplicaName|
+|RestartCount|No|RestartCount|Count|Average|Restart count of a container in Service Fabric Mesh application|ApplicationName, Status, ServiceName, ServiceReplicaName, CodePackageName|
+|ServiceReplicaStatus|No|ServiceReplicaStatus|Count|Average|Health Status of a service replica in Service Fabric Mesh application|ApplicationName, Status, ServiceName, ServiceReplicaName|
+|ServiceStatus|No|ServiceStatus|Count|Average|Health Status of a service in Service Fabric Mesh application|ApplicationName, Status, ServiceName|
++ ## Microsoft.SignalRService/SignalR |Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions|
The Azure Monitor Agent replaces the Azure Diagnostics extension and Log Analyti
|storage_space_used_mb|Yes|Storage space used|Count|Average|Storage space used|No Dimensions| |virtual_core_count|Yes|Virtual core count|Count|Average|Virtual core count|No Dimensions| -
-## Microsoft.Sql/servers/elasticPools
+## Microsoft.Sql/servers
|Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions| ||||||||
-|allocated_data_storage|Yes|Data space allocated|Bytes|Average|Data space allocated|No Dimensions|
-|allocated_data_storage_percent|Yes|Data space allocated percent|Percent|Maximum|Data space allocated percent|No Dimensions|
-|cpu_limit|Yes|CPU limit|Count|Average|CPU limit. Applies to vCore-based elastic pools.|No Dimensions|
-|cpu_percent|Yes|CPU percentage|Percent|Average|CPU percentage|No Dimensions|
-|cpu_used|Yes|CPU used|Count|Average|CPU used. Applies to vCore-based elastic pools.|No Dimensions|
-|database_allocated_data_storage|No|Data space allocated|Bytes|Average|Data space allocated|DatabaseResourceId|
-|database_cpu_limit|No|CPU limit|Count|Average|CPU limit|DatabaseResourceId|
-|database_cpu_percent|No|CPU percentage|Percent|Average|CPU percentage|DatabaseResourceId|
-|database_cpu_used|No|CPU used|Count|Average|CPU used|DatabaseResourceId|
-|database_dtu_consumption_percent|No|DTU percentage|Percent|Average|DTU percentage|DatabaseResourceId|
-|database_eDTU_used|No|eDTU used|Count|Average|eDTU used|DatabaseResourceId|
-|database_log_write_percent|No|Log IO percentage|Percent|Average|Log IO percentage|DatabaseResourceId|
-|database_physical_data_read_percent|No|Data IO percentage|Percent|Average|Data IO percentage|DatabaseResourceId|
-|database_sessions_percent|No|Sessions percentage|Percent|Average|Sessions percentage|DatabaseResourceId|
-|database_storage_used|No|Data space used|Bytes|Average|Data space used|DatabaseResourceId|
-|database_workers_percent|No|Workers percentage|Percent|Average|Workers percentage|DatabaseResourceId|
-|dtu_consumption_percent|Yes|DTU percentage|Percent|Average|DTU Percentage. Applies to DTU-based elastic pools.|No Dimensions|
-|eDTU_limit|Yes|eDTU limit|Count|Average|eDTU limit. Applies to DTU-based elastic pools.|No Dimensions|
-|eDTU_used|Yes|eDTU used|Count|Average|eDTU used. Applies to DTU-based elastic pools.|No Dimensions|
-|log_write_percent|Yes|Log IO percentage|Percent|Average|Log IO percentage|No Dimensions|
-|physical_data_read_percent|Yes|Data IO percentage|Percent|Average|Data IO percentage|No Dimensions|
-|sessions_percent|Yes|Sessions percentage|Percent|Average|Sessions percentage|No Dimensions|
-|sqlserver_process_core_percent|Yes|SQL Server process core percent|Percent|Maximum|CPU usage as a percentage of the SQL DB process. Applies to elastic pools.|No Dimensions|
-|sqlserver_process_memory_percent|Yes|SQL Server process memory percent|Percent|Maximum|Memory usage as a percentage of the SQL DB process. Applies to elastic pools.|No Dimensions|
-|storage_limit|Yes|Data max size|Bytes|Average|Data max size|No Dimensions|
-|storage_percent|Yes|Data space used percent|Percent|Average|Data space used percent|No Dimensions|
-|storage_used|Yes|Data space used|Bytes|Average|Data space used|No Dimensions|
-|tempdb_data_size|Yes|Tempdb Data File Size Kilobytes|Count|Maximum|Space used in tempdb data files in kilobytes.|No Dimensions|
-|tempdb_log_size|Yes|Tempdb Log File Size Kilobytes|Count|Maximum|Space used in tempdb transaction log file in kilobytes.|No Dimensions|
-|tempdb_log_used_percent|Yes|Tempdb Percent Log Used|Percent|Maximum|Space used percentage in tempdb transaction log file|No Dimensions|
-|workers_percent|Yes|Workers percentage|Percent|Average|Workers percentage|No Dimensions|
-|xtp_storage_percent|Yes|In-Memory OLTP storage percent|Percent|Average|In-Memory OLTP storage percent|No Dimensions|
+|database_dtu_consumption_percent|No|DTU percentage|Percent|Average|DTU percentage|DatabaseResourceId, ElasticPoolResourceId|
+|database_storage_used|No|Data space used|Bytes|Average|Data space used|DatabaseResourceId, ElasticPoolResourceId|
+|dtu_consumption_percent|Yes|DTU percentage|Percent|Average|DTU percentage|ElasticPoolResourceId|
+|dtu_used|Yes|DTU used|Count|Average|DTU used|DatabaseResourceId|
+|storage_used|Yes|Data space used|Bytes|Average|Data space used|ElasticPoolResourceId|
## Microsoft.Sql/servers/databases
The Azure Monitor Agent replaces the Azure Diagnostics extension and Log Analyti
|workers_percent|Yes|Workers percentage|Percent|Average|Workers percentage. Not applicable to data warehouses.|No Dimensions| |xtp_storage_percent|Yes|In-Memory OLTP storage percent|Percent|Average|In-Memory OLTP storage percent. Not applicable to data warehouses.|No Dimensions|
+## Microsoft.Sql/servers/elasticPools
+
+|Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions|
+||||||||
+|allocated_data_storage|Yes|Data space allocated|Bytes|Average|Data space allocated|No Dimensions|
+|allocated_data_storage_percent|Yes|Data space allocated percent|Percent|Maximum|Data space allocated percent|No Dimensions|
+|cpu_limit|Yes|CPU limit|Count|Average|CPU limit. Applies to vCore-based elastic pools.|No Dimensions|
+|cpu_percent|Yes|CPU percentage|Percent|Average|CPU percentage|No Dimensions|
+|cpu_used|Yes|CPU used|Count|Average|CPU used. Applies to vCore-based elastic pools.|No Dimensions|
+|database_allocated_data_storage|No|Data space allocated|Bytes|Average|Data space allocated|DatabaseResourceId|
+|database_cpu_limit|No|CPU limit|Count|Average|CPU limit|DatabaseResourceId|
+|database_cpu_percent|No|CPU percentage|Percent|Average|CPU percentage|DatabaseResourceId|
+|database_cpu_used|No|CPU used|Count|Average|CPU used|DatabaseResourceId|
+|database_dtu_consumption_percent|No|DTU percentage|Percent|Average|DTU percentage|DatabaseResourceId|
+|database_eDTU_used|No|eDTU used|Count|Average|eDTU used|DatabaseResourceId|
+|database_log_write_percent|No|Log IO percentage|Percent|Average|Log IO percentage|DatabaseResourceId|
+|database_physical_data_read_percent|No|Data IO percentage|Percent|Average|Data IO percentage|DatabaseResourceId|
+|database_sessions_percent|No|Sessions percentage|Percent|Average|Sessions percentage|DatabaseResourceId|
+|database_storage_used|No|Data space used|Bytes|Average|Data space used|DatabaseResourceId|
+|database_workers_percent|No|Workers percentage|Percent|Average|Workers percentage|DatabaseResourceId|
+|dtu_consumption_percent|Yes|DTU percentage|Percent|Average|DTU Percentage. Applies to DTU-based elastic pools.|No Dimensions|
+|eDTU_limit|Yes|eDTU limit|Count|Average|eDTU limit. Applies to DTU-based elastic pools.|No Dimensions|
+|eDTU_used|Yes|eDTU used|Count|Average|eDTU used. Applies to DTU-based elastic pools.|No Dimensions|
+|log_write_percent|Yes|Log IO percentage|Percent|Average|Log IO percentage|No Dimensions|
+|physical_data_read_percent|Yes|Data IO percentage|Percent|Average|Data IO percentage|No Dimensions|
+|sessions_percent|Yes|Sessions percentage|Percent|Average|Sessions percentage|No Dimensions|
+|sqlserver_process_core_percent|Yes|SQL Server process core percent|Percent|Maximum|CPU usage as a percentage of the SQL DB process. Applies to elastic pools.|No Dimensions|
+|sqlserver_process_memory_percent|Yes|SQL Server process memory percent|Percent|Maximum|Memory usage as a percentage of the SQL DB process. Applies to elastic pools.|No Dimensions|
+|storage_limit|Yes|Data max size|Bytes|Average|Data max size|No Dimensions|
+|storage_percent|Yes|Data space used percent|Percent|Average|Data space used percent|No Dimensions|
+|storage_used|Yes|Data space used|Bytes|Average|Data space used|No Dimensions|
+|tempdb_data_size|Yes|Tempdb Data File Size Kilobytes|Count|Maximum|Space used in tempdb data files in kilobytes.|No Dimensions|
+|tempdb_log_size|Yes|Tempdb Log File Size Kilobytes|Count|Maximum|Space used in tempdb transaction log file in kilobytes.|No Dimensions|
+|tempdb_log_used_percent|Yes|Tempdb Percent Log Used|Percent|Maximum|Space used percentage in tempdb transaction log file|No Dimensions|
+|workers_percent|Yes|Workers percentage|Percent|Average|Workers percentage|No Dimensions|
+|xtp_storage_percent|Yes|In-Memory OLTP storage percent|Percent|Average|In-Memory OLTP storage percent|No Dimensions|
## Microsoft.Storage/storageAccounts
The Azure Monitor Agent replaces the Azure Diagnostics extension and Log Analyti
|Availability|Yes|Availability|Percent|Average|The percentage of availability for the storage service or the specified API operation. Availability is calculated by taking the TotalBillableRequests value and dividing it by the number of applicable requests, including those that produced unexpected errors. All unexpected errors result in reduced availability for the storage service or the specified API operation.|GeoType, ApiName, Authentication| |Egress|Yes|Egress|Bytes|Total|The amount of egress data. This number includes egress to external client from Azure Storage as well as egress within Azure. As a result, this number does not reflect billable egress.|GeoType, ApiName, Authentication| |Ingress|Yes|Ingress|Bytes|Total|The amount of ingress data, in bytes. This number includes ingress from an external client into Azure Storage as well as ingress within Azure.|GeoType, ApiName, Authentication|
-|SuccessE2ELatency|Yes|Success E2E Latency|Milliseconds|Average|The average end-to-end latency of successful requests made to a storage service or the specified API operation, in milliseconds. This value includes the required processing time within Azure Storage to read the request, send the response, and receive acknowledgment of the response.|GeoType, ApiName, Authentication|
-|SuccessServerLatency|Yes|Success Server Latency|Milliseconds|Average|The average time used to process a successful request by Azure Storage. This value does not include the network latency specified in SuccessE2ELatency.|GeoType, ApiName, Authentication|
+|SuccessE2ELatency|Yes|Success E2E Latency|MilliSeconds|Average|The average end-to-end latency of successful requests made to a storage service or the specified API operation, in milliseconds. This value includes the required processing time within Azure Storage to read the request, send the response, and receive acknowledgment of the response.|GeoType, ApiName, Authentication|
+|SuccessServerLatency|Yes|Success Server Latency|MilliSeconds|Average|The average time used to process a successful request by Azure Storage. This value does not include the network latency specified in SuccessE2ELatency.|GeoType, ApiName, Authentication|
|Transactions|Yes|Transactions|Count|Total|The number of requests made to a storage service or the specified API operation. This number includes successful and failed requests, as well as requests which produced errors. Use ResponseType dimension for the number of different type of response.|ResponseType, GeoType, ApiName, Authentication| |UsedCapacity|Yes|Used capacity|Bytes|Average|The amount of storage used by the storage account. For standard storage accounts, it's the sum of capacity used by blob, table, file, and queue. For premium storage accounts and Blob storage accounts, it is the same as BlobCapacity or FileCapacity.|No Dimensions|
The Azure Monitor Agent replaces the Azure Diagnostics extension and Log Analyti
|EventsReceived|Yes|Events Received|Count|Total|Number of events received by data connection.|ComponentType, ComponentName| |ExportUtilization|Yes|Export Utilization|Percent|Maximum|Export utilization|No Dimensions| |IngestionLatencyInSeconds|Yes|Ingestion Latency|Seconds|Average|Latency of data ingested, from the time the data was received in the cluster until it's ready for query. The ingestion latency period depends on the ingestion scenario.|No Dimensions|
-|IngestionResult|Yes|Ingestion result|Count|Total|Number of ingestion operations|IngestionResultDetails|
+|IngestionResult|Yes|Ingestion result|Count|Total|Total number of sources that either failed or succeeded to be ingested. Splitting the metric by status, you can get detailed information about the status of the ingestion operations.|IngestionResultDetails, FailureKind|
|IngestionUtilization|Yes|Ingestion utilization|Percent|Average|Ratio of used ingestion slots in the cluster|No Dimensions| |IngestionVolumeInMB|Yes|Ingestion Volume|Bytes|Total|Overall volume of ingested data to the cluster|Database| |InstanceCount|Yes|Instance Count|Count|Average|Total instance count|No Dimensions|
The Azure Monitor Agent replaces the Azure Diagnostics extension and Log Analyti
|AverageResponseTime|Yes|Average Response Time (deprecated)|Seconds|Average|The average time taken for the app to serve requests, in seconds.|Instance| |BytesReceived|Yes|Data In|Bytes|Total|The amount of incoming bandwidth consumed by the app, in MiB.|Instance| |BytesSent|Yes|Data Out|Bytes|Total|The amount of outgoing bandwidth consumed by the app, in MiB.|Instance|
-|CpuTime|Yes|CPU Time|Seconds|Total|The amount of CPU consumed by the app, in seconds. Not applicable to Azure Functions. For more information about this metric,lease see https://aka.ms/website-monitor-cpu-time-vs-cpu-percentage (CPU time vs CPU percentage).|Instance|
+|CpuTime|Yes|CPU Time|Seconds|Total|The amount of CPU consumed by the app, in seconds. For more information about this metric, see https://aka.ms/website-monitor-cpu-time-vs-cpu-percentage (CPU time vs CPU percentage).|Instance|
|CurrentAssemblies|Yes|Current Assemblies|Count|Average|The current number of Assemblies loaded across all AppDomains in this application.|Instance| |FileSystemUsage|Yes|File System Usage|Bytes|Average|Percentage of filesystem quota consumed by the app.|No Dimensions|
-|FunctionExecutionCount|Yes|Function Execution Count|Count|Total|Function Execution Count. Only present for Azure Functions.|Instance|
-|FunctionExecutionUnits|Yes|Function Execution Units|Count|Total|Function Execution Units. Only present for Azure Functions.|Instance|
+|FunctionExecutionCount|Yes|Function Execution Count|Count|Total|Function Execution Count|Instance|
+|FunctionExecutionUnits|Yes|Function Execution Units|Count|Total|Function Execution Units|Instance|
|Gen0Collections|Yes|Gen 0 Garbage Collections|Count|Total|The number of times the generation 0 objects are garbage collected since the start of the app process. Higher generation GCs include all lower generation GCs.|Instance| |Gen1Collections|Yes|Gen 1 Garbage Collections|Count|Total|The number of times the generation 1 objects are garbage collected since the start of the app process. Higher generation GCs include all lower generation GCs.|Instance| |Gen2Collections|Yes|Gen 2 Garbage Collections|Count|Total|The number of times the generation 2 objects are garbage collected since the start of the app process.|Instance|
The Azure Monitor Agent replaces the Azure Diagnostics extension and Log Analyti
|AverageResponseTime|Yes|Average Response Time (deprecated)|Seconds|Average|The average time taken for the app to serve requests, in seconds.|Instance| |BytesReceived|Yes|Data In|Bytes|Total|The amount of incoming bandwidth consumed by the app, in MiB.|Instance| |BytesSent|Yes|Data Out|Bytes|Total|The amount of outgoing bandwidth consumed by the app, in MiB.|Instance|
-|CpuTime|Yes|CPU Time|Seconds|Total|The amount of CPU consumed by the app, in seconds. Not applicable to Azure Functions. For more information about this metric, please see https://aka.ms/website-monitor-cpu-time-vs-cpu-percentage (CPU time vs CPU percentage).|Instance|
+|CpuTime|Yes|CPU Time|Seconds|Total|The amount of CPU consumed by the app, in seconds. For more information about this metric, see https://aka.ms/website-monitor-cpu-time-vs-cpu-percentage (CPU time vs CPU percentage).|Instance|
|CurrentAssemblies|Yes|Current Assemblies|Count|Average|The current number of Assemblies loaded across all AppDomains in this application.|Instance| |FileSystemUsage|Yes|File System Usage|Bytes|Average|Percentage of filesystem quota consumed by the app.|No Dimensions| |FunctionExecutionCount|Yes|Function Execution Count|Count|Total|Function Execution Count|Instance|
azure-monitor Resource Logs Categories https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/essentials/resource-logs-categories.md
Title: Azure Monitor Resource Logs supported services and categories description: Reference for Azure Monitor. Understand the supported services and event schema for Azure resource logs. Previously updated : 07/06/2021 Last updated : 07/19/2021 # Supported categories for Azure Resource Logs
If you think something is missing, you can open a GitHub comment at the
|ServiceLog|Service Logs|No|
-## Microsoft.BatchAI/workspaces
-
-|Category|Category Display Name|Costs To Export|
-||||
-|BaiClusterEvent|BaiClusterEvent|No|
-|BaiClusterNodeEvent|BaiClusterNodeEvent|No|
-|BaiJobEvent|BaiJobEvent|No|
-- ## Microsoft.Blockchain/blockchainMembers |Category|Category Display Name|Costs To Export|
If you think something is missing, you can open a GitHub comment at the
|Category|Category Display Name|Costs To Export| |||| |AuthOperational|Operational Authentication Logs|Yes|
+|CallDiagnosticsPRIVATEPREVIEW|Call Diagnostics Logs - PRIVATE PREVIEW|Yes|
+|CallSummaryPRIVATEPREVIEW|Call Summary Logs - PRIVATE PREVIEW|Yes|
|ChatOperational|Operational Chat Logs|No| |SMSOperational|Operational SMS Logs|No| |Usage|Usage Records|No|
If you think something is missing, you can open a GitHub comment at the
|ssh|Databricks SSH|No| |workspace|Databricks Workspace|No| + ## Microsoft.DataCollaboration/workspaces |Category|Category Display Name|Costs To Export|
If you think something is missing, you can open a GitHub comment at the
|Proposals|Proposals|No| |Scripts|Scripts|No| + ## Microsoft.DataFactory/factories |Category|Category Display Name|Costs To Export|
If you think something is missing, you can open a GitHub comment at the
|Category|Category Display Name|Costs To Export| ||||
-|Audit|Audit Logs|No|
+|Audit|Audit|No|
## Microsoft.PowerBI/tenants
If you think something is missing, you can open a GitHub comment at the
|QueryStoreWaitStatistics|Query Store Wait Statistics|No| |SQLInsights|SQL Insights|No| - ## Microsoft.Sql/servers/databases |Category|Category Display Name|Costs To Export|
If you think something is missing, you can open a GitHub comment at the
|||| |BigDataPoolAppsEnded|Big Data Pool Applications Ended|No| - ## Microsoft.Synapse/workspaces/sqlPools |Category|Category Display Name|Costs To Export|
If you think something is missing, you can open a GitHub comment at the
|AppServicePlatformLogs|App Service Platform logs|No| |FunctionAppLogs|Function Application Logs|No| -
## Next Steps

* [Learn more about resource logs](../essentials/platform-logs-overview.md)
* [Stream resource logs to **Event Hubs**](./resource-logs.md#send-to-azure-event-hubs)
* [Change resource log diagnostic settings using the Azure Monitor REST API](/rest/api/monitor/diagnosticsettings)
* [Analyze logs from Azure storage with Log Analytics](./resource-logs.md#send-to-log-analytics-workspace)-
azure-monitor Customer Managed Keys https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/logs/customer-managed-keys.md
description: Information and steps to configure Customer-managed key to encrypt
Previously updated : 04/21/2021 Last updated : 07/29/2021
Customer-managed key is delivered on [dedicated clusters](./logs-dedicated-clust
Data ingested in the last 14 days is also kept in hot-cache (SSD-backed) for efficient query engine operation. This data remains encrypted with Microsoft keys regardless of customer-managed key configuration, but your control over SSD data adheres to [key revocation](#key-revocation). We are working to have SSD data encrypted with Customer-managed key in the first half of 2021.
-Log Analytics Dedicated Clusters use a Capacity Reservation [pricing model](./logs-dedicated-clusters.md#cluster-pricing-model) starting at 1000 GB/day.
+The Log Analytics Dedicated Clusters [pricing model](./logs-dedicated-clusters.md#cluster-pricing-model) requires a Commitment Tier starting at 500 GB/day, with allowed values of 500, 1000, 2000, or 5000 GB/day.
## How Customer-managed key works in Azure Monitor
Content-type: application/json
}, "sku": { "name": "CapacityReservation",
- "capacity": 1000
+ "capacity": 500
} } ```
A response to GET request should look like this when the key update is complete:
}, "sku": { "name": "capacityReservation",
- "capacity": 1000,
+ "capacity": 500,
"lastSkuUpdate": "Sun, 22 Mar 2020 15:39:29 GMT" }, "properties": {
Customer-Managed key is provided on dedicated cluster and these operations are r
- 400 -- The body of the request is null or in bad format.
- 400 -- SKU name is invalid. Set SKU name to capacityReservation.
- 400 -- Capacity was provided but SKU is not capacityReservation. Set SKU name to capacityReservation.
- - 400 -- Missing Capacity in SKU. Set Capacity value to 1000 or higher in steps of 100 (GB).
- 400 -- Capacity in SKU is not in range. Should be minimum 1000 and up to the max allowed capacity which is available under 'Usage and estimated cost' in your workspace.
+ - 400 -- Missing Capacity in SKU. Set Capacity value to 500, 1000, 2000 or 5000 GB/day.
- 400 -- Capacity is locked for 30 days. Decreasing capacity is permitted 30 days after update.
- - 400 -- No SKU was set. Set the SKU name to capacityReservation and Capacity value to 1000 or higher in steps of 100 (GB).
+ - 400 -- No SKU was set. Set the SKU name to capacityReservation and Capacity value to 500, 1000, 2000 or 5000 GB/day.
- 400 -- Identity is null or empty. Set Identity with systemAssigned type.
- 400 -- KeyVaultProperties are set on creation. Update KeyVaultProperties after cluster creation.
- 400 -- Operation cannot be executed now. Async operation is in a state other than succeeded. Cluster must complete its operation before any update operation is performed.
azure-monitor Logs Data Export https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/logs/logs-data-export.md
N/A
Export rules can be disabled to let you stop the export when you don't need to retain data for a certain period, such as when testing is being performed. Use the following command to disable a data export rule using CLI.

```azurecli
-az monitor log-analytics workspace data-export update --resource-group resourceGroupName --workspace-name workspaceName --name ruleName --enable false
+az monitor log-analytics workspace data-export update --resource-group resourceGroupName --workspace-name workspaceName --name ruleName --tables SecurityEvent Heartbeat --enable false
```

# [REST](#tab/rest)
azure-monitor Logs Dedicated Clusters https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/logs/logs-dedicated-clusters.md
description: Customers who ingest more than 1 TB a day of monitoring data may us
Previously updated : 09/16/2020 Last updated : 07/29/2021
All operations on the cluster level require the `Microsoft.OperationalInsights/c
## Cluster pricing model
-Log Analytics Dedicated Clusters use a Commitment Tier pricing model which of at least 1000 GB/day. Any usage above the tier level will be billed at effective per-GB rate of that Commitment Tier. Commitment Tier pricing information is available at the [Azure Monitor pricing page]( https://azure.microsoft.com/pricing/details/monitor/).
+Log Analytics Dedicated Clusters use a Commitment Tier pricing model starting at 500 GB/day. Any usage above the tier level is billed at the effective per-GB rate of that Commitment Tier. Commitment Tier pricing information is available at the [Azure Monitor pricing page](https://azure.microsoft.com/pricing/details/monitor/).
-The cluster Commitment Tier level is configured via programmatically with Azure Resource Manager using the `Capacity` parameter under `Sku`. The `Capacity` is specified in units of GB and can have values of 1000, 2000 or 5000 GB/day.
+The cluster Commitment Tier level is configured programmatically with Azure Resource Manager using the `Capacity` parameter under `Sku`. The `Capacity` is specified in units of GB and can have values of 500, 1000, 2000 or 5000 GB/day.
There are two modes of billing for usage on a cluster. These can be specified by the `billingType` parameter when configuring your cluster.
The following properties must be specified:
- **ClusterName**: Used for administrative purposes. Users are not exposed to this name. - **ResourceGroupName**: As for any Azure resource, clusters belong to a resource group. We recommend you use a central IT resource group because clusters are usually shared by many teams in the organization. For more design considerations, review [Designing your Azure Monitor Logs deployment](../logs/design-logs-deployment.md) - **Location**: A cluster is located in a specific Azure region. Only workspaces located in this region can be linked to this cluster.-- **SkuCapacity**: You must specify the Commitment Tier (sku) when creating a cluster resource. The Commitment Tier can be set to 1000, 2000 or 5000 GB/day. For more information on cluster costs, see [Manage Costs for Log Analytics clusters](./manage-cost-storage.md#log-analytics-dedicated-clusters). Note that commitment tiers were formerly called capacity reservations.
+- **SkuCapacity**: You must specify the Commitment Tier (sku) when creating a cluster resource. The Commitment Tier can be set to 500, 1000, 2000 or 5000 GB/day. For more information on cluster costs, see [Manage Costs for Log Analytics clusters](./manage-cost-storage.md#log-analytics-dedicated-clusters). Note that commitment tiers were formerly called capacity reservations.
After you create your *cluster* resource, you can edit additional properties such as *sku*, *keyVaultProperties*, or *billingType*. See more details below.
Content-type: application/json
}, "sku": { "name": "capacityReservation",
- "Capacity": 1000
+ "Capacity": 500
}, "properties": { "billingType": "cluster",
The provisioning of the Log Analytics cluster takes a while to complete. You can
}, "sku": { "name": "capacityReservation",
- "capacity": 1000,
+ "capacity": 500,
"lastSkuUpdate": "Sun, 22 Mar 2020 15:39:29 GMT" }, "properties": {
Get-AzOperationalInsightsCluster -ResourceGroupName "resource-group-name"
}, "sku": { "name": "capacityReservation",
- "capacity": 1000,
+ "capacity": 500,
"lastSkuUpdate": "Sun, 22 Mar 2020 15:39:29 GMT" }, "properties": {
The same as for 'clusters in a resource group', but in subscription scope.
### Update commitment tier in cluster
-When the data volume to your linked workspaces change over time and you want to update the Commitment Tier level appropriately. The tier is specified in units of GB and can have values of 1000, 2000 or 5000 GB/day. Note that you donΓÇÖt have to provide the full REST request body but should include the sku.
+When the data volume to your linked workspaces changes over time, you might want to update the Commitment Tier level accordingly. The tier is specified in units of GB and can have values of 500, 1000, 2000 or 5000 GB/day. Note that you don't have to provide the full REST request body, but it should include the sku.
**CLI** ```azurecli
-az monitor log-analytics cluster update --name "cluster-name" --resource-group "resource-group-name" --sku-capacity 1000
+az monitor log-analytics cluster update --name "cluster-name" --resource-group "resource-group-name" --sku-capacity 500
``` **PowerShell** ```powershell
-Update-AzOperationalInsightsCluster -ResourceGroupName "resource-group-name" -ClusterName "cluster-name" -SkuCapacity 1000
+Update-AzOperationalInsightsCluster -ResourceGroupName "resource-group-name" -ClusterName "cluster-name" -SkuCapacity 500
``` **REST**
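As noted above, the REST body only needs to include the sku. A minimal sketch of the request body, using the SKU name and a supported capacity value from this article:

```json
{
    "sku": {
        "name": "capacityReservation",
        "capacity": 500
    }
}
```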
Use the following REST call to delete a cluster:
- 400 -- The body of the request is null or in bad format. - 400 -- SKU name is invalid. Set SKU name to capacityReservation. - 400 -- Capacity was provided but SKU is not capacityReservation. Set SKU name to capacityReservation.
- - 400 -- Missing Capacity in SKU. Set Capacity value to 1000 or higher in steps of 100 (GB).
- - 400 -- Capacity in SKU is not in range. Should be minimum 1000 and up to the max allowed capacity which is available under 'Usage and estimated cost' in your workspace.
+ - 400 -- Missing Capacity in SKU. Set Capacity value to 500, 1000, 2000 or 5000 GB/day.
- 400 -- Capacity is locked for 30 days. Decreasing capacity is permitted 30 days after update.
- - 400 -- No SKU was set. Set the SKU name to capacityReservation and Capacity value to 1000 or higher in steps of 100 (GB).
+ - 400 -- No SKU was set. Set the SKU name to capacityReservation and Capacity value to 500, 1000, 2000 or 5000 GB/day.
- 400 -- Identity is null or empty. Set Identity with systemAssigned type. - 400 -- KeyVaultProperties are set on creation. Update KeyVaultProperties after cluster creation. - 400 -- Operation cannot be executed now. Async operation is in a state other than succeeded. Cluster must complete its operation before any update operation is performed.
azure-monitor Manage Cost Storage https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/logs/manage-cost-storage.md
na Previously updated : 07/27/2021 Last updated : 07/29/2021
Also, some solutions, such as [Azure Defender (Security Center)](https://azure.m
### Log Analytics Dedicated Clusters
-[Log Analytics Dedicated Clusters](logs-dedicated-clusters.md) are collections of workspaces in a single managed Azure Data Explorer cluster to support advanced scenarios, like [Customer-Managed Keys](customer-managed-keys.md). Log Analytics Dedicated Clusters use a commitment tier pricing model that must be configured to at least 1000 GB/day. The cluster commitment tier has a 31-day commitment period after the commitment level is increased. During the commitment period, the commitment tier level can't be reduced, but it can be increased at any time. When workspaces are associated to a cluster, the data ingestion billing for those workspaces is done at the cluster level using the configured commitment tier level. Learn more about [creating a Log Analytics Clusters](customer-managed-keys.md#create-cluster) and [associating workspaces to it](customer-managed-keys.md#link-workspace-to-cluster). For information about commitment tier pricing, see the [Azure Monitor pricing page]( https://azure.microsoft.com/pricing/details/monitor/).
+[Log Analytics Dedicated Clusters](logs-dedicated-clusters.md) are collections of workspaces in a single managed Azure Data Explorer cluster to support advanced scenarios, like [Customer-Managed Keys](customer-managed-keys.md). Log Analytics Dedicated Clusters use the same commitment tier pricing model as workspaces, except that a cluster must have a commitment level of at least 500 GB/day. There is no Pay-As-You-Go option for clusters. The cluster commitment tier has a 31-day commitment period after the commitment level is increased. During the commitment period, the commitment tier level can't be reduced, but it can be increased at any time. When workspaces are associated with a cluster, the data ingestion billing for those workspaces is done at the cluster level using the configured commitment tier level. Learn more about [creating a Log Analytics cluster](customer-managed-keys.md#create-cluster) and [associating workspaces with it](customer-managed-keys.md#link-workspace-to-cluster). For information about commitment tier pricing, see the [Azure Monitor pricing page](https://azure.microsoft.com/pricing/details/monitor/).
-The cluster commitment tier level is programmatically configured with Azure Resource Manager using the `Capacity` parameter under `Sku`. The `Capacity` is specified in units of GB and can have values of 1000, 2000 or 5000 GB/day. Any usage above the commitment level (overage) is billed at that same price per GB as provided by the current commitment tier. For more information, see [Azure Monitor customer-managed key](customer-managed-keys.md).
+The cluster commitment tier level is programmatically configured with Azure Resource Manager using the `Capacity` parameter under `Sku`. The `Capacity` is specified in units of GB and can have values of 500, 1000, 2000 or 5000 GB/day. Any usage above the commitment level (overage) is billed at that same price per GB as provided by the current commitment tier. For more information, see [Azure Monitor customer-managed key](customer-managed-keys.md).
There are two modes of billing for usage on a cluster. These can be specified by the `billingType` parameter when [creating a cluster](logs-dedicated-clusters.md#creating-a-cluster) or set after creation. The two modes are:
To set the pricing tier to other values such as Pay-As-You-Go (called `pergb2018
## Legacy pricing tiers
-Subscriptions that contained a Log Analytics workspace or Application Insights resource on April 2, 2018, or are linked to an Enterprise Agreement that started before February 1, 2019 and is still active, will continue to have access to use the legacy pricing tiers: **Free Trial**, **Standalone (Per GB)**, and **Per Node (OMS)**. Workspaces in the Free Trial pricing tier will have daily data ingestion limited to 500 MB (except for security data types collected by [Azure Defender (Security Center)](../../security-center/index.yml)) and the data retention is limited to seven days. The Free Trial pricing tier is intended only for evaluation purposes. Workspaces in the Standalone or Per Node pricing tiers have user-configurable retention from 30 to 730 days.
+Subscriptions that contained a Log Analytics workspace or Application Insights resource on April 2, 2018, or are linked to an Enterprise Agreement that started before February 1, 2019 and is still active, will continue to have access to use the legacy pricing tiers: **Free Trial**, **Standalone (Per GB)**, and **Per Node (OMS)**. Workspaces in the Free Trial pricing tier will have daily data ingestion limited to 500 MB (except for security data types collected by [Azure Defender (Security Center)](../../security-center/index.yml)) and the data retention is limited to seven days. The Free Trial pricing tier is intended only for evaluation purposes. No SLA is provided for the Free tier. Workspaces in the Standalone or Per Node pricing tiers have user-configurable retention from 30 to 730 days.
Usage on the Standalone pricing tier is billed by the ingested data volume. It is reported in the **Log Analytics** service and the meter is named "Data Analyzed".
azure-monitor Query Packs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/logs/query-packs.md
You can view and manage query packs in the Azure portal from the **Log Analytics
You can set the permissions on a query pack when you view it in the Azure portal. Users require the following permissions to use query packs: - **Reader** - User can see and run all queries in the query pack.-- **Contributer** - User can modify existing queries and add new queries to the query pack.
+- **Contributor** - User can modify existing queries and add new queries to the query pack.
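As a sketch, a role assignment can be granted at the query pack scope with the Azure CLI. The resource ID below assumes the `Microsoft.OperationalInsights/queryPacks` resource type and reuses the default names described in the next section; the assignee is a placeholder:

```azurecli
az role assignment create \
  --assignee "user@contoso.com" \
  --role "Reader" \
  --scope "/subscriptions/<subscription-id>/resourceGroups/LogAnalyticsDefaultResources/providers/Microsoft.OperationalInsights/queryPacks/DefaultQueryPack"
```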
## Default query pack A query pack, called **DefaultQueryPack**, is automatically created in each subscription in a resource group called **LogAnalyticsDefaultResources** when the first query is saved. You can create queries in this query pack or create additional query packs depending on your requirements.
POST https://management.azure.com/subscriptions/00000000-0000-0000-0000-00000000
## Next steps -- See [Using queries in Azure Monitor Log Analytics](queries.md) to see how users interact with query packs in Log Analytics.
+- See [Using queries in Azure Monitor Log Analytics](queries.md) to see how users interact with query packs in Log Analytics.
azure-monitor Vminsights Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/vm/vminsights-troubleshoot.md
If you still see a message that the virtual machine needs to be onboarded, it m
| Operating system | Agents | |:|:| | Windows | MicrosoftMonitoringAgent<br>Microsoft.Azure.Monitoring.DependencyAgent |
-| Linux | OMSAgentForLinux<br>DependencyAgentForLinux |
+| Linux | OMSAgentForLinux<br>DependencyAgentLinux |
If you do not see both extensions for your operating system in the list of installed extensions, then they need to be installed. If the extensions are listed but their status does not appear as *Provisioning succeeded*, then the extension should be removed and reinstalled.
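To check which extensions are installed and their provisioning state, a quick sketch with the Azure CLI (resource group and VM names are placeholders):

```azurecli
az vm extension list \
  --resource-group myResourceGroup \
  --vm-name myVM \
  --query "[].{Name:name, State:provisioningState}" \
  --output table
```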
azure-resource-manager Azure Subscription Service Limits https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/management/azure-subscription-service-limits.md
Title: Azure subscription limits and quotas description: Provides a list of common Azure subscription and service limits, quotas, and constraints. This article includes information on how to increase limits along with maximum values. Previously updated : 07/12/2021 Last updated : 07/29/2021 # Azure subscription and service limits, quotas, and constraints
The latest values for Azure Machine Learning Compute quotas can be found in the
[!INCLUDE [monitoring-limits](../../../includes/application-insights-limits.md)]
+## Azure NetApp Files
++ ## Azure Policy limits [!INCLUDE [policy-limits](../../../includes/azure-policy-limits.md)]
azure-resource-manager Tag Resources https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/management/tag-resources.md
Title: Tag resources, resource groups, and subscriptions for logical organization description: Shows how to apply tags to organize Azure resources for billing and managing. Previously updated : 07/15/2021 Last updated : 07/29/2021
resource applyTags 'Microsoft.Resources/tags@2021-04-01' = {
[!INCLUDE [resource-manager-tag-resource](../../../includes/resource-manager-tag-resources.md)]
-Some resources, such [IP Groups in Azure Firewall](../../firewall/ip-groups.md), don't currently support updating tags through the portal. Instead, use the update commands for those resources. For example, you can update tags for an IP group with the [az network ip-group update](/cli/azure/network/ip-group#az_network_ip_group_update) command.
- ## REST API To work with tags through the Azure REST API, use:
The following limitations apply to tags:
* Each resource, resource group, and subscription can have a maximum of 50 tag name/value pairs. If you need to apply more tags than the maximum allowed number, use a JSON string for the tag value. The JSON string can contain many values that are applied to a single tag name. A resource group or subscription can contain many resources that each have 50 tag name/value pairs. * The tag name is limited to 512 characters, and the tag value is limited to 256 characters. For storage accounts, the tag name is limited to 128 characters, and the tag value is limited to 256 characters. * Tags can't be applied to classic resources such as Cloud Services.
-* Azure IP Groups and Azure Firewall Policies do not support PATCH operation.
+* Azure IP Groups and Azure Firewall Policies don't support PATCH operations, which means they don't support updating tags through the portal. Instead, use the update commands for those resources. For example, you can update tags for an IP group with the [az network ip-group update](/cli/azure/network/ip-group#az_network_ip_group_update) command.
* Tag names can't contain these characters: `<`, `>`, `%`, `&`, `\`, `?`, `/` > [!NOTE]
azure-sql Automatic Tuning Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/automatic-tuning-overview.md
Last updated 03/23/2021
Azure SQL Database and Azure SQL Managed Instance automatic tuning provides peak performance and stable workloads through continuous performance tuning based on AI and machine learning.
-Automatic tuning is a fully managed intelligent performance service that uses built-in intelligence to continuously monitor queries executed on a database, and it automatically improves their performance. This is achieved through dynamically adapting database to the changing workloads and applying tuning recommendations. Automatic tuning learns horizontally from all databases on Azure through AI and it dynamically improves its tuning actions. The longer a database runs with automatic tuning on, the better it performs.
+Automatic tuning is a fully managed intelligent performance service that uses built-in intelligence to continuously monitor queries executed on a database, and it automatically improves their performance. This is achieved through dynamically adapting a database to changing workloads and applying tuning recommendations. Automatic tuning learns horizontally from all databases on Azure through AI and it dynamically improves its tuning actions. The longer a database runs with automatic tuning on, the better it performs.
Azure SQL Database and Azure SQL Managed Instance automatic tuning might be one of the most important features that you can enable to provide stable and peak performing database workloads.
Automatic tuning for SQL Managed Instance only supports **FORCE LAST GOOD PLAN**
## Next steps - To learn about built-in intelligence used in automatic tuning, see [Artificial Intelligence tunes Azure SQL Database](https://azure.microsoft.com/blog/artificial-intelligence-tunes-azure-sql-databases/).-- To learn how automatic tuning works under the hood, see [Automatically indexing millions of databases in Microsoft Azure SQL Database](https://www.microsoft.com/research/uploads/prod/2019/02/autoindexing_azuredb.pdf).
+- To learn how automatic tuning works under the hood, see [Automatically indexing millions of databases in Microsoft Azure SQL Database](https://www.microsoft.com/research/uploads/prod/2019/02/autoindexing_azuredb.pdf).
azure-video-analyzer Considerations When Use At Scale https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-video-analyzer/video-analyzer-for-media-docs/considerations-when-use-at-scale.md
When you upload videos using URL, you just need to provide a path to the locatio
To see an example of how to upload videos using URL, check out [this example](upload-index-videos.md#code-sample). Or, you can use [AzCopy](../../storage/common/storage-use-azcopy-v10.md) for a fast and reliable way to get your content to a storage account from which you can submit it to Video Analyzer for Media using [SAS URL](../../storage/common/storage-sas-overview.md). Video Analyzer for Media recommends using *readonly* SAS URLs.
-## Increase media reserved units is no longer available through Video Analyzer for Media
+## Automatic Scaling of Media Reserved Units
-Starting August 1st 2021, Azure Video Analyzer for Media (formerly Video Indexer) does not expose the option to increase Media [Reserved Units](https://docs.microsoft.com/azure/media-services/latest/concept-media-reserved-units)(MRUs) any longer. From now on MRUs are being auto scaled by [Azure Media Services](https://docs.microsoft.com/azure/media-services/latest/media-services-overview) (AMS), as a result you do not need to manage them through Azure Video Analyzer for Media.
+Starting August 1st, 2021, Azure Video Analyzer for Media (formerly Video Indexer) enables auto scaling of Media [Reserved Units](https://docs.microsoft.com/azure/media-services/latest/concept-media-reserved-units) (MRUs) by [Azure Media Services](https://docs.microsoft.com/azure/media-services/latest/media-services-overview) (AMS). As a result, you no longer need to manage MRUs through Azure Video Analyzer for Media. Because MRUs are auto scaled based on your business needs, this allows price optimization, for example a price reduction in many cases.
## Respect throttling
azure-vmware Concepts Identity https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-vmware/concepts-identity.md
Title: Concepts - Identity and access description: Learn about the identity and access concepts of Azure VMware Solution Previously updated : 05/13/2021 Last updated : 07/29/2021 # Azure VMware Solution identity concepts
To prevent creating roles that can't be assigned or deleted, clone the CloudAdmi
## NSX-T Manager access and identity >[!NOTE]
->NSX-T 2.5 is currently supported for all new private clouds.
+>NSX-T 3.1.2 is currently supported for all new private clouds.
Use the *admin* account to access NSX-T Manager. It has full privileges and lets you create and manage Tier-1 (T1) Gateways, segments (logical switches), and all services. The privileges give you access to the NSX-T Tier-0 (T0) Gateway. A change to the T0 Gateway could result in degraded network performance or no private cloud access. Open a support request in the Azure portal to request any changes to your NSX-T T0 Gateway.
Now that you've covered Azure VMware Solution access and identity concepts, you
[VMware product documentation]: https://docs.vmware.com/en/VMware-vSphere/7.0/com.vmware.vsphere.security.doc/GUID-ED56F3C4-77D0-49E3-88B6-B99B8B437B62.html <!-- LINKS - internal -->
-[concepts-upgrades]: ./concepts-private-clouds-clusters#host-maintenance-and-lifecycle-management
+[concepts-upgrades]: ./concepts-private-clouds-clusters#host-maintenance-and-lifecycle-management
backup Backup Azure Arm Restore Vms https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/backup-azure-arm-restore-vms.md
Title: Restore VMs by using the Azure portal
description: Restore an Azure virtual machine from a recovery point by using the Azure portal, including the Cross Region Restore feature. Previously updated : 05/01/2021 Last updated : 07/27/2021 # How to restore Azure VM data in Azure portal
After the disk is restored, use the template that was generated as part of the r
1. In **Restore**, select **Deploy Template** to initiate template deployment. ![Restore job drill-down](./media/backup-azure-arm-restore-vms/restore-job-drill-down1.png)
+
+ >[!Note]
+ >For a shared access signature (SAS) that has **Allow storage account key access** set to disabled, the template won't deploy when you select **Deploy Template**.
1. To customize the VM setting provided in the template, select **Edit template**. If you want to add more customizations, select **Edit parameters**. - [Learn more](../azure-resource-manager/templates/deploy-portal.md#deploy-resources-from-custom-template) about deploying resources from a custom template.
backup Backup Azure Vms Encryption https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/backup-azure-vms-encryption.md
Title: Back up and restore encrypted Azure VMs description: Describes how to back up and restore encrypted Azure VMs with the Azure Backup service. Previously updated : 06/24/2021 Last updated : 07/27/2021 # Back up and restore encrypted Azure virtual machines
Azure Backup can back up and restore Azure VMs using ADE with and without the Az
- You can back up and restore ADE encrypted VMs within the same subscription. - Azure Backup supports VMs encrypted using standalone keys. Any key that's a part of a certificate used to encrypt a VM isn't currently supported.-- You can back up and restore ADE encrypted VMs within the same subscription and region as the Recovery Services Backup vault.
+- Azure Backup supports Cross Region Restore of encrypted Azure VMs to the Azure paired regions. For more information, see [support matrix](/azure/backup/backup-support-matrix#cross-region-restore).
- ADE encrypted VMs canΓÇÖt be recovered at the file/folder level. You need to recover the entire VM to restore files and folders. - When restoring a VM, you can't use the [replace existing VM](backup-azure-arm-restore-vms.md#restore-options) option for ADE encrypted VMs. This option is only supported for unencrypted managed disks.
backup Backup Create Rs Vault https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/backup-create-rs-vault.md
A vault created with GRS redundancy includes the option to configure the Cross R
![Backup Configuration banner](./media/backup-azure-arm-restore-vms/banner.png)
+>[!Note]
+>If you have access to restricted paired regions and are still unable to view Cross Region Restore settings in the **Backup Configuration** blade, re-register the Recovery Services resource provider. <br><br> To re-register the provider, go to your subscription in the Azure portal, select **Resource providers** on the left navigation bar, then select **Microsoft.RecoveryServices** and select **Re-register**.
+ 1. From the portal, go to your Recovery Services vault > **Properties** (under **Settings**). 1. Under **Backup Configuration**, select **Update**. 1. Select **Enable Cross Region Restore in this vault** to enable the functionality.
backup Offline Backup Azure Data Box Dpm Mabs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/offline-backup-azure-data-box-dpm-mabs.md
Title: Offline Backup with Azure Data Box for DPM and MABS description: You can use Azure Data Box to seed initial Backup data offline from DPM and MABS. Previously updated : 07/28/2021 Last updated : 07/29/2021 # Offline seeding using Azure Data Box for DPM and MABS
To ensure that the failure is due to the [Issue](#issue) above, perform one of t
#### Step 1
-Check if you see the following error message in the DPM/MABS console at the time of configuring offline backup:
+Check if you see one of the following error messages in the DPM/MABS console at the time of configuring offline backup:
-![Azure recovery services agent](./media/offline-backup-azure-data-box-dpm-mabs/azure-recovery-services-agent.png)
+**Unable to create Offline Backup policy for the current Azure account as this server's authentication information could not be uploaded to Azure. (ID: 100242)**
++
+**Unable to make service calls to Azure that are required for querying Import Job status and moving backup data into the recovery Services Vault. (ID:100230)**
+ #### Step 2
cognitive-services Releasenotes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/releasenotes.md
# Speech Service release notes
+## Text-to-speech 2021-July release
+
+**Neural TTS updates**
+- Reduced pronunciation errors in Hebrew by 20%.
+
+**Speech Studio updates**
+- **Custom Neural Voice**: Updated the training pipeline to UniTTSv3, which improves model quality while reducing training time by 50% for acoustic models.
+- **Audio Content Creation**: Fixed the "Export" performance issue and the bug on custom voice selection.
+ ## Speech SDK 1.18.0: 2021-July release **Note**: Get started with the Speech SDK [here](speech-sdk.md#get-the-speech-sdk).
confidential-computing Confidential Nodes Aks Get Started https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/confidential-computing/confidential-nodes-aks-get-started.md
az group create --name myResourceGroup --location westus2
Now create an AKS cluster, with the confidential computing add-on enabled, by using the [az aks create][az-aks-create] command: ```azurecli-interactive
-az aks create -g myResourceGroup --name myAKSCluster --generate-ssh-keys --enable-addon confcom
+az aks create -g myResourceGroup --name myAKSCluster --generate-ssh-keys --enable-addons confcom
``` ### Add a user node pool with confidential computing capabilities to the AKS cluster
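As a sketch, a node pool with confidential computing capabilities can then be added with `az aks nodepool add`. The pool name is a placeholder, and the DCsv2-series size is an assumption based on the SGX-capable sizes this add-on targets:

```azurecli-interactive
az aks nodepool add \
  --cluster-name myAKSCluster \
  --resource-group myResourceGroup \
  --name confcompool1 \
  --node-count 2 \
  --node-vm-size Standard_DC2s_v2
```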
cosmos-db Analytical Store Introduction https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/analytical-store-introduction.md
There are two modes of schema representation in the analytical store. These mode
It is possible to use Full Fidelity Schema for SQL (Core) API accounts. Here are the considerations about this possibility: * This option is only valid for accounts that don't have Synapse Link enabled.
- * It is not possible to turn Synapse Link off to on again, to change from well-defined to full fidelity.
+ * It is not possible to turn Synapse Link off and on again to reset the default option and change from well-defined to full fidelity.
* It is not possible to change from well-defined to full fidelity using any other process. * MongoDB accounts are not compatible with this possibility of changing the method of representation. * Currently this decision cannot be made through the Azure portal.
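Because the portal can't make this decision, a sketch using the Azure CLI at account creation follows. The `--analytical-storage-schema-type` parameter and its `FullFidelity` value are assumptions about the CLI surface for this setting; the account and resource group names are placeholders:

```azurecli
az cosmosdb create \
  --name myCosmosAccount \
  --resource-group myResourceGroup \
  --enable-analytical-storage true \
  --analytical-storage-schema-type FullFidelity
```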
cost-management-billing Manage Automation https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cost-management-billing/costs/manage-automation.md
GET https://management.azure.com/{scope}/providers/Microsoft.Consumption/usageDe
For modern customers with a Microsoft Customer Agreement, use the following call: ```http
-GET https://management.azure.com/{scope}/providers/Microsoft.Consumption/usageDetails?startDate=2020-08-01&endDate=2020-08-05$top=1000&api-version=2019-10-01
+GET https://management.azure.com/{scope}/providers/Microsoft.Consumption/usageDetails?startDate=2020-08-01&endDate=2020-08-05&$top=1000&api-version=2019-10-01
``` ### Get amortized cost details
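A sketch of the corresponding call, assuming the Usage Details API's `metric=AmortizedCost` query parameter selects amortized cost records:

```http
GET https://management.azure.com/{scope}/providers/Microsoft.Consumption/usageDetails?metric=AmortizedCost&$top=1000&api-version=2019-10-01
```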
cost-management-billing Ea Transfers https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cost-management-billing/manage/ea-transfers.md
Previously updated : 01/27/2021 Last updated : 07/29/2021
This section is for informational purposes only as the action cannot be performe
When you request to transfer an entire enterprise enrollment to an enrollment, the following actions occur:
+- Transferred usage may take up to 72 hours to be reflected in the new enrollment.
+- If department administrator (DA) or account owner (AO) view charges were enabled on the transferred enrollment, they must be enabled on the new enrollment.
+- If you are using API reports or Power BI, generate a new API key under your new enrollment.
- All Azure services, subscriptions, accounts, departments, and the entire enrollment structure, including all EA department administrators, transfer to a new target enrollment. - The enrollment status is set to _Transferred_. The transferred enrollment is available for historic usage reporting purposes only. - You can't add roles or subscriptions to a transferred enrollment. Transferred status prevents more usage against the enrollment.
data-factory Connector Rest https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-rest.md
description: Learn how to copy data from a cloud or on-premises REST source to s
Previously updated : 07/19/2021- Last updated : 07/27/2021+ + # Copy data from and to a REST endpoint by using Azure Data Factory [!INCLUDE[appliesto-adf-asa-md](includes/appliesto-adf-asa-md.md)]
REST connector as sink works with the REST APIs that accept JSON. The data will
] ```
+## Mapping data flow properties
+
+REST is supported in data flows for both integration datasets and inline datasets.
+
+### Source transformation
+
+| Property | Description | Required |
+|: |: |: |
+| requestMethod | The HTTP method. Allowed values are **GET** and **POST**. | Yes |
+| relativeUrl | A relative URL to the resource that contains the data. When this property isn't specified, only the URL that's specified in the linked service definition is used. The REST connector copies data from the combined URL: `[URL specified in linked service]/[relative URL specified in dataset]`. | No |
+| additionalHeaders | Additional HTTP request headers. | No |
+| httpRequestTimeout | The timeout (the **TimeSpan** value) for the HTTP request to get a response. This value is the timeout to get a response, not the timeout to write the data. The default value is **00:01:40**. | No |
+| requestInterval | The interval time between consecutive requests, in milliseconds. The request interval value should be a number between [10, 60000]. | No |
+| QueryParameters.*request_query_parameter* OR QueryParameters['request_query_parameter'] | "request_query_parameter" is user-defined, which references one query parameter name in the next HTTP request URL. | No |
+
+### Sink transformation
+
+| Property | Description | Required |
+|: |: |: |
+| additionalHeaders | Additional HTTP request headers. | No |
+| httpRequestTimeout | The timeout (the **TimeSpan** value) for the HTTP request to get a response. This value is the timeout to get a response, not the timeout to write the data. The default value is **00:01:40**. | No |
+| requestInterval | The interval time between consecutive requests, in milliseconds. The request interval value should be a number between [10, 60000]. | No |
+| httpCompressionType | HTTP compression type to use while sending data with Optimal Compression Level. Allowed values are **none** and **gzip**. | No |
+| writeBatchSize | Number of records to write to the REST sink per batch. The default value is 10000. | No |
+
+You can set the delete, insert, update, and upsert methods as well as the row data to send to the REST sink.
+
+![Data flow REST sink](media/data-flow/data-flow-sink.png)
+
+## Sample data flow script
+
+```
+AlterRow1 sink(allowSchemaDrift: true,
+ validateSchema: false,
+ deletable:true,
+ insertable:true,
+ updateable:true,
+ upsertable:true,
+ rowRelativeUrl: 'periods',
+ insertHttpMethod: 'PUT',
+ deleteHttpMethod: 'DELETE',
+ upsertHttpMethod: 'PUT',
+ updateHttpMethod: 'PATCH',
+ timeout: 30,
+ requestFormat: ['type' -> 'json'],
+ skipDuplicateMapInputs: true,
+ skipDuplicateMapOutputs: true) ~> sink1
+```
+ ## Pagination support When copying data from REST APIs, the REST API normally limits the response payload size of a single request to a reasonable number; to return a large amount of data, it splits the result into multiple pages and requires callers to send consecutive requests to get the next page of the result. Usually, the request for one page is dynamic and composed from information returned in the response of the previous page.
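For illustration, a pagination rule that follows a next-page URL returned in each response body; `AbsoluteUrl` is one of the connector's supported rule keys, while the `$.nextLink` JSONPath is an assumed field name in the API's responses:

```json
"paginationRules": {
    "AbsoluteUrl": "$.nextLink"
}
```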
data-factory Create Azure Ssis Integration Runtime https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/create-azure-ssis-integration-runtime.md
if(![string]::IsNullOrEmpty($ExpressCustomSetup))
-ExpressCustomSetup $setups }
-# Add self-hosted integration runtime parameters if you configure a proxy for on-premises data accesss
+# Add self-hosted integration runtime parameters if you configure a proxy for on-premises data access
if(![string]::IsNullOrEmpty($DataProxyIntegrationRuntimeName) -and ![string]::IsNullOrEmpty($DataProxyStagingLinkedServiceName)) { Set-AzDataFactoryV2IntegrationRuntime -ResourceGroupName $ResourceGroupName `
data-factory Data Flow Alter Row https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/data-flow-alter-row.md
Use the Alter Row transformation to set insert, delete, update, and upsert polic
![Alter row settings](media/data-flow/alter-row1.png "Alter Row Settings")
-Alter Row transformations will only operate on database or CosmosDB sinks in your data flow. The actions that you assign to rows (insert, update, delete, upsert) won't occur during debug sessions. Run an Execute Data Flow activity in a pipeline to enact the alter row policies on your database tables.
+Alter Row transformations will only operate on database, REST, or CosmosDB sinks in your data flow. The actions that you assign to rows (insert, update, delete, upsert) won't occur during debug sessions. Run an Execute Data Flow activity in a pipeline to enact the alter row policies on your database tables.
> [!VIDEO https://www.microsoft.com/en-us/videoplayer/embed/RE4vJYc]
databox-online Azure Stack Edge Gpu Collect Virtual Machine Guest Logs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox-online/azure-stack-edge-gpu-collect-virtual-machine-guest-logs.md
Previously updated : 06/21/2021 Last updated : 07/19/2021 # Collect VM guest logs on an Azure Stack Edge Pro GPU device
To diagnose any VM provisioning failure on your Azure Stack Edge Pro GPU device, you'll review guest logs for the failed virtual machine. This article describes how to collect the guest logs for the VMs in a Support package.
+> [!NOTE]
+> You can also monitor activity logs for virtual machines in the Azure portal. For more information, see [Monitor VM activity on your device](azure-stack-edge-gpu-monitor-virtual-machine-activity.md).
++ ## Collect VM guest logs in Support package To collect guest logs for failed virtual machines on an Azure Stack Edge Pro GPU device, do these steps:
To collect guest logs for failed virtual machines on an Azure Stack Edge Pro GPU
## Next steps
+- [Monitor the VM activity log](azure-stack-edge-gpu-monitor-virtual-machine-activity.md)
- [Troubleshoot VM provisioning on Azure Stack Edge Pro GPU](azure-stack-edge-gpu-troubleshoot-virtual-machine-provisioning.md)
databox-online Azure Stack Edge Gpu Deploy Gpu Virtual Machine https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox-online/azure-stack-edge-gpu-deploy-gpu-virtual-machine.md
Title: Overview and deployment of GPU VMs on your Azure Stack Edge Pro GPU device
-description: Describes how to create and manage GPU virtual machines (VMs) on an Azure Stack Edge Pro GPU device using templates.
+ Title: Deploy GPU VMs on your Azure Stack Edge Pro GPU device
+description: Describes how to create and deploy GPU virtual machines (VMs) on Azure Stack Edge Pro GPU via the Azure portal or using templates.
Previously updated : 05/28/2021 Last updated : 07/28/2021
-#Customer intent: As an IT admin, I need to understand how to create and manage virtual machines (VMs) on my Azure Stack Edge Pro GPU device using APIs so that I can efficiently manage my VMs.
+#Customer intent: As an IT admin, I want the flexibility to deploy a single GPU virtual machine (VM) quickly in the portal or use templates to deploy and manage multiple GPU VMs efficiently on my Azure Stack Edge Pro GPU device.
+ # Deploy GPU VMs on your Azure Stack Edge Pro GPU device [!INCLUDE [applies-to-GPU-and-pro-r-skus](../../includes/azure-stack-edge-applies-to-gpu-pro-r-sku.md)]
-This article provides an overview of GPU virtual machines (VMs) on your Azure Stack Edge Pro GPU device. The article also describes how to create a GPU VM by using the Azure Resource Manager templates.
+This article describes how to create a GPU VM in the Azure portal or by using Azure Resource Manager templates.
+Use the Azure portal to quickly deploy a single GPU VM. You can install the GPU extension during or after VM creation. Or use Azure Resource Manager templates to efficiently deploy and manage multiple GPU VMs.
-## About GPU VMs
+## Create GPU VMs
-Your Azure Stack Edge devices may be equipped with 1 or 2 of Nvidia's Tesla T4 GPU. To deploy GPU-accelerated VM workloads on these devices, use GPU optimized VM sizes. For example, the NC T4 v3-series should be used to deploy inference workloads featuring T4 GPUs.
+You can deploy a GPU VM via the portal or using Azure Resource Manager templates.
-For more information, see [NC T4 v3-series VMs](../virtual-machines/nct4-v3-series.md).
+For a list of supported operating systems, drivers, and VM sizes for GPU VMs, see [What are GPU virtual machines?](azure-stack-edge-gpu-overview-gpu-virtual-machines.md). For deployment considerations, see [GPU VMs and Kubernetes](azure-stack-edge-gpu-overview-gpu-virtual-machines.md#gpu-vms-and-kubernetes).
-## Supported OS and GPU drivers
-To take advantage of the GPU capabilities of Azure N-series VMs, Nvidia GPU drivers must be installed.
+> [!IMPORTANT]
+> If your device will be running Kubernetes, do not configure Kubernetes before you deploy your GPU VMs. If you configure Kubernetes first, it claims all the available GPU resources, and GPU VM creation will fail. For Kubernetes deployment considerations on 1-GPU and 2-GPU devices, see [GPU VMs and Kubernetes](azure-stack-edge-gpu-overview-gpu-virtual-machines.md#gpu-vms-and-kubernetes).
-The Nvidia GPU driver extension installs appropriate Nvidia CUDA or GRID drivers. You can install or manage the extension using the Azure Resource Manager templates.
+### [Portal](#tab/portal)
-### Supported OS for GPU extension for Windows
+Follow these steps when deploying GPU VMs on your device via the Azure portal:
-This extension supports the following operating systems (OSs). Other versions may work but have not been tested in-house on GPU VMs running on Azure Stack Edge devices.
+1. To create GPU VMs, follow all the steps in [Deploy VM on your Azure Stack Edge using Azure portal](azure-stack-edge-gpu-deploy-virtual-machine-portal.md), with these configuration requirements:
-| Distribution | Version |
-|||
-| Windows Server 2019 | Core |
-| Windows Server 2016 | Core |
+ - On the **Basics** tab, select a [VM size from NCasT4-v3-series](azure-stack-edge-gpu-virtual-machine-sizes.md#ncast4_v3-series-preview).
-### Supported OS for GPU extension for Linux
+ ![Screenshot of Basics tab with supported VM sizes for GPU VMs identified.](media/azure-stack-edge-gpu-deploy-gpu-virtual-machine/basics-vm-size-for-gpu.png)
-This extension supports the following OS distros, depending on the driver support for specific OS version. Other versions may work but have not been tested in-house on GPU VMs running on Azure Stack Edge devices.
+ - To install the GPU extension during deployment, on the **Advanced** tab, choose **Select an extension to install**. Then select a GPU extension to install. GPU extensions are only available for a virtual machine with a [VM size from NCasT4-v3-series](azure-stack-edge-gpu-virtual-machine-sizes.md#ncast4_v3-series-preview).
+
+ > [!NOTE]
+ > If you're using a Red Hat image, you'll need to install the GPU extension after VM deployment. Follow the steps in [Install GPU extension](azure-stack-edge-gpu-deploy-virtual-machine-install-gpu-extension.md).
+
+ ![Illustration showing how to add a GPU extension to a virtual machine during VM creation in the portal](media/azure-stack-edge-gpu-deploy-gpu-virtual-machine/add-extension-01.png)
+ The **Advanced** tab shows the extension you selected.
-| Distribution | Version |
-|||
-| Ubuntu | 18.04 LTS |
-| Red Hat Enterprise Linux | 7.4 |
+ ![Screenshot showing an extension added to the Advanced tab during VM creation](media/azure-stack-edge-gpu-deploy-gpu-virtual-machine/add-extension-02.png)
+1. Once the GPU VM is successfully created, you can view this VM in the list of virtual machines in your Azure Stack Edge resource in the Azure portal.
-## GPU VMs and Kubernetes
+ ![GPU VM in list of virtual machines in Azure portal - 1](media/azure-stack-edge-gpu-deploy-gpu-virtual-machine/list-virtual-machines-01.png)
-Before you deploy GPU VMs on your device, review the following considerations if Kubernetes is configured on the device.
+ Select the VM, and drill down to the details. Make sure the GPU extension has **Succeeded** status.
-#### For 1-GPU device:
+ ![Installed GPU extension shown on the Details pane for a virtual machine](media/azure-stack-edge-gpu-deploy-gpu-virtual-machine/vm-details-extension-installed.png)
-- **Create a GPU VM followed by Kubernetes configuration on your device**: In this scenario, the GPU VM creation and Kubernetes configuration will both be successful. Kubernetes will not have access to the GPU in this case. -- **Configure Kubernetes on your device followed by creation of a GPU VM**: In this scenario, the Kubernetes will claim the GPU on your device and the VM creation will fail as there are no GPU resources available.
+### [Templates](#tab/templates)
-#### For 2-GPU device
+Follow these steps when deploying GPU VMs on your device using Azure Resource Manager templates:
-- **Create a GPU VM followed by Kubernetes configuration on your device**: In this scenario, the GPU VM that you create will claim one GPU on your device and Kubernetes configuration will also be successful and claim the remaining one GPU.
+1. [Download the VM templates and parameters files](https://aka.ms/ase-vm-templates) to your client machine. Unzip it into a directory you'll use as a working directory.
-- **Create two GPU VMs followed by Kubernetes configuration on your device**: In this scenario, the two GPU VMs will claim the two GPUs on the device and the Kubernetes is configured successfully with no GPUs.
+1. Before you can deploy VMs on your Azure Stack Edge device, you must configure your client to connect to the device via Azure Resource Manager over Azure PowerShell. For detailed instructions, see [Connect to Azure Resource Manager on your Azure Stack Edge device](azure-stack-edge-gpu-connect-resource-manager.md).
-- **Configure Kubernetes on your device followed by creation of a GPU VM**: In this scenario, the Kubernetes will claim both the GPUs on your device and the VM creation will fail as no GPU resources are available.
+1. To create GPU VMs, follow all the steps in [Deploy VM on your Azure Stack Edge using templates](azure-stack-edge-gpu-deploy-virtual-machine-templates.md), with these configuration requirements:
+
- When specifying GPU VM sizes, make sure to use a VM size from the NCasT4-v3-series in the `CreateVM.parameters.json` file; these sizes are supported for GPU VMs. For more information, see [Supported VM sizes for GPU VMs](azure-stack-edge-gpu-virtual-machine-sizes.md#ncast4_v3-series-preview).
-<!--Li indicated that this is fixed. If you have GPU VMs running on your device and Kubernetes is also configured, then anytime the VM is deallocated (when you stop or remove a VM using Stop-AzureRmVM or Remove-AzureRmVM), there is a risk that the Kubernetes cluster will claim all the GPUs available on the device. In such an instance, you will not be able to restart the GPU VMs deployed on your device or create GPU VMs. -->
+ ```json
+ "vmSize": {
+ "value": "Standard_NC4as_T4_v3"
+ },
+ ```
+ Once the GPU VM is successfully created, you can view this VM in the list of virtual machines in your Azure Stack Edge resource in the Azure portal.
-## Create GPU VMs
+ ![GPU VM in list of virtual machines in Azure portal](media/azure-stack-edge-gpu-deploy-gpu-virtual-machine/list-virtual-machines-01.png)
-Follow these steps when deploying GPU VMs on your device:
+1. Select the VM, and drill down to the details. Copy the IP address allocated to the VM.
-1. Identify if your device will also be running Kubernetes. If the device will run Kubernetes, then you'll need to create the GPU VM first and then configure Kubernetes. If Kubernetes is configured first, then it will claim all the available GPU resources and the GPU VM creation will fail.
+ ![IP allocated to GPU VM in Azure portal](media/azure-stack-edge-gpu-deploy-gpu-virtual-machine/get-ip-of-virtual-machine.png)
-1. [Download the VM templates and parameters files](https://aka.ms/ase-vm-templates) to your client machine. Unzip it into a directory you'll use as a working directory.
+<!--1. If needed, you can switch the compute network back to whatever you need.-->
-1. Before you can deploy VMs on your Azure Stack Edge device, you must configure your client to connect to the device via Azure Resource Manager over Azure PowerShell. For detailed instructions, see [Connect to Azure Resource Manager on your Azure Stack Edge device](azure-stack-edge-gpu-connect-resource-manager.md).
+After the VM is created, you can [deploy the GPU extension using the extension template](azure-stack-edge-gpu-deploy-virtual-machine-install-gpu-extension.md?tabs=linux).
-1. To create GPU VMs, follow all the steps in the [Deploy VM on your Azure Stack Edge using templates](azure-stack-edge-gpu-deploy-virtual-machine-templates.md) or [Deploy VM on your Azure Stack Edge using Azure portal](azure-stack-edge-gpu-deploy-virtual-machine-portal.md) except for the following differences:
+
-
- 1. If you create a VM using the templates, when specifying GPU VM sizes, make sure to use the NCasT4-v3-series in the `CreateVM.parameters.json` as these are supported for GPU VMs. For more information, see [Supported VM sizes for GPU VMs](azure-stack-edge-gpu-virtual-machine-sizes.md#ncast4_v3-series-preview).
+> [!NOTE]
+> When updating your device software version from 2012 to later, you will need to manually stop the GPU VMs.
+
+## Install GPU extension after deployment
- ```json
- "vmSize": {
- "value": "Standard_NC4as_T4_v3"
- },
- ```
- If you use the Azure portal to create your VM, you can still select a VM size from NCasT4-v3-series.
+To take advantage of the GPU capabilities of Azure N-series VMs, Nvidia GPU drivers must be installed. From the Azure portal, you can install the GPU extension during or after VM deployment. If you're using templates, you'll install the GPU extension after you create the VM.
- 1. Once the GPU VM is successfully created, you can view this VM in the list of virtual machines in your Azure Stack Edge resource in the Azure portal.
+
- ![GPU VM in list of virtual machines in Azure portal](media/azure-stack-edge-gpu-deploy-gpu-virtual-machine/list-virtual-machine-1.png)
+### [Portal](#tab/portal)
-1. Select the VM and drill down to the details. Copy the IP allocated to the VM.
+If you didn't install the GPU extension when you created the VM, follow these steps to install it on the deployed VM:
- ![IP allocated to GPU VM in Azure portal](media/azure-stack-edge-gpu-deploy-gpu-virtual-machine/get-ip-gpu-virtual-machine-1.png)
+1. Go to the virtual machine you want to add the GPU extension to.
-1. If needed, you could switch the compute network back to whatever you need.
+ ![Screenshot that shows how to select a virtual machines from the Virtual machines Overview.](media/azure-stack-edge-gpu-deploy-gpu-virtual-machine/add-extension-after-deployment-01.png)
+
+1. In **Details**, select **+ Add extension**. Then select a GPU extension to install.
+ GPU extensions are only available for a virtual machine with a [VM size from NCasT4-v3-series](azure-stack-edge-gpu-virtual-machine-sizes.md#ncast4_v3-series-preview).
-After the VM is created, you can deploy GPU extension using the extension template.
+ ![Illustration showing how to use the + Add extension button in VM details to add a GPU extension to a VM.](media/azure-stack-edge-gpu-deploy-gpu-virtual-machine/add-extension-after-deployment-02.png)
+> [!Note]
+> You can't remove a GPU extension via the portal. Instead, use the [Remove-AzureRmVMExtension](/powershell/module/azurerm.compute/remove-azurermvmextension?view=azurermps-6.13.0&preserve-view=true) cmdlet in Azure PowerShell. For instructions, see [Remove GPU extension](azure-stack-edge-gpu-deploy-virtual-machine-install-gpu-extension.md#remove-gpu-extension).
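As a sketch of that removal (resource group, VM, and extension names are placeholders):

```powershell
# Remove the GPU extension from a deployed VM.
# Resource group, VM, and extension names are placeholders.
Remove-AzureRmVMExtension -ResourceGroupName "myResourceGroup" `
    -VMName "myGpuVm" `
    -Name "NvidiaGpuDriverWindows"
```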
-> [!NOTE]
-> When updating your device software version from 2012 to later, you will need to manually stop the GPU VMs.
+### [Templates](#tab/templates)
+When you create a GPU VM using templates, you install the GPU extension after deployment. For detailed steps for using templates to deploy a GPU extension on a Windows virtual machine or a Linux virtual machine, see [Install GPU extension](azure-stack-edge-gpu-deploy-virtual-machine-install-gpu-extension.md).
+ ## Next steps -- Learn how to [Install GPU extension](azure-stack-edge-gpu-deploy-virtual-machine-install-gpu-extension.md) on the GPU VMs running on your device.
+- [Troubleshoot VM deployment](azure-stack-edge-gpu-troubleshoot-virtual-machine-provisioning.md)
+- [Troubleshoot GPU extension issues](azure-stack-edge-gpu-troubleshoot-virtual-machine-gpu-extension-installation.md)
+- [Monitor VM activity on your device](azure-stack-edge-gpu-monitor-virtual-machine-activity.md)
+- [Monitor CPU and memory on a VM](azure-stack-edge-gpu-monitor-virtual-machine-metrics.md)
databox-online Azure Stack Edge Gpu Deploy Virtual Machine Custom Script Extension https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox-online/azure-stack-edge-gpu-deploy-virtual-machine-custom-script-extension.md
If your script is on a local server, then you may still need additional firewall
In the following example, Port 2 was connected to the internet and was used to enable the compute network. If you identified that Kubernetes isn't needed in the earlier step, you can skip the Kubernetes node IP and external service IP assignment.
- ![Enable compute settings on port connected to internet](media/azure-stack-edge-gpu-deploy-gpu-virtual-machine/enable-compute-network-1.png)
+ ![Enable compute settings on port connected to internet](media/azure-stack-edge-gpu-deploy-virtual-machine-custom-script-extension/enable-compute-network-1.png)
## Install Custom Script Extension
databox-online Azure Stack Edge Gpu Deploy Virtual Machine Install Gpu Extension https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox-online/azure-stack-edge-gpu-deploy-virtual-machine-install-gpu-extension.md
Previously updated : 05/27/2021 Last updated : 07/13/2021 #Customer intent: As an IT admin, I need to understand how to install GPU extension on GPU virtual machines (VMs) on my Azure Stack Edge Pro GPU device.
[!INCLUDE [applies-to-GPU-and-pro-r-skus](../../includes/azure-stack-edge-applies-to-gpu-pro-r-sku.md)]
-This article describes how to install GPU driver extension to install appropriate Nvidia drivers on the GPU VMs running on your Azure Stack Edge device. The article covers installation steps for GPU extension on both Windows and Linux VMs.
+This article describes how to use the GPU driver extension to install appropriate Nvidia drivers on the GPU VMs running on your Azure Stack Edge device. It covers the steps for installing a GPU extension using Azure Resource Manager templates on both Windows and Linux VMs.
+
+> [!NOTE]
+> In the Azure portal, you can install a GPU extension during VM creation or after the VM is deployed. For steps and requirements, see [Install a GPU virtual machine](azure-stack-edge-gpu-deploy-gpu-virtual-machine.md).
## Prerequisites
Before you install GPU extension on the GPU VMs running on your device, make sur
Here is an example where Port 2 was connected to the internet and was used to enable the compute network. If Kubernetes is not deployed on your environment, you can skip the Kubernetes node IP and external service IP assignment.
- ![Enable compute settings on port connected to internet](media/azure-stack-edge-gpu-deploy-gpu-virtual-machine/enable-compute-network-1.png)
+ ![Enable compute settings on port connected to internet](media/azure-stack-edge-gpu-deploy-virtual-machine-install-gpu-extension/enable-compute-network-1.png)
1. [Download the GPU extension templates and parameters files](https://aka.ms/ase-vm-templates) to your client machine. Unzip it into a directory you'll use as a working directory. 1. Verify that the client you'll use to access your device is still connected to the Azure Resource Manager over Azure PowerShell. The connection to Azure Resource Manager expires every 1.5 hours or if your Azure Stack Edge device restarts. If this happens, any cmdlets that you execute, will return error messages to the effect that you are not connected to Azure anymore. You will need to sign in again. For detailed instructions, see [Connect to Azure Resource Manager on your Azure Stack Edge device](azure-stack-edge-gpu-connect-resource-manager.md). -- ## Edit parameters file Depending on the operating system for your VM, you could install GPU extension for Windows or for Linux.
PS C:\Program Files\NVIDIA Corporation\NVSMI>
For more information, see [Nvidia GPU driver extension for Windows](../virtual-machines/extensions/hpccompute-gpu-windows.md).
+> [!NOTE]
+> After you finish installing the GPU driver and GPU extension, you no longer need to use a port with Internet access for compute.
+ ### [Linux](#tab/linux) Follow these steps to verify the driver installation:
Follow these steps to verify the driver installation:
For more information, see [Nvidia GPU driver extension for Linux](../virtual-machines/extensions/hpccompute-gpu-linux.md).
+> [!NOTE]
+> After you finish installing the GPU driver and GPU extension, you no longer need to use a port with Internet access for compute.
++
RequestId IsSuccessStatusCode StatusCode ReasonPhrase
Learn how to:
+- [Troubleshoot GPU extension issues](azure-stack-edge-gpu-troubleshoot-virtual-machine-gpu-extension-installation.md)
+- [Monitor VM activity on your device](azure-stack-edge-gpu-monitor-virtual-machine-activity.md)
- [Manage VM disks](azure-stack-edge-gpu-manage-virtual-machine-disks-portal.md). - [Manage VM network interfaces](azure-stack-edge-gpu-manage-virtual-machine-network-interfaces-portal.md). - [Manage VM sizes](azure-stack-edge-gpu-manage-virtual-machine-resize-portal.md).
databox-online Azure Stack Edge Gpu Deploy Virtual Machine Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox-online/azure-stack-edge-gpu-deploy-virtual-machine-portal.md
Previously updated : 05/14/2021 Last updated : 07/14/2021 # Customer intent: As an IT admin, I need to understand how to configure compute on an Azure Stack Edge Pro GPU device so that I can use it to transform data before I send it to Azure.
[!INCLUDE [applies-to-GPU-and-pro-r-and-mini-r-skus](../../includes/azure-stack-edge-applies-to-gpu-pro-r-mini-r-sku.md)]
-You can create and manage virtual machines (VMs) on an Azure Stack Edge Pro GPU device by using the Azure portal, templates, and Azure PowerShell cmdlets, and via the Azure CLI or Python scripts. This article describes how to create and manage a VM on your Azure Stack Edge Pro GPU device by using the Azure portal.
+You can create and manage virtual machines (VMs) on an Azure Stack Edge Pro GPU device by using the Azure portal, templates, and Azure PowerShell cmdlets, and via the Azure CLI or Python scripts. This article describes how to create and manage a VM on your Azure Stack Edge Pro GPU device by using the Azure portal.
> [!IMPORTANT]
> We recommend that you enable multifactor authentication for the user who manages VMs that are deployed on your device from the cloud.
Follow these steps to create a VM on your Azure Stack Edge Pro GPU device.
For information about preparing the VHD, see [Prepare a generalized image from a Windows VHD](azure-stack-edge-gpu-prepare-windows-vhd-generalized-image.md).
-1. In the Azure portal, go to the Azure Stack Edge resource for your device. Go to **Edge Services** > **Virtual machines**.
+ [Troubleshoot VM image uploads](azure-stack-edge-gpu-troubleshoot-virtual-machine-image-upload.md).
+
+1. In the Azure portal, go to the Azure Stack Edge resource for your device. Then go to **Edge services** > **Virtual machines**.
![Screenshot that shows Edge Services and Virtual machines.](media/azure-stack-edge-gpu-deploy-virtual-machine-portal/add-virtual-machine-image-1.png)
-1. Select **Virtual Machines** to go to the **Overview** page. Select **Enable** to enable virtual machine cloud management.
+1. On the **Overview** page, select **Enable** to enable virtual machine cloud management.
![Screenshot that shows the Overview page with the Enable button.](media/azure-stack-edge-gpu-deploy-virtual-machine-portal/add-virtual-machine-image-2.png)

1. The first step is to add a VM image. You've already uploaded a VHD into the storage account in the earlier step. You'll use this VHD to create a VM image.
- Select **Add** to download the VHD from the storage account and add it to the device. The download process takes several minutes depending on the size of the VHD and the internet bandwidth available for the download.
-
- ![Screenshot that shows the Overview page with the Add button.](media/azure-stack-edge-gpu-deploy-virtual-machine-portal/add-virtual-machine-image-3.png)
+ Select **+ Add image** to download the VHD from the storage account and add it to the device. The download process takes several minutes depending on the size of the VHD and the internet bandwidth available for the download.
-1. On the **Add image** pane, input the following parameters. Select **Add**.
+ ![Screenshot that shows the Overview page with the Add image button.](media/azure-stack-edge-gpu-deploy-virtual-machine-portal/add-virtual-machine-image-3.png)
+1. On the **Add image** pane, make the following field entries. Then select **Add**.
- |Parameter |Description |
+ |Field |Description |
|||
|Download from storage blob |Browse to the location of the storage blob in the storage account where you uploaded the VHD. |
|Download to | Automatically set to the current device where you're deploying the VM. |
+ |Edge resource group |Select the resource group to add the image to. |
|Save image as | The name for the VM image that you're creating from the VHD you uploaded to the storage account. |
|OS type |Choose from Windows or Linux as the operating system of the VHD you'll use to create the VM image. |

![Screenshot that shows the Add image page with the Add button.](media/azure-stack-edge-gpu-deploy-virtual-machine-portal/add-virtual-machine-image-6.png)
-1. The VHD is downloaded, and the VM image is created. The image creation takes several minutes to complete. You'll see a notification for the successful completion of the VM image.
+1. The VHD is downloaded, and the VM image is created. Image creation takes several minutes to complete. You'll see a notification for the successful completion of the VM image.<!--There's a fleeting notification that image creation is in progress, but I didn't see any notification that image creation completed successfully.-->
![Screenshot that shows the notification for successful completion.](media/azure-stack-edge-gpu-deploy-virtual-machine-portal/add-virtual-machine-image-8.png)
Follow these steps to create a VM on your Azure Stack Edge Pro GPU device.
Follow these steps to create a VM after you've created a VM image.
-1. On the **Overview** page, select **Add virtual machine**.
+1. On the **Overview** page for **Virtual machines**, select **+ Add virtual machine**.
![Screenshot that shows the Overview page and the Add virtual machine button.](media/azure-stack-edge-gpu-deploy-virtual-machine-portal/add-virtual-machine-1.png)

1. On the **Basics** tab, input the following parameters.
-
 |Parameter |Description |
 |||
- |Virtual machine name | |
+ |Virtual machine name | Enter a name for the new virtual machine. |
|Edge resource group | Create a new resource group for all the resources associated with the VM. |
|Image | Select from the VM images available on the device. |
- |Size | Choose from the [Supported VM sizes](azure-stack-edge-gpu-virtual-machine-sizes.md). |
+ |Size | Choose from the [Supported VM sizes](azure-stack-edge-gpu-virtual-machine-sizes.md).<br>For a GPU VM, select a [VM size from NCasT4-v3-series](azure-stack-edge-gpu-virtual-machine-sizes.md#ncast4_v3-series-preview). |
|Username | Use the default username **azureuser** for the admin to sign in to the VM. |
|Authentication type | Choose from an SSH public key or a user-defined password. |
- |Password | Enter a password to sign in to the VM. The password must be at least 12 characters long and meet the defined [complexity requirements](../virtual-machines/windows/faq.yml#what-are-the-password-requirements-when-creating-a-vm-). |
+ |SSH public key | Displayed when you select the **SSH public key** authentication type. Paste in the SSH public key. |
+ |Password | Displayed when you select the **Password** authentication type. Enter a password to sign in to the VM. The password must be at least 12 characters long and meet the defined [complexity requirements](../virtual-machines/windows/faq.yml#what-are-the-password-requirements-when-creating-a-vm-). |
|Confirm password | Enter the password again. |
-
- ![Screenshot that shows the Basics tab.](media/azure-stack-edge-gpu-deploy-virtual-machine-portal/add-virtual-machine-basics-1.png)
+ ![Screenshot showing the Basics tab.](media/azure-stack-edge-gpu-deploy-virtual-machine-portal/add-virtual-machine-basics-1.png)
Select **Next: Disks**.
Follow these steps to create a VM after you've created a VM image.
1. On the **Networking** tab, you'll configure the network connectivity for your VM.
-
|Parameter |Description |
|||
|Virtual network | From the dropdown list, select the virtual switch created on your Azure Stack Edge device when you enabled compute on the network interface. |
Follow these steps to create a VM after you've created a VM image.
![Screenshot that shows the Networking tab.](media/azure-stack-edge-gpu-deploy-virtual-machine-portal/add-virtual-machine-networking-1.png)
- Select **Next: Advanced**.
+ Select **Next: Advanced**. On the **Advanced** tab, you can select an extension to install during VM deployment, and you can specify a `cloud-init` script to customize your VM.
-1. On the **Advanced** tab, you can specify the custom data or the cloud-init to customize your VM.
+1. If you want to install an extension on your VM when you create it, choose **Select an extension to install**. Then select the extension on the **Add extension** screen.
- You can use cloud-init to customize a VM on its first boot. Use the cloud-init to install packages and write files, or to configure users and security. As cloud-init runs during the initial boot process, no other steps are required to apply your configuration. For more information on cloud-init, see [Cloud-init overview](../virtual-machines/linux/tutorial-automate-vm-deployment.md#cloud-init-overview).
+ For detailed steps to install a GPU extension during VM deployment, see [Deploy GPU VMs](azure-stack-edge-gpu-deploy-gpu-virtual-machine.md#create-gpu-vms).
- ![Screenshot that shows the Advanced tab.](media/azure-stack-edge-gpu-deploy-virtual-machine-portal/add-virtual-machine-advanced-1.png)
+ ![Screenshot that shows an extension added to the Advanced tab.](media/azure-stack-edge-gpu-deploy-virtual-machine-portal/add-extension-01.png)
+
+1. If you want to use the `cloud-init` utility to customize the new VM on its first boot, on the **Advanced** tab, paste your `cloud-init` script into the **Custom data** box under **Custom data and cloud init**.
+
+ For more information about using `cloud-init`, see [Cloud-init overview](../virtual-machines/linux/tutorial-automate-vm-deployment.md#cloud-init-overview).
+
+ ![Screenshot that shows the Advanced tab with a cloud init script in the Custom data box.](media/azure-stack-edge-gpu-deploy-virtual-machine-portal/add-virtual-machine-advanced-tab-with-cloud-init-script.png)
Select **Next: Review + Create**.
Follow these steps to create a VM after you've created a VM image.
![Screenshot that shows the Deployments page.](media/azure-stack-edge-gpu-deploy-virtual-machine-portal/add-virtual-machine-deployments-page-1.png)
-
-1. After the VM is successfully created, the **Overview** page updates to display the new VM.
+1. After the VM is successfully created, you'll see your new VM on the **Overview** pane.
- ![Screenshot that shows the Overview page with the new VM listed.](media/azure-stack-edge-gpu-deploy-virtual-machine-portal/add-virtual-machine-overview-page-1.png)
+ ![Screenshot that shows the Overview pane with a new VM identified.](media/azure-stack-edge-gpu-deploy-virtual-machine-portal/add-virtual-machine-overview-page-1.png)
1. Select the newly created VM to go to **Virtual machines**.
Follow these steps to connect to a Windows VM.
## Next steps
-To learn how to administer your Azure Stack Edge Pro GPU device, see [Use local web UI to administer an Azure Stack Edge Pro GPU](azure-stack-edge-manage-access-power-connectivity-mode.md).
+- [Deploy a GPU VM](azure-stack-edge-gpu-deploy-gpu-virtual-machine.md)
+- [Troubleshoot VM deployment](azure-stack-edge-gpu-troubleshoot-virtual-machine-provisioning.md)
+- [Monitor VM activity on your device](azure-stack-edge-gpu-monitor-virtual-machine-activity.md)
+- [Monitor CPU and memory on a VM](azure-stack-edge-gpu-monitor-virtual-machine-metrics.md)
+
databox-online Azure Stack Edge Gpu Manage Edge Resource Groups Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox-online/azure-stack-edge-gpu-manage-edge-resource-groups-portal.md
+
+ Title: Manage Edge resource groups on your Azure Stack Edge Pro GPU device
+description: Learn how to manage Edge resource groups on your Azure Stack Edge Pro GPU, Azure Stack Edge Pro R, and Azure Stack Edge Mini R device via the Azure portal.
++++++ Last updated : 07/23/2021+
+# Customer intent: As an IT admin, I need a quick way to get rid of resource groups no longer in use that were created for VMs on my Azure Stack Edge Pro GPU devices.
++
+# Manage Edge resource groups on Azure Stack Edge Pro GPU devices
++
+Edge resource groups contain resources that are created on the device via the local Azure Resource Manager during virtual machine creation and deployment. These local resources can include virtual machines, VM images, disks, network interfaces, and other resource types such as Edge storage accounts.
+
+This article describes how to view and delete Edge resource groups on an Azure Stack Edge Pro GPU device.
+
+## View Edge resource groups
+
+Follow these steps to view the Edge resource groups for the current subscription.
+
+1. Go to **Virtual machines** on your device, and go to the **Resources** pane. Select **Edge resource groups**.
+
+ ![Screenshot showing Edge resource groups for virtual machines deployed on an Azure Stack Edge device.-1](media/azure-stack-edge-gpu-manage-edge-resource-groups-portal/edge-resource-groups-01.png)
+
+ > [!NOTE]
+ > You can get the same listing by using [Get-AzResource](/powershell/module/az.resources/get-azresource?view=azps-6.1.0&preserve-view=true) in Azure PowerShell after you set up the Azure Resource Manager environment on your device. For more information, see [Connect to Azure Resource Manager](azure-stack-edge-gpu-connect-resource-manager.md).
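+
+   For example, a minimal sketch after you connect; the Edge resource group name is a placeholder:
+
+   ```powershell
+   # List all the resources in one Edge resource group on the device.
+   Get-AzResource -ResourceGroupName "myase-edge-rg"
+   ```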
++
+## Delete an Edge resource group
+
+Follow these steps to delete an Edge resource group that's no longer in use.
+
+> [!NOTE]
+> - A resource group must be empty to be deleted.
+> - You can't delete the ASERG resource group. That resource group stores the ASEVNET virtual network, which is created automatically when you enable compute on your device.
+
+1. Go to **Virtual machines** on your device, and go to the **Resources** pane. Select **Edge resource groups**.
+
+ ![Screenshot showing Edge resource groups for virtual machines deployed on an Azure Stack Edge device.-2](media/azure-stack-edge-gpu-manage-edge-resource-groups-portal/edge-resource-groups-01.png)
+
+1. Select the resource group that you want to delete. In the far right of the resource group, select the delete icon (trashcan).
+
+ The delete icon is only displayed when a resource group doesn't contain any resources.
+
+ ![Screenshot showing an Edge resource group with the delete icon selected.](media/azure-stack-edge-gpu-manage-edge-resource-groups-portal/edge-resource-groups-02.png)
+
+1. You'll see a message asking you to confirm that you want to delete the Edge resource group. The operation can't be reversed. Select **Yes**.
+
+ ![To delete an Edge resource group, select the trashcan icon to the right of the entry in the list of resource groups](./media/azure-stack-edge-gpu-manage-edge-resource-groups-portal/edge-resource-groups-03.png)
+
+ When deletion is complete, the resource group is removed from the list.
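+
+You can also script the cleanup. A minimal sketch with Azure PowerShell, assuming you've connected to the device's local Azure Resource Manager; the group name is a placeholder:
+
+```powershell
+# Delete an empty Edge resource group from the device.
+# The group must contain no resources (see the note above).
+Remove-AzResourceGroup -Name "myase-old-rg"
+```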
+
+## Next steps
+
+- To learn how to administer your Azure Stack Edge Pro GPU device, see [Use local web UI to administer an Azure Stack Edge Pro GPU](azure-stack-edge-manage-access-power-connectivity-mode.md).
+
+- [Set up the Azure Resource Manager environment on your device](azure-stack-edge-gpu-connect-resource-manager.md).
databox-online Azure Stack Edge Gpu Manage Virtual Machine Disks Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox-online/azure-stack-edge-gpu-manage-virtual-machine-disks-portal.md
Title: Manage VMs disks on Azure Stack Edge Pro GPU, Pro R, Mini R via Azure portal
-description: Learn how to manage disks including add or detach a data disk on VMs that are deployed on your Azure Stack Edge Pro GPU, Azure Stack Edge Pro R, and Azure Stack Edge Mini R via the Azure portal.
+description: Learn how to manage disks, including adding, resizing, detaching, and deleting data disks, for VMs deployed on your Azure Stack Edge Pro GPU, Azure Stack Edge Pro R, and Azure Stack Edge Mini R via the Azure portal.
Previously updated : 03/30/2021 Last updated : 07/13/2021 Customer intent: As an IT admin, I need to understand how to manage disks on a VM running on an Azure Stack Edge Pro device so that I can use it to run applications using Edge compute before sending it to Azure.
Customer intent: As an IT admin, I need to understand how to manage disks on a V
[!INCLUDE [applies-to-GPU-and-pro-r-and-mini-r-skus](../../includes/azure-stack-edge-applies-to-gpu-pro-r-mini-r-sku.md)]
-You can provision disks on the virtual machines (VMs) deployed on your Azure Stack Edge Pro device using the Azure portal. The disks are provisioned on the device via the local Azure Resource Manager and consume the device capacity. The operations such as adding a disk, detaching a disk can be done via the Azure portal, which in turn makes calls to the local Azure Resource Manager to provision the storage.
+You can provision disks on the virtual machines (VMs) deployed on your Azure Stack Edge Pro device using the Azure portal. The disks are provisioned on the device via the local Azure Resource Manager and consume the device capacity. Operations such as adding, detaching, and deleting a disk can be done via the Azure portal, which in turn makes calls to the local Azure Resource Manager to provision the storage.
-This article explains how to add a data disk to an existing VM, detach a data disk, and finally resize the VM itself via the Azure portal.
+This article explains how to add, detach, and delete data disks from an existing VM, and how to resize the VM itself, via the Azure portal.
## About disks on VMs
This article explains how to add a data disk to an existing VM, detach a data di
Your VM can have an OS disk and a data disk. Every virtual machine deployed on your device has one attached operating system disk. This OS disk has a pre-installed OS, which was selected when the VM was created. This disk contains the boot volume.

> [!NOTE]
-> You cannot change the OS disk size for the VM on your device. The OS disk size is determined by the VM size that you have selected.
-
+> You cannot change the OS disk size for a VM deployed on your device. The OS disk size is determined by the VM size that you selected.
A data disk, on the other hand, is a managed disk attached to the VM running on your device. A data disk is used to store application data. Data disks are typically SCSI drives. The size of the VM determines how many data disks you can attach to a VM. By default, premium storage is used to host the disks.
Before you begin to manage disks on the VMs running on your device via the Azure
1. You have at least one VM deployed on your device. To create this VM, see the instructions in [Deploy VM on your Azure Stack Edge Pro via the Azure portal](azure-stack-edge-gpu-deploy-virtual-machine-portal.md).
-
## Add a data disk
-Follow these steps to add a disk to a virtual machine deployed on your device.
+Follow these steps to add a disk to a virtual machine deployed on your device.
-1. Go to the virtual machine to which you want to add a data disk and then go to the **Overview** page. Select **Disks**.
+1. Go to the virtual machine to which you want to add a data disk, and select **Disks** in the virtual machine **Details**.
![Select Disks on Overview page](./media/azure-stack-edge-gpu-manage-virtual-machine-disks-portal/add-data-disk-1.png)
Follow these steps to add a disk to a virtual machine deployed on your device.
|Field |Description |
|||
|Name | A unique name within the resource group. The name cannot be changed after the data disk is created. |
+ |Edge resource group |Enter the Edge resource group in which to store the new disk.|
|Size| The size of your data disk in GiB. The maximum size of a data disk is determined by the VM size that you have selected. When provisioning a disk, you should also consider the actual space on your device and other VM workloads that are running that consume capacity. |

![Create a new disk blade](./media/azure-stack-edge-gpu-manage-virtual-machine-disks-portal/add-data-disk-3.png)

Select **OK** and proceed.
-1. In the **Overview** page, under **Disks**, you'll see an entry corresponding to the new disk. Accept the default or assign a valid Logical Unit Number (LUN) to the disk and select **Save**. A LUN is a unique identifier for a SCSI disk. For more information, see [What is a LUN?](../virtual-machines/linux/azure-to-guest-disk-mapping.md#what-is-a-lun).
+1. In the **Disks** display, you'll see an entry corresponding to the new disk. Accept the default or assign a valid Logical Unit Number (LUN) to the disk, and select **Save**. A LUN is a unique identifier for a SCSI disk. For more information, see [What is a LUN?](../virtual-machines/linux/azure-to-guest-disk-mapping.md#what-is-a-lun).
![New disk on Overview page](./media/azure-stack-edge-gpu-manage-virtual-machine-disks-portal/add-data-disk-4.png)
Follow these steps to add a disk to a virtual machine deployed on your device.
![Notification for disk creation](./media/azure-stack-edge-gpu-manage-virtual-machine-disks-portal/add-data-disk-5.png)
-1. Navigate back to the **Overview** page. The list of disks updates to display the newly created data disk.
+1. Navigate back to the virtual machine **Details** page. The list of disks updates to display the newly created data disk.
![Updated list of data disks](./media/azure-stack-edge-gpu-manage-virtual-machine-disks-portal/add-data-disk-6.png)
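
You can also add a data disk from Azure PowerShell. A minimal sketch, assuming the device's local Azure Resource Manager supports the standard Az.Compute data-disk flow; the resource group, VM, and disk names are placeholders:

```powershell
# Get the VM, define a new empty 10-GiB data disk at LUN 1, and apply it.
$vm = Get-AzVM -ResourceGroupName "myase-vm-rg" -Name "myasevm1"
$vm = Add-AzVMDataDisk -VM $vm -Name "myasevm1-data1" -Lun 1 `
        -CreateOption Empty -DiskSizeInGB 10
Update-AzVM -ResourceGroupName "myase-vm-rg" -VM $vm
```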
Follow these steps to add a disk to a virtual machine deployed on your device.
Follow these steps to change a disk associated with a virtual machine deployed on your device.
-1. Go to the virtual machine which has the data disk to change and go to the **Overview** page. Select **Disks**.
+1. Go to the virtual machine that has the data disk to change, and select **Disks** in the virtual machine **Details**.
1. In the list of data disks, select the disk that you wish to change. At the far right of the selected disk, select the edit icon (pencil).

 ![Select a disk to change](./media/azure-stack-edge-gpu-manage-virtual-machine-disks-portal/edit-data-disk-1.png)
-1. In the **Change disk** blade, you can only change the size of the disk. The name associated with the disk can't be changed once it is created. Change the **Size** and save the changes.
+1. In the **Change disk** blade, you can only change the size of the disk. You can't change the name of a disk once it's created. Change the **Size** of the disk, and save the change.
![Change size of the data disk](./media/azure-stack-edge-gpu-manage-virtual-machine-disks-portal/edit-data-disk-2.png)

> [!NOTE]
- > You can only expand a data disk, you can't shrink the disk.
+ > You can only expand a data disk. You can't shrink the disk.
-1. On the **Overview** page, the list of disks refreshes to display the updated disk.
+1. In the **Disks** display, the list of disks refreshes to display the updated disk.
## Attach an existing disk

Follow these steps to attach an existing disk to the virtual machine deployed on your device.
-1. Go to the virtual machine to which you wish to attach the existing disk and then go to the **Overview** page. Select **Disks**.
-
- ![Select Disks ](./media/azure-stack-edge-gpu-manage-virtual-machine-disks-portal/list-data-disks-1.png)
+1. Go to the virtual machine to which you wish to attach the existing disk, and select **Disks** in the virtual machine **Details**.
1. In the **Disks** blade, under **Data Disks**, select **Attach an existing disk**.

 ![Select attach an existing disk](./media/azure-stack-edge-gpu-manage-virtual-machine-disks-portal/attach-existing-data-disk-1.png)
-1. Accept default LUN or assign a valid LUN. Choose an existing data disk from the dropdown list. Select Save.
+1. Accept the default LUN or assign a valid LUN. Choose an existing data disk from the dropdown list. Select **Save**.
![Select an existing disk](./media/azure-stack-edge-gpu-manage-virtual-machine-disks-portal/attach-existing-data-disk-2.png)
-1. You'll see a notification that the virtual machine is updated. After the VM is updated, navigate back to the **Overview** page. Refresh the page to view the newly attached disk in the list of data disks.
+1. You'll see a notification that the virtual machine is updated. After the VM is updated, navigate back to the virtual machine **Details** page. Refresh the page to view the newly attached disk in the list of data disks.
![View updated list of data disks on Overview page](./media/azure-stack-edge-gpu-manage-virtual-machine-disks-portal/list-data-disks-2.png)
Follow these steps to detach or remove a data disk associated with a virtual mac
> [!NOTE]
> - You can remove a data disk while the VM is running. Make sure that nothing is actively using the disk before detaching it from the VM.
-> - If you detach a disk, it is not automatically deleted.
+> - If you detach a disk, it is not automatically deleted. To delete it, follow the steps in [Delete a data disk](#delete-a-data-disk), later in this article.
-1. Go to the virtual machine from which you wish to detach a data disk and go to the **Overview** page. Select **Disks**.
+1. Go to the virtual machine from which you wish to detach a data disk, and select **Disks** in the virtual machine **Details**.
![Select Disks](./media/azure-stack-edge-gpu-manage-virtual-machine-disks-portal/list-data-disks-1.png)
-1. In the list of disks, select the disk that you wish to detach. In the far right of the disk selected, select the detach icon (cross). The selected entry will be detached. Select **Save**.
+1. In the list of disks, select the disk that you wish to detach. At the far right of the selected disk, select the detach icon ("X"). The selected disk will be detached. Select **Save**.
![Select a disk to detach](./media/azure-stack-edge-gpu-manage-virtual-machine-disks-portal/detach-data-disk-1.png)
-1. After the disk is detached, the virtual machine is updated. Refresh the **Overview** page to view the updated list of data disks.
+1. After the disk is detached, the virtual machine is updated. Refresh the page to view the updated list of data disks.
![Select save](./media/azure-stack-edge-gpu-manage-virtual-machine-disks-portal/list-data-disks-2.png)
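
The detach operation can also be scripted. A minimal sketch with Az.Compute, using the same placeholder names as the earlier example:

```powershell
# Detach the data disk from the VM, then apply the change.
$vm = Get-AzVM -ResourceGroupName "myase-vm-rg" -Name "myasevm1"
Remove-AzVMDataDisk -VM $vm -DataDiskNames "myasevm1-data1"
Update-AzVM -ResourceGroupName "myase-vm-rg" -VM $vm
```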
+## Delete a data disk
+
+Follow these steps to delete a data disk that's not attached to a VM deployed on your device:
+
+> [!NOTE]
+> Before deleting a data disk, you must [detach the data disk from the VM](#detach-a-data-disk) if the disk is in use.
+
+1. Go to **Virtual machines** on your device, and go to the **Resources** pane. Select **Disks**.
+
+ ![In Resources for virtual machines, display Disks](./media/azure-stack-edge-gpu-manage-virtual-machine-disks-portal/delete-disk-1.png)
+
+1. In the list of disks, select the disk that you wish to delete. At the far right of the selected disk, select the delete icon (trashcan).
+
+ ![To delete an unattached disk, select the trashcan icon at the right end of the disk entry in the list](./media/azure-stack-edge-gpu-manage-virtual-machine-disks-portal/delete-disk-2.png)
+
+ If you don't see the delete icon, you can select the VM name in the **Attached VM** column and [detach the disk from the VM](#detach-a-data-disk).
+
+1. You'll see a message asking you to confirm that you want to delete the disk. The operation can't be reversed. Select **Yes**.
+
+ ![To delete an unattached disk, select the trashcan icon to the right of the entry in the list of disks](./media/azure-stack-edge-gpu-manage-virtual-machine-disks-portal/delete-disk-3.png)
+
+ When deletion is complete, the disk is removed from the list.
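+
+To script the deletion instead, a minimal sketch with Az.Compute; the disk must already be detached, and the names are placeholders:
+
+```powershell
+# Permanently delete an unattached managed disk on the device.
+Remove-AzDisk -ResourceGroupName "myase-vm-rg" -DiskName "myasevm1-data1" -Force
+```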
++
## Next steps

To learn how to deploy virtual machines on your Azure Stack Edge Pro device, see [Deploy virtual machines via the Azure portal](azure-stack-edge-gpu-deploy-virtual-machine-portal.md).
databox-online Azure Stack Edge Gpu Manage Virtual Machine Network Interfaces Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox-online/azure-stack-edge-gpu-manage-virtual-machine-network-interfaces-portal.md
Previously updated : 03/30/2021 Last updated : 07/26/2021
-# Customer intent: As an IT admin, I need to understand how to manage network interfaces on an Azure Stack Edge Pro device so that I can use it to run applications using Edge compute before sending it to Azure.
+# Customer intent: As an IT admin, I need to understand how to manage network interfaces on an Azure Stack Edge Pro device so that I can use it to run applications using Edge compute before sending it to Azure.<!--Does "it" refer to the device or to the virtual NICs?-->
# Use the Azure portal to manage network interfaces on the VMs on your Azure Stack Edge Pro GPU

[!INCLUDE [applies-to-GPU-and-pro-r-and-mini-r-skus](../../includes/azure-stack-edge-applies-to-gpu-pro-r-mini-r-sku.md)]
-You can create and manage virtual machines (VMs) on an Azure Stack Edge device using Azure portal, templates, Azure PowerShell cmdlets and via Azure CLI/Python scripts. This article describes how to manage the network interfaces on a VM running on your Azure Stack Edge device using the Azure portal.
+You can create and manage virtual machines (VMs) on an Azure Stack Edge device by using the Azure portal, templates, and Azure PowerShell cmdlets, or via Azure CLI/Python scripts. This article describes how to manage the network interfaces on a VM running on your Azure Stack Edge device using the Azure portal.
When you create a VM, you specify one virtual network interface to be created. You may want to add one or more network interfaces to the virtual machine after it is created. You may also want to change the default network interface settings for an existing network interface.
-This article explains how to add a network interface to an existing VM, change existing settings such as IP type (static vs. dynamic), and finally remove or detach an existing interface.
+This article explains how to add a network interface to an existing VM, change existing settings such as IP type (static vs. dynamic), and detach or delete an existing interface.
## About network interfaces on VMs
Before you begin to manage VMs on your device via the Azure portal, make sure th
1. Enable compute on the network interface. Azure Stack Edge Pro GPU creates and manages a virtual switch corresponding to that network interface.
-1. You have atleast one VM deployed on your device. To create this VM, see the instructions in [Deploy VM on your Azure Stack Edge Pro via the Azure portal](azure-stack-edge-gpu-deploy-virtual-machine-portal.md).
+1. You have at least one VM deployed on your device. To create this VM, see the instructions in [Deploy VM on your Azure Stack Edge Pro via the Azure portal](azure-stack-edge-gpu-deploy-virtual-machine-portal.md).
-1. Your VM should be in **Stopped** state. To stop your VM, go to **Virtual machines > Overview** and select the VM you want to stop. In the VM properties page, select **Stop** and then select **Yes** when prompted for confirmation. Before you add, edit, or delete network interfaces, you must stop the VM.
+1. Your VM should be in **Stopped** state. To stop your VM, go to **Virtual machines** and select the VM you want to stop. In the VM **Details** page, select **Stop** and then select **Yes** when prompted for confirmation. Before you add, edit, or delete network interfaces, you must stop the VM.
![Stop VM from VM properties page](./media/azure-stack-edge-gpu-manage-virtual-machine-network-interfaces-portal/stop-vm-2.png)

## Add a network interface
-Follow these steps to add a network interface to a virtual machine deployed on your device.
+Follow these steps to add a network interface to a virtual machine deployed on your device.<!--There's no obvious way to add a new NIC to a VM or to an Edge resource group in the portal. To update these procedures, I need to create my own test VM, which I can start and stop, create a new NIC for, detach a NIC from the stopped VM, etc.-->
-1. Go to the virtual machine that you have stopped and then go to the **VM Properties** page. Select **Networking**.
+1. Go to the virtual machine that you have stopped, and select **Networking**.
- ![Select Networking on VM properties page](./media/azure-stack-edge-gpu-manage-virtual-machine-network-interfaces-portal/add-nic-1.png)
+ ![Select Resources and then Networking on the Virtual machines page for a device](./media/azure-stack-edge-gpu-manage-virtual-machine-network-interfaces-portal/add-nic-1.png)
2. In the **Networking** blade, from the command bar, select **+ Add network interface**.
Follow these steps to add a network interface to a virtual machine deployed on y
3. In the **Add network interface** blade, enter the following parameters:
-
- |Column1 |Column2 |
- |||
- |Name | A unique name within the resource group. The name cannot be changed after the network interface is created. To manage multiple network interfaces easily, use the suggestions provided in the [Naming conventions](/azure/cloud-adoption-framework/ready/azure-best-practices/naming-and-tagging#resource-naming). |
- |Virtual network| The virtual network associated with the virtual switch created on your device when you enabled compute on the network interface. There is only one virtual network associated with your device. |
- |Subnet | A subnet within the selected virtual network. This field is automatically populated with the subnet associated with the network interface on which you enabled compute. |
- |IP assignment | A static or a dynamic IP for your network interface. The static IP should be an available, free IP from the specified subnet range. Choose dynamic if a DHCP server exists in the environment. |
+ |Field |Description |
+ ||-|
+ |Name | A unique name within the edge resource group. The name cannot be changed after the network interface is created. To manage multiple network interfaces easily, use the suggestions provided in the [Naming conventions](/azure/cloud-adoption-framework/ready/azure-best-practices/naming-and-tagging#resource-naming). |
+ |Select an edge resource group |Select the edge resource group to add the network interface to.|
+ |Virtual network| The virtual network associated with the virtual switch created on your device when you enabled compute on the network interface. There is only one virtual network associated with your device. |
+ |Subnet | A subnet within the selected virtual network. This field is automatically populated with the subnet associated with the network interface on which you enabled compute. |
+ |IP address assignment | A static or a dynamic IP for your network interface. The static IP should be an available, free IP from the specified subnet range. Choose dynamic if a DHCP server exists in the environment. |
- ![Add a network interface blade](./media/azure-stack-edge-gpu-manage-virtual-machine-network-interfaces-portal/add-nic-3.png)
+ ![Screenshot showing the options for adding a new network interface](./media/azure-stack-edge-gpu-manage-virtual-machine-network-interfaces-portal/add-nic-3.png)
4. You'll see a notification that the network interface creation is in progress.
- ![Notification when network interface is getting created](./media/azure-stack-edge-gpu-manage-virtual-machine-network-interfaces-portal/add-nic-4.png)
+ ![Notification when a network interface is getting created](./media/azure-stack-edge-gpu-manage-virtual-machine-network-interfaces-portal/add-nic-4.png)
5. After the network interface is successfully created, the list of network interfaces refreshes to display the newly created interface.
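
A scripted equivalent is sketched below, assuming the standard Az.Network cmdlets work against the device's local Azure Resource Manager; the names and subnet ID are placeholders, and `DBELocal` is the location that Azure Stack Edge uses for local resources:

```powershell
# Create a network interface with a dynamic IP in the device's virtual network.
$subnetId = "/subscriptions/.../virtualNetworks/ASEVNET/subnets/ASEVNETsubNet"
New-AzNetworkInterface -Name "myasevm1-nic2" `
    -ResourceGroupName "myase-vm-rg" `
    -Location DBELocal `
    -SubnetId $subnetId
```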
Follow these steps to add a network interface to a virtual machine deployed on y
Follow these steps to edit a network interface associated with a virtual machine deployed on your device.
-1. Go to the virtual machine that you have stopped and go to the **VM Properties** page. Select **Networking**.
+1. Go to the virtual machine that you have stopped, and select **Networking** in the virtual machine **Details**.
1. In the list of network interfaces, select the interface that you wish to edit. At the far right of the selected network interface, select the edit icon (pencil).
- ![Select a network interface to edit](./media/azure-stack-edge-gpu-manage-virtual-machine-network-interfaces-portal/edit-nic-1.png)
+ ![Screenshot showing the edit icon selected for a virtual network](./media/azure-stack-edge-gpu-manage-virtual-machine-network-interfaces-portal/edit-nic-1.png)
-1. In the **Edit network interface** blade, you can only change the IP assignment of the network interface. The name, virtual network, and subnet associated with the network interface can't be changed once it is created. Change the **IP assignment** to static and save the changes.
+1. In the **Edit network interface** blade, you can only change the IP assignment of the network interface. The name, edge resource group, virtual network, and subnet associated with the network interface can't be changed once it is created. Change the **IP assignment** to static, and save the changes.
- ![Change IP assignment for the network interface](./media/azure-stack-edge-gpu-manage-virtual-machine-network-interfaces-portal/edit-nic-2.png)
+ ![Screenshot showing how to change the IP assignment for a network interface](./media/azure-stack-edge-gpu-manage-virtual-machine-network-interfaces-portal/edit-nic-2.png)
1. The list of network interfaces refreshes to display the updated network interface.
Follow these steps to edit a network interface associated with a virtual machine
Follow these steps to detach or remove a network interface associated with a virtual machine deployed on your device.
-1. Go to the virtual machine that you have stopped and go to the **VM Properties** page. Select **Networking**.
+1. Go to the virtual machine that you have stopped, and select **Networking** in the virtual machine **Details**.
1. In the list of network interfaces, select the interface that you wish to detach. At the far right of the selected network interface, select the detach icon (unplug).
- ![Select a network interface to detach](./media/azure-stack-edge-gpu-manage-virtual-machine-network-interfaces-portal/detach-nic-1.png)
+ ![Screenshot showing the detach icon selected for a network interface attached to a virtual machine](./media/azure-stack-edge-gpu-manage-virtual-machine-network-interfaces-portal/detach-nic-1.png)
+
+1. You'll see a message asking you to confirm that you want to detach the network interface. Select **Yes**.
+
+ ![Screenshot showing the confirmation prompt for detaching a network interface from a virtual machine](./media/azure-stack-edge-gpu-manage-virtual-machine-network-interfaces-portal/detach-nic-2.png)
+
+ After the interface is completely detached, the list of network interfaces is refreshed to display the remaining interfaces.
++
+## Delete a network interface
+
+Follow these steps to delete a network interface that isn't attached to a virtual machine.
+
+1. Go to **Virtual machines**, and then to the **Resources** page. Select **Networking**.
+
+ ![Screenshot showing the Networking tab on the Resources page for virtual machines](./media/azure-stack-edge-gpu-manage-virtual-machine-network-interfaces-portal/delete-nic-1.png)
+
+1. On the **Networking** blade, select the delete icon (trashcan) by the network interface you want to delete. The delete icon is only displayed for network interfaces that aren't attached to a VM.
+
+ ![Screenshot showing the delete icon selected for an unattached network interface](./media/azure-stack-edge-gpu-manage-virtual-machine-network-interfaces-portal/delete-nic-2.png)
+
+1. You'll see a message asking you to confirm that you want to delete the network interface. The operation can't be reversed. Select **Yes**.
-1. After the interface is completely detached, the list of network interfaces is refreshed to display the remaining interfaces.
+ ![To delete an unattached network interface, select the trashcan icon to the right of the entry in the list of network interfaces](./media/azure-stack-edge-gpu-manage-virtual-machine-network-interfaces-portal/delete-nic-3.png)
+
+ After the deletion completes, the network interface is removed from the list.
## Next steps
databox-online Azure Stack Edge Gpu Manage Virtual Machine Resize Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox-online/azure-stack-edge-gpu-manage-virtual-machine-resize-portal.md
Previously updated : 03/30/2021 Last updated : 07/08/2021 Customer intent: As an IT admin, I need to understand how to resize VMs running on an Azure Stack Edge Pro device so that I can use it to run applications using Edge compute before sending it to Azure.
Before you resize a VM running on your device via the Azure portal, make sure th
Follow these steps to resize a virtual machine deployed on your device.
-1. Go to the virtual machine that you have stopped and then go to the **Overview** page. Select **VM size (change)**.
+1. Go to the virtual machine that you have stopped, and select **VM size (change)** in the virtual machine **Details**.
- ![Select VM Size Change on Overview page](./media/azure-stack-edge-gpu-manage-virtual-machine-resize-portal/change-vm-size-1.png)
+ ![Select VM Size Change in Details for the virtual machine](./media/azure-stack-edge-gpu-manage-virtual-machine-resize-portal/change-vm-size-1.png)
2. In the **Change VM size** blade, from the command bar, select the **VM size** and then select **Change**.
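
The resize can also be scripted. A minimal sketch, assuming the standard Az.Compute flow works against the device's local Azure Resource Manager; the VM must be stopped, and the names and size are placeholders:

```powershell
# Change the VM size and apply the update while the VM is stopped.
$vm = Get-AzVM -ResourceGroupName "myase-vm-rg" -Name "myasevm1"
$vm.HardwareProfile.VmSize = "Standard_DS2_v2"
Update-AzVM -ResourceGroupName "myase-vm-rg" -VM $vm
```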
databox-online Azure Stack Edge Gpu Monitor Virtual Machine Activity https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox-online/azure-stack-edge-gpu-monitor-virtual-machine-activity.md
+
+ Title: Monitor VM activity on Azure Stack Edge Pro GPU device
+description: Learn how to monitor VM activity on an Azure Stack Edge Pro GPU device in the Azure portal.
++++++ Last updated : 07/19/2021+
+# Customer intent: As an IT admin, I need to be able to review VM activity in real-time for the compute workloads on my Azure Stack Edge Pro GPU device.
++
+# Monitor VM activity on your Azure Stack Edge Pro GPU device
++
+This article describes how to view activity logs in the Azure portal for virtual machines on your Azure Stack Edge Pro GPU device.
+
+> [!NOTE]
+> You can zoom in on a VM's CPU and memory usage during periods of activity on the **Metrics** tab for the virtual machine. For more information, see [Monitor VM metrics](azure-stack-edge-gpu-monitor-virtual-machine-metrics.md).
+
+## View activity logs
+
+To view activity logs for the virtual machines on your Azure Stack Edge Pro GPU device, follow these steps:
+
+1. Go to the device and then to **Virtual Machines**. Select **Activity log**.
+
+ ![Screenshot showing the Activity logs view for virtual machines on an Azure Stack Edge device in the Azure portal](./media/azure-stack-edge-gpu-monitor-virtual-machine-activity/activity-log-01.png)<!--Shoot a new screen: Larger text; clearer. Lightbox treatment? Remove all MS info.-->
+
+ You'll see the VM guest logs for virtual machines on the device.
+
+1. Use filters above the list to target the activity you need to see.
+
+ ![Screenshot showing the Timespan filter for Activity for virtual machines on an Azure Stack Edge device in the Azure portal](./media/azure-stack-edge-gpu-monitor-virtual-machine-activity/activity-log-02.png)<!--Reshoot to remove pointer. Lightbox treatment?-->
+
+1. Click the down arrow by an operation name to view the associated activity.
+
+ ![Screenshot showing Activity logs view for virtual machines on an Azure Stack Edge device](./media/azure-stack-edge-gpu-monitor-virtual-machine-activity/activity-log-03.png)<!--Reshoot to remove pointer. May be able to replace drop-down only.-->
+
+On any **Activity log** pane in Azure, you can filter and sort activities, select columns to display, drill down to details for a specific activity, and get **Quick Insights** into errors, failed deployments, alerts, service health, and security changes over the last 24 hours. For more information about the logs and the filtering options, see [View activity logs](/azure/azure-resource-manager/management/view-activity-logs).
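+
+If you prefer to query from PowerShell, the standard activity-log cmdlet may surface the same entries. A sketch, assuming the events flow into your subscription's activity log; the resource group name is a placeholder:
+
+```powershell
+# Pull the last day of activity-log entries for the device's resource group.
+Get-AzActivityLog -ResourceGroupName "myase-rg" -StartTime (Get-Date).AddDays(-1)
+```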
+
+## Next steps
+
+- [Troubleshoot VM deployment](azure-stack-edge-gpu-troubleshoot-virtual-machine-provisioning.md)
+- [Collect VM guest logs in a Support package](azure-stack-edge-gpu-collect-virtual-machine-guest-logs.md)
databox-online Azure Stack Edge Gpu Monitor Virtual Machine Metrics https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox-online/azure-stack-edge-gpu-monitor-virtual-machine-metrics.md
+
+ Title: Monitor CPU, memory for VM on Azure Stack Edge Pro GPU device
+description: Learn to monitor CPU, memory metrics for VMs on Azure Stack Edge Pro GPU devices in Azure portal.
++++++ Last updated : 07/19/2021+
+# Customer intent: As an IT admin, I need to be able to get a quick read of CPU and memory usage by a virtual machine on my Azure Stack Edge Pro GPU device.
++
+# Monitor VM metrics for CPU, memory on Azure Stack Edge Pro GPU
++
+This article describes how to monitor CPU and memory metrics for a virtual machine on your Azure Stack Edge Pro GPU device.
+
+## About VM metrics
+
+The **Metrics** tab for a virtual machine lets you view CPU and memory metrics, adjusting the time period and zooming in on periods of interest.
+
+The VM metrics are based on CPU and memory usage data collected from the VM's guest operating system. Resource usage is sampled once per minute.
+
+If a device is disconnected, metrics are cached on the device. When the device is reconnected, the metrics are pushed from the cache, and the VM **Metrics** are updated.
+
+## Monitor CPU and memory metrics
+
+1. Open the device in the Azure portal, and go to **Virtual Machines**. Select the virtual machine, and select **Metrics**.
+
+ ![Metrics tab on the dashboard for a virtual machine](media/azure-stack-edge-gpu-monitor-virtual-machine-metrics/metrics-01.png)
+
+2. By default, the graphs show average CPU and memory usage for the previous hour. To see data for a different time period, select a different option beside **Show data for last**.
+
+ ![Metrics tab showing average CPU and memory usage for the last 12 hours](./media/azure-stack-edge-gpu-monitor-virtual-machine-metrics/metrics-02.png)
+
+3. Point anywhere in either chart with your mouse to display a vertical line with a hand that you can move left or right to view an earlier or later data sample. Click to open a detail view for that time period.
+
+ ![Screenshot showing how to hover over the 0 percent data line to view an earlier or later data sample.](./media/azure-stack-edge-gpu-monitor-virtual-machine-metrics/metrics-03.png)
++
+## Next steps
+
+- [Monitor VM activity on your device](azure-stack-edge-gpu-monitor-virtual-machine-activity.md)
+- [Collect VM guest logs in a Support package](azure-stack-edge-gpu-collect-virtual-machine-guest-logs.md)
databox-online Azure Stack Edge Gpu Overview Gpu Virtual Machines https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox-online/azure-stack-edge-gpu-overview-gpu-virtual-machines.md
+
+ Title: Overview of GPU VMs on your Azure Stack Edge Pro GPU device
+description: Describes use of virtual machines optimized for GPU-accelerated workloads on Azure Stack Edge Pro with GPU.
++++++ Last updated : 07/13/2021+
+#Customer intent: As an IT admin, I need to understand how to deploy and manage GPU-accelerated VM workloads on my Azure Stack Edge Pro GPU devices.
++
+# GPU virtual machines for Azure Stack Edge Pro GPU devices
++
+GPU-accelerated workloads on an Azure Stack Edge Pro GPU device require a GPU virtual machine. This article provides an overview of GPU VMs, including supported OSs, GPU drivers, and VM sizes. Deployment options for GPU VMs used with Kubernetes clusters are also discussed.
+
+## About GPU VMs
+
+Your Azure Stack Edge device may be equipped with one or two Nvidia Tesla T4 GPUs. To deploy GPU-accelerated VM workloads on these devices, use GPU-optimized VM sizes. For example, use the NC T4 v3-series to deploy inference workloads featuring T4 GPUs. For more information, see [NC T4 v3-series VMs](../virtual-machines/nct4-v3-series.md).
+
+To take advantage of the GPU capabilities of Azure N-series VMs, Nvidia GPU drivers must be installed. The Nvidia GPU driver extension installs appropriate Nvidia CUDA or GRID drivers. You can [install the GPU extensions using templates or via the Azure portal](#gpu-vm-deployment).
+
+You can [install and manage the extension by using Azure Resource Manager templates](azure-stack-edge-gpu-deploy-virtual-machine-install-gpu-extension.md) after VM deployment. In the Azure portal, you can install the GPU extension during or after you deploy a VM; for instructions, see [Deploy GPU VMs on your Azure Stack Edge device](azure-stack-edge-gpu-deploy-gpu-virtual-machine.md).
+
+If your device will have a Kubernetes cluster configured, be sure to review [deployment considerations for Kubernetes clusters](#gpu-vms-and-kubernetes) before you deploy GPU VMs.
+
+## Supported OS and GPU drivers
+
+The Nvidia GPU driver extensions for Windows and Linux support the following OS versions.
+
+### Supported OS for GPU extension for Windows
+
+This extension supports the following operating systems (OSs). Other versions may work but have not been tested in-house on GPU VMs running on Azure Stack Edge devices.
+
+| Distribution | Version |
+|||
+| Windows Server 2019 | Core |
+| Windows Server 2016 | Core |
+
+### Supported OS for GPU extension for Linux
+
+This extension supports the following OS distributions, depending on driver support for the specific OS version. Other versions may work but have not been tested in-house on GPU VMs running on Azure Stack Edge devices.
+
+| Distribution | Version |
+|||
+| Ubuntu | 18.04 LTS |
+| Red Hat Enterprise Linux | 7.4 |
+
+## GPU VM deployment
+
+You can deploy a GPU VM via the Azure portal or by using Azure Resource Manager templates. In the portal, you can install the GPU extension during or after VM creation; with templates, you install the extension after the VM is created.
+
+- **Portal:** In the Azure portal, you can quickly [install the GPU extension when you create a VM](azure-stack-edge-gpu-deploy-gpu-virtual-machine.md#create-gpu-vms) or [after VM deployment]().<!--Can they remove the GPU extension. Tomorrow, create a new GPU VM to test.-->
+
+- **Templates:** Using Azure Resource Manager templates, [you create a VM](azure-stack-edge-gpu-deploy-gpu-virtual-machine.md#install-gpu-extension-after-deployment) and then [install the GPU extension](azure-stack-edge-gpu-deploy-virtual-machine-install-gpu-extension.md).
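+
+Either way, you can check the result afterward. A minimal sketch, assuming the device's local Azure Resource Manager supports the standard compute extension cmdlets; the names are placeholders:
+
+```powershell
+# List the extensions installed on a VM and their provisioning state.
+Get-AzVMExtension -ResourceGroupName "myase-vm-rg" -VMName "myasevm1"
+```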
++
+## GPU VMs and Kubernetes
+
+Before you deploy GPU VMs on your device, review the following considerations if Kubernetes is configured on the device.
+
+#### For 1-GPU device
+
+- **Create a GPU VM followed by Kubernetes configuration on your device**: In this scenario, the GPU VM creation and Kubernetes configuration will both be successful. Kubernetes will not have access to the GPU in this case.
+
+- **Configure Kubernetes on your device followed by creation of a GPU VM**: In this scenario, Kubernetes will claim the GPU on your device, and the VM creation will fail because no GPU resources are available.
+
+#### For 2-GPU device
+
+- **Create a GPU VM followed by Kubernetes configuration on your device**: In this scenario, the GPU VM that you create will claim one GPU on your device. The Kubernetes configuration will also be successful, claiming the remaining GPU.
+
+- **Create two GPU VMs followed by Kubernetes configuration on your device**: In this scenario, the two GPU VMs will claim both GPUs on the device, and Kubernetes is configured successfully with no GPUs.
+
+- **Configure Kubernetes on your device followed by creation of a GPU VM**: In this scenario, Kubernetes will claim both GPUs on your device, and the VM creation will fail because no GPU resources are available.
+
+<!--Li indicated that this is fixed. If you have GPU VMs running on your device and Kubernetes is also configured, then anytime the VM is deallocated (when you stop or remove a VM using Stop-AzureRmVM or Remove-AzureRmVM), there is a risk that the Kubernetes cluster will claim all the GPUs available on the device. In such an instance, you will not be able to restart the GPU VMs deployed on your device or create GPU VMs. -->
+
+## Next steps
+- Learn how to [Deploy GPU VMs](azure-stack-edge-gpu-deploy-gpu-virtual-machine.md).
+- Learn how to [Install GPU extension](azure-stack-edge-gpu-deploy-virtual-machine-install-gpu-extension.md) on the GPU VMs running on your device.
databox-online Azure Stack Edge Gpu Sharing https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox-online/azure-stack-edge-gpu-sharing.md
Previously updated : 03/05/2021 Last updated : 07/01/2021
Many machine learning or other compute workloads may not need a dedicated GPU. G
## Using GPU with VMs
-On your Azure Stack Edge Pro device, a GPU can't be shared when deploying VM workloads. A GPU can only be mapped to one VM. This implies that you can only have one GPU VM on a device with one GPU and two VMs on a device that is equipped with two GPUs. There are other factors that must also be considered when using GPU VMs on a device that has Kubernetes configured for containerized workloads. For more information, see [GPU VMs and Kubernetes](azure-stack-edge-gpu-deploy-gpu-virtual-machine.md#gpu-vms-and-kubernetes).
+On your Azure Stack Edge Pro device, a GPU can't be shared when deploying VM workloads. A GPU can only be mapped to one VM. This implies that you can only have one GPU VM on a device with one GPU and two VMs on a device that is equipped with two GPUs. There are other factors that must also be considered when using GPU VMs on a device that has Kubernetes configured for containerized workloads. For more information, see [GPU VMs and Kubernetes](azure-stack-edge-gpu-overview-gpu-virtual-machines.md#gpu-vms-and-kubernetes).
## Using GPU with containers
databox-online Azure Stack Edge Gpu Troubleshoot Virtual Machine Gpu Extension Installation https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox-online/azure-stack-edge-gpu-troubleshoot-virtual-machine-gpu-extension-installation.md
For installation steps, see [Install GPU extension](./azure-stack-edge-gpu-deplo
**Suggested solution:** Prepare a new VM image that has an operating system that the GPU extension supports.
-* For a list of supported operating systems, see [Supported OS and GPU drivers for GPU VMs](./azure-stack-edge-gpu-deploy-gpu-virtual-machine.md#supported-os-and-gpu-drivers).
+* For a list of supported operating systems, see [Supported OS and GPU drivers for GPU VMs](./azure-stack-edge-gpu-overview-gpu-virtual-machines.md#supported-os-and-gpu-drivers).
* For image preparation requirements for a GPU VM, see [Create GPU VMs](./azure-stack-edge-gpu-deploy-gpu-virtual-machine.md#create-gpu-vms).
databox-online Azure Stack Edge Gpu Troubleshoot Virtual Machine Provisioning https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox-online/azure-stack-edge-gpu-troubleshoot-virtual-machine-provisioning.md
Previously updated : 06/21/2021 Last updated : 07/19/2021 # Troubleshoot VM deployment in Azure Stack Edge Pro GPU
If you try to deploy a VM on a GPU device that already has Kubernetes enabled, n
**Possible causes:** If Kubernetes is enabled before the VM is created, Kubernetes will use all the available GPUs, and you won't be able to create any GPU-size VMs. You can create as many GPU-size VMs as the number of available GPUs. Your Azure Stack Edge device can be equipped with 1 or 2 GPUs.
-**Suggested solution:** For VM deployment options on a 1-GPU or 2-GPU device with Kubernetes configured, see [GPU VMs and Kubernetes](azure-stack-edge-gpu-deploy-gpu-virtual-machine.md#gpu-vms-and-kubernetes).
+**Suggested solution:** For VM deployment options on a 1-GPU or 2-GPU device with Kubernetes configured, see [GPU VMs and Kubernetes](azure-stack-edge-gpu-overview-gpu-virtual-machines.md#gpu-vms-and-kubernetes).
## Next steps
-* [Collect a Support package that includes guest logs for a failed VM](azure-stack-edge-gpu-collect-virtual-machine-guest-logs.md)
-* [Troubleshoot issues with a failed GPU extension installation](azure-stack-edge-gpu-collect-virtual-machine-guest-logs.md)
-* [Troubleshoot issues with Azure Resource Manager](azure-stack-edge-gpu-troubleshoot-azure-resource-manager.md)
-
+- [Collect a Support package that includes guest logs for a failed VM](azure-stack-edge-gpu-collect-virtual-machine-guest-logs.md)<!--Does a failed VM have a guest log? Does it have GPU and memory metrics?-->
+- [Troubleshoot issues with a failed GPU extension installation](azure-stack-edge-gpu-troubleshoot-virtual-machine-gpu-extension-installation.md)
+- [Troubleshoot issues with Azure Resource Manager](azure-stack-edge-gpu-troubleshoot-azure-resource-manager.md)
databox-online Azure Stack Edge Gpu Virtual Machine Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox-online/azure-stack-edge-gpu-virtual-machine-overview.md
- Previously updated : 04/28/2021+ Last updated : 07/09/2021
To figure out the size and the number of VMs that you can deploy on your device,
|Master VM|4 cores, 4-GB RAM| |Worker VM|12 cores, 32-GB RAM| - For the usable compute and memory on your device, see the [Compute and memory specifications](azure-stack-edge-gpu-technical-specifications-compliance.md#compute-and-memory-specifications) for your device model.
+For a GPU virtual machine, you must use a [VM size from the NCasT4-v3-series](azure-stack-edge-gpu-virtual-machine-sizes.md#ncast4_v3-series-preview).
+ ### VM limits
-You can run a maximum of up to 24 VMs on your device. This is another factor to consider when deploying your workload.
+You can run a maximum of 24 VMs on your device. This is another factor to consider when deploying your workload.
### Operating system disks and images
dms Tutorial Azure Postgresql To Azure Postgresql Online Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/dms/tutorial-azure-postgresql-to-azure-postgresql-online-portal.md
To complete all the database objects like table schemas, indexes and stored proc
FROM information_schema.triggers ```
-## Register the Microsoft.DataMigration resource provider
-
-1. Sign in to the Azure portal, select **All services**, and then select **Subscriptions**.
-
- ![Show portal subscriptions](media/tutorial-azure-postgresql-to-azure-postgresql-online-portal/portal-select-subscriptions.png)
-
-2. Select the subscription in which you want to create the instance of Azure Database Migration Service, and then select **Resource providers**.
-
- ![Show resource providers](media/tutorial-azure-postgresql-to-azure-postgresql-online-portal/portal-select-resource-provider.png)
-
-3. Search for migration, and then to the right of **Microsoft.DataMigration**, select **Register**.
-
- ![Register resource provider](media/tutorial-azure-postgresql-to-azure-postgresql-online-portal/portal-register-resource-provider.png)
## Create a DMS instance
dms Tutorial Mongodb Cosmos Db Online https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/dms/tutorial-mongodb-cosmos-db-online.md
And if it is *Disabled*, then we recommend you enable it as shown below
![Screenshot of MongoDB Server-Side Retry enable.](media/tutorial-mongodb-to-cosmosdb-online/mongo-server-side-retry-enable.png)
-## Register the Microsoft.DataMigration resource provider
-
-1. Sign in to the Azure portal, select **All services**, and then select **Subscriptions**.
-
- ![Show portal subscriptions](media/tutorial-mongodb-to-cosmosdb-online/portal-select-subscription1.png)
-
-2. Select the subscription in which you want to create the instance of Azure Database Migration Service, and then select **Resource providers**.
-
- ![Show resource providers](media/tutorial-mongodb-to-cosmosdb-online/portal-select-resource-provider.png)
-
-3. Search for migration, and then to the right of **Microsoft.DataMigration**, select **Register**.
-
- ![Register resource provider](media/tutorial-mongodb-to-cosmosdb-online/portal-register-resource-provider.png)
## Create an instance
dms Tutorial Mongodb Cosmos Db https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/dms/tutorial-mongodb-cosmos-db.md
If the feature is disabled, select **Enable**.
![Screenshot that shows how to enable Server Side Retry.](media/tutorial-mongodb-to-cosmosdb/mongo-server-side-retry-enable.png)
-## Register the resource provider
-
-1. Sign in to the Azure portal, select **All services**, and then select **Subscriptions**.
-
- ![Screenshot that shows portal subscriptions.](media/tutorial-mongodb-to-cosmosdb/portal-select-subscription1.png)
-
-2. Select the subscription in which you want to create the instance of Azure Database Migration Service, and then select **Resource providers**.
-
- ![Screenshot that shows resource providers.](media/tutorial-mongodb-to-cosmosdb/portal-select-resource-provider.png)
-
-3. Search for migration, and then to the right of **Microsoft.DataMigration**, select **Register**.
-
- ![Screenshot that show how to register the resource provider.](media/tutorial-mongodb-to-cosmosdb/portal-register-resource-provider.png)
## Create an instance
dms Tutorial Mysql Azure Mysql Offline Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/dms/tutorial-mysql-azure-mysql-offline-portal.md
GROUP BY SchemaName
Run the generated drop trigger query (DropQuery column) in the result to drop triggers in the target database. The add trigger query can be saved and used after data migration completes.
-## Register the Microsoft.DataMigration resource provider
-
-Registration of the resource provider needs to be done on each Azure subscription only once. Without the registration, you will not be able to create an instance of **Azure Database Migration Service**.
-
-1. Sign in to the Azure portal, select **All services**, and then select **Subscriptions**.
-
- ![Show portal subscriptions](media/tutorial-mysql-to-azure-mysql-offline-portal/01-dms-portal-select-subscription.png)
-
-2. Select the subscription in which you want to create the instance of Azure Database Migration Service, and then select **Resource providers**.
-
-3. Search for migration, and then to the right of **Microsoft.DataMigration**, select **Register**.
-
- ![Register resource provider](media/tutorial-mysql-to-azure-mysql-offline-portal/02-dms-portal-register-rp.png)
## Create a Database Migration Service instance
After the service is created, locate it within the Azure portal, open it, and th
![Add target details screen](media/tutorial-mysql-to-azure-mysql-offline-portal/11-dms-portal-project-mysql-target.png)
-3. On the **Select databases** screen, map the source and the target database for migration, and select **Next : Configure migration settings>>**. You can select the **Make Source Server Readonly** option to make the source as read-only, but be cautious that this is a server level setting. If selected, it sets the entire server to read-only, not just the selected databases.
+3. On the **Select databases** screen, map the source and the target database for migration, and select **Next : Configure migration settings>>**. You can select the **Make Source Server Read Only** option to make the source as read-only, but be cautious that this is a server level setting. If selected, it sets the entire server to read-only, not just the selected databases.
If the target database contains the same database name as the source database, Azure Database Migration Service selects the target database by default. ![Select database details screen](media/tutorial-mysql-to-azure-mysql-offline-portal/12-dms-portal-project-mysql-select-db.png)
If you're not going to continue to use the Database Migration Service, then you
* For troubleshooting source database connectivity issues while using DMS, see the article [Issues connecting source databases](./known-issues-troubleshooting-dms-source-connectivity.md). * For information about Azure Database Migration Service, see the article [What is Azure Database Migration Service?](./dms-overview.md). * For information about Azure Database for MySQL, see the article [What is Azure Database for MySQL?](../mysql/overview.md).
-* For guidance about using DMS via PowerShell, see the article [PowerShell: Run offline migration from MySQL database to Azure Database for MySQL using DMS](./migrate-mysql-to-azure-mysql-powershell.md)
+* For guidance about using DMS via PowerShell, see the article [PowerShell: Run offline migration from MySQL database to Azure Database for MySQL using DMS](./migrate-mysql-to-azure-mysql-powershell.md)
dms Tutorial Postgresql Azure Postgresql Online Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/dms/tutorial-postgresql-azure-postgresql-online-portal.md
To complete all the database objects like table schemas, indexes and stored proc
> [!NOTE] > The migration service internally handles the enable/disable of foreign keys and triggers to ensure a reliable and robust data migration. As a result, you do not have to worry about making any modifications to the target database schema. -
-## Register the Microsoft.DataMigration resource provider
-
-1. Sign in to the Azure portal, select **All services**, and then select **Subscriptions**.
-
- ![Show portal subscriptions](media/tutorial-postgresql-to-azure-postgresql-online-portal/portal-select-subscriptions.png)
-
-2. Select the subscription in which you want to create the instance of Azure Database Migration Service, and then select **Resource providers**.
-
- ![Show resource providers](media/tutorial-postgresql-to-azure-postgresql-online-portal/portal-select-resource-provider.png)
-
-3. Search for migration, and then to the right of **Microsoft.DataMigration**, select **Register**.
-
- ![Register resource provider](media/tutorial-postgresql-to-azure-postgresql-online-portal/portal-register-resource-provider.png)
## Create a DMS instance
dms Tutorial Rds Postgresql Server Azure Db For Postgresql Online https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/dms/tutorial-rds-postgresql-server-azure-db-for-postgresql-online.md
To complete this tutorial, you need to:
> [!NOTE] > The migration service internally handles the enable/disable of foreign keys and triggers to ensure a reliable and robust data migration. As a result, you do not have to worry about making any modifications to the target database schema.
-## Register the Microsoft.DataMigration resource provider
-
-1. Sign in to the Azure portal, select **All services**, and then select **Subscriptions**.
-
- ![Show portal subscriptions](media/tutorial-rds-postgresql-server-azure-db-for-postgresql-online/portal-select-subscription1.png)
-
-2. Select the subscription in which you want to create the instance of Azure Database Migration Service, and then select **Resource providers**.
-
- ![Show resource providers](media/tutorial-rds-postgresql-server-azure-db-for-postgresql-online/portal-select-resource-provider.png)
-
-3. Search for migration, and then to the right of **Microsoft.DataMigration**, select **Register**.
-
- ![Register resource provider](media/tutorial-rds-postgresql-server-azure-db-for-postgresql-online/portal-register-resource-provider.png)
## Create an instance of Azure Database Migration Service
dms Tutorial Sql Server Managed Instance Online https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/dms/tutorial-sql-server-managed-instance-online.md
To complete this tutorial, you need to:
> [!NOTE] > When you migrate a database that's protected by [Transparent Data Encryption](../azure-sql/database/transparent-data-encryption-tde-overview.md) to a managed instance by using online migration, the corresponding certificate from the on-premises or Azure VM SQL Server instance must be migrated before the database restore. For detailed steps, see [Migrate a TDE cert to a managed instance](../azure-sql/database/transparent-data-encryption-tde-overview.md).
-## Register the Microsoft.DataMigration resource provider
-
-1. Sign in to the Azure portal, select **All services**, and then select **Subscriptions**.
-
- ![Show portal subscriptions](media/tutorial-sql-server-to-managed-instance-online/portal-select-subscriptions.png)
-
-2. Select the subscription in which you want to create the instance of Azure Database Migration Service, and then select **Resource providers**.
-
- ![Show resource providers](media/tutorial-sql-server-to-managed-instance-online/portal-select-resource-provider.png)
-
-3. Search for migration, and then to the right of **Microsoft.DataMigration**, select **Register**.
-
- ![Register resource provider](media/tutorial-sql-server-to-managed-instance-online/portal-register-resource-provider.png)
## Create an Azure Database Migration Service instance
dms Tutorial Sql Server To Azure Sql https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/dms/tutorial-sql-server-to-azure-sql.md
To migrate the **Adventureworks2016** schema to a single database or pooled data
![Deploy Schema](media/tutorial-sql-server-to-azure-sql/dma-schema-deploy.png)
-## Register the Microsoft.DataMigration resource provider
-
-1. Sign in to the Azure portal. Search for and select **Subscriptions**.
-
- ![Show portal subscriptions](media/tutorial-sql-server-to-azure-sql/portal-select-subscription-1.png)
-
-2. Select the subscription in which you want to create the instance of Azure Database Migration Service, and then select **Resource providers**.
-
- ![Show resource providers](media/tutorial-sql-server-to-azure-sql/portal-select-resource-provider.png)
-
-3. Search for migration, and then select **Register** for **Microsoft.DataMigration**.
-
- ![Register resource provider](media/tutorial-sql-server-to-azure-sql/portal-register-resource-provider.png)
## Create an instance
dms Tutorial Sql Server To Managed Instance https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/dms/tutorial-sql-server-to-managed-instance.md
To complete this tutorial, you need to:
> [!NOTE] > Azure Database Migration Service does not support using an account level SAS token when configuring the Storage Account settings during the [Configure Migration Settings](#configure-migration-settings) step.
-## Register the Microsoft.DataMigration resource provider
-
-1. Sign in to the Azure portal, select **All services**, and then select **Subscriptions**.
-
- ![Show portal subscriptions](media/tutorial-sql-server-to-managed-instance/portal-select-subscriptions.png)
-
-2. Select the subscription in which you want to create the instance of Azure Database Migration Service, and then select **Resource providers**.
-
- ![Show resource providers](media/tutorial-sql-server-to-managed-instance/portal-select-resource-provider.png)
-
-3. Search for migration, and then to the right of **Microsoft.DataMigration**, select **Register**.
-
- ![Register resource provider](media/tutorial-sql-server-to-managed-instance/portal-register-resource-provider.png)
## Create an Azure Database Migration Service instance
event-grid Security Authorization https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/event-grid/security-authorization.md
The Event Grid Contributor role allows you to create and manage Event Grid resou
| [Event Grid Subscription Reader](../role-based-access-control/built-in-roles.md#eventgrid-eventsubscription-reader) | Lets you read Event Grid event subscriptions. | | [Event Grid Subscription Contributor](../role-based-access-control/built-in-roles.md#eventgrid-eventsubscription-contributor) | Lets you manage Event Grid event subscription operations. | | [Event Grid Contributor](../role-based-access-control/built-in-roles.md#eventgrid-contributor) | Lets you create and manage Event Grid resources. |
+| [Event Grid Data Sender](../role-based-access-control/built-in-roles.md#eventgrid-data-sender) | Lets you send events to Event Grid topics. |
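+
+For example, to let an application send events to a custom topic, you can assign this role with Azure PowerShell. The following is a minimal sketch; the object ID, subscription, resource group, and topic name are placeholder values you must replace.
+
+```azurepowershell-interactive
+# Assign the Event Grid Data Sender role to a user or service principal,
+# scoped to a single custom topic. All IDs below are placeholders.
+New-AzRoleAssignment `
+  -ObjectId "00000000-0000-0000-0000-000000000000" `
+  -RoleDefinitionName "EventGrid Data Sender" `
+  -Scope "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.EventGrid/topics/<topic-name>"
+```
+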
> [!NOTE]
event-hubs Event Hubs Dotnet Standard Getstarted Send https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/event-hubs/event-hubs-dotnet-standard-getstarted-send.md
This section shows you how to create a .NET Core console application to send eve
{ // Use the producer client to send the batch of events to the event hub await producerClient.SendAsync(eventBatch);
- Console.WriteLine($"A batch of {numEvents} events has been published.");
+ Console.WriteLine($"A batch of {numOfEvents} events has been published.");
} finally {
expressroute How To Configure Connection Monitor https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/expressroute/how-to-configure-connection-monitor.md
+
+ Title: 'Configure Connection Monitor for Azure ExpressRoute'
+description: Configure cloud-based network connectivity monitoring for Azure ExpressRoute circuits. This covers monitoring over ExpressRoute private peering and Microsoft peering.
++++ Last updated : 07/28/2021++++
+# Configure Connection Monitor for ExpressRoute
+
+This article helps you configure a Connection Monitor extension to monitor ExpressRoute. Connection Monitor is a cloud-based network monitoring solution that monitors connectivity between Azure cloud deployments and on-premises locations (branch offices, etc.). Connection Monitor is part of Azure Monitor logs. The extension also lets you monitor network connectivity for your private and Microsoft peering connections. When you configure Connection Monitor for ExpressRoute, you can detect, identify, and eliminate network issues.
++
+With Connection Monitor for ExpressRoute you can:
+
+* Monitor loss and latency across various VNets and set alerts.
+
+* Monitor all paths (including redundant paths) on the network.
+
+* Troubleshoot transient and point-in-time network issues that are difficult to replicate.
+
+* Help determine a specific segment on the network that is responsible for degraded performance.
+
+## <a name="workflow"></a> Workflow
+
+Monitoring agents are installed on multiple servers, both on-premises and in Azure. The agents communicate with each other by sending TCP handshake packets. This communication allows Azure to map the network topology and the paths that traffic can take.
+
+1. Create a Log Analytics workspace.
+
+1. Install and configure software agents (not required if you only want to monitor over Microsoft peering):
+
+ * Install monitoring agents on the on-premises servers and the Azure VMs (for private peering).
+ * Configure settings on the monitoring agent servers to allow the monitoring agents to communicate. (Open firewall ports, etc.)
+
+1. Configure network security group (NSG) rules to allow the monitoring agent installed on Azure VMs to communicate with on-premises monitoring agents.
+
+1. Enable Network Watcher on your subscription.
+
+1. Set up monitoring: Create connection monitors with test groups to monitor source and destination endpoints across your network.
+
+If you're already using Network Performance Monitor (deprecated) or Connection Monitor to monitor other objects or services, and you already have a Log Analytics workspace in one of the supported regions, you can skip steps 1 and 2 and begin your configuration at step 3.
+
+## <a name="configure"></a> Create a workspace
+
+Create a workspace in the subscription that has the VNets linked to the ExpressRoute circuit(s).
+
+1. Sign in to the [Azure portal](https://portal.azure.com). From the subscription that has the virtual networks connected to your ExpressRoute circuit, select **+ Create a resource**. Search for *Log Analytics Workspace*, then select **Create**.
+
+ >[!NOTE]
+ >You can create a new workspace, or use an existing workspace. If you want to use an existing workspace, you must make sure that the workspace has been migrated to the new query language. [More information...](../azure-monitor/logs/log-query-overview.md)
+ >
+
+ :::image type="content" source="./media/how-to-configure-connection-monitor/search-log-analytics.png" alt-text="Screenshot of searching for Log Analytics in create a resource.":::
+
+1. Create a workspace by entering or selecting the following information.
+
+ | Settings | Value |
+ | -- | -- |
+ | Subscription | Select the subscription with the ExpressRoute circuit. |
+ | Resource Group | Create a new or select an existing resource group. |
+ | Name | Enter a name to identify this workspace. |
+ | Region | Select the region where this workspace will be created. |
+
+ :::image type="content" source="./media/how-to-configure-connection-monitor/create-workspace-basic.png" alt-text="Screenshot of basic tab for create Log Analytics workspace.":::
+
+ >[!NOTE]
+ >The ExpressRoute circuit can be anywhere in the world. It doesn't have to be in the same region as the workspace.
+ >
+
+1. Select **Review + Create** to validate, and then select **Create** to deploy the workspace. Once the workspace has been deployed, continue to the next section to configure the monitoring solution.
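+
+If you prefer scripting to the portal, the same workspace can be created with Azure PowerShell. This is a minimal sketch; the resource group, workspace name, and region are placeholder values.
+
+```azurepowershell-interactive
+# Create a Log Analytics workspace (placeholder names and region).
+New-AzOperationalInsightsWorkspace `
+  -ResourceGroupName "connection-monitor-rg" `
+  -Name "cm-workspace" `
+  -Location "westus2" `
+  -Sku "PerGB2018"
+```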
+
+## <a name="npm"></a>Configure monitoring solution
+
+Complete the Azure PowerShell script below by replacing the values for *$subscriptionId*, *$location*, *$resourceGroup*, and *$workspaceName*. Then run the script to configure the monitoring solution.
+
+```azurepowershell-interactive
+$subscriptionId = "Subscription ID should come here"
+Select-AzSubscription -SubscriptionId $subscriptionId
+
+$location = "Workspace location should come here"
+$resourceGroup = "Resource group name should come here"
+$workspaceName = "Workspace name should come here"
+
+$solution = @{
+ Location = $location
+ Properties = @{
+ workspaceResourceId = "/subscriptions/$($subscriptionId)/resourcegroups/$($resourceGroup)/providers/Microsoft.OperationalInsights/workspaces/$($workspaceName)"
+ }
+ Plan = @{
+ Name = "NetworkMonitoring($($workspaceName))"
+ Publisher = "Microsoft"
+ Product = "OMSGallery/NetworkMonitoring"
+ PromotionCode = ""
+ }
+ ResourceName = "NetworkMonitoring($($workspaceName))"
+ ResourceType = "Microsoft.OperationsManagement/solutions"
+ ResourceGroupName = $resourceGroup
+}
+
+New-AzResource @solution -Force
+```
+
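+To confirm that the solution resource was deployed, you can query for it using the variables from the script above; this check is a quick sketch, not a required step.
+
+```azurepowershell-interactive
+# Verify that the NetworkMonitoring solution now exists in the resource group.
+Get-AzResource `
+  -ResourceGroupName $resourceGroup `
+  -ResourceType "Microsoft.OperationsManagement/solutions" `
+  -Name "NetworkMonitoring($($workspaceName))"
+```
+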
+Once you've configured the monitoring solution, continue to the next step: installing and configuring the monitoring agents on your servers.
+
+## <a name="agents"></a>Install and configure agents on-premises
+
+### <a name="download"></a>Download the agent setup file
+
+1. Navigate to the **Log Analytics workspace** and select **Agents management** under *Settings*. Download the agent that corresponds to your machine's operating system.
+
+ :::image type="content" source="./media/how-to-configure-connection-monitor/download-agent.png" alt-text="Screenshot of agent management page in workspace.":::
+
+1. Next, copy the **Workspace ID** and **Primary Key** to Notepad.
+
+ :::image type="content" source="./media/how-to-configure-connection-monitor/copy-id-key.png" alt-text="Screenshot of workspace id and primary key.":::
+
+1. For Windows machines, download and run this PowerShell script [*EnableRules.ps1*](https://aka.ms/npmpowershellscript) in a PowerShell window with Administrator privileges. The PowerShell script will open the relevant firewall port for the TCP transactions.
+
+ For Linux machines, the port number needs to be changed manually with the following steps:
+
+ * Navigate to path: /var/opt/microsoft/omsagent/npm_state.
+ * Open file: npmdregistry
+ * Change the value for Port Number `PortNumber:<port of your choice>`
+
+### <a name="installagentonprem"></a>Install Log Analytics agent on each monitoring server
+
+We recommend that you install the Log Analytics agent on at least two servers on each side of the ExpressRoute connection for redundancy, for example, on-premises and in your Azure virtual network. Use the following steps to install the agents:
+
+1. Select the appropriate operating system below for the steps to install the Log Analytics agent on your servers.
+
+ * [Windows](../azure-monitor/agents/agent-windows.md#install-agent-using-setup-wizard)
+ * [Linux](../azure-monitor/agents/agent-linux.md)
+
+1. When complete, the Microsoft Monitoring Agent appears in the Control Panel. You can review your configuration, and [verify the agent connectivity](../azure-monitor/agents/agent-windows.md#verify-agent-connectivity-to-azure-monitor) to Azure Monitor logs.
+
+1. Repeat steps 1 and 2 for the other on-premises machines that you wish to use for monitoring.
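+
+To spot-check an installed Windows agent, you can verify its service state. This is a sketch rather than an official verification step; `HealthService` is the service name the Microsoft Monitoring Agent typically registers.
+
+```powershell
+# Confirm the Microsoft Monitoring Agent service is running on this server.
+# "HealthService" is the service name the agent typically registers.
+Get-Service -Name "HealthService" | Select-Object Name, Status, StartType
+```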
+
+### <a name="installagentazure"></a>Install Network Watcher agent on each monitoring server
+
+#### New Azure virtual machine
+
+If you're creating a new Azure VM for monitoring connectivity to your VNet, you can install the Network Watcher agent when [creating the VM](../network-watcher/connection-monitor.md#create-the-first-vm).
+
+#### Existing Azure virtual machine
+
+If you're using an existing VM to monitor connectivity, you can install the Network Agent separately for [Linux](../virtual-machines/extensions/network-watcher-linux.md) and [Windows](../virtual-machines/extensions/network-watcher-windows.md).
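+
+As a sketch, the agent can also be added to an existing Windows VM with Azure PowerShell. The VM name, resource group, and location are placeholders, and the type handler version shown is an assumption that may change over time.
+
+```azurepowershell-interactive
+# Install the Network Watcher agent extension on an existing Windows VM.
+# Resource names and the version number below are placeholder values.
+Set-AzVMExtension `
+  -ResourceGroupName "connection-monitor-rg" `
+  -VMName "cm-azure-vm" `
+  -Location "westus2" `
+  -Name "AzureNetworkWatcherExtension" `
+  -Publisher "Microsoft.Azure.NetworkWatcher" `
+  -ExtensionType "NetworkWatcherAgentWindows" `
+  -TypeHandlerVersion "1.4"
+```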
+
+### <a name="firewall"></a>Open the firewall ports on the monitoring agent servers
+
+Rules for a firewall can block communication between the source and destination servers. Connection Monitor detects this issue and displays it as a diagnostic message in the topology. To enable connection monitoring, ensure that firewall rules allow packets over TCP or ICMP between the source and destination.
+
+#### Windows
+
+For Windows machines, you can run a PowerShell script to create the registry keys that are required by the Connection Monitor. This script also creates the Windows Firewall rules to allow monitoring agents to create TCP connections with each other. The registry keys created by the script specify whether to log the debug logs, and the path for the logs file. It also defines the agent TCP port used for communication. The values for these keys are automatically set by the script. You shouldn't manually change these keys.
+
+Port 8084 is opened by default. You can use a custom port by providing the parameter 'portNumber' to the script. However, if you do so, you must specify the same port for all the servers on which you run the script.
+
+>[!NOTE]
+>The 'EnableRules' PowerShell script configures Windows Firewall rules only on the server where the script is run. If you have a network firewall, you should make sure that it allows traffic destined for the TCP port being used by Connection Monitor.
+>
+
+On the agent servers, open a PowerShell window with administrative privileges. Run the [EnableRules](https://aka.ms/npmpowershellscript) PowerShell script (which you downloaded earlier). Don't use any parameters.
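+
+For example, from an elevated PowerShell session in the folder where you saved the script (the custom-port variant applies only if you chose a non-default port earlier, using the `portNumber` parameter mentioned above):
+
+```powershell
+# Default invocation - opens the default port (8084).
+.\EnableRules.ps1
+
+# Only if you chose a custom port earlier; it must match on every agent server.
+.\EnableRules.ps1 -portNumber 8085
+```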
++
+#### Linux
+
+For Linux machines, the port number used needs to be changed manually:
+
+1. Navigate to path: /var/opt/microsoft/omsagent/npm_state.
+1. Open file: npmdregistry
+1. Change the value for Port Number `PortNumber:<port of your choice>`. The port number used should be the same across all the agents in a workspace.
+
+## <a name="opennsg"></a>Configure network security group rules
+
+To monitor servers that are in Azure, you must configure network security group (NSG) rules to allow TCP or ICMP traffic from Connection Monitor. The default port is **8084**, which allows the monitoring agent installed on the Azure VM to communicate with an on-premises monitoring agent.
+
+For more information about NSGs, see the tutorial on [filtering network traffic](../virtual-network/tutorial-filter-network-traffic.md).
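+
+As a minimal sketch with placeholder names, the following Azure PowerShell adds an inbound rule for TCP port 8084 to an existing NSG:
+
+```azurepowershell-interactive
+# Allow the on-premises agents to reach the Azure VM agent on TCP 8084.
+# The NSG name, resource group, and priority are placeholder values.
+$nsg = Get-AzNetworkSecurityGroup -Name "cm-vm-nsg" -ResourceGroupName "connection-monitor-rg"
+
+Add-AzNetworkSecurityRuleConfig -NetworkSecurityGroup $nsg `
+  -Name "Allow-ConnectionMonitor-8084" `
+  -Description "Allow monitoring agent traffic" `
+  -Access Allow -Protocol Tcp -Direction Inbound -Priority 250 `
+  -SourceAddressPrefix "*" -SourcePortRange "*" `
+  -DestinationAddressPrefix "*" -DestinationPortRange "8084"
+
+# Persist the updated rule set to the NSG.
+Set-AzNetworkSecurityGroup -NetworkSecurityGroup $nsg
+```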
+
+> [!NOTE]
+> Make sure that you have installed the agents (both the on-premises server agent and the Azure server agent), and have run the PowerShell script before proceeding with this step.
+>
+
+## <a name="enablenetworkwatcher"></a>Enable Network Watcher
+
+All subscriptions that have a virtual network are enabled with Network Watcher. Ensure that Network Watcher isn't explicitly disabled for your subscription. For more information, see [Enable Network Watcher](../network-watcher/network-watcher-create.md).
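+
+To quickly see which regions in your subscription already have a Network Watcher instance, a sketch:
+
+```azurepowershell-interactive
+# List existing Network Watcher instances and their regions.
+Get-AzNetworkWatcher | Format-Table Name, Location, ProvisioningState
+```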
+
+## <a name="createcm"></a> Create a connection monitor
+
+For a high-level overview of how to create a connection monitor, tests, and test groups across source and destination endpoints in your network, see [Create a connection monitor](../network-watcher/connection-monitor-create-using-portal.md). Use the following steps to configure connection monitoring for Private Peering and Microsoft Peering.
+
+1. In the Azure portal, navigate to your **Network Watcher** resource and select **Connection monitor** under *Monitoring*. Then select **Create** to create a new connection monitor.
+
+ :::image type="content" source="./media/how-to-configure-connection-monitor/create-connection-monitor.png" alt-text="Screenshot of connection monitor in Network Watcher.":::
+
+1. On the **Basics** tab of the creation workflow, select the same region where you deployed your Log Analytics workspace for the *Region* field. For *Workspace configuration*, select the existing Log Analytics workspace that you created earlier. Then select **Next: Test groups >>**.
+
+ :::image type="content" source="./media/how-to-configure-connection-monitor/connection-monitor-basic.png" alt-text="Screenshot of basic tab for creating Connection Monitor.":::
+
+1. On the **Add test group details** page, you'll add the source and destination endpoints for your test group, and set up the test configurations between them. Enter a **Name** for this test group.
+
+ :::image type="content" source="./media/how-to-configure-connection-monitor/add-test-group-details.png" alt-text="Screenshot of add test group details page.":::
+
+1. Select **Add source** and navigate to the **Non-Azure endpoints** tab. Choose the on-premises resources that have the Log Analytics agent installed and whose connectivity you want to monitor, then select **Add endpoints**.
+
+ :::image type="content" source="./media/how-to-configure-connection-monitor/add-source-endpoints.png" alt-text="Screenshot of adding source endpoints.":::
+
+1. Next, select **Add destinations**.
+
+ To monitor connectivity over ExpressRoute **private peering**, navigate to the **Azure endpoints** tab. Choose the Azure resources in your virtual networks that have the Network Watcher agent installed and that you want to monitor connectivity to. Make sure to select the private IP address of each of these resources in the *IP* column. Select **Add endpoints** to add these endpoints to your list of destinations for the test group.
+
+ :::image type="content" source="./media/how-to-configure-connection-monitor/add-destination-endpoints.png" alt-text="Screenshot of adding Azure destination endpoints.":::
+
+ To monitor connectivity over ExpressRoute **Microsoft peering**, navigate to the **External Addresses** tab. Select the Microsoft services endpoints that you wish to monitor connectivity to over Microsoft Peering. Select **Add endpoints** to add these endpoints to your list of destinations for the test group.
+
+ :::image type="content" source="./media/how-to-configure-connection-monitor/add-external-destination-endpoints.png" alt-text="Screenshot of adding external destination endpoints.":::
+
+1. Now select **Add test configuration**. Select **TCP** for the protocol, and enter the **destination port** you opened on your servers. Then configure your **test frequency** and **thresholds for failed checks and round trip time**, and select **Add Test configuration**.
+
+ :::image type="content" source="./media/how-to-configure-connection-monitor/add-test-configuration.png" alt-text="Screenshot of add test configuration page.":::
+
+1. Select **Add Test Group** once you've added your sources, destinations, and test configuration.
+
+ :::image type="content" source="./media/how-to-configure-connection-monitor/add-test-group-details-configured.png" alt-text="Screenshot of add test group detail configured." lightbox="./media/how-to-configure-connection-monitor/add-test-group-details-configured-expanded.png":::
+
+1. Select **Next : Create alert >>** if you want to create alerts. Once completed, select **Review + create**, and then select **Create**.
+
+## View results
+
+1. Go to your **Network Watcher** resource and select **Connection monitor** under *Monitoring*. You should see your new connection monitor after 5 minutes. To view the connection monitor's network topology and performance charts, select the test from the test group dropdown.
+
+ :::image type="content" source="./media/how-to-configure-connection-monitor/overview.png" alt-text="Screenshot of connection monitor overview page." lightbox="./media/how-to-configure-connection-monitor/overview-expanded.png":::
+
+1. In the **Performance analysis** panel, you can view the percentage of failed checks and each test's round-trip time results. You can adjust the time frame for the data displayed by selecting the dropdown at the top of the panel.
+
+ :::image type="content" source="./media/how-to-configure-connection-monitor/performance-analysis.png" alt-text="Screenshot of performance analysis panel." lightbox="./media/how-to-configure-connection-monitor/performance-analysis-expanded.png":::
+
+1. Closing the **Performance analysis** panel reveals the network topology detected by the connection monitor between the source and destination endpoints you selected. This view shows you the bi-directional paths of traffic between your source and destination endpoints. You can also see the hop-by-hop latency of packets before they reach the Microsoft edge network.
+
+ :::image type="content" source="./media/how-to-configure-connection-monitor/topology.png" alt-text="Screenshot of network topology in connection monitor." lightbox="./media/how-to-configure-connection-monitor/topology-expanded.png":::
+
+ Selecting any hop in the topology view will display additional information about the hop. Any issues detected by the connection monitor about the hop will also be displayed here.
+
+ :::image type="content" source="./media/how-to-configure-connection-monitor/hop-details.png" alt-text="Screenshot of more information for a network hop.":::
+
+## Next steps
+
+Learn more about [Monitoring Azure ExpressRoute](monitor-expressroute.md)
expressroute How To Npm https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/expressroute/how-to-npm.md
Previously updated : 01/25/2019 Last updated : 07/28/2019
-# Configure Network Performance Monitor for ExpressRoute
+# Configure Network Performance Monitor for ExpressRoute (deprecated)
This article helps you configure a Network Performance Monitor extension to monitor ExpressRoute. Network Performance Monitor (NPM) is a cloud-based network monitoring solution that monitors connectivity between Azure cloud deployments and on-premises locations (branch offices, etc.). NPM is part of Azure Monitor logs. NPM offers an extension for ExpressRoute that lets you monitor network performance over ExpressRoute circuits that are configured to use private peering or Microsoft peering. When you configure NPM for ExpressRoute, you can detect, identify, and eliminate network issues. This service is also available for Azure Government Cloud.
+> [!IMPORTANT]
+> Starting 1 July 2021, you will not be able to add new tests in an existing workspace or enable a new workspace in Network Performance Monitor. You will also not be able to add new connection monitors in Connection Monitor (classic). You can continue to use the tests and connection monitors created prior to 1 July 2021. To minimize service disruption to your current workloads, [migrate your tests from Network Performance Monitor](../network-watcher/migrate-to-connection-monitor-from-network-performance-monitor.md) or [migrate from Connection Monitor (classic)](../network-watcher/migrate-to-connection-monitor-from-connection-monitor-classic.md) to the new Connection Monitor in Azure Network Watcher before February 29, 2024.
+ [!INCLUDE [azure-monitor-log-analytics-rebrand](../../includes/azure-monitor-log-analytics-rebrand.md)] You can:
Create a workspace in the subscription that has the VNets link to the ExpressRou
1. Go to the **Common Settings** tab of the **Network Performance Monitor Configuration** page for your resource. Click the agent that corresponds to your server's processor from the **Install Log Analytics Agents** section, and download the setup file. 2. Next, copy the **Workspace ID** and **Primary Key** to Notepad.
-3. From the **Configure Log Analytics Agents for monitoring using TCP protocol** section, download the Powershell Script. The PowerShell script helps you open the relevant firewall port for the TCP transactions.
+3. From the **Configure Log Analytics Agents for monitoring using TCP protocol** section, download the PowerShell Script. The PowerShell script helps you open the relevant firewall port for the TCP transactions.
![PowerShell script](./media/how-to-npm/7.png)
You can increase the level of visibility to include on-premises hops by moving t
#### Detailed Topology view of a circuit This view shows VNet connections.
-![detailed topology](./media/how-to-npm/17.png)
+![detailed topology](./media/how-to-npm/17.png)
firewall Premium Features https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/firewall/premium-features.md
Previously updated : 07/19/2021 Last updated : 07/29/2021
Azure Firewall Premium is supported in the following regions:
- Australia East (Public / Australia) - Australia Southeast (Public / Australia) - Brazil South (Public / Brazil)
+- Brazil Southeast (Public / Brazil)
- Canada Central (Public / Canada) - Canada East (Public / Canada) - Central India (Public / India)
governance Guest Configuration Baseline Linux https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/policy/samples/guest-configuration-baseline-linux.md
Title: Reference - Azure Policy Guest Configuration baseline for Linux
-description: Details of the Linux baseline on Azure implemented through Azure Policy Guest Configuration.
Previously updated : 07/07/2021
+ Title: Reference - Azure Policy guest configuration baseline for Linux
+description: Details of the Linux baseline on Azure implemented through Azure Policy guest configuration.
Last updated : 07/29/2021
-# Azure Policy Guest Configuration baseline for Linux
+# Linux security baseline
-The following article details what the **\[Preview\] Linux machines should meet requirements for the
-Azure security baseline** Guest Configuration policy definition audits. For more information, see
-[Azure Policy Guest Configuration](../concepts/guest-configuration.md) and
+This article details the configuration settings for Linux guests as applicable in the following
+implementations:
+
+- **\[Preview\] Linux machines should meet requirements for the Azure compute security baseline**
+ Azure Policy guest configuration definition
+- **Vulnerabilities in security configuration on your machines should be remediated** in Azure
+ Security Center
+
+For more information, see [Azure Policy guest configuration](../concepts/guest-configuration.md) and
[Overview of the Azure Security Benchmark (V2)](../../../security/benchmarks/overview.md). ## General security controls
-|Name<br /><sub>(ID)</sub> |Details |Remediation check |
+|Name<br /><sub>(CCEID)</sub> |Details |Remediation check |
||||
-|Ensure nodev option set on /home partition.<br /><sub>(1.1.4)</sub> |Description: An attacker could mount a special device (for example, block or character device) on the /home partition. |Edit the /etc/fstab file and add nodev to the fourth field (mounting options) for the /home partition. For more information, see the fstab(5) manual pages. |
-|Ensure nodev option set on /tmp partition.<br /><sub>(1.1.5)</sub> |Description: An attacker could mount a special device (for example, block or character device) on the /tmp partition. |Edit the /etc/fstab file and add nodev to the fourth field (mounting options) for the /tmp partition. For more information, see the fstab(5) manual pages. |
-|Ensure nodev option set on /var/tmp partition.<br /><sub>(1.1.6)</sub> |Description: An attacker could mount a special device (for example, block or character device) on the /var/tmp partition. |Edit the /etc/fstab file and add nodev to the fourth field (mounting options) for the /var/tmp partition. For more information, see the fstab(5) manual pages. |
-|Ensure nosuid option set on /tmp partition.<br /><sub>(1.1.7)</sub> |Description: Since the /tmp filesystem is only intended for temporary file storage, set this option to ensure that users cannot create setuid files in /var/tmp. |Edit the /etc/fstab file and add nosuid to the fourth field (mounting options) for the /tmp partition. For more information, see the fstab(5) manual pages. |
-|Ensure nosuid option set on /var/tmp partition.<br /><sub>(1.1.8)</sub> |Description: Since the /var/tmp filesystem is only intended for temporary file storage, set this option to ensure that users cannot create setuid files in /var/tmp. |Edit the /etc/fstab file and add nosuid to the fourth field (mounting options) for the /var/tmp partition. For more information, see the fstab(5) manual pages. |
-|Ensure noexec option set on /var/tmp partition.<br /><sub>(1.1.9)</sub> |Description: Since the `/var/tmp` filesystem is only intended for temporary file storage, set this option to ensure that users cannot run executable binaries from `/var/tmp` . |Edit the /etc/fstab file and add noexec to the fourth field (mounting options) for the /var/tmp partition. For more information, see the fstab(5) manual pages. |
-|Ensure noexec option set on /dev/shm partition.<br /><sub>(1.1.16)</sub> |Description: Setting this option on a file system prevents users from executing programs from shared memory. This deters users from introducing potentially malicious software on the system. |Edit the /etc/fstab file and add noexec to the fourth field (mounting options) for the /dev/shm partition. For more information, see the fstab(5) manual pages. |
-|Disable automounting<br /><sub>(1.1.21)</sub> |Description: With automounting enabled, anyone with physical access could attach a USB drive or disc and have its contents available in system even if they lack permissions to mount it themselves. |Disable the autofs service or run '/opt/microsoft/omsagent/plugin/omsremediate -r disable-autofs' |
|Ensure mounting of USB storage devices is disabled<br /><sub>(1.1.21.1)</sub> |Description: Removing support for USB storage devices reduces the local attack surface of the server. |Edit or create a file in the `/etc/modprobe.d/` directory ending in .conf and add `install usb-storage /bin/true` then unload the usb-storage module or run '/opt/microsoft/omsagent/plugin/omsremediate -r disable-unnecessary-kernel-mods' |
-|Ensure core dumps are restricted.<br /><sub>(1.5.1)</sub> |Description: Setting a hard limit on core dumps prevents users from overriding the soft variable. If core dumps are required, consider setting limits for user groups (see `limits.conf(5)` ). In addition, setting the `fs.suid_dumpable` variable to 0 will prevent setuid programs from dumping core. |Add `hard core 0` to /etc/security/limits.conf or a file in the limits.d directory and set `fs.suid_dumpable = 0` in sysctl or run '/opt/microsoft/omsagent/plugin/omsremediate -r disable-core-dumps' |
-|Ensure prelink is disabled.<br /><sub>(1.5.4)</sub> |Description: The prelinking feature can interfere with the operation of AIDE, because it changes binaries. Prelinking can also increase the vulnerability of the system if a malicious user is able to compromise a common library such as libc. |uninstall `prelink` using your package manager or run '/opt/microsoft/omsagent/plugin/omsremediate -r remove-prelink' |
-|Ensure permissions on /etc/motd are configured.<br /><sub>(1.7.1.4)</sub> |Description: If the `/etc/motd` file does not have the correct ownership, it could be modified by unauthorized users with incorrect or misleading information. |Set the owner and group of /etc/motd to root and set permissions to 0644 or run '/opt/microsoft/omsagent/plugin/omsremediate -r file-permissions' |
-|Ensure permissions on /etc/issue are configured.<br /><sub>(1.7.1.5)</sub> |Description: If the `/etc/issue` file does not have the correct ownership, it could be modified by unauthorized users with incorrect or misleading information. |Set the owner and group of /etc/issue to root and set permissions to 0644 or run '/opt/microsoft/omsagent/plugin/omsremediate -r file-permissions' |
-|Ensure permissions on /etc/issue.net are configured.<br /><sub>(1.7.1.6)</sub> |Description: If the `/etc/issue.net` file does not have the correct ownership, it could be modified by unauthorized users with incorrect or misleading information. |Set the owner and group of /etc/issue.net to root and set permissions to 0644 or run '/opt/microsoft/omsagent/plugin/omsremediate -r file-permissions' |
|The nodev option should be enabled for all removable media.<br /><sub>(2.1)</sub> |Description: An attacker could mount a special device (for example, block or character device) via removable media |Add the nodev option to the fourth field (mounting options) in /etc/fstab. For more information, see the fstab(5) manual pages. | |The noexec option should be enabled for all removable media.<br /><sub>(2.2)</sub> |Description: An attacker could load executable file via removable media |Add the noexec option to the fourth field (mounting options) in /etc/fstab. For more information, see the fstab(5) manual pages. | |The nosuid option should be enabled for all removable media.<br /><sub>(2.3)</sub> |Description: An attacker could load files that run with an elevated security context via removable media |Add the nosuid option to the fourth field (mounting options) in /etc/fstab. For more information, see the fstab(5) manual pages. |
-|Ensure talk client is not installed.<br /><sub>(2.3.3)</sub> |Description: The software presents a security risk as it uses unencrypted protocols for communication. |Uninstall `talk` or run '/opt/microsoft/omsagent/plugin/omsremediate -r remove-talk' |
-|Ensure permissions on /etc/hosts.allow are configured.<br /><sub>(3.4.4)</sub> |Description: It is critical to ensure that the `/etc/hosts.allow` file is protected from unauthorized write access. Although it is protected by default, the file permissions could be changed either inadvertently or through malicious actions. |Set the owner and group of /etc/hosts.allow to root and the permissions to 0644 or run '/opt/microsoft/omsagent/plugin/omsremediate -r file-permissions' |
-|Ensure permissions on /etc/hosts.deny are configured.<br /><sub>(3.4.5)</sub> |Description: It is critical to ensure that the `/etc/hosts.deny` file is protected from unauthorized write access. Although it is protected by default, the file permissions could be changed either inadvertently or through malicious actions. |Set the owner and group of /etc/hosts.deny to root and the permissions to 0644 or run '/opt/microsoft/omsagent/plugin/omsremediate -r file-permissions' |
-|Ensure default deny firewall policy<br /><sub>(3.6.2)</sub> |Description: With a default accept policy the firewall will accept any packet that is not configured to be denied. It is easier to maintain a secure firewall with a default DROP policy than it is with a default ALLOW policy. |Set the default policy for incoming, outgoing, and routed traffic to `deny` or `reject` as appropriate using your firewall software |
|The nodev/nosuid option should be enabled for all NFS mounts.<br /><sub>(5)</sub> |Description: An attacker could load files that run with an elevated security context or special devices via remote file system |Add the nosuid and nodev options to the fourth field (mounting options) in /etc/fstab. For more information, see the fstab(5) manual pages. |
-|Ensure password creation requirements are configured.<br /><sub>(5.3.1)</sub> |Description: Strong passwords protect systems from being hacked through brute force methods. |Set the following key/value pairs in the appropriate PAM for your distro: minlen=14, minclass = 4, dcredit = -1, ucredit = -1, ocredit = -1, lcredit = -1, or run '/opt/microsoft/omsagent/plugin/omsremediate -r enable-password-requirements' |
-|Ensure lockout for failed password attempts is configured.<br /><sub>(5.3.2)</sub> |Description: Locking out user IDs after `n` unsuccessful consecutive login attempts mitigates brute force password attacks against your systems. |for Ubuntu and Debian, add the pam_tally and pam_deny modules as appropriate. For all other distros, refer to your distro's documentation |
|Disable the installation and use of file systems that are not required (cramfs)<br /><sub>(6.1)</sub> |Description: An attacker could use a vulnerability in cramfs to elevate privileges |Add a file to the /etc/modprob.d directory that disables cramfs or run '/opt/microsoft/omsagent/plugin/omsremediate -r disable-unnecessary-kernel-mods' | |Disable the installation and use of file systems that are not required (freevxfs)<br /><sub>(6.2)</sub> |Description: An attacker could use a vulnerability in freevxfs to elevate privileges |Add a file to the /etc/modprob.d directory that disables freevxfs or run '/opt/microsoft/omsagent/plugin/omsremediate -r disable-unnecessary-kernel-mods' |
-|Ensure all users' home directories exist<br /><sub>(6.2.7)</sub> |Description: If the user's home directory does not exist or is unassigned, the user will be placed in '/' and will not be able to write any files or have local environment variables set. |If any users' home directories do not exist, create them and make sure the respective user owns the directory. Users without an assigned home directory should be removed or assigned a home directory as appropriate. |
-|Ensure users own their home directories<br /><sub>(6.2.9)</sub> |Description: Since the user is accountable for files stored in the user home directory, the user must be the owner of the directory. |Change the ownership of any home directories that are not owned by the defined user to the correct user. |
-|Ensure users' dot files are not group or world writable.<br /><sub>(6.2.10)</sub> |Description: Group or world-writable user configuration files may enable malicious users to steal or modify other users' data or to gain another user's system privileges. |Making global modifications to users' files without alerting the user community can result in unexpected outages and unhappy users. Therefore, it is recommended that a monitoring policy be established to report user dot file permissions and determine the action to be taken in accordance with site policy. |
-|Ensure no users have .forward files<br /><sub>(6.2.11)</sub> |Description: Use of the `.forward` file poses a security risk in that sensitive data may be inadvertently transferred outside the organization. The `.forward` file also poses a risk as it can be used to execute commands that may perform unintended actions. |Making global modifications to users' files without alerting the user community can result in unexpected outages and unhappy users. Therefore, it is recommended that a monitoring policy be established to report user `.forward` files and determine the action to be taken in accordance with site policy. |
-|Ensure no users have .netrc files<br /><sub>(6.2.12)</sub> |Description: The `.netrc` file presents a significant security risk since it stores passwords in unencrypted form. Even if FTP is disabled, user accounts may have brought over `.netrc` files from other systems which could pose a risk to those systems |Making global modifications to users' files without alerting the user community can result in unexpected outages and unhappy users. Therefore, it is recommended that a monitoring policy be established to report user `.netrc` files and determine the action to be taken in accordance with site policy. |
-|Ensure no users have .rhosts files<br /><sub>(6.2.14)</sub> |Description: This action is only meaningful if `.rhosts` support is permitted in the file `/etc/pam.conf` . Even though the `.rhosts` files are ineffective if support is disabled in `/etc/pam.conf` , they may have been brought over from other systems and could contain information useful to an attacker for those other systems. |Making global modifications to users' files without alerting the user community can result in unexpected outages and unhappy users. Therefore, it is recommended that a monitoring policy be established to report user `.rhosts` files and determine the action to be taken in accordance with site policy. |
-|Ensure all groups in /etc/passwd exist in /etc/group<br /><sub>(6.2.15)</sub> |Description: Groups which are defined in the /etc/passwd file but not in the /etc/group file poses a threat to system security since group permissions are not properly managed. |For each group defined in /etc/passwd, ensure there is a corresponding group in /etc/group |
-|Ensure no duplicate UIDs exist<br /><sub>(6.2.16)</sub> |Description: Users must be assigned unique UIDs for accountability and to ensure appropriate access protections. |Establish unique UIDs and review all files owned by the shared UIDs to determine which UID they are supposed to belong to. |
-|Ensure no duplicate GIDs exist<br /><sub>(6.2.17)</sub> |Description: Groups must be assigned unique GIDs for accountability and to ensure appropriate access protections. |Establish unique GIDs and review all files owned by the shared GIDs to determine which GID they are supposed to belong to. |
-|Ensure no duplicate user names exist<br /><sub>(6.2.18)</sub> |Description: If a user is assigned a duplicate user name, it will create and have access to files with the first UID for that username in `/etc/passwd` . For example, if 'test4' has a UID of 1000 and a subsequent 'test4' entry has a UID of 2000, logging in as 'test4' will use UID 1000. Effectively, the UID is shared, which is a security problem. |Establish unique user names for all users. File ownerships will automatically reflect the change as long as the users have unique UIDs. |
-|Ensure no duplicate groups exist<br /><sub>(6.2.19)</sub> |Description: If a group is assigned a duplicate group name, it will create and have access to files with the first GID for that group in `/etc/group` . Effectively, the GID is shared, which is a security problem. |Establish unique names for all user groups. File group ownerships will automatically reflect the change as long as the groups have unique GIDs. |
-|Ensure shadow group is empty<br /><sub>(6.2.20)</sub> |Description: Any users assigned to the shadow group would be granted read access to the /etc/shadow file. If attackers can gain read access to the `/etc/shadow` file, they can easily run a password cracking program against the hashed passwords to break them. Other security information that is stored in the `/etc/shadow` file (such as expiration) could also be useful to subvert additional user accounts. |Remove all users form the shadow group |
|Disable the installation and use of file systems that are not required (hfs)<br /><sub>(6.3)</sub> |Description: An attacker could use a vulnerability in hfs to elevate privileges |Add a file to the /etc/modprob.d directory that disables hfs or run '/opt/microsoft/omsagent/plugin/omsremediate -r disable-unnecessary-kernel-mods' | |Disable the installation and use of file systems that are not required (hfsplus)<br /><sub>(6.4)</sub> |Description: An attacker could use a vulnerability in hfsplus to elevate privileges |Add a file to the /etc/modprob.d directory that disables hfsplus or run '/opt/microsoft/omsagent/plugin/omsremediate -r disable-unnecessary-kernel-mods' | |Disable the installation and use of file systems that are not required (jffs2)<br /><sub>(6.5)</sub> |Description: An attacker could use a vulnerability in jffs2 to elevate privileges |Add a file to the /etc/modprob.d directory that disables jffs2 or run '/opt/microsoft/omsagent/plugin/omsremediate -r disable-unnecessary-kernel-mods' |
Azure security baseline** Guest Configuration policy definition audits. For more
|Ensure packet redirect sending is disabled.<br /><sub>(38.3)</sub> |Description: An attacker could use a compromised host to send invalid ICMP redirects to other router devices in an attempt to corrupt routing and have users access a system set up by the attacker as opposed to a valid system. |set the following parameters in /etc/sysctl.conf: 'net.ipv4.conf.all.send_redirects = 0' and 'net.ipv4.conf.default.send_redirects = 0' or run '/opt/microsoft/omsagent/plugin/omsremediate -r disable-send-redirects | |Sending ICMP redirects should be disabled for all interfaces. (net.ipv4.conf.default.accept_redirects = 0)<br /><sub>(38.4)</sub> |Description: An attacker could alter this system's routing table, redirecting traffic to an alternate destination |Run `sysctl -w key=value` and set to a compliant value or run '/opt/microsoft/omsagent/plugin/omsremediate -r disable-accept-redirects'. | |Sending ICMP redirects should be disabled for all interfaces. (net.ipv4.conf.default.secure_redirects = 0)<br /><sub>(38.5)</sub> |Description: An attacker could alter this system's routing table, redirecting traffic to an alternate destination |Run `sysctl -w key=value` and set to a compliant value or run '/opt/microsoft/omsagent/plugin/omsremediate -r disable-secure-redirects' |
-|Accepting source routed packets should be disabled for all interfaces. (net.ipv4.conf.all.accept_source_route = 0)<br /><sub>(40.1)</sub> |Description: An attacker could redirect traffic for malicious purposes. |Run `sysctl -w key=value` and set to a compliant value or run '/opt/microsoft/omsagent/plugin/omsremediate -r disable-accept-source-route' |
-|Accepting source routed packets should be disabled for all interfaces. (net.ipv6.conf.all.accept_source_route = 0) or run '/opt/microsoft/omsagent/plugin/omsremediate -r disable-accept-source-route'<br /><sub>(40.2)</sub> |Description: An attacker could redirect traffic for malicious purposes. |Run `sysctl -w key=value` and set to a compliant value. |
+|Accepting source routed packets should be disabled for all interfaces. (net.ipv4.conf.all.accept_source_route = 0)<br /><sub>(40.1)</sub> |Description: An attacker could redirect traffic for malicious purposes. |Run `sysctl -w key=value` and set to a compliant value. |
+|Accepting source routed packets should be disabled for all interfaces. (net.ipv6.conf.all.accept_source_route = 0)<br /><sub>(40.2)</sub> |Description: An attacker could redirect traffic for malicious purposes. |Run `sysctl -w key=value` and set to a compliant value. |
+|The default setting for accepting source routed packets should be disabled for network interfaces. (net.ipv4.conf.default.accept_source_route = 0)<br /><sub>(42.1)</sub> |Description: An attacker could redirect traffic for malicious purposes. |Run `sysctl -w key=value` and set to a compliant value. |
+|The default setting for accepting source routed packets should be disabled for network interfaces. (net.ipv6.conf.default.accept_source_route = 0)<br /><sub>(42.2)</sub> |Description: An attacker could redirect traffic for malicious purposes. |Run `sysctl -w key=value` and set to a compliant value. |
|Ignoring bogus ICMP responses to broadcasts should be enabled. (net.ipv4.icmp_ignore_bogus_error_responses = 1)<br /><sub>(43)</sub> |Description: An attacker could perform an ICMP attack resulting in DoS |Run `sysctl -w key=value` and set to a compliant value or run '/opt/microsoft/omsagent/plugin/omsremediate -r enable-icmp-ignore-bogus-error-responses' |
|Ignoring ICMP echo requests (pings) sent to broadcast / multicast addresses should be enabled. (net.ipv4.icmp_echo_ignore_broadcasts = 1)<br /><sub>(44)</sub> |Description: An attacker could perform an ICMP attack resulting in DoS |Run `sysctl -w key=value` and set to a compliant value or run '/opt/microsoft/omsagent/plugin/omsremediate -r enable-icmp-echo-ignore-broadcasts' |
|Logging of martian packets (those with impossible addresses) should be enabled for all interfaces. (net.ipv4.conf.all.log_martians = 1)<br /><sub>(45.1)</sub> |Description: An attacker could send traffic from spoofed addresses without being detected |Run `sysctl -w key=value` and set to a compliant value or run '/opt/microsoft/omsagent/plugin/omsremediate -r enable-log-martians' |
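For illustration, the `sysctl` remediations in these rows can be applied to the running kernel and then persisted; the drop-in file name below is an assumption:

```bash
# Apply the settings to the running kernel.
sudo sysctl -w net.ipv4.conf.all.accept_source_route=0
sudo sysctl -w net.ipv6.conf.all.accept_source_route=0
sudo sysctl -w net.ipv4.icmp_ignore_bogus_error_responses=1

# Persist them across reboots (file name is illustrative).
cat <<'EOF' | sudo tee /etc/sysctl.d/60-baseline.conf
net.ipv4.conf.all.accept_source_route = 0
net.ipv6.conf.all.accept_source_route = 0
net.ipv4.icmp_ignore_bogus_error_responses = 1
EOF
sudo sysctl --system   # reload all sysctl configuration files
```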
Azure security baseline** Guest Configuration policy definition audits. For more
|Ensure SCTP is disabled<br /><sub>(55)</sub> |Description: If the protocol is not required, it is recommended that the drivers not be installed to reduce the potential attack surface. |Edit or create a file in the `/etc/modprobe.d/` directory ending in .conf and add `install sctp /bin/true` then unload the sctp module or run '/opt/microsoft/omsagent/plugin/omsremediate -r disable-unnecessary-kernel-mods' |
|Disable support for RDS.<br /><sub>(56)</sub> |Description: An attacker could use a vulnerability in RDS to compromise the system |Edit or create a file in the `/etc/modprobe.d/` directory ending in .conf and add `install rds /bin/true` then unload the rds module or run '/opt/microsoft/omsagent/plugin/omsremediate -r disable-unnecessary-kernel-mods' |
|Ensure TIPC is disabled<br /><sub>(57)</sub> |Description: If the protocol is not required, it is recommended that the drivers not be installed to reduce the potential attack surface. |Edit or create a file in the `/etc/modprobe.d/` directory ending in .conf and add `install tipc /bin/true` then unload the tipc module or run '/opt/microsoft/omsagent/plugin/omsremediate -r disable-unnecessary-kernel-mods' |
-|Ensure logging is configured<br /><sub>(60)</sub> |Description: A great deal of important security-related information is sent via `rsyslog` (for example, successful and failed su attempts, failed login attempts, root login attempts, etc.). |Configure syslog, rsyslog or syslog-ng as appropriate |
|The syslog, rsyslog, or syslog-ng package should be installed.<br /><sub>(61)</sub> |Description: Reliability and security issues will not be logged, preventing proper diagnosis. |Install the rsyslog package, or run '/opt/microsoft/omsagent/plugin/omsremediate -r install-rsyslog' |
+|The systemd-journald service should be configured to persist log messages<br /><sub>(61.1)</sub> |Description: Reliability and security issues will not be logged, preventing proper diagnosis. |Create /var/log/journal and ensure that Storage in journald.conf is auto or persistent |
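A minimal sketch of that journald remediation, assuming the default `/etc/systemd/journald.conf` location:

```bash
# Create the persistent journal directory and switch Storage to persistent.
sudo mkdir -p /var/log/journal
sudo sed -i 's/^#\?Storage=.*/Storage=persistent/' /etc/systemd/journald.conf
sudo systemctl restart systemd-journald
```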
|Ensure a logging service is enabled<br /><sub>(62)</sub> |Description: It is imperative to have the ability to log events on a node. |Enable the rsyslog package or run '/opt/microsoft/omsagent/plugin/omsremediate -r enable-rsyslog' |
|File permissions for all rsyslog log files should be set to 640 or 600.<br /><sub>(63)</sub> |Description: An attacker could hide activity by manipulating logs |Add the line '$FileCreateMode 0640' to the file '/etc/rsyslog.conf' |
-|Ensure logger configuration files are restricted.<br /><sub>(63.1)</sub> |Description: It is important to ensure that log files exist and have the correct permissions to ensure that sensitive syslog data is archived and protected. |Set your logger's configuration files to 0640 or run '/opt/microsoft/omsagent/plugin/omsremediate -r logger-config-file-permissions' |
|All rsyslog log files should be owned by the adm group.<br /><sub>(64)</sub> |Description: An attacker could hide activity by manipulating logs |Add the line '$FileGroup adm' to the file '/etc/rsyslog.conf' |
|All rsyslog log files should be owned by the syslog user.<br /><sub>(65)</sub> |Description: An attacker could hide activity by manipulating logs |Add the line '$FileOwner syslog' to the file '/etc/rsyslog.conf' or run '/opt/microsoft/omsagent/plugin/omsremediate -r syslog-owner' |
|Rsyslog should not accept remote messages.<br /><sub>(67)</sub> |Description: An attacker could inject messages into syslog, causing a DoS or a distraction from other activity |Remove the lines '$ModLoad imudp' and '$ModLoad imtcp' from the file '/etc/rsyslog.conf' |
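For example, the rsyslog directives from recommendations 63 through 65 can be collected in one drop-in file (the file name is an assumption; `/etc/rsyslog.conf` also works):

```bash
# Set create mode, owner, and group for new rsyslog log files.
cat <<'EOF' | sudo tee /etc/rsyslog.d/60-log-perms.conf
$FileCreateMode 0640
$FileOwner syslog
$FileGroup adm
EOF
sudo systemctl restart rsyslog
```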
Azure security baseline** Guest Configuration policy definition audits. For more
|Ensure permissions on /etc/cron.hourly are configured.<br /><sub>(95)</sub> |Description: Granting write access to this directory for non-privileged users could provide them the means for gaining unauthorized elevated privileges. Granting read access to this directory could give an unprivileged user insight in how to gain elevated privileges or circumvent auditing controls. |Set the owner and group of /etc/cron.hourly to root and permissions to 0700 or run '/opt/microsoft/omsagent/plugin/omsremediate -r fix-cron-file-perms' |
|Ensure permissions on /etc/cron.monthly are configured.<br /><sub>(96)</sub> |Description: Granting write access to this directory for non-privileged users could provide them the means for gaining unauthorized elevated privileges. Granting read access to this directory could give an unprivileged user insight in how to gain elevated privileges or circumvent auditing controls. |Set the owner and group of /etc/cron.monthly to root and permissions to 0700 or run '/opt/microsoft/omsagent/plugin/omsremediate -r fix-cron-file-perms' |
|Ensure permissions on /etc/cron.weekly are configured.<br /><sub>(97)</sub> |Description: Granting write access to this directory for non-privileged users could provide them the means for gaining unauthorized elevated privileges. Granting read access to this directory could give an unprivileged user insight in how to gain elevated privileges or circumvent auditing controls. |Set the owner and group of /etc/cron.weekly to root and permissions to 0700 or run '/opt/microsoft/omsagent/plugin/omsremediate -r fix-cron-file-perms' |
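A short sketch of the remediation for recommendations 95 through 97:

```bash
# Restrict the periodic cron directories to root only.
for d in /etc/cron.hourly /etc/cron.weekly /etc/cron.monthly; do
  sudo chown root:root "$d"
  sudo chmod 700 "$d"
done
```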
-|Ensure at/cron is restricted to authorized users<br /><sub>(98)</sub> |Description: On many systems, only the system administrator is authorized to schedule `cron` jobs. Using the `cron.allow` file to control who can run `cron` jobs enforces this policy. It is easier to manage an allow list than a deny list. In a deny list, you could potentially add a user ID to the system and forget to add it to the deny files. |replace /etc/cron.deny and /etc/at.deny with their respective `allow` files |
-|Ensure remote login warning banner is configured properly.<br /><sub>(111)</sub> |Description: Warning messages inform users who are attempting to login to the system of their legal status regarding the system and must include the name of the organization that owns the system and any monitoring policies that are in place. Displaying OS and patch level information in login banners also has the side effect of providing detailed system information to attackers attempting to target specific exploits of a system. Authorized users can easily get this information by running the `uname -a`command once they have logged in. |Remove any instances of \m \r \s and \v from the /etc/issue.net file |
-|Ensure local login warning banner is configured properly.<br /><sub>(111.1)</sub> |Description: Warning messages inform users who are attempting to login to the system of their legal status regarding the system and must include the name of the organization that owns the system and any monitoring policies that are in place. Displaying OS and patch level information in login banners also has the side effect of providing detailed system information to attackers attempting to target specific exploits of a system. Authorized users can easily get this information by running the `uname -a`command once they have logged in. |Remove any instances of \m \r \s and \v from the /etc/issue file |
|The avahi-daemon service should be disabled.<br /><sub>(114)</sub> |Description: An attacker could use a vulnerability in the avahi daemon to gain access |Disable the avahi-daemon service or run '/opt/microsoft/omsagent/plugin/omsremediate -r disable-avahi-daemon' | |The cups service should be disabled.<br /><sub>(115)</sub> |Description: An attacker could use a flaw in the cups service to elevate privileges |Disable the cups service or run '/opt/microsoft/omsagent/plugin/omsremediate -r disable-cups' | |The isc-dhcpd service should be disabled.<br /><sub>(116)</sub> |Description: An attacker could use dhcpd to provide faulty information to clients, interfering with normal operation. |Remove the isc-dhcp-server package (apt-get remove isc-dhcp-server) |
Azure security baseline** Guest Configuration policy definition audits. For more
|Ensure root is the only UID 0 account<br /><sub>(157.18)</sub> |Description: This access must be limited to only the default `root` account and only from the system console. Administrative access must be through an unprivileged account using an approved mechanism. |Remove any users other than `root` with UID `0` or assign them a new UID if appropriate. |
|Remove unnecessary packages<br /><sub>(158)</sub> |Description: |Run '/opt/microsoft/omsagent/plugin/omsremediate -r remove-landscape-common' |
|Remove unnecessary accounts<br /><sub>(159)</sub> |Description: For compliance |Remove the unnecessary accounts |
-|Ensure auditd service is enabled<br /><sub>(162)</sub> |Description: The capturing of system events provides system administrators with information to allow them to determine if unauthorized access to their system is occurring. |Install audit package (systemctl enable auditd) |
-|Run AuditD service<br /><sub>(163)</sub> |Description: The capturing of system events provides system administrators with information to allow them to determine if unauthorized access to their system is occurring. |Run AuditD service (systemctl start auditd) |
|Ensure SNMP Server is not enabled<br /><sub>(179)</sub> |Description: The SNMP server can communicate using SNMP v1, which transmits data in the clear and does not require authentication to execute commands. Unless absolutely necessary, it is recommended that the SNMP service not be used. If SNMP is required, the server should be configured to disallow SNMP v1. |Run one of the following commands to disable `snmpd`: ``` # chkconfig snmpd off ``` ``` # systemctl disable snmpd ``` ``` # update-rc.d snmpd disable ``` |
|Ensure rsync service is not enabled<br /><sub>(181)</sub> |Description: The `rsyncd` service presents a security risk as it uses unencrypted protocols for communication. |Run one of the following commands to disable `rsyncd`: `chkconfig rsyncd off`, `systemctl disable rsyncd`, `update-rc.d rsyncd disable` or run '/opt/microsoft/omsagent/plugin/omsremediate -r disable-rsysnc' |
|Ensure NIS server is not enabled<br /><sub>(182)</sub> |Description: The NIS service is an inherently insecure system that has been vulnerable to DoS attacks and buffer overflows, and has poor authentication for querying NIS maps. NIS is generally replaced by protocols like Lightweight Directory Access Protocol (LDAP). It is recommended that the service be disabled and more secure services be used. |Run one of the following commands to disable `ypserv`: ``` # chkconfig ypserv off ``` ``` # systemctl disable ypserv ``` ``` # update-rc.d ypserv disable ``` |
Azure security baseline** Guest Configuration policy definition audits. For more
|Disable SMB V1 with Samba<br /><sub>(185)</sub> |Description: SMB v1 has well-known, serious vulnerabilities and does not encrypt data in transit. If it must be used for business reasons, it is strongly recommended that additional steps be taken to mitigate the risks inherent to this protocol. |If Samba is not running, remove the package; otherwise, there should be a line in the [global] section of /etc/samba/smb.conf: min protocol = SMB2 or run '/opt/microsoft/omsagent/plugin/omsremediate -r set-smb-min-version' |

> [!NOTE]
-> Availability of specific Azure Policy Guest Configuration settings may vary in Azure Government
+> Availability of specific Azure Policy guest configuration settings may vary in Azure Government
> and other national clouds.

## Next steps
-Additional articles about Azure Policy and Guest Configuration:
+Additional articles about Azure Policy and guest configuration:
-- [Azure Policy Guest Configuration](../concepts/guest-configuration.md).
+- [Azure Policy guest configuration](../concepts/guest-configuration.md).
- [Regulatory Compliance](../concepts/regulatory-compliance.md) overview.
- Review other examples at [Azure Policy samples](./index.md).
- Review [Understanding policy effects](../concepts/effects.md).
governance Guest Configuration Baseline Windows https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/policy/samples/guest-configuration-baseline-windows.md
Title: Reference - Azure Policy Guest Configuration baseline for Windows
-description: Details of the Windows baseline on Azure implemented through Azure Policy Guest Configuration.
Previously updated : 07/07/2021
+ Title: Reference - Azure Policy guest configuration baseline for Windows
+description: Details of the Windows baseline on Azure implemented through Azure Policy guest configuration.
Last updated : 07/29/2021
-# Azure Policy Guest Configuration baseline for Windows
+# Windows security baseline
-The following article details what the **\[Preview\] Windows machines should meet requirements for
-the Azure security baseline** Guest Configuration policy initiative audits. For more information,
-see [Azure Policy Guest Configuration](../concepts/guest-configuration.md) and
+This article details the configuration settings for Windows guests as applicable in the following
+implementations:
+
+- **\[Preview\] Windows machines should meet requirements for the Azure compute security baseline**
+ Azure Policy guest configuration definition
+- **Vulnerabilities in security configuration on your machines should be remediated** in Azure
+ Security Center
+
+For more information, see [Azure Policy guest configuration](../concepts/guest-configuration.md) and
[Overview of the Azure Security Benchmark (V2)](../../../security/benchmarks/overview.md).

## Administrative Templates - Control Panel
see [Azure Policy Guest Configuration](../concepts/guest-configuration.md) and
|Minimize the number of simultaneous connections to the Internet or a Windows Domain<br /><sub>(CCE-38338-0)</sub> |**Description**: This policy setting prevents computers from connecting to both a domain based network and a non-domain based network at the same time. The recommended state for this setting is: `Enabled`.<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows\WcmSvc\GroupPolicy\fMinimizeConnections<br />**OS**: WS2012, WS2012R2, WS2016<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |Doesn't exist or \= 1<br /><sub>(Registry)</sub> |Warning |
|Prohibit installation and configuration of Network Bridge on your DNS domain network<br /><sub>(CCE-38002-2)</sub> |**Description**: You can use this procedure to control a user's ability to install and configure a Network Bridge. The recommended state for this setting is: `Enabled`.<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows\Network Connections\NC_AllowNetBridge_NLA<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 0<br /><sub>(Registry)</sub> |Warning |
|Prohibit use of Internet Connection Sharing on your DNS domain network<br /><sub>(AZ-WIN-00172)</sub> |**Description**: Although this "legacy" setting traditionally applied to the use of Internet Connection Sharing (ICS) in Windows 2000, Windows XP & Server 2003, this setting now also applies to the Mobile Hotspot feature in Windows 10 & Server 2016. The recommended state for this setting is: `Enabled`.<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows\Network Connections\NC_ShowSharedAccessUI<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 0<br /><sub>(Registry)</sub> |Warning |
-|Turn off multicast name resolution<br /><sub>(AZ-WIN-00145)</sub> |**Description**: LLMNR is a secondary name resolution protocol. With LLMNR, queries are sent using multicast over a local network link on a single subnet from a client computer to another client computer on the same subnet that also has LLMNR enabled. LLMNR does not require a DNS server or DNS client configuration, and provides name resolution in scenarios in which conventional DNS name resolution is not possible. The recommended state for this setting is: `Enabled`.<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows NT\DNSClient\EnableMulticast<br />**OS**: WS2016<br />**Server Type**: Domain Member, Workgroup Member |\= 1<br /><sub>(Registry)</sub> |Warning |
+|Turn off multicast name resolution<br /><sub>(AZ-WIN-00145)</sub> |**Description**: LLMNR is a secondary name resolution protocol. With LLMNR, queries are sent using multicast over a local network link on a single subnet from a client computer to another client computer on the same subnet that also has LLMNR enabled. LLMNR does not require a DNS server or DNS client configuration, and provides name resolution in scenarios in which conventional DNS name resolution is not possible. The recommended state for this setting is: `Enabled`.<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows NT\DNSClient\EnableMulticast<br />**OS**: WS2016<br />**Server Type**: Domain Member, Workgroup Member |\= 0<br /><sub>(Registry)</sub> |Warning |
## Administrative Templates - System
see [Azure Policy Guest Configuration](../concepts/guest-configuration.md) and
|Windows Firewall: Public: Settings: Display a notification<br /><sub>(CCE-38043-6)</sub> |**Description**: <p><span>By selecting this option, no notification is displayed to the user when a program is blocked from receiving inbound connections. In a server environment, the pop-ups are not useful because no user is logged on; they are unnecessary and can add confusion for the administrator.</span></p><p><span>Configure this policy setting to 'No'; this will set the registry value to 1. Windows Firewall will not display a notification when a program is blocked from receiving inbound connections.</span></p><br />**Key Path**: SOFTWARE\Policies\Microsoft\WindowsFirewall\PublicProfile\DisableNotifications<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 1<br /><sub>(Registry)</sub> |Warning |

> [!NOTE]
-> Availability of specific Azure Policy Guest Configuration settings may vary in Azure Government
+> Availability of specific Azure Policy guest configuration settings may vary in Azure Government
> and other national clouds.

## Next steps
-Additional articles about Azure Policy and Guest Configuration:
+Additional articles about Azure Policy and guest configuration:
-- [Azure Policy Guest Configuration](../concepts/guest-configuration.md).
+- [Azure Policy guest configuration](../concepts/guest-configuration.md).
- [Regulatory Compliance](../concepts/regulatory-compliance.md) overview.
- Review other examples at [Azure Policy samples](./index.md).
- Review [Understanding policy effects](../concepts/effects.md).
hdinsight Hdinsight Release Notes Archive https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/hdinsight/hdinsight-release-notes-archive.md
Title: Archived release notes for Azure HDInsight
description: Archived release notes for Azure HDInsight. Get development tips and details for Hadoop, Spark, R Server, Hive and more. - Previously updated : 02/08/2021+
+ - hdinsightactive
+ - references_regions
Last updated : 07/27/2021

# Archived release notes
Starting on April 25, 2021, the corrected amount for the Dv2 VMs will be on your
No other action is needed from you. The price correction will only apply for usage on or after April 25, 2021 in the specified regions, and not to any usage prior to this date. To ensure you have the most performant and cost-effective solution, we recommend that you review the pricing, vCPU, and RAM for your Dv2 clusters and compare the Dv2 specifications to the Ev3 VMs to see if your solution would benefit from utilizing one of the newer VM series.
+## Release date: 06/02/2021
+
+This release applies for both HDInsight 3.6 and HDInsight 4.0. The HDInsight release is made available to all regions over several days. The release date here indicates the first region's release date. If you don't see the changes below, wait several days for the release to go live in your region.
+
+The OS versions for this release are:
+- HDInsight 3.6: Ubuntu 16.04.7 LTS
+- HDInsight 4.0: Ubuntu 18.04.5 LTS
+
+### New features
+#### OS version upgrade
+As referenced in [Ubuntu's release cycle](https://ubuntu.com/about/release-cycle), the Ubuntu 16.04 kernel will reach End of Life (EOL) in April 2021. We started rolling out the new HDInsight 4.0 cluster image running on Ubuntu 18.04 with this release. Newly created HDInsight 4.0 clusters will run on Ubuntu 18.04 by default once available. Existing clusters on Ubuntu 16.04 will run as is with full support.
+
+HDInsight 3.6 will continue to run on Ubuntu 16.04. It will change to Basic support (from Standard support) beginning 1 July 2021. For more information about dates and support options, see [Azure HDInsight versions](./hdinsight-component-versioning.md#supported-hdinsight-versions). Ubuntu 18.04 will not be supported for HDInsight 3.6. If you'd like to use Ubuntu 18.04, you'll need to migrate your clusters to HDInsight 4.0.
+
+You need to drop and recreate your clusters if you'd like to move existing HDInsight 4.0 clusters to Ubuntu 18.04. Plan to create or recreate your clusters after Ubuntu 18.04 support becomes available.
+
+After creating the new cluster, you can SSH to your cluster and run `sudo lsb_release -a` to verify that it runs on Ubuntu 18.04. We recommend that you test your applications in your test subscriptions first before moving to production. [Learn more about the HDInsight Ubuntu 18.04 update](./hdinsight-ubuntu-1804-qa.md).
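For example, a quick check from a shell could look like this; `sshuser` and `CLUSTERNAME` are placeholders for your own values:

```bash
# Connect to the cluster and print the OS release.
# HDInsight grants the SSH user passwordless sudo.
ssh -t sshuser@CLUSTERNAME-ssh.azurehdinsight.net "sudo lsb_release -a"
# Expect "Release: 18.04" on the new image.
```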
+
+#### Scaling optimizations on HBase accelerated writes clusters
+HDInsight made some improvements and optimizations on scaling for HBase accelerated write enabled clusters. [Learn more about HBase accelerated write](./hbase/apache-hbase-accelerated-writes.md).
+
+### Deprecation
+No deprecation in this release.
+
+### Behavior changes
+#### Disable Standard_A5 VM size as Head Node for HDInsight 4.0
+HDInsight cluster Head Node is responsible for initializing and managing the cluster. Standard_A5 VM size has reliability issues as Head Node for HDInsight 4.0. Starting from this release, customers will not be able to create new clusters with Standard_A5 VM size as Head Node. You can use other two-core VMs like E2_v3 or E2s_v3. Existing clusters will run as is. A four-core VM is highly recommended for Head Node to ensure the high availability and reliability of your production HDInsight clusters.
+
+#### Network interface resource not visible for clusters running on Azure virtual machine scale sets
+HDInsight is gradually migrating to Azure virtual machine scale sets. Network interfaces for virtual machines are no longer visible to customers for clusters that use Azure virtual machine scale sets.
+
+### Upcoming changes
+The following changes will happen in upcoming releases.
+
+#### HDInsight Interactive Query only supports schedule-based Autoscale
+
+As customer scenarios grow more mature and diverse, we have identified some limitations with Interactive Query (LLAP) load-based Autoscale. These limitations are caused by the nature of LLAP query dynamics, future load prediction accuracy issues, and issues in the LLAP scheduler's task redistribution. Due to these limitations, users may see their queries run slower on LLAP clusters when Autoscale is enabled. The effect on performance can outweigh the cost benefits of Autoscale.
+
+Starting from July 2021, the Interactive Query workload in HDInsight only supports schedule-based Autoscale. You can no longer enable Autoscale on new Interactive Query clusters. Existing running clusters can continue to run with the known limitations described above.
+
+Microsoft recommends that you move to a schedule-based Autoscale for LLAP. You can analyze your cluster's current usage pattern through the Grafana Hive dashboard. For more information, see [Automatically scale Azure HDInsight clusters](hdinsight-autoscale-clusters.md).
+
+#### Basic support for HDInsight 3.6 starting July 1, 2021
+Starting July 1, 2021, Microsoft will offer [Basic support](hdinsight-component-versioning.md#support-options-for-hdinsight-versions) for certain HDInsight 3.6 cluster types. The Basic support plan will be available until 3 April 2022. You'll automatically be enrolled in Basic support starting July 1, 2021. No action is required by you to opt in. See [our documentation](hdinsight-36-component-versioning.md) for which cluster types are included under Basic support.
+
+We don't recommend building any new solutions on HDInsight 3.6; freeze changes on existing 3.6 environments. We recommend that you [migrate your clusters to HDInsight 4.0](hdinsight-version-release.md#how-to-upgrade-to-hdinsight-40). Learn more about [what's new in HDInsight 4.0](hdinsight-version-release.md#whats-new-in-hdinsight-40).
+
+#### VM host naming will be changed on July 1, 2021
+HDInsight now uses Azure virtual machines to provision the cluster. The service is gradually migrating to [Azure virtual machine scale sets](../virtual-machine-scale-sets/overview.md). This migration will change the cluster host name FQDN format, and the numbers in the host name are not guaranteed to be in sequence. If you want to get the FQDN names for each node, refer to [Find the Host names of Cluster Nodes](./find-host-name.md).
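One way to list node FQDNs is through the Ambari REST API, which the linked article describes; `CLUSTERNAME` and the `admin` account below are placeholders for your own values:

```bash
# Prompts for the Ambari admin password, then prints the host_name entries.
curl -u admin -sS "https://CLUSTERNAME.azurehdinsight.net/api/v1/clusters/CLUSTERNAME/hosts" \
  | grep host_name
```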
+
+#### Move to Azure virtual machine scale sets
+HDInsight now uses Azure virtual machines to provision the cluster. The service will gradually migrate to [Azure virtual machine scale sets](../virtual-machine-scale-sets/overview.md). The entire process may take months. After your regions and subscriptions are migrated, newly created HDInsight clusters will run on virtual machine scale sets without customer action. No breaking change is expected.
+ ## Release date: 03/24/2021

### New features
You need to drop and recreate your clusters if you'd like to move existing clu
It's highly recommended that you test your script actions and custom applications deployed on edge nodes on an Ubuntu 18.04 virtual machine (VM) in advance. You can [create a simple Ubuntu Linux VM on 18.04-LTS](https://azure.microsoft.com/resources/templates/vm-simple-linux/), then create and use a [secure shell (SSH) key pair](../virtual-machines/linux/mac-create-ssh-keys.md#ssh-into-your-vm) on your VM to run and test your script actions and custom applications deployed on edge nodes.
-#### Disable Stardard_A5 VM size as Head Node for HDInsgiht 4.0
+#### Disable Standard_A5 VM size as Head Node for HDInsight 4.0
HDInsight cluster Head Node is responsible for initializing and managing the cluster. Standard_A5 VM size has reliability issues as Head Node for HDInsight 4.0. Starting from the next release in May 2021, customers will not be able to create new clusters with Standard_A5 VM size as Head Node. You can use other 2-core VMs like E2_v3 or E2s_v3. Existing clusters will run as is. A 4-core VM is highly recommended for Head Node to ensure the high availability and reliability of your production HDInsight clusters.

#### Basic support for HDInsight 3.6 starting July 1, 2021
Fixed issues represent selected issues that were previously logged via Hortonwor
All of these features are available in HDInsight 3.6. To get the latest version of Spark, Kafka and R Server (Machine Learning Services), please choose the Spark, Kafka, ML Services version when you [create an HDInsight 3.6 cluster](./hdinsight-hadoop-provision-linux-clusters.md). To get support for ADLS, you can choose the ADLS storage type as an option. Existing clusters won't be upgraded to these versions automatically.
-All new clusters created after June 2018 will automatically get the 1000+ bug fixes across all the open-source projects. Please follow [this](./hdinsight-upgrade-cluster.md) guide for best practices around upgrading to a newer HDInsight version.
+All new clusters created after June 2018 will automatically get the 1000+ bug fixes across all the open-source projects. Please follow [this](./hdinsight-upgrade-cluster.md) guide for best practices around upgrading to a newer HDInsight version.
hdinsight Hdinsight Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/hdinsight/hdinsight-release-notes.md
description: Latest release notes for Azure HDInsight. Get development tips and
Previously updated : 06/02/2021 Last updated : 07/27/2021

# Azure HDInsight release notes
Azure HDInsight is one of the most popular services among enterprise customers f
If you would like to subscribe to release notes, watch releases on [this GitHub repository](https://github.com/hdinsight/release-notes/releases). -
-## Release date: 06/02/2021
+## Release date: 07/27/2021
This release applies for both HDInsight 3.6 and HDInsight 4.0. The HDInsight release is made available to all regions over several days. The release date here indicates the first region's release date. If you don't see the changes below, wait several days for the release to go live in your region.
The OS versions for this release are:
- HDInsight 4.0: Ubuntu 18.04.5 LTS

## New features
-### OS version upgrade
-As referenced in [Ubuntu's release cycle](https://ubuntu.com/about/release-cycle), the Ubuntu 16.04 kernel will reach End of Life (EOL) in April 2021. We started rolling out the new HDInsight 4.0 cluster image running on Ubuntu 18.04 with this release. Newly created HDInsight 4.0 clusters will run on Ubuntu 18.04 by default once available. Existing clusters on Ubuntu 16.04 will run as is with full support.
-
-HDInsight 3.6 will continue to run on Ubuntu 16.04. It will change to Basic support (from Standard support) beginning 1 July 2021. For more information about dates and support options, see [Azure HDInsight versions](./hdinsight-component-versioning.md#supported-hdinsight-versions). Ubuntu 18.04 will not be supported for HDInsight 3.6. If you'd like to use Ubuntu 18.04, you'll need to migrate your clusters to HDInsight 4.0.
-
-You need to drop and recreate your clusters if you'd like to move existing HDInsight 4.0 clusters to Ubuntu 18.04. Plan to create or recreate your clusters after Ubuntu 18.04 support becomes available.
-
-After creating the new cluster, you can SSH to your cluster and run `sudo lsb_release -a` to verify that it runs on Ubuntu 18.04. We recommend that you test your applications in your test subscriptions first before moving to production. [Learn more about the HDInsight Ubuntu 18.04 update](./hdinsight-ubuntu-1804-qa.md).
-
-### Scaling optimizations on HBase accelerated writes clusters
-HDInsight made some improvements and optimizations on scaling for HBase accelerated write enabled clusters. [Learn more about HBase accelerated write](./hbase/apache-hbase-accelerated-writes.md).
+### New Azure Monitor integration experience (Preview)
+The new Azure Monitor integration experience will be in preview in East US and West Europe with this release. Learn more about the new Azure Monitor experience [here](./log-analytics-migration.md#migrate-to-the-new-azure-monitor-integration).
## Deprecation
-No deprecation in this release.
-
-## Behavior changes
-### Disable Stardard_A5 VM size as Head Node for HDInsight 4.0
-HDInsight cluster Head Node is responsible for initializing and managing the cluster. Standard_A5 VM size has reliability issues as Head Node for HDInsight 4.0. Starting from this release, customers will not be able to create new clusters with Standard_A5 VM size as Head Node. You can use other two-core VMs like E2_v3 or E2s_v3. Existing clusters will run as is. A four-core VM is highly recommended for Head Node to ensure the high availability and reliability of your production HDInsight clusters.
-
-### Network interface resource not visible for clusters running on Azure virtual machine scale sets
-HDInsight is gradually migrating to Azure virtual machine scale sets. Network interfaces for virtual machines are no longer visible to customers for clusters that use Azure virtual machine scale sets.
+### Basic support for HDInsight 3.6 starting July 1, 2021
+Starting July 1, 2021, Microsoft offers [Basic support](hdinsight-component-versioning.md#support-options-for-hdinsight-versions) for certain HDInsight 3.6 cluster types. The Basic support plan will be available until 3 April 2022. You are automatically enrolled in Basic support starting July 1, 2021. No action is required by you to opt in. See [our documentation](hdinsight-36-component-versioning.md) for which cluster types are included under Basic support.
-## Upcoming changes
-The following changes will happen in upcoming releases.
+We don't recommend building any new solutions on HDInsight 3.6; freeze changes on existing 3.6 environments. We recommend that you [migrate your clusters to HDInsight 4.0](hdinsight-version-release.md#how-to-upgrade-to-hdinsight-40). Learn more about [what's new in HDInsight 4.0](hdinsight-version-release.md#whats-new-in-hdinsight-40).
+## Behavior changes
### HDInsight Interactive Query only supports schedule-based Autoscale
+As customer scenarios grow more mature and diverse, we have identified some limitations with Interactive Query (LLAP) load-based Autoscale. These limitations are caused by the nature of LLAP query dynamics, future load prediction accuracy issues, and issues in the LLAP scheduler's task redistribution. Due to these limitations, users may see their queries run slower on LLAP clusters when Autoscale is enabled. The effect on performance can outweigh the cost benefits of Autoscale.
-As customer scenarios grow more mature and diverse, we have identified some limitations with Interactive Query (LLAP) load-based Autoscale. These limitations are caused by the nature of LLAP query dynamics, future load prediction accuracy issues, and issues in the LLAP scheduler's task redistribution. Due to these limitations, users may see their queries run slower on LLAP clusters when Autoscale is enabled. The affect on performance can outweigh the cost benefits of Autoscale.
-
-Starting from July 2021, the Interactive Query workload in HDInsight only supports schedule-based Autoscale. You can no longer enable Autoscale on new Interactive Query clusters. Existing running clusters can continue to run with the known limitations described above.
+Starting from July 2021, the Interactive Query workload in HDInsight only supports schedule-based Autoscale. You can no longer enable load-based Autoscale on new Interactive Query clusters. Existing running clusters can continue to run with the known limitations described above.
Microsoft recommends that you move to a schedule-based Autoscale for LLAP. You can analyze your cluster's current usage pattern through the Grafana Hive dashboard. For more information, see [Automatically scale Azure HDInsight clusters](hdinsight-autoscale-clusters.md).
-### Basic support for HDInsight 3.6 starting July 1, 2021
-Starting July 1, 2021, Microsoft will offer [Basic support](hdinsight-component-versioning.md#support-options-for-hdinsight-versions) for certain HDInsight 3.6 cluster types. The Basic support plan will be available until 3 April 2022. You'll automatically be enrolled in Basic support starting July 1, 2021. No action is required by you to opt in. See [our documentation](hdinsight-36-component-versioning.md) for which cluster types are included under Basic support.
-
-We don't recommend building any new solutions on HDInsight 3.6, freeze changes on existing 3.6 environments. We recommend that you [migrate your clusters to HDInsight 4.0](hdinsight-version-release.md#how-to-upgrade-to-hdinsight-40). Learn more about [what's new in HDInsight 4.0](hdinsight-version-release.md#whats-new-in-hdinsight-40).
-
-### VM host naming will be changed on July 1, 2021
-HDInsight now uses Azure virtual machines to provision the cluster. The service is gradually migrating to [Azure virtual machine scale sets](../virtual-machine-scale-sets/overview.md). This migration will change the cluster host name FQDN name format, and the numbers in the host name will not be guarantee in sequence. If you want to get the FQDN names for each node, refer to [Find the Host names of Cluster Nodes](./find-host-name.md).
+## Upcoming changes
+The following changes will happen in upcoming releases.
-### Move to Azure virtual machine scale sets
-HDInsight now uses Azure virtual machines to provision the cluster. The service will gradually migrate to [Azure virtual machine scale sets](../virtual-machine-scale-sets/overview.md). The entire process may take months. After your regions and subscriptions are migrated, newly created HDInsight clusters will run on virtual machine scale sets without customer actions. No breaking change is expected.
+### Built-in LLAP component in ESP Spark cluster will be removed
+HDInsight 4.0 ESP Spark cluster has built-in LLAP components running on both head nodes. The LLAP components in the ESP Spark cluster were originally added for HDInsight 3.6 ESP Spark, but have no real use case for HDInsight 4.0 ESP Spark. In the next release, scheduled for September 2021, HDInsight will remove the built-in LLAP component from the HDInsight 4.0 ESP Spark cluster. This change will help offload head node workload and avoid confusion between the ESP Spark and ESP Interactive Hive cluster types.
-## Bug fixes
-HDInsight continues to make cluster reliability and performance improvements.
+## New regions
+- West US 3
+- Jio India West
+- Australia Central
## Component version change
+The following component version has been changed with this release:
+- ORC version from 1.5.1 to 1.5.9
+ You can find the current component versions for HDInsight 4.0 and HDInsight 3.6 in [this doc](./hdinsight-component-versioning.md).+
+## Back ported JIRAs
+Here are the back ported Apache JIRAs for this release:
+
+| Impacted Feature | Apache JIRA |
+|--|--|
+| Date / Timestamp | [HIVE-25104](https://issues.apache.org/jira/browse/HIVE-25104) |
+| | [HIVE-24074](https://issues.apache.org/jira/browse/HIVE-24074) |
+| | [HIVE-22840](https://issues.apache.org/jira/browse/HIVE-22840) |
+| | [HIVE-22589](https://issues.apache.org/jira/browse/HIVE-22589) |
+| | [HIVE-22405](https://issues.apache.org/jira/browse/HIVE-22405) |
+| | [HIVE-21729](https://issues.apache.org/jira/browse/HIVE-21729) |
+| | [HIVE-21291](https://issues.apache.org/jira/browse/HIVE-21291) |
+| | [HIVE-21290](https://issues.apache.org/jira/browse/HIVE-21290) |
+| UDF | [HIVE-25268](https://issues.apache.org/jira/browse/HIVE-25268) |
+| | [HIVE-25093](https://issues.apache.org/jira/browse/HIVE-25093) |
+| | [HIVE-22099](https://issues.apache.org/jira/browse/HIVE-22099) |
+| | [HIVE-24113](https://issues.apache.org/jira/browse/HIVE-24113) |
+| | [HIVE-22170](https://issues.apache.org/jira/browse/HIVE-22170) |
+| | [HIVE-22331](https://issues.apache.org/jira/browse/HIVE-22331) |
+| ORC | [HIVE-21991](https://issues.apache.org/jira/browse/HIVE-21991) |
+| | [HIVE-21815](https://issues.apache.org/jira/browse/HIVE-21815) |
+| | [HIVE-21862](https://issues.apache.org/jira/browse/HIVE-21862) |
+| Table Schema | [HIVE-20437](https://issues.apache.org/jira/browse/HIVE-20437) |
+| | [HIVE-22941](https://issues.apache.org/jira/browse/HIVE-22941) |
+| | [HIVE-21784](https://issues.apache.org/jira/browse/HIVE-21784) |
+| | [HIVE-21714](https://issues.apache.org/jira/browse/HIVE-21714) |
+| | [HIVE-18702](https://issues.apache.org/jira/browse/HIVE-18702) |
+| | [HIVE-21799](https://issues.apache.org/jira/browse/HIVE-21799) |
+| | [HIVE-21296](https://issues.apache.org/jira/browse/HIVE-21296) |
+| Workload Management | [HIVE-24201](https://issues.apache.org/jira/browse/HIVE-24201) |
+| Compaction | [HIVE-24882](https://issues.apache.org/jira/browse/HIVE-24882) |
+| | [HIVE-23058](https://issues.apache.org/jira/browse/HIVE-23058) |
+| | [HIVE-23046](https://issues.apache.org/jira/browse/HIVE-23046) |
+| Materialized view | [HIVE-22566](https://issues.apache.org/jira/browse/HIVE-22566) |
hdinsight Hdinsight Service Tags https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/hdinsight/hdinsight-service-tags.md
If your cluster is located in a region listed in this table, you only need to ad
| Australia | Australia East | HDInsight.AustraliaEast |
| &nbsp; | Australia Southeast | HDInsight.AustraliaSoutheast |
| &nbsp; | Australia Central | HDInsight.AustraliaCentral |
-| &nbsp; | Australia Central2 | HDInsight.AustraliaCentral2 |
| Brazil | Brazil South | HDInsight.BrazilSouth |
| &nbsp; | Brazil Southeast | HDInsight.BrazilSoutheast |
| China | China East 2 | HDInsight.ChinaEast2 |
If your cluster is located in a region listed in this table, you only need to ad
| France | France Central| HDInsight.FranceCentral |
| Germany | Germany West Central| HDInsight.GermanyWestCentral |
| Norway | Norway East | HDInsight.NorwayEast |
-| &nbsp; | Norway West | HDInsight.NorwayWest |
| Switzerland | Switzerland North | HDInsight.SwitzerlandNorth |
| &nbsp; | Switzerland West | HDInsight.SwitzerlandWest |
| UK | UK South | HDInsight.UKSouth |
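As an illustration of consuming these tags, the following Azure CLI sketch adds an inbound NSG rule using a regional service tag; the resource names, priority, and port are assumptions to adapt to your environment:

```bash
# Allow inbound HDInsight management traffic on 443 using a regional service tag.
az network nsg rule create \
  --resource-group MyResourceGroup \
  --nsg-name MyNsg \
  --name AllowHDInsightManagement \
  --priority 300 \
  --direction Inbound \
  --access Allow \
  --protocol '*' \
  --source-address-prefixes HDInsight.AustraliaEast \
  --destination-address-prefixes '*' \
  --destination-port-ranges 443
```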
logic-apps Business Continuity Disaster Recovery Guidance https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/logic-apps/business-continuity-disaster-recovery-guidance.md
You can set up logging for your logic app runs and send the resulting diagnostic
## Next steps
-* [Resiliency overview for Azure](/azure/architecture/framework/resiliency/overview)
+* [Design reliable Azure applications](/azure/architecture/framework/resiliency/app-design)
* [Resiliency checklist for specific Azure services](/azure/architecture/checklist/resiliency-per-service)
* [Data management for resiliency in Azure](/azure/architecture/framework/resiliency/data-management)
* [Backup and disaster recovery for Azure applications](/azure/architecture/framework/resiliency/backup-and-recovery)
logic-apps Logic Apps Securing A Logic App https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/logic-apps/logic-apps-securing-a-logic-app.md
ms.suite: integration Previously updated : 07/20/2021 Last updated : 07/28/2021

# Secure access and data in Azure Logic Apps
If your organization doesn't permit connecting to specific resources by using th
## Isolation guidance for logic apps
-You can use Azure Logic Apps in [Azure Government](../azure-government/documentation-government-welcome.md) supporting all impact levels in the regions described by the [Azure Government Impact Level 5 Isolation Guidance](../azure-government/documentation-government-impact-level-5.md#azure-logic-apps) and the [US Department of Defense Cloud Computing Security Requirements Guide (SRG)](https://dl.dod.cyber.mil/wp-content/uploads/cloud/SRG/index.html). To meet these requirements, Logic Apps supports the capability for you to create and run workflows in an environment with dedicated resources so that you can reduce the performance impact by other Azure tenants on your logic apps and avoid sharing computing resources with other tenants.
+You can use Azure Logic Apps in [Azure Government](../azure-government/documentation-government-welcome.md) supporting all impact levels in the regions described by the [Azure Government Impact Level 5 Isolation Guidance](../azure-government/documentation-government-impact-level-5.md). To meet these requirements, Logic Apps supports the capability for you to create and run workflows in an environment with dedicated resources so that you can reduce the performance impact by other Azure tenants on your logic apps and avoid sharing computing resources with other tenants.
* To run your own code or perform XML transformation, [create and call an Azure function](../logic-apps/logic-apps-azure-functions.md), rather than use the [inline code capability](../logic-apps/logic-apps-add-run-inline-code.md) or provide [assemblies to use as maps](../logic-apps/logic-apps-enterprise-integration-maps.md), respectively. Also, set up the hosting environment for your function app to comply with your isolation requirements.
- For example, to meet Impact Level 5 requirements, create your function app with the [App Service plan](../azure-functions/dedicated-plan.md) using the [**Isolated** pricing tier](../app-service/overview-hosting-plans.md) along with an [App Service Environment (ASE)](../app-service/environment/intro.md) that also uses the **Isolated** pricing tier. In this environment, function apps run on dedicated Azure virtual machines and dedicated Azure virtual networks, which provide network isolation on top of compute isolation for your apps and maximum scale-out capabilities. For more information, see [Azure Government Impact Level 5 Isolation Guidance - Azure Functions](../azure-government/documentation-government-impact-level-5.md#azure-functions).
+ For example, to meet Impact Level 5 requirements, create your function app with the [App Service plan](../azure-functions/dedicated-plan.md) using the [**Isolated** pricing tier](../app-service/overview-hosting-plans.md) along with an [App Service Environment (ASE)](../app-service/environment/intro.md) that also uses the **Isolated** pricing tier. In this environment, function apps run on dedicated Azure virtual machines and dedicated Azure virtual networks, which provide network isolation on top of compute isolation for your apps and maximum scale-out capabilities.
For more information, review the following documentation:
machine-learning How To Access Azureml Behind Firewall https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-access-azureml-behind-firewall.md
Previously updated : 07/21/2021 Last updated : 07/29/2021
For information on configuring UDR, see [Route network traffic with a routing ta
For more information on configuring application rules, see [Deploy and configure Azure Firewall](../firewall/tutorial-firewall-deploy-portal.md#configure-an-application-rule).
-1. To restrict access to models deployed to Azure Kubernetes Service (AKS), see [Restrict egress traffic in Azure Kubernetes Service](../aks/limit-egress-traffic.md).
+1. To restrict outbound traffic for models deployed to Azure Kubernetes Service (AKS), see the [Restrict egress traffic in Azure Kubernetes Service](../aks/limit-egress-traffic.md) and [Deploy ML models to Azure Kubernetes Service](how-to-deploy-azure-kubernetes-service.md#connectivity) articles.
### Diagnostics for support
The hosts in this section are used to install R packages, and are required durin
| - | - |
| **cloud.r-project.org** | Used when installing CRAN packages. |
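For example, an Azure Firewall application rule that allows this host might look like the following sketch (requires the `azure-firewall` CLI extension; the firewall name, collection name, and priority are assumptions):

```bash
# Allow outbound HTTPS to the CRAN host through Azure Firewall.
az network firewall application-rule create \
  --resource-group MyResourceGroup \
  --firewall-name MyFirewall \
  --collection-name aml-required-hosts \
  --name allow-cran \
  --priority 200 \
  --action Allow \
  --protocols Https=443 \
  --source-addresses '*' \
  --target-fqdns cloud.r-project.org
```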
+### Azure Kubernetes Services hosts
+
+For information on the hosts that AKS needs to communicate with, see the [Restrict egress traffic in Azure Kubernetes Service](../aks/limit-egress-traffic.md) and [Deploy ML models to Azure Kubernetes Service](how-to-deploy-azure-kubernetes-service.md#connectivity) articles.
+ ### Visual Studio Code hosts

The hosts in this section are used to install Visual Studio Code packages to establish a remote connection between Visual Studio Code and compute instances in your Azure Machine Learning workspace.
machine-learning How To Attach Compute Targets https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-attach-compute-targets.md
In this article, learn how to set up your workspace to use these compute resourc
* Apache Spark pools (powered by Azure Synapse Analytics) * Azure HDInsight * Azure Batch
-* Azure Databricks
+* Azure Databricks - used as a training compute target only in [machine learning pipelines](how-to-create-machine-learning-pipelines.md)
* Azure Data Lake Analytics * Azure Container Instance * Azure Kubernetes Service & Azure Arc enabled Kubernetes (preview) - To use compute targets managed by Azure Machine Learning, see: * [Azure Machine Learning compute instance](how-to-create-manage-compute-instance.md)
See these notebooks for examples of training with various compute targets:
* [Tutorial: Train a model](tutorial-train-models-with-aml.md) uses a managed compute target to train a model.
* Learn how to [efficiently tune hyperparameters](how-to-tune-hyperparameters.md) to build better models.
* Once you have a trained model, learn [how and where to deploy models](how-to-deploy-and-where.md).
-* [Use Azure Machine Learning with Azure Virtual Networks](./how-to-network-security-overview.md)
+* [Use Azure Machine Learning with Azure Virtual Networks](./how-to-network-security-overview.md)
machine-learning How To Deploy Azure Kubernetes Service https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-deploy-azure-kubernetes-service.md
Azureml-fe scales both up (vertically) to use more cores, and out (horizontally)
When scaling down and in, CPU usage is used. If the CPU usage threshold is met, the front end will first be scaled down. If the CPU usage drops to the scale-in threshold, a scale-in operation happens. Scaling up and out will only occur if there are enough cluster resources available.
+<a id="connectivity"></a>
+ ## Understand connectivity requirements for AKS inferencing cluster

When Azure Machine Learning creates or attaches an AKS cluster, the AKS cluster is deployed with one of the following two network models:
The following diagram shows the connectivity requirements for AKS inferencing. B
![Connectivity Requirements for AKS Inferencing](./media/how-to-deploy-aks/aks-network.png)
+For general AKS connectivity requirements, see [Control egress traffic for cluster nodes in Azure Kubernetes Service](../aks/limit-egress-traffic.md).
+ ### Overall DNS resolution requirements DNS resolution within an existing VNet is under your control. For example, a firewall or custom DNS server. The following hosts must be reachable:
machine-learning How To Machine Learning Interpretability Automl https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-machine-learning-interpretability-automl.md
In this article, you learn how to:
Retrieve the explanation from the `best_run`, which includes explanations for both raw and engineered features. > [!NOTE]
-> Interpretability, best model explanation, is not available for the TCNForecaster model if it's recommended as the best model by the Auto ML forecasting experiments.
+> Interpretability (model explanation) is not available for the TCNForecaster model recommended by Auto ML forecasting experiments.
### Download the engineered feature importances from the best run
machine-learning How To Run Jupyter Notebooks https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-run-jupyter-notebooks.md
Previously updated : 01/19/2021 Last updated : 07/22/2021 #Customer intent: As a data scientist, I want to run Jupyter notebooks in my workspace in Azure Machine Learning studio.
From the snippets panel, you can also submit a request to add new snippets.
:::image type="content" source="media/how-to-run-jupyter-notebooks/propose-new-snippet.png" alt-text="Snippet panel allows you to propose a new snippet":::
+## Collaborate with notebook comments (preview)
+
+Use a notebook comment to collaborate with others who have access to your notebook.
+
+Toggle the comments pane on and off with the **Notebook comments** tool at the top of the notebook. If your screen isn't wide enough, find this tool by first selecting the **...** at the end of the set of tools.
++
+Whether the comments pane is visible or not, you can add a comment into any code cell:
+
+1. Select some text in the code cell. You can only comment on text in a code cell.
+1. Use the **New comment thread** tool to create your comment.
+ :::image type="content" source="media/how-to-run-jupyter-notebooks/comment-from-code.png" alt-text="Screenshot of add a comment to a code cell tool.":::
+1. If the comments pane was previously hidden, it will now open.
+1. Type your comment and post it with the tool or use **Ctrl+Enter**.
+1. Once a comment is posted, select **...** in the top right to:
+ * Edit the comment
+ * Resolve the thread
+ * Delete the thread
+
+Text that has been commented will appear with a purple highlight in the code. When you select a comment in the comments pane, your notebook will scroll to the cell that contains the highlighted text.
+
+> [!NOTE]
+> Comments are saved into the code cell's metadata.
+ ## Clean your notebook (preview)

Over the course of creating a notebook, you typically end up with cells you used for data exploration or debugging. The *gather* feature will help you produce a clean notebook without these extraneous cells.
marketplace Azure Consumption Commitment Enrollment https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/marketplace/azure-consumption-commitment-enrollment.md
Title: Azure Consumption Commitment enrollment - Azure Marketplace description: This article includes an overview of the Microsoft Azure Consumption Commitment (MACC) program, how to see if your offer is enrolled in the MACC program, and the requirements for MACC. --++ Previously updated : 06/03/2021 Last updated : 07/27/2021

# Azure Consumption Commitment enrollment
This article is for commercial marketplace publishers and describes Microsoft Az
## MACC program
-The _Microsoft Azure Consumption Commitment (MACC)_ program is for [transactable offers](marketplace-commercial-transaction-capabilities-and-considerations.md#transact-overview) that are published to Azure Marketplace. An Azure customer's cost of transactable offers enrolled into this program contribute towards their organization's Microsoft Azure Consumption Commitment.
+The _Microsoft Azure Consumption Commitment (MACC)_ program is for [transactable offers](marketplace-commercial-transaction-capabilities-and-considerations.md#transact-overview) that are published to Azure Marketplace. Azure Marketplace purchases of transactable offers that are enrolled in this program contribute towards an organization's Microsoft Azure Consumption Commitment.
### Requirements for an offer to be enrolled in MACC
-An offer must meet the following requirements to be enrolled in the MACC program. Requests for an exception to these requirements will not be entertained.
-
-To be enrolled in MACC, an offer must be:
+An offer must meet the following requirements to be enrolled in the MACC program:
- Transactable with a pricing plan greater than $0

> [!NOTE]
To be enrolled in MACC, an offer must be:
***Figure 1: Offer that is enrolled in the MACC program***

> [!NOTE]
-> MACC program status for offers published to Azure Marketplace is updated weekly on Mondays. This means that if you publish an offer that meets the eligibility requirements for the MACC program, the status in Partner Center will not show the Enabled status until the following Monday.
+> MACC program status for offers published to Azure Marketplace is updated weekly on Mondays. This means that if you publish an offer that meets the eligibility requirements for the MACC program, the status in Partner Center will not show the Enrolled status until the following Monday.
## Next steps
marketplace Gtm Your Marketplace Benefits https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/marketplace/gtm-your-marketplace-benefits.md
Each time you publish on Microsoft AppSource or Azure Marketplace, you will have
The table below summarizes the eligibility requirements for list, trial, and consulting offers:
-![Go-To-Market benefits](./media/marketplace-publishers-guide/go-to-market-gtm-eligibility-requirements.png)
+[![Go-To-Market benefits](media/marketplace-publishers-guide/go-to-market-gtm-eligibility-requirements.png)](media/marketplace-publishers-guide/go-to-market-gtm-eligibility-requirements.png#lightbox)
Detailed descriptions for all these benefits can be found in the [Marketplace Rewards program deck](https://aka.ms/marketplacerewards).
All partners who have a live transactable offer get to work with a dedicated eng
### Marketing benefits for transact offers
-![Marketing benefits](./media/marketplace-publishers-guide/marketing-benefit.png)
+[![Marketing benefits](media/marketplace-publishers-guide/marketing-benefit.png)](media/marketplace-publishers-guide/marketing-benefit.png#lightbox)
### Sales benefits for transact offers
-![Sales benefits](./media/marketplace-publishers-guide/sales-benefit.png)
+[![Sales benefits](media/marketplace-publishers-guide/sales-benefit.png)](media/marketplace-publishers-guide/sales-benefit.png#lightbox)
### Technical benefits for transact offers
-![Technical benefits](./media/marketplace-publishers-guide/technical-benefit.png)
+[![Technical benefits](media/marketplace-publishers-guide/technical-benefit.png)](media/marketplace-publishers-guide/technical-benefit.png#lightbox)
Detailed descriptions for all these benefits can be found in the [Marketplace Rewards program deck](https://aka.ms/marketplacerewards).
mysql Concepts Backup https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/mysql/concepts-backup.md
Last updated 3/27/2020+ # Backup and restore in Azure Database for MySQL
The General purpose storage is the backend storage supporting [General Purpose](
#### General purpose storage v2 servers (supports up to 16-TB storage)
-In a subset of [Azure regions](./concepts-pricing-tiers.md#storage), all newly provisioned servers can support general purpose storage up to 16-TB storage. In other words, storage up to 16-TB storage is the default general purpose storage for all the [regions](concepts-pricing-tiers.md#storage) where it is supported. Backups on these 16-TB storage servers are snapshot-based. The first full snapshot backup is scheduled immediately after a server is created. That first full snapshot backup is retained as the server's base backup. Subsequent snapshot backups are differential backups only.
-
-Differential snapshot backups occur at least once a day. Differential snapshot backups do not occur on a fixed schedule. Differential snapshot backups occur every 24 hours unless the transaction log (binlog in MySQL) exceeds 50 GB since the last differential backup. In a day, a maximum of six differential snapshots are allowed.
-
-Transaction log backups occur every five minutes.
+In a subset of [Azure regions](./concepts-pricing-tiers.md#storage), all newly provisioned servers can support general purpose storage of up to 16 TB. In other words, up to 16 TB of storage is the default general purpose storage in all the [regions](concepts-pricing-tiers.md#storage) where it is supported. Backups on these 16-TB storage servers are snapshot-based. The first snapshot backup is scheduled immediately after a server is created. Snapshot backups are taken once daily. Transaction log backups occur every five minutes.
For more information on Basic and General purpose storage, see the [storage documentation](./concepts-pricing-tiers.md#storage).
Backups are retained based on the backup retention period setting on the server.
The backup retention period governs how far back in time a point-in-time restore can be retrieved, since it's based on backups available. The backup retention period can also be treated as a recovery window from a restore perspective. All backups required to perform a point-in-time restore within the backup retention period are retained in backup storage. For example, if the backup retention period is set to 7 days, the recovery window is considered last 7 days. In this scenario, all the backups required to restore the server in last 7 days are retained. With a backup retention window of seven days: -- Servers with up to 4-TB storage will retain up to 2 full database backups, all the differential backups, and transaction log backups performed since the earliest full database backup.-- Servers with up to 16-TB storage will retain the full database snapshot, all the differential snapshots and transaction log backups in last 8 days.
+- General purpose storage v1 servers (supporting up to 4-TB storage) will retain up to 2 full database backups, all the differential backups, and transaction log backups performed since the earliest full database backup.
+- General purpose storage v2 servers (supporting up to 16-TB storage) will retain the full database snapshots and transaction log backups for the last 8 days.
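+
+To see where your recovery window currently starts, you can query the server's earliest restore point, which advances as older backups age out of the retention period. The sketch below shells out to the Azure CLI from Python; the resource group and server names are placeholders, and the `az` CLI is assumed to be installed and signed in.
+
+```python
+import subprocess
+
+# earliestRestoreDate is the oldest point in time still covered by retained backups.
+subprocess.run([
+    "az", "mysql", "server", "show",
+    "--resource-group", "myresourcegroup",   # placeholder
+    "--name", "mydemoserver",                # placeholder
+    "--query", "earliestRestoreDate",
+], check=True)
+```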
#### Long-term retention
Long-term retention of backups beyond 35 days is currently not natively supporte
Azure Database for MySQL provides the flexibility to choose between locally redundant or geo-redundant backup storage in the General Purpose and Memory Optimized tiers. When the backups are stored in geo-redundant backup storage, they are not only stored within the region in which your server is hosted, but are also replicated to a [paired data center](../best-practices-availability-paired-regions.md). This geo-redundancy provides better protection and the ability to restore your server in a different region in the event of a disaster. The Basic tier only offers locally redundant backup storage.
+> [!NOTE]
+>For the following regions - Central India, France Central, UAE North, and South Africa North - General purpose storage v2 is in public preview. If you create a source server on General purpose storage v2 (supporting up to 16-TB storage) in these regions, enabling geo-redundant backup is not supported.
+ #### Moving from locally redundant to geo-redundant backup storage Configuring locally redundant or geo-redundant storage for backup is only allowed during server create. Once the server is provisioned, you cannot change the backup storage redundancy option. In order to move your backup storage from locally redundant storage to geo-redundant storage, creating a new server and migrating the data using [dump and restore](concepts-migrate-dump-restore.md) is the only supported option.
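+
+Because redundancy can only be set at create time, a server intended for geo-restore must be created with geo-redundant backup enabled up front. A minimal sketch, driving the Azure CLI from Python; all names, the location, and the password are placeholders.
+
+```python
+import subprocess
+
+# Create a General Purpose server with geo-redundant backup storage.
+# The redundancy option cannot be changed after the server is provisioned.
+subprocess.run([
+    "az", "mysql", "server", "create",
+    "--resource-group", "myresourcegroup",          # placeholder
+    "--name", "mydemoserver",                       # placeholder
+    "--location", "westus2",
+    "--admin-user", "myadmin",
+    "--admin-password", "<server_admin_password>",  # placeholder
+    "--sku-name", "GP_Gen5_2",
+    "--geo-redundant-backup", "Enabled",
+], check=True)
+```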
You may need to wait for the next transaction log backup to be taken before you
### Geo-restore
-You can restore a server to another Azure region where the service is available if you have configured your server for geo-redundant backups. Servers that support up to 4 TB of storage can be restored to the geo-paired region, or to any region that supports up to 16 TB of storage. For servers that support up to 16 TB of storage, geo-backups can be restored in any region that support 16-TB servers as well. Review [Azure Database for MySQL pricing tiers](concepts-pricing-tiers.md) for the list of supported regions.
+You can restore a server to another Azure region where the service is available if you have configured your server for geo-redundant backups.
+- General purpose storage v1 servers (supporting up to 4-TB storage) can be restored to the geo-paired region, or to any Azure region that supports Azure Database for MySQL Single Server service.
+- General purpose storage v2 servers (supporting up to 16-TB storage) can only be restored to Azure regions that support the General purpose storage v2 server infrastructure.
+Review [Azure Database for MySQL pricing tiers](./concepts-pricing-tiers.md#storage) for the list of supported regions.
Geo-restore is the default recovery option when your server is unavailable because of an incident in the region where the server is hosted. If a large-scale incident in a region results in unavailability of your database application, you can restore a server from the geo-redundant backups to a server in any other region. Geo-restore utilizes the most recent backup of the server. There is a delay between when a backup is taken and when it is replicated to a different region. This delay can be up to an hour, so, if a disaster occurs, there can be up to one hour of data loss.
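
As a sketch of how a geo-restore can be scripted, the snippet below drives the Azure CLI from Python; the server names, resource group, and target location are placeholders, and the source server must have been created with geo-redundant backup enabled.

```python
import subprocess

# Restore a new server in another region from the geo-redundant backups
# of an existing server.
subprocess.run([
    "az", "mysql", "server", "georestore",
    "--resource-group", "myresourcegroup",   # placeholder
    "--name", "recoveredserver",             # placeholder new server name
    "--source-server", "mydemoserver",       # placeholder source server
    "--location", "eastus",                  # placeholder target region
], check=True)
```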
mysql Concepts Backup Restore https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/mysql/flexible-server/concepts-backup-restore.md
These backup files cannot be exported. The backups can only be used for restore
## Backup frequency
-Backups on flexible servers are snapshot-based. The first full snapshot backup is scheduled immediately after a server is created. That first full snapshot backup is retained as the server's base backup. Subsequent snapshot backups are differential backups only.
-
-Differential snapshot backups occur at least once a day. Differential snapshot backups do not occur on a fixed schedule. Differential snapshot backups occur every 24 hours unless the binary logs in MySQL exceeds 50-GB since the last differential backup. In a day, a maximum of six differential snapshots are allowed. Transaction log backups occur every five minutes.
+Backups on flexible servers are snapshot-based. The first snapshot backup is scheduled immediately after a server is created. Snapshot backups are taken once daily. Transaction log backups occur every five minutes.
## Backup retention
Point-in-time restore is useful in multiple scenarios. Some of the use cases tha
You can choose between a latest restore point and a custom restore point via [Azure portal](how-to-restore-server-portal.md). -- **Latest restore point**: The latest restore point helps you to restore the server to the last backup performed on the source server. The timestamp for restore will also displayed on the portal. This option is useful to quickly restore the server to the most updated state.
+- **Latest restore point**: The latest restore point helps you to restore the server to the last backup performed on the source server. The timestamp for restore will also be displayed on the portal. This option is useful to quickly restore the server to the most updated state.
- **Custom restore point**: This will allow you to choose any point-in-time within the retention period defined for this flexible server. This option is useful to restore the server at the precise point in time to recover from a user error.
-The estimated time of recovery depends on several factors including the database sizes, the transaction log backup size, the compute size of the SKU, and the time of the restore as well. The transaction log recovery is the most time consuming part of the restore process. If the restore time is chosen closer to the full or differential snapshot backup schedule, the restores are faster since transaction log application is minimal. To estimate the accurate recovery time for your server, we highly recommend to test it in your environment as it has too many environment specific variables.
+The estimated time of recovery depends on several factors, including the database sizes, the transaction log backup size, the compute size of the SKU, and the time of the restore. The transaction log recovery is the most time-consuming part of the restore process. If the restore time is chosen closer to the snapshot backup schedule, the restore operations are faster since transaction log application is minimal. To estimate the accurate recovery time for your server, we highly recommend testing it in your environment, as it has too many environment-specific variables.
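+
+As an illustration, a custom point-in-time restore can be scripted with the Azure CLI, here driven from Python; the server names, resource group, and timestamp are placeholders.
+
+```python
+import subprocess
+
+# Restore a flexible server to a specific point in time (UTC, ISO 8601).
+subprocess.run([
+    "az", "mysql", "flexible-server", "restore",
+    "--resource-group", "myresourcegroup",       # placeholder
+    "--name", "myrestoredserver",                # placeholder new server
+    "--source-server", "mydemoserver",           # placeholder source server
+    "--restore-time", "2021-07-28T01:30:00Z",    # placeholder timestamp
+], check=True)
+```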
> [!IMPORTANT] > If you are restoring a flexible server configured with zone redundant high availability, the restored server will be configured in the same region and zone as your primary server, and deployed as a single flexible server in a non-HA mode. Refer to [zone redundant high availability](concepts-high-availability.md) for flexible server.
After a restore from either **latest restore point** or **custom restore point**
- Ensure appropriate logins and database level permissions are in place. - Configure alerts, as appropriate.
+## Frequently Asked Questions (FAQs)
+
+### Backup related questions
+
+- **How do I back up my server?**
+By default, Azure Database for MySQL enables automated backups of your entire server (encompassing all databases created) with a default 7-day retention period. The only way to manually take a backup is by using community tools such as mysqldump as documented [here](../concepts-migrate-dump-restore.md#dump-and-restore-using-mysqldump-utility) or mydumper as documented [here](../concepts-migrate-mydumper-myloader.md#create-a-backup-using-mydumper). If you wish to back up Azure Database for MySQL to Blob storage, refer to our tech community blog [Backup Azure Database for MySQL to a Blob Storage](https://techcommunity.microsoft.com/t5/azure-database-for-mysql/backup-azure-database-for-mysql-to-a-blob-storage/ba-p/803830).
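+
+As a sketch of such a manual backup, the snippet below runs mysqldump from Python; the server host, admin user, password, and database name are placeholders, and the mysqldump client is assumed to be installed locally.
+
+```python
+import subprocess
+
+# Dump one database to a local file; --single-transaction takes a
+# consistent snapshot of InnoDB tables without locking them.
+with open("testdb_backup.sql", "w") as out:
+    subprocess.run([
+        "mysqldump",
+        "-h", "mydemoserver.mysql.database.azure.com",  # placeholder host
+        "-u", "myadmin",                                # placeholder user
+        "-p<server_admin_password>",                    # placeholder; prefer an option file
+        "--single-transaction",
+        "testdb",                                       # placeholder database
+    ], stdout=out, check=True)
+```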
+
+- **Can I configure automatic backups to be retained for the long term?**
+No, currently we only support a maximum of 35 days of automated backup retention. You can take manual backups and use them for long-term retention requirements.
+
+- **What are the backup windows for my server? Can I customize them?**
+The first snapshot backup is scheduled immediately after a server is created. Snapshot backups are taken once daily. Transaction log backups occur every five minutes. Backup windows are inherently managed by Azure and cannot be customized.
+
+- **Are my backups encrypted?**
+All Azure Database for MySQL data, backups and temporary files created during query execution are encrypted using AES 256-bit encryption. The storage encryption is always on and cannot be disabled.
+
+- **Can I restore a single database or only a few databases?**
+Restoring a single database, a few databases, or individual tables is not supported. If you want to restore specific databases, perform a point-in-time restore and then extract the tables or databases needed.
+
+- **Is my server available during the backup window?**
+Yes. Backups are online operations and are snapshot-based. The snapshot operation only takes a few seconds and doesn't interfere with production workloads, ensuring high availability of the server.
+
+- **When setting up the maintenance window for the server, do we need to account for the backup window?**
+No, backups are triggered internally as part of the managed service and have no bearing on the managed maintenance window.
+
+- **Where are my automated backups stored and how do I manage their retention?**
+Azure Database for MySQL automatically creates server backups and stores them in user-configured, locally redundant storage or in geo-redundant storage. These backup files can't be exported. The default backup retention period is seven days. You can optionally configure the backup retention period from 1 to 35 days.
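+
+For example, the retention period can be changed after provisioning; a minimal sketch using the Azure CLI from Python, with placeholder names:
+
+```python
+import subprocess
+
+# Extend automated backup retention from the default 7 days to 14 days (max 35).
+subprocess.run([
+    "az", "mysql", "flexible-server", "update",
+    "--resource-group", "myresourcegroup",   # placeholder
+    "--name", "mydemoserver",                # placeholder
+    "--backup-retention", "14",
+], check=True)
+```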
+
+- **How can I validate my backups?**
+The best way to validate your backups is to perform periodic point-in-time restores and confirm that the backups are valid and restorable. Backup operations or files are not exposed to the end users.
+
+- **Where can I see the backup usage?**
+In the Azure portal, under the **Monitoring** tab, in the **Metrics** section, you can find the [Backup Storage Used](../concepts-monitoring.md) metric, which can help you monitor the total backup usage.
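+
+The same metric can be pulled programmatically. A minimal sketch using the Azure CLI from Python; the subscription ID, resource group, and server name in the resource ID are placeholders, and the metric name is assumed to be `backup_storage_used` as exposed by the service.
+
+```python
+import subprocess
+
+# Query the Backup Storage Used metric for a flexible server.
+resource_id = (
+    "/subscriptions/<subscription-id>/resourceGroups/myresourcegroup"
+    "/providers/Microsoft.DBforMySQL/flexibleServers/mydemoserver"
+)
+subprocess.run([
+    "az", "monitor", "metrics", "list",
+    "--resource", resource_id,
+    "--metric", "backup_storage_used",
+    "--aggregation", "Maximum",
+], check=True)
+```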
+
+- **What happens to my backups if I delete my server?**
+If you delete the server, all backups that belong to the server are also deleted and cannot be recovered. To protect server resources from accidental deletion or unexpected changes after deployment, administrators can use [management locks](../../azure-resource-manager/management/lock-resources.md).
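+
+A minimal sketch of applying such a lock with the Azure CLI from Python; the lock, resource group, and server names are placeholders:
+
+```python
+import subprocess
+
+# A CanNotDelete lock blocks deletion of the server until the lock is removed.
+subprocess.run([
+    "az", "lock", "create",
+    "--name", "prevent-server-delete",
+    "--lock-type", "CanNotDelete",
+    "--resource-group", "myresourcegroup",                    # placeholder
+    "--resource", "mydemoserver",                             # placeholder
+    "--resource-type", "Microsoft.DBforMySQL/flexibleServers",
+], check=True)
+```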
+
+- **How will I be charged and billed for my use of backups?**
+Flexible server provides up to 100% of your provisioned server storage as backup storage at no additional cost. Any additional backup storage used is charged in GB per month as per the [pricing model](https://azure.microsoft.com/pricing/details/mysql/server/). Backup storage billing is governed by the selected backup retention period and the chosen backup redundancy option, in addition to the transactional activity on the server, which directly impacts the total backup storage used.
+
+- **How are backups retained for stopped servers?**
+No new backups are performed for stopped servers. All backups taken before the server was stopped (and within the retention window) are retained until the server is restarted, after which backup retention for the active server is governed by its backup retention window.
+
+- **How will I be billed for backups for a stopped server?**
+While your server instance is stopped, you are charged for provisioned storage (including Provisioned IOPS) and backup storage (backups stored within your specified retention window). Free backup storage is limited to the size of your provisioned database and only applies to active servers.
+
+### Restore related questions
+
+- **How do I restore my server?**
+The Azure portal supports point-in-time restore for all servers, allowing you to restore to the latest or a custom restore point. To manually restore your server from the backups taken by mysqldump/myDumper, read [Restore your database using myLoader](../concepts-migrate-mydumper-myloader.md#restore-your-database-using-myloader).
+
+- **Why is my restore taking so much time?**
+The estimated time for the recovery of the server depends on several factors:
+ - The size of the databases. As a part of the recovery process, the database needs to be hydrated from the last physical backup and hence the time taken to recover will be proportional to the size of the database.
+ - The active portion of transaction activity that needs to be replayed to recover. Recovery can take longer depending on the additional transaction activity from the last successful checkpoint.
+ - The network bandwidth, if the restore is to a different region.
+ - The number of concurrent restore requests being processed in the target region.
+ - The presence of a primary key in the tables in the database. For faster recovery, consider adding a primary key to all the tables in your database.
++ ## Next steps - Learn about [business continuity](./concepts-business-continuity.md)
mysql Concepts Backup Restore https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/partner-twilio-cloud-services-dotnet-phone-call-web-role.md
- Title: How to make a phone call from Twilio (.NET) | Microsoft Docs
-description: Learn how to make a phone call with the Twilio API service on Azure. Code samples written in .NET.
------ Previously updated : 05/04/2016---
-# How to make a phone call using Twilio in a web role on Azure
-This guide demonstrates how to use Twilio to make a call from a web page hosted in Azure. The resulting application prompts the user to make a call with the given number and message, as shown in the following screenshot.
-
-![Azure call form using Twilio and ASP.NET][twilio_dotnet_basic_form]
-
-## <a name="twilio-prereqs"></a>Prerequisites
-You will need to do the following to use the code in this topic:
-
-1. Acquire a Twilio account and authentication token from the [Twilio Console][twilio_console]. To get started with Twilio, sign up at [https://www.twilio.com/try-twilio][try_twilio]. You can evaluate pricing at [https://www.twilio.com/pricing][twilio_pricing]. For information about the API provided by Twilio, see [https://www.twilio.com/voice/api][twilio_api].
-2. Add the *Twilio .NET library* to your web role. See **To add the Twilio libraries to your web role project**, later in this topic.
-
-You should be familiar with creating a basic [Web Role on Azure][azure_webroles_get_started].
-
-## <a name="howtocreateform"></a>How to: Create a web form for making a call
-<a id="use_nuget"></a>To add the Twilio libraries to your web role project:
-
-1. Open your solution in Visual Studio.
-2. Right-click **References**.
-3. Click **Manage NuGet Packages**.
-4. Click **Online**.
-5. In the search online box, type *twilio*.
-6. Click **Install** on the Twilio package.
-
-The following code shows how to create a web form to retrieve user data for making a call. In this example, an ASP.NET Web Role named **TwilioCloud** is created.
-
-```aspx
-<%@ Page Title="Home Page" Language="C#" MasterPageFile="~/Site.master"
- AutoEventWireup="true" CodeBehind="Default.aspx.cs"
- Inherits="WebRole1._Default" %>
-
-<asp:Content ID="HeaderContent" runat="server" ContentPlaceHolderID="HeadContent">
-</asp:Content>
-<asp:Content ID="BodyContent" runat="server" ContentPlaceHolderID="MainContent">
- <div>
- <asp:BulletedList ID="varDisplay" runat="server" BulletStyle="NotSet">
- </asp:BulletedList>
- </div>
- <div>
- <p>Fill in all fields and click <b>Make this call</b>.</p>
- <div>
- To:<br /><asp:TextBox ID="toNumber" runat="server" /><br /><br />
- Message:<br /><asp:TextBox ID="message" runat="server" /><br /><br />
- <asp:Button ID="callpage" runat="server" Text="Make this call"
- onclick="callpage_Click" />
- </div>
- </div>
-</asp:Content>
-```
-
-## <a id="howtocreatecode"></a>How to: Create the code to make the call
-The following code, which is called when the user completes the form, creates the call message and generates the call. In this example, the code is run in the onclick event handler of the button on the form. (Use your Twilio account and authentication token instead of the placeholder values assigned to `accountSID` and `authToken` in the code below.)
-
-```csharp
-using System;
-using System.Collections.Generic;
-using System.Linq;
-using System.Web;
-using System.Web.UI;
-using System.Web.UI.WebControls;
-using Twilio;
-using Twilio.Http;
-using Twilio.Types;
-using Twilio.Rest.Api.V2010;
-
-namespace WebRole1
-{
- public partial class _Default : System.Web.UI.Page
- {
- protected void Page_Load(object sender, EventArgs e)
- {
-
- }
-
- protected void callpage_Click(object sender, EventArgs e)
- {
- // Call processing happens here.
-
- // Use your account SID and authentication token instead of
- // the placeholders shown here.
- var accountSID = "ACNNNNNNNNNNNNNNNNNNNNNNNNNNNNNN";
- var authToken = "NNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNN";
-
- // Instantiate an instance of the Twilio client.
- TwilioClient.Init(accountSID, authToken);
-
- // Retrieve the account; its details are displayed later.
- var account = AccountResource.Fetch(accountSID);
-
- this.varDisplay.Items.Clear();
-
- if (this.toNumber.Text == "" || this.message.Text == "")
- {
- this.varDisplay.Items.Add(
- "You must enter a phone number and a message.");
- }
- else
- {
- // Retrieve the values entered by the user.
- var to = new PhoneNumber(this.toNumber.Text);
- var from = new PhoneNumber("+14155992671");
- var myMessage = this.message.Text;
-
- // Create a URL using the Twilio message and the user-entered
- // text. You must replace spaces in the user's text with '%20'
- // to make the text suitable for a URL.
- var url = $"https://twimlets.com/message?Message%5B0%5D={myMessage.Replace(" ", "%20")}";
- var twimlUri = new Uri(url);
-
- // The Twilio REST API version used by this library (Twilio.Rest.Api.V2010).
- var apiVersion = "2010-04-01";
-
- // Display the endpoint, API version, and the URL for the message.
- this.varDisplay.Items.Add($"Using Twilio endpoint {account.Uri}");
- this.varDisplay.Items.Add($"Twilio client API version is {apiVersion}");
- this.varDisplay.Items.Add($"The URL is {url}");
-
- // Place the call.
- var call = CallResource.Create(to, from, url: twimlUri);
- this.varDisplay.Items.Add("Call status: " + call.Status);
- }
- }
- }
-}
-```
-
-The call is made, and the Twilio endpoint, API version, and the call status are displayed. The following screenshot shows output from a sample run.
-
-![Azure call response using Twilio and ASP.NET][twilio_dotnet_basic_form_output]
-
-More information about TwiML can be found at [https://www.twilio.com/docs/api/twiml][twiml]. More information about &lt;Say&gt; and other Twilio verbs can be found at [https://www.twilio.com/docs/api/twiml/say][twilio_say].
-
-## <a id="nextsteps"></a>Next steps
-This code was provided to show you basic functionality using Twilio in an ASP.NET web role on Azure. Before deploying to Azure in production, you may want to add more error handling or other features. For example:
-
-* Instead of using a web form, you could use Azure Blob storage or an Azure SQL Database instance to store phone numbers and call text. For information about using Blobs in Azure, see [How to use the Azure Blob storage service in .NET][howto_blob_storage_dotnet]. For information about using SQL Database, see [How to use Azure SQL Database in .NET applications][howto_sql_azure_dotnet].
-* You could use `RoleEnvironment.GetConfigurationSettingValue` to retrieve the Twilio account ID and authentication token from your deployment's configuration settings, instead of hard-coding the values in your form. For information about the `RoleEnvironment` class, see [Microsoft.WindowsAzure.ServiceRuntime Namespace][azure_runtime_ref_dotnet].
-* Read the Twilio Security Guidelines at [https://www.twilio.com/docs/security][twilio_docs_security].
-* Learn more about Twilio at [https://www.twilio.com/docs][twilio_docs].
-
-## <a name="seealso"></a>See also
-* [How to use Twilio for Voice and SMS capabilities from Azure](twilio-dotnet-how-to-use-for-voice-sms.md)
-
-[twilio_console]: https://www.twilio.com/console
-[twilio_pricing]: https://www.twilio.com/pricing
-[try_twilio]: https://www.twilio.com/try-twilio
-[twilio_api]: https://www.twilio.com/voice/api
-[verify_phone]: https://www.twilio.com/console/phone-numbers/verified
-
-[twilio_dotnet_basic_form]: ./media/partner-twilio-cloud-services-dotnet-phone-call-web-role/WA_twilio_dotnet_basic_form.png
-[twilio_dotnet_basic_form_output]: ./media/partner-twilio-cloud-services-dotnet-phone-call-web-role/WA_twilio_dotnet_basic_form_output.png
-
-[twiml]: https://www.twilio.com/docs/api/twiml
---
-[howto_twilio_voice_sms_dotnet]: /develop/net/how-to-guides/twilio/
-
-[howto_blob_storage_dotnet]: https://www.windowsazure.com/develop/net/how-to-guides/blob-storage/
-
-[howto_sql_azure_dotnet]: https://www.windowsazure.com/develop/net/how-to-guides/sql-database/
--
-[twilio_docs_security]: https://www.twilio.com/docs/security
-[twilio_docs]: https://www.twilio.com/docs
-[twilio_say]: https://www.twilio.com/docs/api/twiml/say
--
-[azure_runtime_ref_dotnet]: /previous-versions/azure/reference/ee741722(v=azure.100)
-[azure_webroles_get_started]: ./cloud-services/cloud-services-dotnet-get-started.md
mysql Concepts Backup Restore https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/partner-twilio-java-how-to-use-voice-sms.md
- Title: How to Use Twilio for Voice and SMS (Java) | Microsoft Docs
-description: Learn how to make a phone call and send an SMS message with the Twilio API service on Azure. Code samples written in Java.
------ Previously updated : 11/25/2014----
-# How to Use Twilio for Voice and SMS Capabilities in Java
-This guide demonstrates how to perform common programming tasks with the Twilio API service on Azure. The scenarios covered include making a phone call and sending a Short Message Service (SMS) message. For more information on Twilio and using voice and SMS in your applications, see the [Next Steps](#NextSteps) section.
-
-## <a id="WhatIs"></a>What is Twilio?
-Twilio is a telephony web-service API that lets you use your existing web languages and skills to build voice and SMS applications. Twilio is a third-party service (not an Azure feature and not a Microsoft product).
-
-**Twilio Voice** allows your applications to make and receive phone calls. **Twilio SMS** allows your applications to make and receive SMS messages. **Twilio Client** allows your applications to enable voice communication using existing Internet connections, including mobile connections.
-
-## <a id="Pricing"></a>Twilio Pricing and Special Offers
-Information about Twilio pricing is available at [Twilio Pricing][twilio_pricing]. Azure customers receive a [special offer][special_offer]: a free credit of 1000 texts or 1000 inbound minutes. To sign up for this offer or get more information, please visit [https://ahoy.twilio.com/azure][special_offer].
-
-## <a id="Concepts"></a>Concepts
-The Twilio API is a RESTful API that provides voice and SMS functionality for applications. Client libraries are available in multiple languages; for a list, see [Twilio API Libraries][twilio_libraries].
-
-Key aspects of the Twilio API are Twilio verbs and Twilio Markup Language (TwiML).
-
-### <a id="Verbs"></a>Twilio Verbs
-The API makes use of Twilio verbs; for example, the **&lt;Say&gt;** verb instructs Twilio to audibly deliver a message on a call.
-
-The following is a list of Twilio verbs.
-
-* **&lt;Dial&gt;**: Connects the caller to another phone.
-* **&lt;Gather&gt;**: Collects numeric digits entered on the telephone keypad.
-* **&lt;Hangup&gt;**: Ends a call.
-* **&lt;Play&gt;**: Plays an audio file.
-* **&lt;Queue&gt;**: Adds the caller to a queue of callers.
-* **&lt;Pause&gt;**: Waits silently for a specified number of seconds.
-* **&lt;Record&gt;**: Records the caller's voice and returns a URL of a file that contains the recording.
-* **&lt;Redirect&gt;**: Transfers control of a call or SMS to the TwiML at a different URL.
-* **&lt;Reject&gt;**: Rejects an incoming call to your Twilio number without billing you.
-* **&lt;Say&gt;**: Converts text to speech that is played on a call.
-* **&lt;Sms&gt;**: Sends an SMS message.
-
-### <a id="TwiML"></a>TwiML
-TwiML is a set of XML-based instructions, built on the Twilio verbs, that tell Twilio how to process a call or SMS.
-
-As an example, the following TwiML would convert the text **Hello World!** to speech.
-
-```xml
- <?xml version="1.0" encoding="UTF-8" ?>
- <Response>
- <Say>Hello World!</Say>
- </Response>
-```
-
-When your application calls the Twilio API, one of the API parameters is the URL that returns the TwiML response. For development purposes, you can use Twilio-provided URLs to provide the TwiML responses used by your applications. You could also host your own URLs to produce the TwiML responses, and another option is to use the **VoiceResponse** object.
-
-For more information about Twilio verbs, their attributes, and TwiML, see [TwiML][twiml]. For additional information about the Twilio API, see [Twilio API][twilio_api].
-
-## <a id="CreateAccount"></a>Create a Twilio Account
-When you're ready to get a Twilio account, sign up at [Try Twilio][try_twilio]. You can start with a free account, and upgrade your account later.
-
-When you sign up for a Twilio account, you'll receive an account ID and an authentication token. Both will be needed to make Twilio API calls. To prevent unauthorized access to your account, keep your authentication token secure. Your account ID and authentication token are viewable at the [Twilio Console][twilio_console], in the fields labeled **ACCOUNT SID** and **AUTH TOKEN**, respectively.
-
-## <a id="create_app"></a>Create a Java Application
-1. Obtain the Twilio JAR and add it to your Java build path and your WAR deployment assembly. At [https://github.com/twilio/twilio-java][twilio_java], you can download the GitHub sources and create your own JAR, or download a pre-built JAR (with or without dependencies).
-2. Ensure your JDK's **cacerts** keystore contains the Equifax Secure Certificate Authority certificate with MD5 fingerprint 67:CB:9D:C0:13:24:8A:82:9B:B2:17:1E:D1:1B:EC:D4 (the serial number is 35:DE:F4:CF and the SHA1 fingerprint is D2:32:09:AD:23:D3:14:23:21:74:E4:0D:7F:9D:62:13:97:86:63:3A). This is the certificate authority (CA) certificate for the [https://api.twilio.com][twilio_api_service] service, which is called when you use Twilio APIs.
-
-Detailed instructions for using the Twilio client library for Java are available at [How to Make a Phone Call Using Twilio in a Java Application on Azure][howto_phonecall_java].
-
-## <a id="configure_app"></a>Configure Your Application to Use Twilio Libraries
-Within your code, you can add **import** statements at the top of your source files for the Twilio packages or classes you want to use in your application.
-
-For Java source files:
-
-```java
- import com.twilio.*;
- import com.twilio.rest.api.*;
- import com.twilio.type.*;
- import com.twilio.twiml.*;
-```
-
-For Java Server Page (JSP) source files:
-
-```java
- import="com.twilio.*"
- import="com.twilio.rest.api.*"
- import="com.twilio.type.*"
- import="com.twilio.twiml.*"
-```
-
-Depending on which Twilio packages or classes you want to use, your **import** statements may be different.
-
-## <a id="howto_make_call"></a>How to: Make an outgoing call
-The following shows how to make an outgoing call using the **Call** class. This code also uses a Twilio-provided site to return the Twilio Markup Language (TwiML) response. Substitute your values for the **from** and **to** phone numbers, and ensure that you verify the **from** phone number for your Twilio account prior to running the code.
-
-```java
- // Use your account SID and authentication token instead
- // of the placeholders shown here.
- String accountSID = "your_twilio_account_SID";
- String authToken = "your_twilio_authentication_token";
-
- // Initialize the Twilio client.
- Twilio.init(accountSID, authToken);
-
- // Use the Twilio-provided site for the TwiML response.
- URI uri = new URI("https://twimlets.com/message" +
- "?Message%5B0%5D=Hello%20World%21");
-
- // Declare To and From numbers
- PhoneNumber to = new PhoneNumber("NNNNNNNNNN");
- PhoneNumber from = new PhoneNumber("NNNNNNNNNN");
-
- // Create a Call creator passing From, To and URL values
- // then make the call by executing the create() method
- Call.creator(to, from, uri).create();
-```
-
-For more information about the parameters passed in to the **Call.creator** method, see [https://www.twilio.com/docs/api/rest/making-calls][twilio_rest_making_calls].
-
-As mentioned, this code uses a Twilio-provided site to return the TwiML response. You could instead use your own site to provide the TwiML response; for more information, see [How to Provide TwiML Responses in a Java Application on Azure](#howto_provide_twiml_responses).
-
-## <a id="howto_send_sms"></a>How to: Send an SMS message
-The following shows how to send an SMS message using the **Message** class. The **from** number, **4155992671**, is provided by Twilio for trial accounts to send SMS messages. The **to** number must be verified for your Twilio account prior to running the code.
-
-```java
- // Use your account SID and authentication token instead
- // of the placeholders shown here.
- String accountSID = "your_twilio_account_SID";
- String authToken = "your_twilio_authentication_token";
-
- // Initialize the Twilio client.
- Twilio.init(accountSID, authToken);
-
- // Declare To and From numbers and the Body of the SMS message
- PhoneNumber to = new PhoneNumber("+14159352345"); // Replace with a valid phone number for your account.
- PhoneNumber from = new PhoneNumber("+14158141829"); // Replace with a valid phone number for your account.
- String body = "Where's Wallace?";
-
- // Create a Message creator passing From, To and Body values
- // then send the SMS message by calling the create() method
- Message sms = Message.creator(to, from, body).create();
-```
-
-For more information about the parameters passed in to the **Message.creator** method, see [https://www.twilio.com/docs/api/rest/sending-sms][twilio_rest_sending_sms].
-
-## <a id="howto_provide_twiml_responses"></a>How to: Provide TwiML Responses from your own Website
-When your application initiates a call to the Twilio API, for example via the **CallCreator.create** method, Twilio will send your request to a URL that is expected to return a TwiML response. The example above uses the Twilio-provided URL [https://twimlets.com/message][twimlet_message_url]. (While TwiML is designed for use by Web services, you can view the TwiML in your browser. For example, click [https://twimlets.com/message][twimlet_message_url] to see an empty **&lt;Response&gt;** element; as another example, click [https://twimlets.com/message?Message%5B0%5D=Hello%20World%21][twimlet_message_url_hello_world] to see a **&lt;Response&gt;** element that contains a **&lt;Say&gt;** element.)
-
-Instead of relying on the Twilio-provided URL, you can host your own URL that returns the TwiML response. You can create the site in any language capable of returning HTTP responses; this topic assumes you'll be hosting the URL in a JSP page.
-
-The following JSP page results in a TwiML response that says **Hello World!** on the call.
-
-```xml
- <%@ page contentType="text/xml" %>
- <Response>
- <Say>Hello World!</Say>
- </Response>
-```
-
-The following JSP page results in a TwiML response that says some text, has several pauses, and says information about the Twilio API version and the Azure role name.
-
-```xml
- <%@ page contentType="text/xml" %>
- <Response>
- <Say>Hello from Azure!</Say>
- <Pause></Pause>
- <Say>The Twilio API version is <%= request.getParameter("ApiVersion") %>.</Say>
- <Say>The Azure role name is <%= System.getenv("RoleName") %>.</Say>
- <Pause></Pause>
- <Say>Good bye.</Say>
- </Response>
-```
-
-The **ApiVersion** parameter is available in Twilio voice requests (not SMS requests). To see the available request parameters for Twilio voice and SMS requests, see <https://www.twilio.com/docs/api/twiml/twilio_request> and <https://www.twilio.com/docs/api/twiml/sms/twilio_request>, respectively. The **RoleName** environment variable is available as part of an Azure deployment. (If you want to add custom environment variables so they could be picked up from **System.getenv**, see the environment variables section at [Miscellaneous Role Configuration Settings][misc_role_config_settings].)
-
-Once you have your JSP page set up to provide TwiML responses, use the URL of the JSP page as the URL passed into the **Call.creator** method. For example, if you have a Web application named MyTwiML deployed to an Azure hosted service, and the name of the JSP page is mytwiml.jsp, the URL can be passed to **Call.creator** as shown in the following:
-
-```java
- // Declare To and From numbers and the URL of your JSP page
- PhoneNumber to = new PhoneNumber("NNNNNNNNNN");
- PhoneNumber from = new PhoneNumber("NNNNNNNNNN");
- URI uri = new URI("http://<your_hosted_service>.cloudapp.net/MyTwiML/mytwiml.jsp");
-
- // Create a Call creator passing From, To and URL values
- // then make the call by executing the create() method
- Call.creator(to, from, uri).create();
-```
-
-Another option for responding with TwiML is via the **VoiceResponse** class, which is available in the **com.twilio.twiml** package.
-
-For additional information about using Twilio in Azure with Java, see [How to Make a Phone Call Using Twilio in a Java Application on Azure][howto_phonecall_java].
-
-## <a id="AdditionalServices"></a>How to: Use Additional Twilio Services
-In addition to the examples shown here, Twilio offers web-based APIs that you can use to leverage additional Twilio functionality from your Azure application. For full details, see the [Twilio API documentation][twilio_api_documentation].
-
-## <a id="NextSteps"></a>Next Steps
-Now that you've learned the basics of the Twilio service, follow these links to learn more:
-
-* [Twilio Security Guidelines][twilio_security_guidelines]
-* [Twilio HowTo's and Example Code][twilio_howtos]
-* [Twilio Quickstart Tutorials][twilio_quickstarts]
-* [Twilio on GitHub][twilio_on_github]
-* [Talk to Twilio Support][twilio_support]
-
-[twilio_java]: https://github.com/twilio/twilio-java
-[twilio_api_service]: https://api.twilio.com
-[howto_phonecall_java]: partner-twilio-java-phone-call-example.md
-[misc_role_config_settings]: /previous-versions/azure/hh690945(v=azure.100)
-[twimlet_message_url]: https://twimlets.com/message
-[twimlet_message_url_hello_world]: https://twimlets.com/message?Message%5B0%5D=Hello%20World%21
-[twilio_rest_making_calls]: https://www.twilio.com/docs/api/rest/making-calls
-[twilio_rest_sending_sms]: https://www.twilio.com/docs/api/rest/sending-sms
-[twilio_pricing]: https://www.twilio.com/pricing
-[special_offer]: https://ahoy.twilio.com/azure
-[twilio_libraries]: https://www.twilio.com/docs/libraries
-[twiml]: https://www.twilio.com/docs/api/twiml
-[twilio_api]: https://www.twilio.com/docs/api
-[try_twilio]: https://www.twilio.com/try-twilio
-[twilio_console]: https://www.twilio.com/console
-[verify_phone]: https://www.twilio.com/console/phone-numbers/verified#
-[twilio_api_documentation]: https://www.twilio.com/docs
-[twilio_security_guidelines]: https://www.twilio.com/docs/security
-[twilio_howtos]: https://www.twilio.com/docs/all
-[twilio_on_github]: https://github.com/twilio
-[twilio_support]: https://www.twilio.com/help/contact
-[twilio_quickstarts]: https://www.twilio.com/docs/quickstart
mysql Concepts Backup Restore https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/partner-twilio-java-phone-call-example.md
- Title: How to Make a phone call from Twilio (Java) | Microsoft Docs
-description: Learn how to make a phone call from a web page using Twilio in a Java application on Azure.
------ Previously updated : 11/25/2014----
-# How to Make a Phone Call Using Twilio in a Java Application on Azure
-The following example shows you how you can use Twilio to make a call from a web page hosted in Azure. The resulting application will prompt the user for phone call values, as shown in the following screenshot.
-
-![Azure Call Form Using Twilio and Java][twilio_java]
-
-You'll need to do the following to use the code in this topic:
-
-1. Acquire a Twilio account and authentication token. To get started with Twilio, evaluate pricing at [https://www.twilio.com/pricing][twilio_pricing]. You can sign up at [https://www.twilio.com/try-twilio][try_twilio]. For information about the API provided by Twilio, see [https://www.twilio.com/api][twilio_api].
-2. Obtain the Twilio JAR. At [https://github.com/twilio/twilio-java][twilio_java_github], you can download the GitHub sources and create your own JAR, or download a pre-built JAR (with or without dependencies).
- The code in this topic was written using the pre-built TwilioJava-3.3.8-with-dependencies JAR.
-3. Add the JAR to your Java build path.
-4. If you are using Eclipse to create this Java application, include the Twilio JAR in your application deployment file (WAR) using Eclipse's deployment assembly feature. If you are not using Eclipse to create this Java application, ensure the Twilio JAR is included within the same Azure role as your Java application, and added to the class path of your application.
-5. Ensure your cacerts keystore contains the Equifax Secure Certificate Authority certificate with MD5 fingerprint 67:CB:9D:C0:13:24:8A:82:9B:B2:17:1E:D1:1B:EC:D4 (the serial number is 35:DE:F4:CF and the SHA1 fingerprint is D2:32:09:AD:23:D3:14:23:21:74:E4:0D:7F:9D:62:13:97:86:63:3A). This is the certificate authority (CA) certificate for the [https://api.twilio.com][twilio_api_service] service, which is called when you use Twilio APIs.
-
-Additionally, familiarity with the information at [Creating a Hello World Application Using the Azure Toolkit for Eclipse][azure_java_eclipse_hello_world], or with other techniques for hosting Java applications in Azure if you are not using Eclipse, is highly recommended.
-
-## Create a web form for making a call
-The following code shows how to create a web form to retrieve user data for making a call. For purposes of this example, a new dynamic web project, named **TwilioCloud**, was created, and **callform.jsp** was added as a JSP file.
-
-```jsp
-<%@ page language="java" contentType="text/html; charset=ISO-8859-1"
- pageEncoding="ISO-8859-1" %>
-<!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN" "https://www.w3.org/TR/html4/loose.dtd">
-<html>
- <head>
- <meta http-equiv="Content-Type" content="text/html; charset=ISO-8859-1">
- <title>Automated call form</title>
- </head>
- <body>
- <p>Fill in all fields and click <b>Make this call</b>.</p>
- <br/>
- <form action="makecall.jsp" method="post">
- <table>
- <tr>
- <td>To:</td>
- <td><input type="text" size=50 name="callTo" value="" />
- </td>
- </tr>
- <tr>
- <td>From:</td>
- <td><input type="text" size=50 name="callFrom" value="" />
- </td>
- </tr>
- <tr>
- <td>Call message:</td>
- <td><input type="text" size=400 name="callText" value="Hello. This is the call text. Good bye." />
- </td>
- </tr>
- <tr>
- <td colspan=2><input type="submit" value="Make this call" />
- </td>
- </tr>
- </table>
- </form>
- <br/>
- </body>
-</html>
-```
-
-## Create the code to make the call
-The following code, which is called when the user completes the form displayed by callform.jsp, creates the call message and generates the call. For purposes of this example, the JSP file is named **makecall.jsp** and was added to the **TwilioCloud** project. (Use your Twilio account and authentication token instead of the placeholder values assigned to **accountSID** and **authToken** in the code below.)
-
-```jsp
-<%@ page language="java" contentType="text/html; charset=ISO-8859-1"
-import="java.util.*"
-import="com.twilio.*"
-import="com.twilio.sdk.*"
-import="com.twilio.sdk.resource.factory.*"
-import="com.twilio.sdk.resource.instance.*"
-pageEncoding="ISO-8859-1" %>
-<!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN" "https://www.w3.org/TR/html4/loose.dtd">
-<html>
- <head>
- <meta http-equiv="Content-Type" content="text/html; charset=ISO-8859-1">
- <title>Call processing happens here</title>
- </head>
- <body>
- <b>This is my make call page.</b><p/>
-<%
-try
-{
- // Use your account SID and authentication token instead
- // of the placeholders shown here.
- String accountSID = "your_twilio_account";
- String authToken = "your_twilio_authentication_token";
-
- // Instantiate an instance of the Twilio client.
- TwilioRestClient client;
- client = new TwilioRestClient(accountSID, authToken);
-
- // Retrieve the account, used later to retrieve the CallFactory.
- Account account = client.getAccount();
-
- // Display the client endpoint.
- out.println("<p>Using Twilio endpoint " + client.getEndpoint() + ".</p>");
-
- // Display the API version.
- String APIVERSION = TwilioRestClient.DEFAULT_VERSION;
- out.println("<p>Twilio client API version is " + APIVERSION + ".</p>");
-
- // Retrieve the values entered by the user.
- String callTo = request.getParameter("callTo");
- // The Outgoing Caller ID, used for the From parameter,
- // must have previously been verified with Twilio.
- String callFrom = request.getParameter("callFrom");
- String userText = request.getParameter("callText");
-
- // Replace spaces in the user's text with '%20',
- // to make the text suitable for a URL.
- userText = userText.replace(" ", "%20");
-
- // Create a URL using the Twilio message and the user-entered text.
- String Url="https://twimlets.com/message";
- Url = Url + "?Message%5B0%5D=" + userText;
-
- // Display the message URL.
- out.println("<p>");
- out.println("The URL is " + Url);
- out.println("</p>");
-
- // Place the call From, To and URL values into a hash map.
- HashMap<String, String> params = new HashMap<String, String>();
- params.put("From", callFrom);
- params.put("To", callTo);
- params.put("Url", Url);
-
- CallFactory callFactory = account.getCallFactory();
- Call call = callFactory.create(params);
- out.println("<p>Call status: " + call.getStatus() + "</p>");
-}
-catch (TwilioRestException e)
-{
- out.println("<p>TwilioRestException encountered: " + e.getMessage() + "</p>");
- out.println("<p>StackTrace: " + e.getStackTrace().toString() + "</p>");
-}
-catch (Exception e)
-{
- out.println("<p>Exception encountered: " + e.getMessage() + "");
- out.println("<p>StackTrace: " + e.getStackTrace().toString() + "</p>");
-}
-%>
- </body>
-</html>
-```
-
-In addition to making the call, makecall.jsp displays the Twilio endpoint, API version, and the call status. An example is the following screenshot:
-
-![Azure Call Response Using Twilio and Java][twilio_java_response]
-
-## Run the application
-Following are the high-level steps to run your application; details for these steps can be found at [Creating a Hello World Application Using the Azure Toolkit for Eclipse][azure_java_eclipse_hello_world].
-
-1. Export your TwilioCloud WAR to the Azure **approot** folder.
-2. Modify **startup.cmd** to unzip your TwilioCloud WAR.
-3. Compile your application for the compute emulator.
-4. Start your deployment in the compute emulator.
-5. Open a browser, and run `http://localhost:8080/TwilioCloud/callform.jsp`.
-6. Enter values in the form, click **Make this call**, and then see the results in makecall.jsp.
-
-When you are ready to deploy to Azure, recompile for deployment to the cloud, deploy to Azure, and run http://*your_hosted_name*.cloudapp.net/TwilioCloud/callform.jsp in the browser (substitute your value for *your_hosted_name*).
-
-## Next steps
-This code was provided to show you basic functionality using Twilio in Java on Azure. Before deploying to Azure in production, you may want to add more error handling or other features. For example:
-
-* Instead of using a web form, you could use Azure storage blobs or SQL Database to store phone numbers and call text. For information about using Azure storage blobs in Java, see [How to Use the Blob Storage Service from Java][howto_blob_storage_java].
-* You could use **RoleEnvironment.getConfigurationSettings** to retrieve the Twilio account ID and authentication token from your deployment's configuration settings, instead of hard-coding the values in makecall.jsp. For information about the **RoleEnvironment** class, see [Using the Azure Service Runtime Library in JSP][azure_runtime_jsp].
-* The makecall.jsp code assigns a Twilio-provided URL, [https://twimlets.com/message][twimlet_message_url], to the **Url** variable. This URL provides a Twilio Markup Language (TwiML) response that informs Twilio how to proceed with the call. For example, the TwiML that is returned can contain a **&lt;Say&gt;** verb that results in text being spoken to the call recipient. Instead of using the Twilio-provided URL, you could build your own service to respond to Twilio's request; for more information, see [How to Use Twilio for Voice and SMS Capabilities in Java][howto_twilio_voice_sms_java]. More information about TwiML can be found at [https://www.twilio.com/docs/api/twiml][twiml], and more information about **&lt;Say&gt;** and other Twilio verbs can be found at [https://www.twilio.com/docs/api/twiml/say][twilio_say].
-* Read the Twilio security guidelines at [https://www.twilio.com/docs/security][twilio_docs_security].
-
-For additional information about Twilio, see [https://www.twilio.com/docs][twilio_docs].
-
-## See Also
-* [How to Use Twilio for Voice and SMS Capabilities in Java][howto_twilio_voice_sms_java]
-
-[twilio_pricing]: https://www.twilio.com/pricing
-[try_twilio]: https://www.twilio.com/try-twilio
-[twilio_api]: https://www.twilio.com/docs/api
-[verify_phone]: https://www.twilio.com/user/account/phone-numbers/verified#
-[twilio_java_github]: https://github.com/twilio/twilio-java
-[twimlet_message_url]: https://twimlets.com/message
-[twiml]: https://www.twilio.com/docs/api/twiml
-[twilio_api_service]: https://api.twilio.com
-[azure_java_eclipse_hello_world]: /java/azure/eclipse/azure-toolkit-for-eclipse-create-hello-world-web-app
-[howto_twilio_voice_sms_java]: partner-twilio-java-how-to-use-voice-sms.md
-[howto_blob_storage_java]: https://www.windowsazure.com/develop/java/how-to-guides/blob-storage/
-[howto_sql_azure_java]: https://msdn.microsoft.com/library/windowsazure/hh749029.aspx
-[azure_runtime_jsp]: /previous-versions/azure/hh690948(v=azure.100)
-[twilio_docs_security]: https://www.twilio.com/docs/security
-[twilio_docs]: https://www.twilio.com/docs
-[twilio_say]: https://www.twilio.com/docs/api/twiml/say
-[twilio_java]: ./media/partner-twilio-java-phone-call-example/WA_TwilioJavaCallForm.jpg
-[twilio_java_response]: ./media/partner-twilio-java-phone-call-example/WA_TwilioJavaMakeCall.jpg
mysql Concepts Backup Restore https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/partner-twilio-nodejs-how-to-use-voice-sms.md
- Title: Using Twilio for Voice, VoIP, and SMS Messaging in Azure
-description: Learn how to make a phone call and send an SMS message with the Twilio API service on Azure. Code samples written in Node.js.
--- Previously updated : 11/25/2014---
-# Using Twilio for Voice, VoIP, and SMS Messaging in Azure
-This guide demonstrates how to build apps on Azure that communicate with Twilio, using node.js.
-
-<a name="whatis"></a>
-
-## What is Twilio?
-Twilio is an API platform that makes it easy for developers to make and receive phone calls, send and receive text messages, and embed VoIP calling into browser-based and native mobile applications. Let's briefly go over how this works before diving in.
-
-### Receiving Calls and Text Messages
-Twilio allows developers to [purchase programmable phone numbers][purchase_phone] which can be used to both send and receive calls and text messages. When a Twilio number receives an inbound call or text, Twilio will send your web application an HTTP POST or GET request, asking you for instructions on how to handle the call or text. Your server will respond to Twilio's HTTP request with [TwiML][twiml], a simple set of XML tags that contain instructions on how to handle a call or text. We will see examples of TwiML in just a moment.
-
-### Making Calls and Sending Text Messages
-By making HTTP requests to the Twilio web service API, developers can send text messages or initiate outbound phone calls. For outbound calls, the developer must also specify a URL that returns TwiML instructions for how to handle the outbound call once it is connected.
-
-### Embedding VoIP Capabilities in UI code (JavaScript, iOS, or Android)
-Twilio provides a client-side SDK which can turn any desktop web browser, iOS app, or Android app into a VoIP phone. In this article, we will focus on how to use VoIP calling in the browser. In addition to the *Twilio JavaScript SDK* running in the browser, a server-side application (our node.js application) must be used to issue a "capability token" to the JavaScript client. You can read more about using VoIP with node.js [on the Twilio dev blog][voipnode].
-
-<a name="signup"></a>
-
-## Sign Up For Twilio (Microsoft Discount)
-Before using Twilio services, you must first [sign up for an account][signup]. Microsoft Azure customers receive a special discount - [be sure to sign up here][signup]!
-
-<a name="azuresite"></a>
-
-## Create and Deploy a node.js Azure Website
-Next, you will need to create a node.js website running on Azure. [The official documentation for doing this is located here][azure_new_site]. At a high level, you will be doing the following:
-
-* Signing up for an Azure account, if you don't have one already
-* Using the Azure admin console to create a new website
-* Adding source control support (we will assume you used git)
-* Creating a file `server.js` with a simple node.js web application
-* Deploying this simple application to Azure
-
-<a name="twiliomodule"></a>
-
-## Configure the Twilio Module
-Next, we will begin to write a simple node.js application which makes use of the Twilio API. Before we begin, we need to configure our Twilio account credentials.
-
-### Configuring Twilio Credentials in System Environment Variables
-In order to make authenticated requests against the Twilio back end, we need our account SID and auth token, which function as the username and password set for our Twilio account. The most secure way to configure these for use with the node module in Azure is via system environment variables, which you can set directly in the Azure admin console.
-
-Select your node.js website, and click the "CONFIGURE" link. If you scroll down a bit, you will see an area where you can set configuration properties for your application. Enter your Twilio account credentials ([found on your Twilio Console][twilio_console]) as shown - make sure to name them `TWILIO_ACCOUNT_SID` and `TWILIO_AUTH_TOKEN`, respectively:
-
-![Azure admin console][azure-admin-console]
-
-Once you have configured these variables, restart your application in the Azure console.
-
-### Declaring the Twilio module in package.json
-Next, we need to create a package.json to manage our node module dependencies via [npm]. At the same level as the `server.js` file you created in the *Azure/node.js* tutorial, create a file named `package.json`. Inside this file, place the following:
-
-```json
-{
- "name": "application-name",
- "version": "0.0.1",
- "private": true,
- "scripts": {
- "start": "node server"
- },
- "dependencies": {
- "body-parser": "^1.16.1",
- "ejs": "^2.5.5",
- "errorhandler": "^1.5.0",
- "express": "^4.14.1",
- "morgan": "^1.8.1",
- "twilio": "^2.11.1"
- }
-}
-```
-
-This declares the twilio module as a dependency, as well as the popular [Express web framework][express] and the EJS template engine. Okay, now we're all set - let's write some code!
-
-<a name="makecall"></a>
-
-## Make an Outbound Call
-Let's create a simple form that will place a call to a number we choose. Open up `server.js`, and enter the following code. Note where it says "CHANGE_ME" - put the name of your Azure website there:
-
-```javascript
-// Module dependencies
-const express = require('express');
-const path = require('path');
-const http = require('http');
-const twilio = require('twilio');
-const logger = require('morgan');
-const bodyParser = require('body-parser');
-const errorHandler = require('errorhandler');
-const accountSid = process.env.TWILIO_ACCOUNT_SID;
-const authToken = process.env.TWILIO_AUTH_TOKEN;
-// Create Express web application
-const app = express();
-
-// Express configuration
-app.set('port', process.env.PORT || 3000);
-app.set('views', __dirname + '/views');
-app.set('view engine', 'ejs');
-app.use(logger('tiny'));
-app.use(bodyParser.urlencoded({ extended: false }));
-app.use(bodyParser.json());
-app.use(express.static(path.join(__dirname, 'public')));
-
-if (app.get('env') !== 'production') {
- app.use(errorHandler());
-}
-
-// Render an HTML user interface for the application's home page
-app.get('/', (request, response) => response.render('index'));
-
-// Handle the form POST to place a call
-app.post('/call', (request, response) => {
-  const client = twilio(accountSid, authToken);
-
- client.makeCall({
- // make a call to this number
- to:request.body.number,
-
- // Change to a Twilio number you bought - see:
- // https://www.twilio.com/console/phone-numbers/incoming
- from:'+15558675309',
-
- // A URL in our app which generates TwiML
- // Change "CHANGE_ME" to your app's name
- url:'https://CHANGE_ME.azurewebsites.net/outbound_call'
- }, () => {
- // Go back to the home page
- response.redirect('/');
- });
-});
-
-// Generate TwiML to handle an outbound call
-app.post('/outbound_call', (request, response) => {
-  const twiml = new twilio.TwimlResponse();
-
- // Say a message to the call's receiver
- twiml.say('hello - thanks for checking out Twilio and Azure', {
- voice:'woman'
- });
-
- response.set('Content-Type', 'text/xml');
- response.send(twiml.toString());
-});
-
-// Start server
-app.listen(app.get('port'), function(){
- console.log(`Express server listening on port ${app.get('port')}`);
-});
-```
-
-Next, create a directory called `views` - inside this directory, create a file named `index.ejs` with the following contents:
-
-```html
-<!DOCTYPE html>
-<html>
-<head>
- <title>Twilio Test</title>
- <style>
- input { height:20px; width:300px; font-size:18px; margin:5px; padding:5px; }
- </style>
-</head>
-<body>
- <h1>Twilio Test</h1>
- <form action="/call" method="POST">
- <input placeholder="Enter a phone number" name="number"/>
- <br/>
- <input type="submit" value="Call the number above"/>
- </form>
-</body>
-</html>
-```
-
-Now, deploy your website to Azure and open your home page. You should be able to enter your phone number in the text field, and receive a call from your Twilio number!
-
-<a name="sendmessage"></a>
-
-## Send an SMS Message
-Now, let's set up a user interface and form handling logic to send a text message. Open up `server.js`, and add the following code after the last call to `app.post`:
-
-```javascript
-app.post('/sms', (request, response) => {
- const client = twilio(accountSid, authToken);
-
- client.sendSms({
- // send a text to this number
- to:request.body.number,
-
- // A Twilio number you bought - see:
- // https://www.twilio.com/console/phone-numbers/incoming
- from:'+15558675309',
-
- // The body of the text message
- body: request.body.message
-
- }, () => {
- // Go back to the home page
- response.redirect('/');
- });
-});
-```
-
-In `views/index.ejs`, add another form under the first one to submit a number and a text message:
-
-```html
-<form action="/sms" method="POST">
- <input placeholder="Enter a phone number" name="number"/>
- <br/>
- <input placeholder="Enter a message to send" name="message"/>
- <br/>
- <input type="submit" value="Send text to the number above"/>
-</form>
-```
-
-Re-deploy your application to Azure, and you should now be able to submit that form and send yourself (or any of your closest friends) a text message!
-
-<a name="nextsteps"></a>
-
-## Next Steps
-You have now learned the basics of using node.js and Twilio to build apps that communicate. But these examples barely scratch the surface of what's possible with Twilio and node.js. For more information on using Twilio with node.js, check out the following resources:
-
-* [Official module docs][docs]
-* [Tutorial on VoIP with node.js applications][voipnode]
-* [Votr - a real-time SMS voting application with node.js and CouchDB (three parts)][votr]
-* [Pair programming in the browser with node.js][pair]
-
-We hope you love hacking node.js and Twilio on Azure!
-
-[purchase_phone]: https://www.twilio.com/console/phone-numbers/search
-[twiml]: https://www.twilio.com/docs/api/twiml
-[signup]: https://ahoy.twilio.com/azure
-[azure_new_site]: app-service/quickstart-nodejs.md
-[twilio_console]: https://www.twilio.com/console
-[npm]: https://npmjs.org
-[express]: https://expressjs.com
-[voipnode]: https://www.twilio.com/blog/2013/04/introduction-to-twilio-client-with-node-js.html
-[docs]: https://www.twilio.com/docs/libraries/reference/twilio-node/
-[votr]: https://www.twilio.com/blog/2012/09/building-a-real-time-sms-voting-app-part-1-node-js-couchdb.html
-[pair]: https://www.twilio.com/blog/2013/06/pair-programming-in-the-browser-with-twilio.html
-[azure-admin-console]: ./media/partner-twilio-nodejs-how-to-use-voice-sms/twilio_1.png
mysql Concepts Backup Restore https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/partner-twilio-php-how-to-use-voice-sms.md
- Title: How to Use Twilio for Voice and SMS (PHP) | Microsoft Docs
-description: Learn how to make a phone call and send a SMS message with the Twilio API service on Azure. Code samples written in PHP.
-
- Previously updated : 11/25/2014
-
-# How to Use Twilio for Voice and SMS Capabilities in PHP
-This guide demonstrates how to perform common programming tasks with the Twilio API service on Azure. The scenarios covered include making a phone call and sending a Short Message Service (SMS) message. For more information on Twilio and using voice and SMS in your applications, see the [Next Steps](#NextSteps) section.
-
-## <a id="WhatIs"></a>What is Twilio?
-Twilio powers the future of business communications, enabling developers to embed voice, VoIP, and messaging into applications. Twilio virtualizes all the needed infrastructure in a cloud-based, global environment, exposing it through the Twilio communications API platform. Applications are simple to build and scalable. Enjoy flexibility with pay-as-you-go pricing, and benefit from cloud reliability.
-
-**Twilio Voice** allows your applications to make and receive phone calls. **Twilio SMS** enables your application to send and receive text messages. **Twilio Client** allows you to make VoIP calls from any phone, tablet, or browser and supports WebRTC.
-
-## <a id="Pricing"></a>Twilio Pricing and Special Offers
-Azure customers receive a [special offer](https://www.twilio.com/azure): complimentary $10 of Twilio Credit when you upgrade your Twilio Account. This Twilio Credit can be applied to any Twilio usage ($10 credit equivalent to sending as many as 1,000 SMS messages or receiving up to 1000 inbound Voice minutes, depending on the location of your phone number and message or call destination). Redeem this Twilio credit and get started at: [https://ahoy.twilio.com/azure](https://ahoy.twilio.com/azure).
-
-Twilio is a pay-as-you-go service. There are no set-up fees and you can close your account at any time. You can find more details at [Twilio Pricing][twilio_pricing].
-
-## <a id="Concepts"></a>Concepts
-The Twilio API is a RESTful API that provides voice and SMS functionality for applications. Client libraries are available in multiple languages; for a list, see [Twilio API Libraries][twilio_libraries].
-
-Key aspects of the Twilio API are Twilio verbs and Twilio Markup Language (TwiML).
-
-### <a id="Verbs"></a>Twilio Verbs
-The API makes use of Twilio verbs; for example, the **&lt;Say&gt;** verb instructs Twilio to audibly deliver a message on a call.
-
-The following is a list of Twilio verbs. Learn about the other verbs and capabilities via [Twilio Markup Language documentation](https://www.twilio.com/docs/api/twiml).
-
-* **&lt;Dial&gt;**: Connects the caller to another phone.
-* **&lt;Gather&gt;**: Collects numeric digits entered on the telephone keypad.
-* **&lt;Hangup&gt;**: Ends a call.
-* **&lt;Play&gt;**: Plays an audio file.
-* **&lt;Pause&gt;**: Waits silently for a specified number of seconds.
-* **&lt;Record&gt;**: Records the caller's voice and returns a URL of a file that contains the recording.
-* **&lt;Redirect&gt;**: Transfers control of a call or SMS to the TwiML at a different URL.
-* **&lt;Reject&gt;**: Rejects an incoming call to your Twilio number without billing you.
-* **&lt;Say&gt;**: Converts text to speech that is played on a call.
-* **&lt;Sms&gt;**: Sends an SMS message.
-
-### <a id="TwiML"></a>TwiML
-TwiML is a set of XML-based instructions based on the Twilio verbs that inform Twilio of how to process a call or SMS.
-
-As an example, the following TwiML would convert the text **Hello World** to speech.
-
-```xml
-<?xml version="1.0" encoding="UTF-8" ?>
-<Response>
- <Say>Hello World</Say>
-</Response>
-```
-
-When your application calls the Twilio API, one of the API parameters is the URL that returns the TwiML response. For development purposes, you can use Twilio-provided URLs to provide the TwiML responses used by your applications. You could also host your own URLs to produce the TwiML responses, and another option is to use the **TwiMLResponse** object.
-
-For more information about Twilio verbs, their attributes, and TwiML, see [TwiML][twiml]. For additional information about the Twilio API, see [Twilio API][twilio_api].
-
-## <a id="CreateAccount"></a>Create a Twilio Account
-When you're ready to get a Twilio account, sign up at [Try Twilio][try_twilio]. You can start with a free account, and upgrade your account later.
-
-When you sign up for a Twilio account, you'll receive an account ID and an authentication token. Both will be needed to make Twilio API calls. To prevent unauthorized access to your account, keep your authentication token secure. Your account ID and authentication token are viewable at the [Twilio account page][twilio_account], in the fields labeled **ACCOUNT SID** and **AUTH TOKEN**, respectively.
-
-## <a id="create_app"></a>Create a PHP Application
-A PHP application that uses the Twilio service and is running in Azure is no different than any other PHP application that uses the Twilio service. While Twilio services are REST-based and can be called from PHP in several ways, this article will focus on how to use Twilio services with [Twilio library for PHP from GitHub][twilio_php]. For more information about using the Twilio library for PHP, see [https://www.twilio.com/docs/libraries/php][twilio_lib_docs].
-
-Detailed instructions for building and deploying a Twilio/PHP application to Azure are available at [How to Make a Phone Call Using Twilio in a PHP Application on Azure][howto_phonecall_php].
-
-## <a id="configure_app"></a>Configure Your Application to Use Twilio Libraries
-You can configure your application to use the Twilio library for PHP in two ways:
-
-1. Download the Twilio library for PHP from GitHub ([https://github.com/twilio/twilio-php][twilio_php]) and add the **Services** directory to your application.
-
- -OR-
-2. Install the Twilio library for PHP as a PEAR package. It can be installed with the following commands:
-
- ```bash
- $ pear channel-discover twilio.github.com/pear
- $ pear install twilio/Services_Twilio
- ```
-
-Once you have installed the Twilio library for PHP, you can then add a **require_once** statement at the top of your PHP files to reference the library:
-
-```php
-require_once 'Services/Twilio.php';
-```
-
-For more information, see [https://github.com/twilio/twilio-php/blob/master/README.md][twilio_github_readme].
-
-## <a id="howto_make_call"></a>How to: Make an outgoing call
-The following shows how to make an outgoing call using the **Services_Twilio** class. This code also uses a Twilio-provided site to return the Twilio Markup Language (TwiML) response. Substitute your values for the **From** and **To** phone numbers, and ensure that you verify the **From** phone number for your Twilio account prior to running the code.
-
-```php
-// Include the Twilio PHP library.
-require_once 'Services/Twilio.php';
-
-// Twilio REST API version.
-$version = "2010-04-01";
-
-// Set your account ID and authentication token.
-$sid = "your_twilio_account_sid";
-$token = "your_twilio_authentication_token";
-
-// The number of the phone initiating the call.
-$from_number = "NNNNNNNNNNN";
-
-// The number of the phone receiving the call.
-$to_number = "NNNNNNNNNNN";
-
-// Use the Twilio-provided site for the TwiML response.
-$url = "https://twimlets.com/message";
-
-// The phone message text.
-$message = "Hello world.";
-
-// Create the call client.
-$client = new Services_Twilio($sid, $token, $version);
-
-//Make the call.
-try
-{
- $call = $client->account->calls->create(
- $from_number,
- $to_number,
- $url.'?Message='.urlencode($message)
- );
-}
-catch (Exception $e)
-{
- echo 'Error: ' . $e->getMessage();
-}
-```
-
-As mentioned, this code uses a Twilio-provided site to return the TwiML response. You could instead use your own site to provide the TwiML response; for more information, see [How to Provide TwiML Responses from Your Own Web Site](#howto_provide_twiml_responses).
-
-* **Note**: To troubleshoot TLS/SSL certificate validation errors, see [https://www.twilio.com/docs/api/errors][ssl_validation].
-
-## <a id="howto_send_sms"></a>How to: Send an SMS message
-The following shows how to send an SMS message using the **Services_Twilio** class. The **From** number is provided by Twilio for trial accounts to send SMS messages. The **To** number must be verified for your Twilio account prior to running the code.
-
-```php
-// Include the Twilio PHP library.
-require_once 'Services/Twilio.php';
-
-// Twilio REST API version.
-$version = "2010-04-01";
-
-// Set your account ID and authentication token.
-$sid = "your_twilio_account_sid";
-$token = "your_twilio_authentication_token";
-
-$from_number = "NNNNNNNNNNN"; // With trial account, texts can only be sent from your Twilio number.
-$to_number = "NNNNNNNNNNN";
-$message = "Hello world.";
-
-// Create the Twilio client.
-$client = new Services_Twilio($sid, $token, $version);
-
-// Send the SMS message.
-try
-{
-    $client->account->messages->sendMessage($from_number, $to_number, $message);
-}
-catch (Exception $e)
-{
- echo 'Error: ' . $e->getMessage();
-}
-```
-
-## <a id="howto_provide_twiml_responses"></a>How to: Provide TwiML Responses from your own Website
-When your application initiates a call to the Twilio API, Twilio will send your request to a URL that is expected to return a TwiML response. The example above uses the Twilio-provided URL [https://twimlets.com/message][twimlet_message_url]. (While TwiML is designed for use by Twilio, you can view it in your browser. For example, click [https://twimlets.com/message][twimlet_message_url] to see an empty `<Response>` element; as another example, click [https://twimlets.com/message?Message%5B0%5D=Hello%20World][twimlet_message_url_hello_world] to see a `<Response>` element that contains a `<Say>` element.)
-
-Instead of relying on the Twilio-provided URL, you can create your own site that returns HTTP responses. You can create the site in any language that returns XML responses; this topic assumes you'll be using PHP to create the TwiML.
-
-The following PHP page results in a TwiML response that says **Hello World** on the call.
-
-```php
-<?php
- header("content-type: text/xml");
- echo "<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n";
-?>
-<Response>
- <Say>Hello world.</Say>
-</Response>
-```
-
-As you can see from the example above, the TwiML response is simply an XML document. The Twilio library for PHP contains classes that will generate TwiML for you. The example below produces the equivalent response as shown above, but uses the **Services\_Twilio\_Twiml** class in the Twilio library for PHP:
-
-```php
-require_once('Services/Twilio.php');
-
-$response = new Services_Twilio_Twiml();
-$response->say("Hello world.");
-print $response;
-```
-
-For more information about TwiML, see [https://www.twilio.com/docs/api/twiml][twiml_reference].
-
-Once you have your PHP page set up to provide TwiML responses, use the URL of the PHP page as the URL passed into the `Services_Twilio->account->calls->create` method. For example, if you have a Web application named **MyTwiML** deployed to an Azure hosted service, and the name of the PHP page is **mytwiml.php**, the URL can be passed to **Services_Twilio->account->calls->create** as shown in the following example:
-
-```php
-require_once 'Services/Twilio.php';
-
-$sid = "your_twilio_account_sid";
-$token = "your_twilio_authentication_token";
-$from_number = "NNNNNNNNNNN";
-$to_number = "NNNNNNNNNNN";
-$url = "http://<your_hosted_service>.cloudapp.net/MyTwiML/mytwiml.php";
-
-// The phone message text.
-$message = "Hello world.";
-
-$client = new Services_Twilio($sid, $token, "2010-04-01");
-
-try
-{
- $call = $client->account->calls->create(
- $from_number,
- $to_number,
- $url.'?Message='.urlencode($message)
- );
-}
-catch (Exception $e)
-{
- echo 'Error: ' . $e->getMessage();
-}
-```
-
-For additional information about using Twilio in Azure with PHP, see [How to Make a Phone Call Using Twilio in a PHP Application on Azure][howto_phonecall_php].
-
-## <a id="AdditionalServices"></a>How to: Use Additional Twilio Services
-In addition to the examples shown here, Twilio offers web-based APIs that you can use to leverage additional Twilio functionality from your Azure application. For full details, see the [Twilio API documentation][twilio_api_documentation].
-
-## <a id="NextSteps"></a>Next Steps
-Now that you've learned the basics of the Twilio service, follow these links to learn more:
-
-* [Twilio Security Guidelines][twilio_security_guidelines]
-* [Twilio HowTo's and Example Code][twilio_howtos]
-* [Twilio Quickstart Tutorials][twilio_quickstarts]
-* [Twilio on GitHub][twilio_on_github]
-* [Talk to Twilio Support][twilio_support]
-
-[twilio_php]: https://github.com/twilio/twilio-php
-[twilio_lib_docs]: https://www.twilio.com/docs/libraries/php
-[twilio_github_readme]: https://github.com/twilio/twilio-php/blob/master/README.md
-[ssl_validation]: https://www.twilio.com/docs/api/errors
-[twilio_api_service]: https://api.twilio.com
-[howto_phonecall_php]: partner-twilio-php-make-phone-call.md
-[twilio_voice_request]: https://www.twilio.com/docs/api/twiml/twilio_request
-[twilio_sms_request]: https://www.twilio.com/docs/api/twiml/sms/twilio_request
-[misc_role_config_settings]: /previous-versions/azure/hh690945(v=azure.100)
-[twimlet_message_url]: https://twimlets.com/message
-[twimlet_message_url_hello_world]: https://twimlets.com/message?Message%5B0%5D=Hello%20World
-[twiml_reference]: https://www.twilio.com/docs/api/twiml
-[twilio_pricing]: https://www.twilio.com/pricing
-[special_offer]: https://ahoy.twilio.com/azure
-[twilio_libraries]: https://www.twilio.com/docs/libraries
-[twiml]: https://www.twilio.com/docs/api/twiml
-[twilio_api]: https://www.twilio.com/docs/api
-[try_twilio]: https://www.twilio.com/try-twilio
-[twilio_account]: https://www.twilio.com/user/account
-[verify_phone]: https://www.twilio.com/user/account/phone-numbers/verified#
-[twilio_api_documentation]: https://www.twilio.com/docs/api
-[twilio_security_guidelines]: https://www.twilio.com/docs/security
-[twilio_howtos]: https://www.twilio.com/docs/all
-[twilio_on_github]: https://github.com/twilio
-[twilio_support]: https://www.twilio.com/help/contact
-[twilio_quickstarts]: https://www.twilio.com/docs/quickstart
mysql Concepts Backup Restore https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/partner-twilio-php-make-phone-call.md
- Title: How to make a phone call from Twilio (PHP) | Microsoft Docs
-description: Learn how to make a phone call and send a SMS message with the Twilio API service on Azure. Samples are for PHP application.
-
- Previously updated : 11/25/2014
-
-# How to Make a Phone Call Using Twilio in a PHP Application on Azure
-The following example shows you how you can use Twilio to make a call from a PHP web page hosted in Azure. The resulting application will prompt the user for phone call values, as shown in the following screenshot.
-
-![Azure Call Form Using Twilio and PHP][twilio_php]
-
-You'll need to do the following to use the code in this topic:
-
-1. Acquire a Twilio account SID and authentication token from your [Twilio Console][twilio_console]. To get started with Twilio, evaluate pricing at [https://www.twilio.com/pricing][twilio_pricing]. You can sign up for a trial account at [https://www.twilio.com/try-twilio][try_twilio].
-2. Obtain the [Twilio library for PHP](https://github.com/twilio/twilio-php) or install it as a PEAR package. For more information, see the [readme file](https://github.com/twilio/twilio-php/blob/master/README.md).
-3. Install the Azure SDK for PHP.
-<!-- For an overview of the SDK and instructions on installing it, see [Set up the Azure SDK for PHP](./app-service/quickstart-php.md) -->
-
-## Create a web form for making a call
-The following HTML code shows how to build a web page (**callform.html**) that retrieves user data for making a call:
-
-```html
-<!DOCTYPE html>
-<html>
-<head>
- <title>Automated call form</title>
-</head>
-<body>
- <h1>Automated Call Form</h1>
- <p>Fill in all fields and click <b>Make this call</b>.</p>
- <form action="makecall.php" method="post">
- <table>
- <tr>
- <td>To:</td>
- <td><input name="callTo" size="50" type="text" value=""></td>
- </tr>
- <tr>
- <td>From:</td>
- <td><input name="callFrom" size="50" type="text" value=""></td>
- </tr>
- <tr>
- <td>Call message:</td>
- <td><input name="callText" size="100" type="text" value="Hello. This is the call text. Good bye."></td>
- </tr>
- <tr>
- <td colspan="2"><input type="submit" value="Make this call"></td>
- </tr>
- </table>
- </form><br>
-</body>
-</html>
-```
-
-## Create the code to make the call
-The following code shows how to build **makecall.php**, which is called when the user submits the form displayed by **callform.html**. The code shown below creates the call message and generates the call. Also, be sure to use your Twilio account SID and authentication token from the [Twilio Console][twilio_console] instead of the placeholder values assigned to **$sid** and **$token** in the code below.
-
-```php
-<html>
-<head><title>Making call...</title></head>
-<body>
-<p>Your call is being made.</p>
-
-<?php
-require_once 'path/to/vendor/autoload.php';
-
-$sid = "your_account_sid";
-$token = "your_authentication_token";
-
-$from_number = $_POST['callFrom']; // Calls must be made from a registered Twilio number.
-$to_number = $_POST['callTo'];
-$message = $_POST['callText'];
-
-$client = new Twilio\Rest\Client($sid, $token);
-
-$call = $client->calls->create(
- $to_number,
- $from_number,
-    array('url' => 'https://twimlets.com/message?Message=' . urlencode($message))
- );
-
-echo "Call status: " . $call->status . "<br />";
-echo "URI resource: " . $call->uri . "<br />";
-?>
-</body>
-</html>
-```
-
-In addition to making the call, **makecall.php** displays some call metadata, as is shown in the image below. For more information about call metadata, see [https://www.twilio.com/docs/api/rest/call#instance-properties][twilio_call_properties].
-
-![Azure Call Response Using Twilio and PHP][twilio_php_response]
-
-## Run the application
-The next step is to [deploy your application to Azure Web Apps with Git](app-service/quickstart-php.md) (though not all the information there is relevant).
-
-## Next steps
-This code was provided to show you basic functionality using Twilio in PHP on Azure. Before deploying to Azure in production, you may want to add more error handling or other features. For example:
-
-* Instead of using a web form, you could use Azure storage blobs or SQL Database to store phone numbers and call text. For information about using Azure storage blobs in PHP, see [Using Azure Storage with PHP Applications][howto_blob_storage_php]. For information about using SQL Database in PHP, see [Using SQL Database with PHP Applications][howto_sql_azure_php].
-* The **makecall.php** code uses a Twilio-provided URL ([https://twimlets.com/message][twimlet_message_url]) to provide a Twilio Markup Language (TwiML) response that informs Twilio how to proceed with the call. For example, the TwiML that is returned can contain a `<Say>` verb that results in text being spoken to the call recipient. Instead of using the Twilio-provided URL, you could build your own service to respond to Twilio's request; for more information, see [How to Use Twilio for Voice and SMS Capabilities in PHP][howto_twilio_voice_sms_php]. More information about TwiML can be found at [https://www.twilio.com/docs/api/twiml][twiml], and more information about `<Say>` and other Twilio verbs can be found at [https://www.twilio.com/docs/api/twiml/say][twilio_say].
-* Read the Twilio security guidelines at [https://www.twilio.com/docs/security][twilio_docs_security].
-
-For additional information about Twilio, see [https://www.twilio.com/docs][twilio_docs].
-
-## See Also
-* [How to Use Twilio for Voice and SMS Capabilities in PHP](partner-twilio-php-how-to-use-voice-sms.md)
-
-[twilio_console]: https://www.twilio.com/console
-[twilio_pricing]: https://www.twilio.com/pricing
-[try_twilio]: https://www.twilio.com/try-twilio
-[twilio_api]: https://www.twilio.com/docs/api
-[verify_phone]: https://www.twilio.com/console/phone-numbers/verified
-[twimlet_message_url]: https://twimlets.com/message
-[twiml]: https://www.twilio.com/docs/api/twiml
-[twilio_api_service]: https://api.twilio.com
-[build_php_azure_app]: http://azurephp.interoperabilitybridges.com/articles/build-and-deploy-a-windows-azure-php-application
-[howto_twilio_voice_sms_php]: partner-twilio-php-how-to-use-voice-sms.md
-[howto_blob_storage_php]: ./storage/blobs/storage-quickstart-blobs-php.md
-[howto_sql_azure_php]: ./azure-sql/database/connect-query-content-reference-guide.md
-[twilio_call_properties]: https://www.twilio.com/docs/api/rest/call#instance-properties
-[twilio_docs_security]: https://www.twilio.com/docs/security
-[twilio_docs]: https://www.twilio.com/docs
-[twilio_say]: https://www.twilio.com/docs/api/twiml/say
-[ssl_validation]: http://readthedocs.org/docs/twilio-php/en/latest/usage/rest.html
-[twilio_php]: ./media/partner-twilio-php-make-phone-call/WA_TwilioPHPCallForm.jpg
-[twilio_php_response]: ./media/partner-twilio-php-make-phone-call/WA_TwilioPHPMakeCall.jpg
-[twilio_php_github]: https://github.com/twilio/twilio-php
mysql Concepts Backup Restore https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/partner-twilio-python-how-to-use-voice-sms.md
- Title: Use Twilio for voice and SMS (Python) | Microsoft Docs
-description: Learn how to make a phone call and send an SMS message with the Twilio API service on Azure. Code samples written in Python.
-
- Previously updated : 02/19/2015
-
-# Use Twilio for voice and SMS capabilities in Python
-This article demonstrates how to perform common programming tasks with the Twilio API service on Azure. The covered scenarios include making a phone call and sending a Short Message Service (SMS) message.
-
-For more information on Twilio and using voice and SMS in your applications, see the [Next steps](#NextSteps) section.
-
-## <a id="WhatIs"></a>What is Twilio?
-Twilio enables developers to embed voice, voice over IP (VoIP), and messaging into applications. Developers virtualize all the needed infrastructure in a cloud-based, global environment, exposing it through the Twilio API platform. Applications are simple to build and scalable.
-
-Twilio components include:
-
-- **Twilio Voice**: Allows your applications to make and receive phone calls.
-- **Twilio SMS**: Enables your application to send and receive text messages.
-- **Twilio Client**: Allows you to make VoIP calls from any phone, tablet, or browser. It supports the WebRTC specification for real-time communication.
-
-## <a id="Pricing"></a>Twilio pricing and special offers
-Azure customers receive a [special offer][special_offer]: a $10 Twilio credit when you upgrade your Twilio account. This credit can be applied to any Twilio usage. A $10 credit is equivalent to sending as many as 1,000 SMS messages or receiving up to 1,000 inbound voice minutes, depending on the location of your phone number and message or call destination.
-
-Twilio is a pay-as-you-go service. There are no setup fees, and you can close your account at any time. You can find more details at [Twilio Pricing][twilio_pricing].
-
-## <a id="Concepts"></a>Concepts
-The Twilio API is a RESTful API that provides voice and SMS functionality for applications. Client libraries are available in multiple languages. For a list, see the [Twilio API libraries][twilio_libraries].
-
-Key aspects of the Twilio API are Twilio verbs and Twilio Markup Language (TwiML).
-
-### <a id="Verbs"></a>Twilio verbs
-The API uses verbs like these that tell Twilio what to do:
-
-* **&lt;Dial&gt;**: Connects the caller to another phone.
-* **&lt;Gather&gt;**: Collects numeric digits entered on the telephone keypad.
-* **&lt;Hangup&gt;**: Ends a call.
-* **&lt;Pause&gt;**: Waits silently for a specified number of seconds.
-* **&lt;Play&gt;**: Plays an audio file.
-* **&lt;Queue&gt;**: Adds to a queue of callers.
-* **&lt;Record&gt;**: Records the voice of the caller and returns a URL of a file that contains the recording.
-* **&lt;Redirect&gt;**: Transfers control of a call or SMS to the TwiML at a different URL.
-* **&lt;Reject&gt;**: Rejects an incoming call to your Twilio number without billing you.
-* **&lt;Say&gt;**: Converts text to speech that's made on a call.
-* **&lt;Sms&gt;**: Sends an SMS message.
-
-Learn about the other verbs and capabilities via the [Twilio Markup Language documentation][twiml].
-
-### <a id="TwiML"></a>TwiML
-TwiML is a set of XML-based instructions based on the Twilio verbs that tell Twilio how to process a call or SMS message.
-
-As an example, the following TwiML would convert the text **Hello World** to speech:
-
-```xml
-<?xml version="1.0" encoding="UTF-8" ?>
- <Response>
- <Say>Hello World</Say>
- </Response>
-```
-
-When your application calls the [Twilio API][twilio_api], one of the API parameters is the URL that returns the TwiML response. For development purposes, you can use Twilio-provided URLs to supply the TwiML responses that your applications will use. You can also host your own URLs to produce the TwiML responses, and another option is to use the `TwiMLResponse` object.
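-
-If you're curious what Twilio actually receives when it requests one of these URLs, you can fetch a twimlet yourself. The following is a minimal sketch (not part of the original walkthrough) that uses only the Python standard library and assumes outbound network access; it prints the TwiML returned for a given message:
-
-```python
-from urllib.parse import urlencode
-from urllib.request import urlopen
-
-# Build the twimlet URL with the message as a query parameter.
-url = "https://twimlets.com/message?" + urlencode({"Message": "Hello World"})
-
-# Fetch and print the TwiML, which is what Twilio would receive
-# when it requests this URL during a call.
-with urlopen(url) as response:
-    print(response.read().decode("utf-8"))
-```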
-
-## <a id="CreateAccount"></a>Create a Twilio account
-When you're ready to get a Twilio account, sign up at [Try Twilio][try_twilio]. You can start with a free account and upgrade your account later.
-
-When you sign up for a Twilio account, you receive an account security ID (SID) and an authentication token. You'll need both to make Twilio API calls. To prevent unauthorized access to your account, keep your authentication token secure. Your account SID and authentication token are viewable in the [Twilio Console][twilio_console], in the fields labeled **ACCOUNT SID** and **AUTH TOKEN**.
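-
-Rather than hardcoding these values, you can keep them out of source code by reading them from environment variables, much as the Node.js article above does. A minimal sketch, assuming you've exported `TWILIO_ACCOUNT_SID` and `TWILIO_AUTH_TOKEN` in the VM's environment:
-
-```python
-import os
-
-# Read Twilio credentials from environment variables rather than
-# hardcoding them; this raises KeyError if either variable is unset.
-account_sid = os.environ["TWILIO_ACCOUNT_SID"]
-auth_token = os.environ["TWILIO_AUTH_TOKEN"]
-```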
-
-## <a id="create_app"></a>Create a Python application
-A Python application that uses Twilio and is running in Azure is no different from any other Python application that uses Twilio. Although Twilio services are REST-based and can be called from Python in several ways, this article will focus on how to use Twilio services with the [Twilio library for Python from GitHub][twilio_python]. For more information about using this library, see the [Twilio Python library documentation][twilio_lib_docs].
-
-First, [set up a new Azure Linux virtual machine][azure_vm_setup] to act as a host for your new Python web application. After the virtual machine is running, you'll need to expose your application on a public port.
-
-To add an incoming rule:
- 1. Go to the [network security group][azure_nsg] page.
- 2. Select the network security group that corresponds with your virtual machine.
- 3. Add **Incoming Rule** information for **port 80**. Be sure to allow incoming traffic from any address.
-
-To set the DNS name label:
- 1. Go to the [public IP addresses][azure_ips] page.
- 2. Select the public IP that corresponds with your virtual machine.
- 3. Set the **DNS Name Label** information in the **Configuration** section. In this example, it looks something like *\<your-domain-label\>.centralus.cloudapp.azure.com*.
-
-After you're able to connect through SSH to the virtual machine, you can install the web framework of your choice. The two best known in Python are [Flask](http://flask.pocoo.org/) and [Django](https://www.djangoproject.com). You can install either of them by running the `pip install` command.
-
-Keep in mind that we configured the virtual machine to allow traffic only on port 80. So be sure to configure the application to use this port.
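-
-Before wiring up Twilio, it can help to confirm that the virtual machine actually serves traffic on port 80. Here is a minimal sketch using Flask; the route and message are placeholders for illustration only:
-
-```python
-from flask import Flask
-
-app = Flask(__name__)
-
-@app.route("/")
-def index():
-    # Placeholder route used only to verify connectivity.
-    return "The server is reachable on port 80."
-
-if __name__ == "__main__":
-    # Bind to all interfaces so the app is reachable from outside the VM.
-    # Listening on port 80 usually requires elevated privileges (e.g., sudo).
-    app.run(host="0.0.0.0", port=80)
-```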
-
-## <a id="configure_app"></a>Configure your application to use the Twilio library
-You can configure your application to use the Twilio library for Python in two ways:
-
-* Install the Twilio library for Python as a Pip package by using the following command:
-
- `$ pip install twilio`
-
-* Download the [Twilio library for Python from GitHub][twilio_python] and install it like this:
-
- `$ python setup.py install`
-
-After you've installed the Twilio library for Python, you can then import it in your Python files:
-
-`import twilio`
-
-For more information, see the [Twilio GitHub readme](https://github.com/twilio/twilio-python/blob/master/README.md).
-
-## <a id="howto_make_call"></a>Make an outgoing call
-The following example shows how to make an outgoing call. This code also uses a Twilio-provided site to return the TwiML response. Substitute your values for the `from_number` and `to_number` phone numbers. Ensure that you've verified the `from_number` phone number for your Twilio account before running the code.
-
-```python
-from urllib.parse import urlencode
-
-# Import the Twilio Python Client.
-from twilio.rest import TwilioRestClient
-
-# Set your account ID and authentication token.
-account_sid = "your_twilio_account_sid"
-auth_token = "your_twilio_authentication_token"
-
-# The number of the phone initiating the call.
-# This should either be a Twilio number or a number that you've verified.
-from_number = "NNNNNNNNNNN"
-
-# The number of the phone receiving the call.
-to_number = "NNNNNNNNNNN"
-
-# Use the Twilio-provided site for the TwiML response.
-url = "https://twimlets.com/message?"
-
-# The phone message text.
-message = "Hello world."
-
-# Initialize the Twilio client.
-client = TwilioRestClient(account_sid, auth_token)
-
-# Make the call.
-call = client.calls.create(to=to_number,
- from_=from_number,
- url=url + urlencode({'Message': message}))
-print(call.sid)
-```
-
-> [!IMPORTANT]
-> Phone numbers should be formatted with a plus sign and a country code. An example is `+16175551212` (E.164 format). Twilio will also accept unformatted US numbers, such as `(415) 555-1212` or `415-555-1212`.
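-
-If you collect numbers from user input, it can be safer to normalize them to E.164 before calling the API. The following naive sketch assumes US numbers and uses a hypothetical helper name; a dedicated library such as `phonenumbers` is more robust for international formats:
-
-```python
-def to_e164_us(raw_number: str) -> str:
-    # Keep only the digits, then normalize to +1XXXXXXXXXX.
-    digits = "".join(ch for ch in raw_number if ch.isdigit())
-    if len(digits) == 10:                    # e.g., "(415) 555-1212"
-        return "+1" + digits
-    if len(digits) == 11 and digits.startswith("1"):
-        return "+" + digits
-    raise ValueError("Unexpected number format: " + raw_number)
-
-print(to_e164_us("(415) 555-1212"))  # prints +14155551212
-```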
-
-This code uses a Twilio-provided site to return the TwiML response. You can instead use your own site to provide the TwiML response. For more information, see [Provide TwiML responses from your own website](#howto_provide_twiml_responses).
-
-## <a id="howto_send_sms"></a>Send an SMS message
-The following example shows how to send an SMS message by using the `TwilioRestClient` class. Twilio provides the `from_number` number for trial accounts to send SMS messages. The `to_number` number must be verified for your Twilio account before you run the code.
-
-```python
-# Import the Twilio Python Client.
-from twilio.rest import TwilioRestClient
-
-# Set your account ID and authentication token.
-account_sid = "your_twilio_account_sid"
-auth_token = "your_twilio_authentication_token"
-
-from_number = "NNNNNNNNNNN" # With a trial account, texts can only be sent from your Twilio number.
-to_number = "NNNNNNNNNNN"
-message = "Hello world."
-
-# Initialize the Twilio client.
-client = TwilioRestClient(account_sid, auth_token)
-
-# Send the SMS message.
-sms = client.messages.create(to=to_number,
-                             from_=from_number,
-                             body=message)
-```
-
-## <a id="howto_provide_twiml_responses"></a>Provide TwiML responses from your own website
-When your application starts a call to the Twilio API, Twilio sends your request to a URL that's expected to return a TwiML response. The preceding example uses the Twilio-provided URL [https://twimlets.com/message][twimlet_message_url].
-
-> [!NOTE]
-> Although TwiML is designed for use by Twilio, you can view it in your browser. For example, select [https://twimlets.com/message][twimlet_message_url] to see an empty `<Response>` element. As another example, select [https://twimlets.com/message?Message%5B0%5D=Hello%20World][twimlet_message_url_hello_world] to see a `<Response>` element that contains a `<Say>` element.
-
-Instead of relying on the Twilio-provided URL, you can create your own site that returns HTTP responses. You can create the site in any language that returns XML responses. This article assumes you'll use Python to create the TwiML.
-
-The following examples will output a TwiML response that says **Hello World** on the call.
-
-With Flask:
-
-```python
-from flask import Flask, Response
-
-app = Flask(__name__)
-
-@app.route("/")
-def hello():
-    xml = '<Response><Say>Hello world.</Say></Response>'
-    return Response(xml, mimetype='text/xml')
-```
-
-With Django:
-
-```python
-from django.http import HttpResponse
-def hello(request):
- xml = '<Response><Say>Hello world.</Say></Response>'
- return HttpResponse(xml, content_type='text/xml')
-```
-
-As you can see from the preceding example, the TwiML response is simply an XML document. The Twilio library for Python contains classes that will generate TwiML for you. The following example produces the equivalent response as shown earlier, but it uses the `twiml` module in the Twilio library for Python:
-
-```python
-from twilio import twiml
-
-response = twiml.Response()
-response.say("Hello world.")
-print(str(response))
-```
-
-For more information about TwiML, see the [TwiML reference][twiml_reference].
-
-After your Python application is set up to provide TwiML responses, use the URL of the application as the URL passed into the `client.calls.create` method. For example, if you have a web application named *MyTwiML* deployed to an Azure-hosted service, you can use its URL as a webhook, as shown in the following example:
-
-```python
-from twilio.rest import TwilioRestClient
-
-account_sid = "your_twilio_account_sid"
-auth_token = "your_twilio_authentication_token"
-from_number = "NNNNNNNNNNN"
-to_number = "NNNNNNNNNNN"
-url = "http://your-domain-label.centralus.cloudapp.azure.com/MyTwiML/"
-
-# Initialize the Twilio Client.
-client = TwilioRestClient(account_sid, auth_token)
-
-# Make the call.
-call = client.calls.create(to=to_number,
- from_=from_number,
- url=url)
-print(call.sid)
-```
-
-## <a id="AdditionalServices"></a>Use additional Twilio services
-In addition to the examples shown here, Twilio offers web-based APIs that you can use to get more Twilio functionality from your Azure application. For full details, see the [Twilio API documentation][twilio_api].
-
-## <a id="NextSteps"></a>Next steps
-Now that you've learned the basics of the Twilio service, follow these links to learn more:
-
-* [Twilio security guidelines][twilio_security_guidelines]
-* [Twilio how-to guides and example code][twilio_howtos]
-* [Twilio quickstart tutorials][twilio_quickstarts]
-* [Twilio on GitHub][twilio_on_github]
-* [Talk to Twilio Support][twilio_support]
-
-[special_offer]: https://ahoy.twilio.com/azure
-[twilio_python]: https://github.com/twilio/twilio-python
-[twilio_lib_docs]: https://www.twilio.com/docs/libraries/python
-[twilio_github_readme]: https://github.com/twilio/twilio-python/blob/master/README.md
-
-[twimlet_message_url]: https://twimlets.com/message
-[twimlet_message_url_hello_world]: https://twimlets.com/message?Message%5B0%5D=Hello%20World
-[twiml_reference]: https://www.twilio.com/docs/api/twiml
-[twilio_pricing]: https://www.twilio.com/pricing
-
-[twilio_libraries]: https://www.twilio.com/docs/libraries
-[twiml]: https://www.twilio.com/docs/api/twiml
-[twilio_api]: https://www.twilio.com/docs/api
-[try_twilio]: https://www.twilio.com/try-twilio
-[twilio_console]: https://www.twilio.com/console
-[twilio_security_guidelines]: https://www.twilio.com/docs/security
-[twilio_howtos]: https://www.twilio.com/docs/all
-[twilio_on_github]: https://github.com/twilio
-[twilio_support]: https://www.twilio.com/help/contact
-[twilio_quickstarts]: https://www.twilio.com/docs/quickstart
-[azure_ips]: ./virtual-network/virtual-network-public-ip-address.md
-[azure_vm_setup]: ./virtual-machines/linux/quick-create-portal.md
-[azure_nsg]: ./virtual-network/manage-network-security-group.md
mysql Concepts Backup Restore https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/partner-twilio-ruby-how-to-use-voice-sms.md
- Title: How to Use Twilio for Voice and SMS (Ruby) | Microsoft Docs
-description: Learn how to make a phone call and send a SMS message with the Twilio API service on Azure. Code samples written in Ruby.
-
- Previously updated : 11/25/2014
-
-# How to Use Twilio for Voice and SMS Capabilities in Ruby
-This guide demonstrates how to perform common programming tasks with the Twilio API service on Azure. The scenarios covered include making a phone call and sending a Short Message Service (SMS) message. For more information on Twilio and using voice and SMS in your applications, see the [Next Steps](#NextSteps) section.
-
-## <a id="WhatIs"></a>What is Twilio?
-Twilio is a telephony web-service API that lets you use your existing web languages and skills to build voice and SMS applications. Twilio is a third-party service (not an Azure feature and not a Microsoft product).
-
-**Twilio Voice** allows your applications to make and receive phone calls. **Twilio SMS** allows your applications to make and receive SMS messages. **Twilio Client** allows your applications to enable voice communication using existing Internet connections, including mobile connections.
-
-## <a id="Pricing"></a>Twilio Pricing and Special Offers
-Information about Twilio pricing is available at [Twilio Pricing][twilio_pricing]. Azure customers receive a [special offer][special_offer]: a free credit of 1000 texts or 1000 inbound minutes. To sign up for this offer or get more information, please visit [https://ahoy.twilio.com/azure][special_offer].
-
-## <a id="Concepts"></a>Concepts
-The Twilio API is a RESTful API that provides voice and SMS functionality for applications. Client libraries are available in multiple languages; for a list, see [Twilio API Libraries][twilio_libraries].
-
-### <a id="TwiML"></a>TwiML
-TwiML is a set of XML-based instructions that inform Twilio of how to process a call or SMS.
-
-As an example, the following TwiML would convert the text **Hello World** to speech.
-
-```xml
-<?xml version="1.0" encoding="UTF-8" ?>
-<Response>
- <Say>Hello World</Say>
-</Response>
-```
-
-All TwiML documents have `<Response>` as their root element. From there, you use Twilio Verbs to define the behavior of your application.
-
-### <a id="Verbs"></a>TwiML Verbs
-Twilio Verbs are XML tags that tell Twilio what to **do**. For example, the **&lt;Say&gt;** verb instructs Twilio to audibly deliver a message on a call.
-
-The following is a list of Twilio verbs.
-
-* **&lt;Dial&gt;**: Connects the caller to another phone.
-* **&lt;Gather&gt;**: Collects numeric digits entered on the telephone keypad.
-* **&lt;Hangup&gt;**: Ends a call.
-* **&lt;Play&gt;**: Plays an audio file.
-* **&lt;Pause&gt;**: Waits silently for a specified number of seconds.
-* **&lt;Record&gt;**: Records the caller's voice and returns a URL of a file that contains the recording.
-* **&lt;Redirect&gt;**: Transfers control of a call or SMS to the TwiML at a different URL.
-* **&lt;Reject&gt;**: Rejects an incoming call to your Twilio number without billing you.
-* **&lt;Say&gt;**: Converts text to speech that is played on a call.
-* **&lt;Sms&gt;**: Sends an SMS message.
-
-For more information about Twilio verbs, their attributes, and TwiML, see [TwiML][twiml]. For additional information about the Twilio API, see [Twilio API][twilio_api].
-
-## <a id="CreateAccount"></a>Create a Twilio Account
-When you're ready to get a Twilio account, sign up at [Try Twilio][try_twilio]. You can start with a free account, and upgrade your account later.
-
-When you sign up for a Twilio account, you'll get a free phone number for your application. You'll also receive an account SID and an auth token. Both will be needed to make Twilio API calls. To prevent unauthorized access to your account, keep your authentication token secure. Your account SID and auth token are viewable at the [Twilio account page][twilio_account], in the fields labeled **ACCOUNT SID** and **AUTH TOKEN**, respectively.
-
-### <a id="VerifyPhoneNumbers"></a>Verify Phone Numbers
-In addition to the number you are given by Twilio, you can also verify numbers that you control (for example, your cell phone or home phone number) for use in your applications.
-
-For information on how to verify a phone number, see [Manage Numbers][verify_phone].
-
-## <a id="create_app"></a>Create a Ruby Application
-A Ruby application that uses the Twilio service and is running in Azure is no different than any other Ruby application that uses the Twilio service. While Twilio services are RESTful and can be called from Ruby in several ways, this article will focus on how to use Twilio services with [Twilio helper library for Ruby][twilio_ruby].
-
-First, [set up a new Azure Linux VM][azure_vm_setup] to act as a host for your new Ruby web application. Ignore the steps involving the creation of a Rails app; just set up the VM. Make sure you create an endpoint with an external port of 80 and an internal port of 5000.
-
-In the examples below, we will be using [Sinatra][sinatra], a very simple web framework for Ruby. But you can certainly use the Twilio helper library for Ruby with any other web framework, including Ruby on Rails.
-
-SSH into your new VM and create a directory for your new app. Inside that directory, create a file called `Gemfile` and copy the following code into it:
-
-```ruby
-source 'https://rubygems.org'
-gem 'sinatra'
-gem 'thin'
-```
-
-On the command line, run `bundle install` to install the dependencies above. Next, create a file called `web.rb`; this is where the code for your web app lives. Paste the following code into it:
-
-```ruby
-require 'sinatra'
-
-get '/' do
- "Hello Monkey!"
-end
-```
-
-At this point you should be able to run the command `ruby web.rb -p 5000`. This will spin up a small web server on port 5000. You should be able to browse to this app by visiting the URL you set up for your Azure VM. Once you can reach your web app in the browser, you're ready to start building a Twilio app.
-
-## <a id="configure_app"></a>Configure Your Application to Use Twilio
-You can configure your web app to use the Twilio library by updating your `Gemfile` to include this line:
-
-```ruby
-gem 'twilio-ruby'
-```
-
-On the command line, run `bundle install`. Now open `web.rb` and include this line at the top:
-
-```ruby
-require 'twilio-ruby'
-```
-
-You're now all set to use the Twilio helper library for Ruby in your web app.
-
-## <a id="howto_make_call"></a>How to: Make an outgoing call
-The following shows how to make an outgoing call. Key concepts include using the Twilio helper library for Ruby to make REST API calls and rendering TwiML. Substitute your values for the **From** and **To** phone numbers, and ensure that you verify the **From** phone number for your Twilio account prior to running the code.
-
-Add this code to `web.rb`:
-
-```ruby
-# Set your account ID and authentication token.
-sid = "your_twilio_account_sid"
-token = "your_twilio_authentication_token"
-
-# The number of the phone initiating the call.
-# This should either be a Twilio number or a number that you've verified.
-from = "NNNNNNNNNNN"
-
-# The number of the phone receiving the call.
-to = "NNNNNNNNNNN"
-
-# The URL in our app that returns the TwiML response.
-url = "http://yourdomain.cloudapp.net/voice_url"
-
-get '/make_call' do
-  # Create the Twilio REST client.
-  client = Twilio::REST::Client.new(sid, token)
-
- # Make the call
- client.account.calls.create(to: to, from: from, url: url)
-end
-
-post '/voice_url' do
- "<Response>
- <Say>Hello Monkey!</Say>
- </Response>"
-end
-```
-
-If you open `http://yourdomain.cloudapp.net/make_call` in a browser, it will trigger a call to the Twilio API to make the phone call. The first two parameters in `client.account.calls.create` are fairly self-explanatory: the number the call is `from` and the number the call is `to`.
-
-The third parameter (`url`) is the URL that Twilio requests to get instructions on what to do once the call is connected. In this case we set up a URL (`http://yourdomain.cloudapp.net/voice_url`) that returns a simple TwiML document and uses the `<Say>` verb to do some text-to-speech and say "Hello Monkey!" to the person receiving the call.
-
-## <a id="howto_receive_sms"></a>How to: Receive an SMS message
-In the previous example we initiated an **outgoing** phone call. This time, let's use the phone number that Twilio gave us during sign-up to process an **incoming** SMS message.
-
-First, log in to your [Twilio dashboard][twilio_account]. Click "Numbers" in the top nav and then click the Twilio number you were provided. You'll see two URLs that you can configure: a Voice Request URL and an SMS Request URL. These are the URLs that Twilio calls whenever a phone call is made or an SMS is sent to your number. The URLs are also known as "web hooks".
-
-We would like to process incoming SMS messages, so let's update the URL to `http://yourdomain.cloudapp.net/sms_url`. Go ahead and click Save Changes at the bottom of the page. Now, back in `web.rb` let's program our application to handle this:
-
-```ruby
-post '/sms_url' do
- "<Response>
- <Message>Hey, thanks for the ping! Twilio and Azure rock!</Message>
- </Response>"
-end
-```
-
-After making the change, make sure to restart your web app. Now, take out your phone and send an SMS to your Twilio number. You should promptly get an SMS response that says "Hey, thanks for the ping! Twilio and Azure rock!".
-
-## <a id="additional_services"></a>How to: Use Additional Twilio Services
-In addition to the examples shown here, Twilio offers web-based APIs that you can use to leverage additional Twilio functionality from your Azure application. For full details, see the [Twilio API documentation][twilio_api_documentation].
-
-### <a id="NextSteps"></a>Next Steps
-Now that you've learned the basics of the Twilio service, follow these links to learn more:
-
-* [Twilio Security Guidelines][twilio_security_guidelines]
-* [Twilio HowTos and Example Code][twilio_howtos]
-* [Twilio Quickstart Tutorials][twilio_quickstarts]
-* [Twilio on GitHub][twilio_on_github]
-* [Talk to Twilio Support][twilio_support]
-
-[twilio_ruby]: https://www.twilio.com/docs/ruby/install
-
-[twilio_pricing]: https://www.twilio.com/pricing
-[special_offer]: https://ahoy.twilio.com/azure
-[twilio_libraries]: https://www.twilio.com/docs/libraries
-[twiml]: https://www.twilio.com/docs/api/twiml
-[twilio_api]: https://www.twilio.com/docs/api
-[try_twilio]: https://www.twilio.com/try-twilio
-[twilio_account]: https://www.twilio.com/user/account
-[verify_phone]: https://www.twilio.com/user/account/phone-numbers/verified#
-[twilio_api_documentation]: https://www.twilio.com/docs/api
-[twilio_security_guidelines]: https://www.twilio.com/docs/security
-[twilio_howtos]: https://www.twilio.com/docs/all
-[twilio_on_github]: https://github.com/twilio
-[twilio_support]: https://www.twilio.com/help/contact
-[twilio_quickstarts]: https://www.twilio.com/docs/quickstart
-[sinatra]: http://www.sinatrarb.com/
-[azure_vm_setup]: /previous-versions/azure/virtual-machines/linux/classic/ruby-rails-web-app
purview Register Scan Hive Metastore Source https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/purview/register-scan-hive-metastore-source.md
The Hive Metastore source supports Full scan to extract metadata from a **Hive M
> [!Note]
> The driver should be accessible to all accounts in the VM. Do not install it in a user account.
-5. Supported Hive versions are 2.x to 3.x.
+5. Supported Hive versions are 2.x to 3.x. Supported Databricks versions are 8.0 and above.
## Setting up authentication for a scan
role-based-access-control Role Definitions List https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/role-based-access-control/role-definitions-list.md
Previously updated : 05/06/2021
Last updated : 07/29/2021
To see the list of administrator roles for Azure Active Directory, see [Administ
Follow these steps to list all roles in the Azure portal.
-The **Roles** tab was recently updated with some additional features. If you want to view the previous experience, see the **Roles (Classic)** tab. You can use either roles tab to work with your roles, however, if you create or delete custom roles, you might need to manually refresh the page to see the latest changes.
-
-#### [Roles](#tab/roles/)
-
1. In the Azure portal, click **All services** and then select any scope. For example, you can select **Management groups**, **Subscriptions**, **Resource groups**, or a resource.

1. Click the specific resource.
The **Roles** tab was recently updated with some additional features. If you wan
![Screenshot showing role permissions using new experience.](./media/role-definitions-list/role-permissions.png)
-#### [Roles (Classic)](#tab/roles-classic/)
-
-1. In the Azure portal, click **All services** and then select any scope. For example, you can select **Management groups**, **Subscriptions**, **Resource groups**, or a resource.
-
-1. Click the specific resource.
-
-1. Click **Access control (IAM)**.
-
-1. Click the **Roles (Classic)** tab to see a list of all the built-in and custom roles.
-
- You can see the number of users and groups that are assigned to each role at the current scope.
-
- ![Roles list](./media/role-definitions-list/roles-list-classic.png)
---

## Azure PowerShell

### List all roles
search Search Security Rbac https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/search/search-security-rbac.md
Set the feature flag on the portal URL to work with the preview roles: Search Se
### [**PowerShell**](#tab/rbac-powershell)
-When [using PowerShell to assign roles](/role-based-access-control/role-assignments-powershell), call [New-AzRoleAssignment](/powershell/module/az.resources/new-azroleassignment), providing the Azure user or group name, and the scope of the assignment.
+When [using PowerShell to assign roles](/azure/role-based-access-control/role-assignments-powershell), call [New-AzRoleAssignment](/powershell/module/az.resources/new-azroleassignment), providing the Azure user or group name, and the scope of the assignment.
Before you start, make sure you load the Azure and AzureAD modules and connect to Azure:
Use the preview Management REST API, version 2021-04-01-preview, for this task.
1. Set `disableLocalAuth` to **True**.
-If you revert the last step, setting `disableLocalAuth` to **False**, the search service will resume acceptance of API keys on the request automatically (assuming they are specified).
+If you revert the last step, setting `disableLocalAuth` to **False**, the search service will resume acceptance of API keys on the request automatically (assuming they are specified).
security Feature Availability https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/security/fundamentals/feature-availability.md
The following tables display the current Azure Sentinel feature availability in
| - [Azure ADIP](../../sentinel/connect-azure-ad-identity-protection.md) | GA | GA |
| - [Azure DDoS Protection](../../sentinel/connect-azure-ddos-protection.md) | GA | GA |
| - [Azure Defender](../../sentinel/connect-azure-security-center.md) | GA | GA |
-| - [Azure Defender for IoT](../../sentinel/connect-asc-iot.md) | GA | Not Available |
+| - [Azure Defender for IoT](../../sentinel/connect-asc-iot.md) | Public Preview | Not Available |
| - [Azure Firewall](../../sentinel/connect-azure-firewall.md) | GA | GA |
| - [Azure Information Protection](../../sentinel/connect-azure-information-protection.md) | Public Preview | Not Available |
| - [Azure Key Vault](../../sentinel/connect-azure-key-vault.md) | Public Preview | Not Available |
The following table displays the current Azure Defender for IoT feature availabi
- Understand the [shared responsibility](shared-responsibility.md) model and which security tasks are handled by the cloud provider and which tasks are handled by you.
- Understand the [Azure Government Cloud](../../azure-government/documentation-government-welcome.md) capabilities and the trustworthy design and security used to support compliance applicable to federal, state, and local government organizations and their partners.
- Understand the [Office 365 Government plan](/office365/servicedescriptions/office-365-platform-service-description/office-365-us-government/office-365-us-government#about-office-365-government-environments).
-- Understand [compliance in Azure](../../compliance/index.yml) for legal and regulatory standards.
+- Understand [compliance in Azure](../../compliance/index.yml) for legal and regulatory standards.
sendgrid Dotnet How To Send Email https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sendgrid-dotnet-how-to-send-email.md
- Title: How to use the SendGrid email service (.NET) | Microsoft Docs
-description: Learn how to send email with the SendGrid email service on Azure. Code samples are written in C# and use the .NET API.
- Previously updated : 02/15/2017
-# How to Send Email Using SendGrid with Azure
-## Overview
-This guide demonstrates how to perform common programming tasks with the
-SendGrid email service on Azure. The samples are written in C\#
-and support .NET Standard 1.3. The scenarios covered include constructing
-email, sending email, adding attachments, and enabling various mail and
-tracking settings. For more information on SendGrid and sending email, see
-the [Next steps][Next steps] section.
-
-## What is the SendGrid Email Service?
-SendGrid is a [cloud-based email service] that provides reliable
-[transactional email delivery], scalability, and real-time analytics along with flexible APIs
-that make custom integration easy. Common SendGrid use cases include:
-
-* Automatically sending receipts or purchase confirmations to customers.
-* Administering distribution lists for sending customers monthly fliers and promotions.
-* Collecting real-time metrics for things like blocked email and customer engagement.
-* Forwarding customer inquiries.
-* Processing incoming emails.
-
-For more information, visit [https://sendgrid.com](https://sendgrid.com) or
-SendGrid's [C# library][sendgrid-csharp] GitHub repo.
-
-## Create a SendGrid Account
-
-## Reference the SendGrid .NET Class Library
-The [SendGrid NuGet package](https://www.nuget.org/packages/Sendgrid) is the easiest way to get the SendGrid API and to configure your application with all dependencies. NuGet is a Visual Studio extension included with Microsoft Visual Studio 2015 and above that makes it easy to install and update libraries and tools.
-
-> [!NOTE]
-> To install NuGet if you are running a version of Visual Studio earlier than Visual Studio 2015, visit [https://www.nuget.org](https://www.nuget.org), and click the **Install NuGet** button.
->
->
-
-To install the SendGrid NuGet package in your application, do the following:
-
-1. Click on **New Project** and select a **Template**.
-
- ![Create a new project][create-new-project]
-2. In **Solution Explorer**, right-click **References**, then click
- **Manage NuGet Packages**.
-
- ![SendGrid NuGet package][SendGrid-NuGet-package]
-3. Search for **SendGrid** and select the **SendGrid** item in the
- results list.
-4. Select the latest stable version of the NuGet package from the version dropdown so that you can work with the object model and APIs demonstrated in this article.
-
- ![SendGrid package][sendgrid-package]
-5. Click **Install** to complete the installation, and then close this
- dialog.
-
-SendGrid's .NET class library is called **SendGrid**. It contains the following namespaces:
-
-* **SendGrid** for communicating with SendGrid's API.
-* **SendGrid.Helpers.Mail** for helper methods to easily create SendGridMessage objects that specify how to send emails.
-
-Add the following namespace declarations to the top of any C# file in which you want to programmatically access the SendGrid email service.
-
-```csharp
-using SendGrid;
-using SendGrid.Helpers.Mail;
-```
-
-## How to: Create an Email
-Use the **SendGridMessage** object to create an email message. After the message object is created, you can set properties and call methods to specify the email sender, the email recipient, and the subject and body of the email.
-
-The following example demonstrates how to create a fully populated email object:
-
-```csharp
-var msg = new SendGridMessage();
-
-msg.SetFrom(new EmailAddress("dx@example.com", "SendGrid DX Team"));
-
-var recipients = new List<EmailAddress>
-{
- new EmailAddress("jeff@example.com", "Jeff Smith"),
- new EmailAddress("anna@example.com", "Anna Lidman"),
- new EmailAddress("peter@example.com", "Peter Saddow")
-};
-msg.AddTos(recipients);
-
-msg.SetSubject("Testing the SendGrid C# Library");
-
-msg.AddContent(MimeType.Text, "Hello World plain text!");
-msg.AddContent(MimeType.Html, "<p>Hello World!</p>");
-```
-
-For more information on all properties and methods supported by the
-**SendGrid** type, see [sendgrid-csharp][sendgrid-csharp] on GitHub.
-
-## How to: Send an Email
-After creating an email message, you can send it using SendGrid's API. Alternatively, you can use [.NET's built-in library][NET-library].
-
-Sending email requires that you supply your SendGrid API Key. If you need details about how to configure API Keys, please visit SendGrid's API Keys [documentation][documentation].
-
-You can store these credentials in the Azure portal by clicking **Application settings** and adding the key/value pairs under **App settings**.
-
-![Azure app settings][azure_app_settings]
-
-Then, you may access them as follows:
-
-```csharp
-var apiKey = System.Environment.GetEnvironmentVariable("SENDGRID_APIKEY");
-var client = new SendGridClient(apiKey);
-```
-
-The following example shows how to send an email message using the SendGrid Web API with a console application.
-
-```csharp
-using System;
-using System.Threading.Tasks;
-using SendGrid;
-using SendGrid.Helpers.Mail;
-
-namespace Example
-{
- internal class Example
- {
- private static void Main()
- {
- Execute().Wait();
- }
-
- static async Task Execute()
- {
- var apiKey = System.Environment.GetEnvironmentVariable("SENDGRID_APIKEY");
- var client = new SendGridClient(apiKey);
- var msg = new SendGridMessage()
- {
- From = new EmailAddress("test@example.com", "DX Team"),
- Subject = "Hello World from the SendGrid CSharp SDK!",
- PlainTextContent = "Hello, Email!",
- HtmlContent = "<strong>Hello, Email!</strong>"
- };
- msg.AddTo(new EmailAddress("test@example.com", "Test User"));
- var response = await client.SendEmailAsync(msg);
- }
- }
-}
-```
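The sample above ignores the `Response` object that `SendEmailAsync` returns. As a minimal sketch (assuming the v9 client's `Response` type, which exposes `StatusCode` and `Body`), you could check the outcome inside the `Execute` method:

```csharp
var response = await client.SendEmailAsync(msg);

// SendGrid's v3 Mail Send endpoint returns 202 Accepted when the message is queued.
if (response.StatusCode != System.Net.HttpStatusCode.Accepted)
{
    // The body usually contains a JSON description of what went wrong.
    var errorBody = await response.Body.ReadAsStringAsync();
    Console.WriteLine($"Send failed: {(int)response.StatusCode} {errorBody}");
}
```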
-
-## How to: Send email from ASP .NET Core API using MailHelper class
-
-The following example sends a single email to multiple recipients from an ASP.NET Core API by using the `MailHelper` class of the `SendGrid.Helpers.Mail` namespace. This example uses ASP.NET Core 1.0.
-
-In this example, the API key is stored in the `appsettings.json` file, which can be overridden from the Azure portal as shown in the preceding examples.
-
-The contents of `appsettings.json` file should look similar to:
-
-```json
-{
- "Logging": {
- "IncludeScopes": false,
- "LogLevel": {
- "Default": "Debug",
- "System": "Information",
- "Microsoft": "Information"
- }
- },
- "SENDGRID_API_KEY": "XXXXXXXXXXXXXXXXXXXXXXXXXXXXXX"
-}
-```
-
-First, add the following code to the `Startup.cs` file of the .NET Core API project. This step makes the `SENDGRID_API_KEY` from the `appsettings.json` file accessible through dependency injection in the API controller: the `IConfiguration` interface can be injected at the controller's constructor after it's registered in the `ConfigureServices` method below. The content of the `Startup.cs` file looks like the following after adding the required code:
-
-```csharp
- public IConfigurationRoot Configuration { get; }
-
- public void ConfigureServices(IServiceCollection services)
- {
- // Add mvc here
- services.AddMvc();
- services.AddSingleton<IConfiguration>(Configuration);
- }
-```
-
-In the controller, after injecting the `IConfiguration` interface, we can use the `CreateSingleEmailToMultipleRecipients` method of the `MailHelper` class to send a single email to multiple recipients. The method accepts an additional Boolean parameter named `showAllRecipients`, which controls whether email recipients can see each other's email addresses in the To section of the email header. The sample code for the controller should look like the following:
-
-```csharp
-using System;
-using System.Collections.Generic;
-using System.Linq;
-using System.Threading.Tasks;
-using Microsoft.AspNetCore.Mvc;
-using SendGrid;
-using SendGrid.Helpers.Mail;
-using Microsoft.Extensions.Configuration;
-
-namespace SendgridMailApp.Controllers
-{
- [Route("api/[controller]")]
- public class NotificationController : Controller
- {
- private readonly IConfiguration _configuration;
-
- public NotificationController(IConfiguration configuration)
- {
- _configuration = configuration;
- }
-
- [Route("SendNotification")]
- public async Task PostMessage()
- {
- var apiKey = _configuration.GetSection("SENDGRID_API_KEY").Value;
- var client = new SendGridClient(apiKey);
- var from = new EmailAddress("test1@example.com", "Example User 1");
- List<EmailAddress> tos = new List<EmailAddress>
- {
- new EmailAddress("test2@example.com", "Example User 2"),
- new EmailAddress("test3@example.com", "Example User 3"),
- new EmailAddress("test4@example.com","Example User 4")
- };
-
- var subject = "Hello world email from Sendgrid ";
- var htmlContent = "<strong>Hello world with HTML content</strong>";
- var displayRecipients = false; // set this to true if you want recipients to see each other's email addresses
- var msg = MailHelper.CreateSingleEmailToMultipleRecipients(from, tos, subject, "", htmlContent, displayRecipients);
- var response = await client.SendEmailAsync(msg);
- }
- }
-}
-```
-
-## How to: Add an attachment
-Attachments can be added to a message by calling the **AddAttachment** method and minimally specifying the file name and Base64-encoded content you want to attach. You can include multiple attachments by calling this method once for each file you wish to attach, or by using the **AddAttachments** method. The following example demonstrates adding an attachment to a message:
-
-```csharp
-var banner2 = new Attachment()
-{
- Content = Convert.ToBase64String(raw_content),
- Type = "image/png",
- Filename = "banner2.png",
- Disposition = "inline",
- ContentId = "Banner 2"
-};
-msg.AddAttachment(banner2);
-```
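Note that `raw_content` isn't defined in the snippet above. A minimal sketch of producing it, assuming the attachment is read from a local file (the path here is hypothetical):

```csharp
// Read the file's raw bytes; Base64-encoding them yields the string that
// the Attachment.Content property expects (the path here is hypothetical).
byte[] raw_content = System.IO.File.ReadAllBytes(@"C:\images\banner2.png");
banner2.Content = System.Convert.ToBase64String(raw_content);
```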
-
-## How to: Use mail settings to enable footers, tracking, and analytics
-SendGrid provides additional email functionality through the use of mail settings and tracking settings. These settings can be added to an email message to enable specific functionality such as click tracking, Google analytics, subscription tracking, and so on. For a full list of apps, see the [Settings Documentation][settings-documentation].
-
-Apps can be applied to **SendGrid** email messages using methods implemented as part of the **SendGridMessage** class. The following examples demonstrate the footer and click tracking filters:
-
-### Footer settings
-
-```csharp
- msg.SetFooterSetting(
- true,
- "Some Footer HTML",
- "<strong>Some Footer Text</strong>");
-```
-
-### Click tracking
-
-```csharp
-msg.SetClickTracking(true);
-```
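Other tracking settings follow the same pattern. For example, open tracking might be enabled like this (a sketch, assuming the v9 `SetOpenTracking` helper, whose second argument is an optional substitution tag):

```csharp
// Enable open tracking; pass null to use SendGrid's default tracking pixel placement.
msg.SetOpenTracking(true, null);
```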
-
-## How to: Use Additional SendGrid Services
-SendGrid offers several APIs and webhooks that provide additional functionality within your Azure application. For more details, see the [SendGrid API Reference][SendGrid API documentation].
-
-## Next steps
-Now that you've learned the basics of the SendGrid Email service, follow
-these links to learn more.
-
-* SendGrid C\# library repo: [sendgrid-csharp][sendgrid-csharp]
-* SendGrid API documentation: <https://sendgrid.com/docs>
-
-[Next steps]: #next-steps
-[What is the SendGrid Email Service?]: #whatis
-[Create a SendGrid Account]: #createaccount
-[Reference the SendGrid .NET Class Library]: #reference
-[How to: Create an Email]: #createemail
-[How to: Send an Email]: #sendemail
-[How to: Add an Attachment]: #addattachment
-[How to: Use Filters to Enable Footers, Tracking, and Analytics]: #usefilters
-[How to: Use Additional SendGrid Services]: #useservices
-
-[create-new-project]: ./media/sendgrid-dotnet-how-to-send-email/new-project.png
-[SendGrid-NuGet-package]: ./media/sendgrid-dotnet-how-to-send-email/reference.png
-[sendgrid-package]: ./media/sendgrid-dotnet-how-to-send-email/sendgrid-package.png
-[azure_app_settings]: ./media/sendgrid-dotnet-how-to-send-email/azure-app-settings.png
-[sendgrid-csharp]: https://github.com/sendgrid/sendgrid-csharp
-[SMTP vs. Web API]: https://sendgrid.com/docs/Integrate/index.html
-[App Settings]: https://sendgrid.com/docs/API_Reference/SMTP_API/apps.html
-[SendGrid API documentation]: https://sendgrid.com/docs/api-reference/
-[NET-library]: https://sendgrid.com/docs/Integrate/Code_Examples/v2_Mail/csharp.html#-Using-NETs-Builtin-SMTP-Library
-[documentation]: https://sendgrid.com/docs/Classroom/Send/api_keys.html
-[settings-documentation]: https://sendgrid.com/docs/API_Reference/SMTP_API/apps.html
-
-[cloud-based email service]: https://sendgrid.com/solutions
-[transactional email delivery]: https://sendgrid.com/use-cases/transactional-email
-
sentinel Azure Sentinel Billing https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sentinel/azure-sentinel-billing.md
Title: Azure Sentinel costs and billing | Microsoft Docs
-description: Learn about the Azure Sentinel pricing model, estimating and managing Azure Sentinel costs, and understanding your costs and bill.
+ Title: Plan and manage costs for Azure Sentinel
+description: Learn how to plan, understand, and manage costs and billing for Azure Sentinel by using cost analysis in the Azure portal and other methods.
- Previously updated : 06/03/2021
+ Last updated : 07/27/2021
-# Azure Sentinel costs and billing
+# Plan and manage costs for Azure Sentinel
-Azure Sentinel uses an extensive query language to analyze, interact with, and derive insights from huge volumes of operational data in seconds. Azure Sentinel stores its data for analysis in Azure Monitor Log Analytics workspaces.
+This article describes how to plan for and manage costs for Azure Sentinel. First, you use the Azure pricing calculator to help plan for Azure Sentinel costs, before you add any resources for the service. Next, as you add Azure resources, review the estimated costs.
-When enabled on a Log Analytics workspace, Azure Sentinel automatically analyzes all the data that workspace ingests, and bills on the volume of data that workspace ingests and stores. This article describes ways you can monitor, understand, and save on usage and costs for Azure Sentinel and associated Log Analytics workspaces.
+After you've started using Azure Sentinel resources, use Cost Management features to set budgets and monitor costs. You can also review forecasted costs and spot spending trends to find areas where you might want to act. The sections below describe several ways to manage and optimize Azure Sentinel costs.
-## Azure Sentinel pricing model
+Costs for Azure Sentinel are only a portion of the monthly costs in your Azure bill. Although this article explains how to plan for and manage costs for Azure Sentinel, you're billed for all Azure services and resources your Azure subscription uses, including Partner services.
+
+## Prerequisites
+
+- To view cost data and perform cost analysis in Cost Management, you must have a supported Azure account type, with at least read access.
+
+ While cost analysis in Cost Management supports most Azure account types, not all are supported. To view the full list of supported account types, see [Understand Cost Management data](../cost-management-billing/costs/understand-cost-mgt-data.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn).
+
+ For information about assigning access to Azure Cost Management data, see [Assign access to data](../cost-management/assign-access-acm-data.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn).
+
+- You must have details about your data sources. Azure Sentinel allows you to bring in data from one or more data sources. Some of these data sources are free, and others incur charges. For more information, see [Free data sources](#free-data-sources).
+
+## Estimate costs before using Azure Sentinel
+
+If you're not yet using Azure Sentinel, you can use the [Azure Sentinel pricing calculator](https://azure.microsoft.com/pricing/calculator/?service=azure-sentinel) to estimate potential costs. Enter *Azure Sentinel* in the Search box and select the resulting Azure Sentinel tile. The pricing calculator helps you estimate your likely costs based on your expected data ingestion and retention.
+
+For example, you can enter the GB of daily data you expect to ingest in Azure Sentinel, and the region for your workspace. The calculator provides the aggregate monthly cost across these components:
+
+- Log Analytics data ingestion
+- Azure Sentinel data analysis
+- Log Analytics data retention
+
+> [!NOTE]
+> The costs shown in this image are for example purposes only. They're not intended to reflect actual costs.
++
+## Understand the full billing model for Azure Sentinel
Azure Sentinel offers a flexible and predictable pricing model. For more information, see the [Azure Sentinel pricing page](https://azure.microsoft.com/pricing/details/azure-sentinel/). For the related Log Analytics charges, see [Azure Monitor Log Analytics pricing](https://azure.microsoft.com/pricing/details/log-analytics/).
-### Pay-As-You-Go and Commitment Tiers
+Azure Sentinel runs on Azure infrastructure that accrues costs when you deploy new resources. It's important to understand that additional infrastructure costs might accrue.
+### How you're charged for Azure Sentinel
+
+There are two ways to pay for the Azure Sentinel service: **Pay-As-You-Go** and **Commitment Tiers**.
+
-There are two ways to pay for the Azure Sentinel service: Pay-As-You-Go and Commitment Tiers.
+- **Pay-As-You-Go** is the default model, based on the actual data volume stored and optionally for data retention beyond 90 days. Data volume is measured in GB (10^9 bytes).
-Pay-As-You-Go is the default model, based on the actual data volume stored and optionally for data retention beyond 90 days. Data volume is measured in GB (10^9 bytes).
+- Log Analytics and Azure Sentinel also have **Commitment Tier** pricing, formerly called Capacity Reservations, which is more predictable and saves as much as 65% compared to Pay-As-You-Go pricing.
-Log Analytics and Azure Sentinel also have Commitment Tier pricing, formerly called Capacity Reservations, which is more predictable and saves as much as 65% compared to Pay-As-You-Go pricing. With Commitment Tier pricing, you can buy a commitment starting at 100 GB/day. Any usage above the commitment level is billed at the Commitment Tier rate you selected. For example, a Commitment Tier of 100GB/day bills you for the committed 100GB/day data volume, plus any additional GB/day at the discounted rate for that tier.
+ With Commitment Tier pricing, you can buy a commitment starting at 100 GB/day. Any usage above the commitment level is billed at the Commitment Tier rate you selected. For example, a Commitment Tier of 100GB/day bills you for the committed 100GB/day data volume, plus any additional GB/day at the discounted rate for that tier. (A worked sketch of this math follows this list.)
-You can increase your commitment tier anytime, and decrease it every 31 days, to optimize costs as your data volume increases or decreases. To see your current Azure Sentinel pricing tier, select **Settings** in the Azure Sentinel left navigation, and then select the **Pricing** tab. Your current pricing tier is marked as **Current tier**.
+ You can increase your commitment tier anytime, and decrease it every 31 days, to optimize costs as your data volume increases or decreases. To see your current Azure Sentinel pricing tier, select **Settings** in the Azure Sentinel left navigation, and then select the **Pricing** tab. Your current pricing tier is marked as **Current tier**.
+
+ To set and change your Commitment Tier, see [Set or change pricing tier](#set-or-change-pricing-tier).
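To make the tier arithmetic concrete, here's a minimal sketch comparing daily costs under Pay-As-You-Go and a Commitment Tier. The per-GB rates are placeholder assumptions, not actual Azure Sentinel prices; see the pricing page for real numbers.

```csharp
using System;

// Placeholder rates, not actual Azure Sentinel pricing.
const double paygRatePerGb = 2.00;
const double tierRatePerGb = 1.30;     // discounted rate for the 100 GB/day tier
const double commitmentGbPerDay = 100;

double dailyIngestionGb = 130;         // example workload

// Pay-As-You-Go: every ingested GB is billed at the full rate.
double paygDaily = dailyIngestionGb * paygRatePerGb;

// Commitment Tier: you pay for at least the committed volume, and any
// overage is billed at the same discounted tier rate.
double tierDaily = Math.Max(dailyIngestionGb, commitmentGbPerDay) * tierRatePerGb;

Console.WriteLine($"Pay-As-You-Go:   {paygDaily:F2}/day");
Console.WriteLine($"Commitment Tier: {tierDaily:F2}/day");
```

With these placeholder numbers, the 130 GB/day workload costs less on the Commitment Tier, which matches the guidance above to align your tier with your observed ingestion.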
+
+### Understand your Azure Sentinel bill
+
+Billable meters are the individual components of your service that appear on your bill and are also shown in cost analysis under your service. At the end of your billing cycle, the charges for each meter are summed. Your bill or invoice shows a section for all Azure Sentinel costs. There's a separate line item for each meter.
+
+To see your Azure bill, select **Cost Analysis** in the left navigation of **Cost Management + Billing**. On the **Cost analysis** screen, select the drop-down caret in the **View** field, and select **Invoice details**.
+
+> [!NOTE]
+> The costs shown in this image are for example purposes only. They're not intended to reflect actual costs.
+
+![Screenshot showing the Azure Sentinel section of a sample Azure bill.](media/billing/sample-bill.png)
+
+Azure Sentinel and Log Analytics charges appear on your Azure bill as separate line items based on your selected pricing plan. If you exceed your workspace's Commitment Tier usage in a given month, the Azure bill shows one line item for the Commitment Tier with its associated fixed cost, and a separate line item for the ingestion beyond the Commitment Tier, billed at your same Commitment Tier rate.
+
+The following table shows how Azure Sentinel and Log Analytics costs appear in the **Service name** and **Meter** columns of your Azure invoice:
+
+|Cost|Service name|Meter|
+|------|------------|-----|
+|Azure Sentinel Commitment Tier|**sentinel**|**`n` gb commitment tier**|
+|Log Analytics Commitment Tier|**azure monitor**|**`n` gb commitment tier**|
+|Azure Sentinel overage over the Commitment Tier, or Pay-As-You-Go|**sentinel**|**analysis**|
+|Log Analytics overage over the Commitment Tier, or Pay-As-You-Go|**log analytics**|**data ingestion**|
-To set and change your Commitment Tier, see [Set or change pricing tier](#set-or-change-pricing-tier).
+For more information on viewing and downloading your Azure bill, see [Azure cost and billing information](../cost-management-billing/understand/download-azure-daily-usage.md).
### Costs for other services
After you enable Azure Sentinel on a Log Analytics workspace, you can retain all
You can specify different retention settings for individual data types. For more information, see [Retention by data type](../azure-monitor/logs/manage-cost-storage.md#retention-by-data-type).
+### Additional CEF ingestion costs
+
+CEF is a supported Syslog events format in Azure Sentinel. You can use CEF to bring in valuable security information from a variety of sources to your Azure Sentinel workspace. CEF logs land in the CommonSecurityLog table in Azure Sentinel, which includes all the standard up-to-date CEF fields.
+
+Many devices and data sources allow for logging fields beyond the standard CEF schema. These additional fields land in the AdditionalExtensions table. These fields could have higher ingestion volumes than the standard CEF fields, because the event content within these fields can be variable.
+
+### Costs that might accrue after resource deletion
+
+Removing Azure Sentinel doesn't remove the Log Analytics workspace Azure Sentinel was deployed on, or any separate charges that workspace might be incurring.
+ ### Free trial You can enable Azure Sentinel on a new or existing Log Analytics workspace at no additional cost for the first 31 days. Charges related to Log Analytics, Automation, and BYOML still apply during the free trial. Usage beyond the first 31 days is charged per [Azure Sentinel pricing](https://azure.microsoft.com/pricing/details/azure-sentinel).
For data connectors that include both free and paid data types, you can select w
![Screenshot showing the Data connector page for MCAS, with the free Security Alerts selected and the paid MCASShadowITReporting not selected.](media/billing/data-types.png)
+For more information about free and paid data sources and connectors, see [Connect data sources](connect-data-sources.md).
> [!NOTE]
> Data connectors listed as Public Preview do not generate cost. Data connectors generate cost only after becoming Generally Available (GA).
>
There are several ways to understand and manage Azure Sentinel usage and costs.
Manage data ingestion and retention:
-
-- [Use Commitment Tier pricing to optimize costs](#set-or-change-pricing-tier) based on your data ingestion volume.
-- [Define a Log Analytics data volume cap](#define-a-data-volume-cap-in-log-analytics) to manage ingestion, although security data is excluded from the cap.
-- [Optimize Log Analytics costs with dedicated clusters](#optimize-log-analytics-costs-with-dedicated-clusters).
-- [Separate non-security data in a different workspace](#separate-non-security-data-in-a-different-workspace).
-- [Reduce long-term data retention costs with Azure Data Explorer (ADX)](#reduce-long-term-data-retention-costs-with-adx).
-- [Use Data Collection Rules for your Windows Security Events](#use-data-collection-rules-for-your-windows-security-events).
+### Using Azure Prepayment with Azure Sentinel
-Understand, monitor, and alert for data ingestion and cost changes:
+You can pay for Azure Sentinel charges with your Azure Prepayment credit. However, you can't use Azure Prepayment credit to pay bills to third-party organizations for their products and services, or for products from the Azure Marketplace.
-- [Run queries to understand your data ingestion](#run-queries-to-understand-your-data-ingestion).
-- [Deploy a workbook to visualize data ingestion](#deploy-a-workbook-to-visualize-data-ingestion).
-- [Use a cost management playbook](#use-a-playbook-for-cost-management-alerts) that can send an alert when ingestion exceeds a predefined threshold.
-- [Understand Common Event Format (CEF) data ingestion](#understand-cef-ingestion-volume).
+## Monitor costs
-### Manage data ingestion and retention
+As you use Azure resources with Azure Sentinel, you incur costs. Azure resource usage unit costs vary by time intervals such as seconds, minutes, hours, and days, or by unit usage, like bytes and megabytes. As soon as Azure Sentinel use starts, it incurs costs, and you can see the costs in [cost analysis](../cost-management/quick-acm-cost-analysis.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn).
-Use the following methods to manage data ingestion and retention for your Azure Sentinel workspace.
-
-#### Set or change pricing tier
-
-To optimize for highest savings, monitor your ingestion volume to ensure you have the Commitment Tier that aligns most closely with your ingestion volume patterns. You can increase or decrease your Commitment Tier to align with changing data volumes.
-
-You can increase your Commitment Tier anytime, which restarts the 31-day commitment period. However, to move back to Pay-As-You-Go or to a lower Commitment Tier, you must wait until after the 31-day commitment period finishes. Billing for Commitment Tiers is on a daily basis.
+When you use cost analysis, you view Azure Sentinel costs in graphs and tables for different time intervals. Some examples are by day, current and prior month, and year. You also view costs against budgets and forecasted costs. Switching to longer views over time can help you identify spending trends. And you see where overspending might have occurred. If you've created budgets, you can also easily see where they're exceeded.
-To see your current Azure Sentinel pricing tier, select **Settings** in the Azure Sentinel left navigation, and then select the **Pricing** tab. Your current pricing tier is marked **Current tier**.
+The [Azure Cost Management + Billing](../cost-management-billing/costs/quick-acm-cost-analysis.md) hub provides useful functionality. After you open **Cost Management + Billing** in the Azure portal, select **Cost Management** in the left navigation and then select the [scope](../cost-management-billing/costs/understand-work-scopes.md) or set of resources to investigate, such as an Azure subscription or resource group.
-To change your pricing tier commitment, select one of the other tiers on the pricing page, and then select **Apply**. You must have **Contributor** or **Owner** role in Azure Sentinel to change the pricing tier.
+The **Cost Analysis** screen shows detailed views of your Azure usage and costs, with the option to apply a variety of controls and filters.
-![Screenshot showing the Pricing page in Azure Sentinel Settings, with Pay-As-You-Go indicated as the current pricing tier.](media/billing/pricing.png)
+For example, to see charts of your daily costs for a certain time frame:
+1. Select the drop-down caret in the **View** field and select **Accumulated costs** or **Daily costs**.
+1. Select the drop-down caret in the date field and select a date range.
+1. Select the drop-down caret next to **Granularity** and select **Daily**.
> [!NOTE]
> Azure Sentinel data ingestion volumes appear under **Security Insights** in some portal Usage Charts.
In Log Analytics, you can enable a daily volume cap that limits the daily ingest
To define a daily volume cap, select **Usage and estimated costs** in the left navigation of your Log Analytics workspace, and then select **Daily cap**. Select **On**, enter a daily volume cap amount, and then select **OK**.
+
![Screenshot showing the Usage and estimated costs screen and the Daily cap window.](media/billing/daily-cap.png)
The **Usage and estimated costs** screen also shows your ingested data volume trend in the past 31 days, and the total retained data volume.
Data collection rules enable you to manage collection settings at scale, while s
Besides the predefined sets of events that you can select to ingest, such as All events, Minimal, or Common, data collection rules enable you to build custom filters and select specific events to ingest. The Azure Monitor Agent uses these rules to filter the data at the source, and then ingest only the events you've selected, while leaving everything else behind. Selecting specific events to ingest can help you optimize your costs and save more.
-### Understand, monitor, and alert for changes in data ingestion and costs
+> [!NOTE]
+> The costs shown in this image are for example purposes only. They're not intended to reflect actual costs.
+
+![Screenshot showing a Cost Management + Billing Cost analysis screen.](media/billing/cost-management.png)
-Use the following methods to understand, monitor, and alert for changes in your Azure Sentinel workspace.
+You could also apply further controls. For example, to view only the costs associated with Azure Sentinel, select **Add filter**, select **Service name**, and then select the service names **sentinel**, **log analytics**, and **azure monitor**.
-#### Run queries to understand your data ingestion
+### Run queries to understand your data ingestion
-Here are some queries you can use to understand your data ingestion volume.
+Azure Sentinel uses an extensive query language to analyze, interact with, and derive insights from huge volumes of operational data in seconds. Here are some Kusto queries you can use to understand your data ingestion volume. (A sketch of running such a query from code follows the example below.)
Run the following query to show data ingestion volume by solution:
Usage
| sort by Solution asc, DataType asc ```
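If you'd rather run these usage queries from code than from the portal, a sketch using the Azure.Monitor.Query library might look like the following. The workspace ID is a placeholder, and the query shape is an assumption modeled on the common billable-usage pattern, not taken verbatim from this article:

```csharp
using System;
using Azure;
using Azure.Identity;
using Azure.Monitor.Query;
using Azure.Monitor.Query.Models;

var client = new LogsQueryClient(new DefaultAzureCredential());

// Quantity in the Usage table is reported in MB, so divide by 1000 for GB.
string query = @"Usage
| where IsBillable == true
| summarize BillableDataGB = sum(Quantity) / 1000 by Solution
| sort by Solution asc";

Response<LogsQueryResult> result = await client.QueryWorkspaceAsync(
    "<workspace-id>",                          // placeholder workspace ID
    query,
    new QueryTimeRange(TimeSpan.FromDays(31)));

foreach (LogsTableRow row in result.Value.Table.Rows)
{
    Console.WriteLine($"{row["Solution"]}: {row["BillableDataGB"]} GB");
}
```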
-#### Deploy a workbook to visualize data ingestion
+### Deploy a workbook to visualize data ingestion
The **Workspace Usage Report workbook** provides your workspace's data consumption, cost, and usage statistics. The workbook gives the workspace's data ingestion status and amount of free and billable data. You can use the workbook logic to monitor data ingestion and costs, and to build custom views and rule-based alerts.
To enable the Workspace Usage Report workbook:
1. Select **View template** to use the workbook as is, or select **Save** to create an editable copy of the workbook. If you save a copy, select **View saved workbook**. 1. In the workbook, select the **Subscription** and **Workspace** you want to view, and then set the **TimeRange** to the time frame you want to see. You can set the **Show help** toggle to **Yes** to display in-place explanations in the workbook.
-#### Use a playbook for cost management alerts
+## Export cost data
+
+You can also [export your cost data](../cost-management-billing/costs/tutorial-export-acm-data.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn) to a storage account. This is helpful when you or others need to do additional data analysis for costs. For example, a finance team can analyze the data using Excel or Power BI. You can export your costs on a daily, weekly, or monthly schedule and set a custom date range. Exporting cost data is the recommended way to retrieve cost datasets.
+
+## Create budgets
+
+You can create [budgets](../cost-management/tutorial-acm-create-budgets.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn) to manage costs and create [alerts](../cost-management-billing/costs/cost-mgt-alerts-monitor-usage-spending.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn) that automatically notify stakeholders of spending anomalies and overspending risks. Alerts are based on spending compared to budget and cost thresholds. Budgets and alerts are created for Azure subscriptions and resource groups, so they're useful as part of an overall cost monitoring strategy.
+
+You can create budgets with filters for specific resources or services in Azure if you want more granularity in your monitoring. Filters help ensure that you don't accidentally create new resources that cost you additional money. For more information about the filter options available when you create a budget, see [Group and filter options](../cost-management-billing/costs/group-filter.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn).
+
+### Use a playbook for cost management alerts
To help you control your Azure Sentinel budget, you can create a cost management playbook. The playbook sends you an alert if your Azure Sentinel workspace exceeds a budget, which you define, within a given timeframe. The Azure Sentinel GitHub community provides the [Send-IngestionCostAlert](https://github.com/iwafula025/Azure-Sentinel/tree/master/Playbooks/Send-IngestionCostAlert) cost management playbook on GitHub. This playbook is activated by a recurrence trigger, and gives you a high level of flexibility. You can control execution frequency, ingestion volume, and the message to trigger, based on your requirements.
-#### Understand CEF ingestion volume
+### Define a data volume cap in Log Analytics
-CEF is a supported Syslog events format in Azure Sentinel. You can use CEF to bring in valuable security information from a variety of sources to your Azure Sentinel workspace. CEF logs land in the CommonSecurityLog table in Azure Sentinel, which includes all the standard up-to-date CEF fields.
+In Log Analytics, you can enable a daily volume cap that limits the daily ingestion for your workspace. The daily cap can help you manage unexpected increases in data volume, stay within your limit, and limit unplanned charges.
-Many devices and data sources allow for logging fields beyond the standard CEF schema. These additional fields land in the AdditionalExtensions table. These fields could have higher ingestion volumes than the standard CEF fields, because the event content within these fields can be variable.
+To define a daily volume cap, select **Usage and estimated costs** in the left navigation of your Log Analytics workspace, and then select **Daily cap**. Select **On**, enter a daily volume cap amount, and then select **OK**.
-## Understand your Azure Sentinel costs and bill
+![Screenshot showing the Usage and estimated costs screen and the Daily cap window.](media/billing/daily-cap.png)
-It's important to understand and track your Azure Sentinel costs. The [Azure Cost Management + Billing](../cost-management-billing/costs/quick-acm-cost-analysis.md) hub provides useful functionality. After you open **Cost Management + Billing** in the Azure portal, select **Cost Management** in the left navigation and then select the [scope](..//cost-management-billing/costs/understand-work-scopes.md) or set of resources to investigate, such as an Azure subscription or resource group.
+The **Usage and estimated costs** screen also shows your ingested data volume trend in the past 31 days, and the total retained data volume.
-To see your Azure bill, select **Cost Analysis** in the left navigation of **Cost Management + Billing**. On the **Cost analysis** screen, select the drop-down caret in the **View** field, and select **Invoice details**.
+> [!IMPORTANT]
+> The daily cap doesn't limit collection of all data types. Security data is excluded from the cap. For more information about managing the daily cap in Log Analytics, see [Manage your maximum daily data volume](../azure-monitor/logs/manage-cost-storage.md#manage-your-maximum-daily-data-volume).
-> [!NOTE]
-> The costs shown in this image are for example purposes only. They're not intended to reflect actual costs.
+## Other ways to manage and reduce Azure Sentinel costs
-![Screenshot showing the Azure Sentinel section of a sample Azure bill.](media/billing/sample-bill.png)
+To manage data ingestion and retention costs:
-Azure Sentinel and Log Analytics charges appear on your Azure bill as separate line items based on your selected pricing plan. If you exceed your workspace's Commitment Tier usage in a given month, the Azure bill shows one line item for the Commitment Tier with its associated fixed cost, and a separate line item for the ingestion beyond the Commitment Tier, billed at your same Commitment Tier rate.
+- [Use Commitment Tier pricing to optimize costs](#set-or-change-pricing-tier) based on your data ingestion volume.
+- [Separate non-security data in a different workspace](#separate-non-security-data-in-a-different-workspace).
+- [Optimize Log Analytics costs with dedicated clusters](#optimize-log-analytics-costs-with-dedicated-clusters).
+- [Reduce long-term data retention costs with Azure Data Explorer (ADX)](#reduce-long-term-data-retention-costs-with-adx).
+- [Use Data Collection Rules for your Windows Security Events](#use-data-collection-rules-for-your-windows-security-events).
-The following table shows how Azure Sentinel and Log Analytics costs appear in the **Service name** and **Meter** columns of your Azure invoice:
+### Set or change pricing tier
-|Cost|Service name|Meter|
-|------|------------|-----|
-|Azure Sentinel Commitment Tier|**sentinel**|**`n` gb commitment tier**|
-|Log Analytics Commitment Tier|**azure monitor**|**`n` gb commitment tier**|
-|Azure Sentinel overage over the Commitment Tier, or Pay-As-You-Go|**sentinel**|**analysis**|
-|Log Analytics overage over the Commitment Tier, or Pay-As-You-Go|**log analytics**|**data ingestion**|
+To optimize for highest savings, monitor your ingestion volume to ensure you have the Commitment Tier that aligns most closely with your ingestion volume patterns. You can increase or decrease your Commitment Tier to align with changing data volumes.
-For more information on viewing and downloading your Azure bill, see [Azure cost and billing information](../cost-management-billing/understand/download-azure-daily-usage.md).
+You can increase your Commitment Tier anytime, which restarts the 31-day commitment period. However, to move back to Pay-As-You-Go or to a lower Commitment Tier, you must wait until after the 31-day commitment period finishes. Billing for Commitment Tiers is on a daily basis.
-The **Cost Analysis** screen also shows detailed views of your Azure usage and costs, with the option to apply a variety of controls and filters.
+To see your current Azure Sentinel pricing tier, select **Settings** in the Azure Sentinel left navigation, and then select the **Pricing** tab. Your current pricing tier is marked **Current tier**.
-For example, to see charts of your daily costs for a certain time frame:
-1. Select the drop-down caret in the **View** field and select **Accumulated costs** or **Daily costs**.
-1. Select the drop-down caret in the date field and select a date range.
-1. Select the drop-down caret next to **Granularity** and select **Daily**.
+To change your pricing tier commitment, select one of the other tiers on the pricing page, and then select **Apply**. You must have **Contributor** or **Owner** role in Azure Sentinel to change the pricing tier.
+
+![Screenshot showing the Pricing page in Azure Sentinel Settings, with Pay-As-You-Go indicated as the current pricing tier.](media/billing/pricing.png)
> [!NOTE]
-> The costs shown in this image are for example purposes only. They're not intended to reflect actual costs.
+> Azure Sentinel data ingestion volumes appear under **Security Insights** in some portal Usage Charts.
-![Screenshot showing a Cost Management + Billing Cost analysis screen.](media/billing/cost-management.png)
+The Azure Sentinel pricing tiers don't include Log Analytics charges. To change your pricing tier commitment for Log Analytics, see [Changing pricing tier](../azure-monitor/logs/manage-cost-storage.md#changing-pricing-tier).
-You could also apply further controls. For example, to view only the costs associated with Azure Sentinel, select **Add filter**, select **Service name**, and then select the service names **sentinel**, **log analytics**, and **azure monitor**.
+### Separate non-security data in a different workspace
+
+Azure Sentinel analyzes all the data ingested into Azure Sentinel-enabled Log Analytics workspaces. It's best to have a separate workspace for non-security operations data, to ensure it doesn't incur Azure Sentinel costs.
+
+When hunting or investigating threats in Azure Sentinel, you might need to access operational data stored in these standalone Azure Log Analytics workspaces. You can access this data by using cross-workspace querying in the log exploration experience and workbooks. However, you can't use cross-workspace analytics rules and hunting queries unless Azure Sentinel is enabled on all the workspaces.
+
+### Optimize Log Analytics costs with dedicated clusters
+
+If you ingest at least 1TB/day into your Azure Sentinel workspace or workspaces in the same region, consider moving to a Log Analytics dedicated cluster to decrease costs. A Log Analytics dedicated cluster Commitment Tier aggregates data volume across workspaces that collectively ingest a total of 1TB/day or more.
+
+Log Analytics dedicated clusters don't apply to Azure Sentinel Commitment Tiers. Azure Sentinel costs still apply per workspace in the dedicated cluster.
+
+You can add multiple Azure Sentinel workspaces to a Log Analytics dedicated cluster. There are a couple of advantages to using a Log Analytics dedicated cluster for Azure Sentinel:
+
+- Cross-workspace queries run faster if all the workspaces involved in the query are in the dedicated cluster. It's still best to have as few workspaces as possible in your environment, and a dedicated cluster still retains the [100 workspace limit](../azure-monitor/logs/cross-workspace-query.md) for inclusion in a single cross-workspace query.
+
+- All workspaces in the dedicated cluster can share the Log Analytics Commitment Tier set on the cluster. Not having to commit to separate Log Analytics Commitment Tiers for each workspace can allow for cost savings and efficiencies. By enabling a dedicated cluster, you commit to a minimum Log Analytics Commitment Tier of 1 TB ingestion per day.
+
+Here are some other considerations for moving to a dedicated cluster for cost optimization:
+
+- The maximum number of clusters per region and subscription is two.
+- All workspaces linked to a cluster must be in the same region.
+- The maximum number of workspaces linked to a cluster is 1,000.
+- You can unlink a linked workspace from your cluster. The number of link operations on a particular workspace is limited to two in a period of 30 days.
+- You can't move an existing workspace to a customer managed key (CMK) cluster. You need to create the workspace in the cluster.
+- Moving a cluster to another resource group or subscription isn't currently supported.
+- A workspace link to a cluster fails if the workspace is linked to another cluster.
+
+For more information about dedicated clusters, see [Log Analytics dedicated clusters](../azure-monitor/logs/manage-cost-storage.md#log-analytics-dedicated-clusters).
+
+### Reduce long-term data retention costs with ADX
+
+Azure Sentinel data retention is free for the first 90 days. To adjust the data retention time period in Log Analytics, select **Usage and estimated costs** in the left navigation, then select **Data retention**, and then adjust the slider.
+
+Azure Sentinel security data might lose some of its value after a few months. Security operations center (SOC) users might not need to access older data as frequently as newer data, but still might need to access the data for sporadic investigations or audit purposes. To reduce Azure Sentinel data retention costs, you can use Azure Data Explorer for long-term data retention at lower cost. ADX provides the right balance of cost and usability for aged data that no longer needs Azure Sentinel security intelligence.
+
+With ADX, you can store data at a lower price, but still explore the data using the same Kusto Query Language (KQL) queries as in Azure Sentinel. You can also use the ADX proxy feature to do cross-platform queries. These queries aggregate and correlate data spread across ADX, Application Insights, Azure Sentinel, and Log Analytics.
+
+For more information, see [Integrate Azure Data Explorer for long-term log retention](store-logs-in-azure-data-explorer.md).
+
+### Use data collection rules for your Windows Security Events
+
+The [Windows Security Events connector](connect-windows-security-events.md?tabs=LAA) enables you to stream security events from any computer running Windows Server that's connected to your Azure Sentinel workspace, including physical, virtual, or on-premises servers, or in any cloud. This connector includes support for the Azure Monitor agent, which uses data collection rules to define the data to collect from each agent.
+
+Data collection rules enable you to manage collection settings at scale, while still allowing unique, scoped configurations for subsets of machines. For more information, see [Configure data collection for the Azure Monitor agent](../azure-monitor/agents/data-collection-rule-azure-monitor-agent.md).
+
+Besides the predefined sets of events that you can select to ingest, such as All events, Minimal, or Common, data collection rules enable you to build custom filters and select specific events to ingest. The Azure Monitor Agent uses these rules to filter the data at the source, and then ingest only the events you've selected, while leaving everything else behind. Selecting specific events to ingest can help you optimize your costs and save more.
## Next steps
-For more tips on reducing Log Analytics data volume, see [Tips for reducing data volume](../azure-monitor/logs/manage-cost-storage.md#tips-for-reducing-data-volume).
+- Learn [how to optimize your cloud investment with Azure Cost Management](../cost-management-billing/costs/cost-mgt-best-practices.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn).
+- Learn more about managing costs with [cost analysis](../cost-management-billing/costs/quick-acm-cost-analysis.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn).
+- Learn about how to [prevent unexpected costs](../cost-management-billing/understand/analyze-unexpected-charges.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn).
+- Take the [Cost Management](/learn/paths/control-spending-manage-bills?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn) guided learning course.
+- For more tips on reducing Log Analytics data volume, see [Tips for reducing data volume](../azure-monitor/logs/manage-cost-storage.md#tips-for-reducing-data-volume).
sentinel Connect Asc Iot https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sentinel/connect-asc-iot.md
Last updated 01/20/2021
-# Connect your data from Azure Defender (formerly Azure Security Center) for IoT to Azure Sentinel
+# Connect your data from Azure Defender (formerly Azure Security Center) for IoT to Azure Sentinel (Public preview)
Use the Defender for IoT connector to stream all your Defender for IoT events into Azure Sentinel. This integration enables organizations to quickly detect multistage attacks that often cross IT and OT boundaries. Additionally, Defender for IoT's integration with Azure Sentinel's security orchestration, automation, and response (SOAR) capabilities enables automated response and prevention using built-in OT-optimized playbooks.
+> [!IMPORTANT]
+> The Azure Defender for IoT connector is currently in **PREVIEW**. See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+
[!INCLUDE [reference-to-feature-availability](includes/reference-to-feature-availability.md)]

## Prerequisites
sentinel Dns Normalization Schema https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sentinel/dns-normalization-schema.md
The fields below are specific to DNS events. That said, many of them do have sim
| **DstIpAddr** | Optional | IP Address | `127.0.0.1` | The IP address of the server receiving the DNS request. For a regular DNS request, this value would typically be the reporting device, and in most cases set to **127.0.0.1**. | | **DstPortNumber** | Optional | Integer | `53` | Destination Port number | | **IpAddr** | | Alias | | Alias for SrcIpAddr |
-| <a name=query></a>**DnsQuery** | Mandatory | String | `www.malicious.com` | The domain that needs to be resolved. <br><br>While the DNS protocol allows for multiple queries in a single request, this scenario is rare, if it's found at all. If the request has multiple queries, store the first one in this field, and then and optionally keep the rest in the [AdditionalFields](#additionalfields) field. |
+| <a name=query></a>**DnsQuery** | Mandatory | FQDN | `www.malicious.com` | The domain that needs to be resolved. <br><br>Note that there are sources that send the query in a different format. Most notably, in the DNS protocol itself, the query includes a dot at the end, which should be removed (see the sketch after this table). <br><br>While the DNS protocol allows for multiple queries in a single request, this scenario is rare, if it's found at all. If the request has multiple queries, store the first one in this field, and optionally keep the rest in the [AdditionalFields](#additionalfields) field. |
| **Domain** | | Alias | | Alias to [Query](#query). |
| **DnsQueryType** | Optional | Integer | `28` | This field may contain [DNS Resource Record Type codes](https://www.iana.org/assignments/dns-parameters/dns-parameters.xhtml). |
| **DnsQueryTypeName** | Mandatory | Enumerated | `AAAA` | The field may contain [DNS Resource Record Type](https://www.iana.org/assignments/dns-parameters/dns-parameters.xhtml) names. <br><br>**Note**: IANA does not define the case for the values, so analytics must normalize the case as needed. If the source provides only a numerical query type code and not a query type name, the parser must include a lookup table to enrich with this value. |
The fields below are specific to DNS events. That said, many of them do have sim
| <a name=responsecodename></a>**DnsResponseCodeName** | Mandatory | Enumerated | `NXDOMAIN` | The [DNS response code](https://www.iana.org/assignments/dns-parameters/dns-parameters.xhtml). <br><br>**Note**: IANA does not define the case for the values, so analytics must normalize the case. If the source provides only a numerical response code and not a response code name, the parser must include a lookup table to enrich with this value. <br><br> If this record represents a request and not a response, set to **NA**. |
| **DnsResponseCode** | Optional | Integer | `3` | The [DNS numerical response code](https://www.iana.org/assignments/dns-parameters/dns-parameters.xhtml). |
| **TransactionIdHex** | Recommended | String | | The DNS unique hex transaction ID. |
-| **NetworkProtocol** | Optional | String | `UDP` | The transport protocol used by the network resolution event. The value can be **UDP** or **TCP**, and is most commonly set to **UDP** for DNS. |
+| **NetworkProtocol** | Optional | Enumerated | `UDP` | The transport protocol used by the network resolution event. The value can be **UDP** or **TCP**, and is most commonly set to **UDP** for DNS. |
| **DnsQueryClass** | Optional | Integer | | The [DNS class ID](https://www.iana.org/assignments/dns-parameters/dns-parameters.xhtml).<br> <br>In practice, only the **IN** class (ID 1) is used, making this field less valuable. |
| **DnsQueryClassName** | Optional | String | `"IN"` | The [DNS class name](https://www.iana.org/assignments/dns-parameters/dns-parameters.xhtml).<br> <br>In practice, only the **IN** class (ID 1) is used, making this field less valuable. |
| <a name=flags></a>**DnsFlags** | Optional | List of strings | `["DR"]` | The flags field, as provided by the reporting device. If flag information is provided in multiple fields, concatenate them with comma as a separator. <br><br>Since DNS flags are complex to parse and are less often used by analytics, parsing and normalization are not required, and Azure Sentinel uses an auxiliary function to provide flags information. For more information, see [Handling DNS response](#handling-dns-response). |
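As a small illustration of the trailing-dot note on **DnsQuery** above, a parser in a custom ingestion pipeline might normalize the raw query like this (a hypothetical C# helper; Azure Sentinel's own parsers are written in KQL):

```csharp
using System;

// Strip the DNS root dot: "www.malicious.com." becomes "www.malicious.com".
static string NormalizeDnsQuery(string rawQuery) =>
    rawQuery.EndsWith(".", StringComparison.Ordinal)
        ? rawQuery.TrimEnd('.')
        : rawQuery;
```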
sentinel Normalization https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sentinel/normalization.md
Each schema field has a type. Some have built-in, Azure Log Analytics types such
|**Date/Time** | Depending on the ingestion method capability, use any of the following physical representations in descending priority: <br><br>- Log Analytics built-in datetime type <br>- An integer field using Log Analytics datetime numerical representation. <br>- A string field using Log Analytics datetime numerical representation <br>- A string field storing a supported [Log Analytics date/time format](/azure/data-explorer/kusto/query/scalar-data-types/datetime). | [Log Analytics date and time representation](/azure/kusto/query/scalar-data-types/datetime) is similar to, but different from, Unix time representation. For more information, see the [conversion guidelines](/azure/kusto/query/datetime-timespan-arithmetic). <br><br>**Note**: When applicable, the time should be time zone adjusted. |
|**MAC Address** | String | Colon-Hexadecimal notation |
|**IP Address** | String | Azure Sentinel schemas do not have separate IPv4 and IPv6 addresses. Any IP address field may include either an IPv4 address or IPv6 address, as follows: <br><br>- **IPv4** in a dot-decimal notation, for example <br>- **IPv6** in 8 hextets notation, allowing for the short form<br><br>For example:<br>`192.168.10.10` (IPv4)<br>`FEDC:BA98:7654:3210:FEDC:BA98:7654:3210` (IPv6)<br>`1080::8:800:200C:417A` (IPv6 short form) |
+|**FQDN** | String | A fully qualified domain name using dot notation, for example `docs.microsoft.com` |
|**Country** | String | A string using [ISO 3166-1](https://www.iso.org/iso-3166-country-codes.html), according to the following priority: <br><br>- Alpha-2 codes, such as `US` for the United States <br>- Alpha-3 codes, such as `USA` for the United States <br>- Short name<br><br>The list of codes can be found on the [International Standards Organization (ISO) Web Site](https://www.iso.org/obp/ui/#search) |
|**Region** | String | The country subdivision name, using ISO 3166-2.<br><br>The list of codes can be found on the [International Standards Organization (ISO) Web Site](https://www.iso.org/obp/ui/#search) |
|**City** | String | |
sentinel Process Events Normalization Schema https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sentinel/process-events-normalization-schema.md
Event fields are common to all schemas and describe the activity itself and the
| **EventStartTime** | Mandatory | Date/time | If the source supports aggregation and the record represents multiple events, this field specifies the time that the first event was generated. Otherwise, this field aliases the [TimeGenerated](#timegenerated) field. |
| **EventEndTime** | Mandatory | Alias | Alias to the [TimeGenerated](#timegenerated) field. |
| **EventType** | Mandatory | Enumerated | Describes the operation reported by the record. <br><br>For Process records, supported values include: <br>- `ProcessCreated` <br>- `ProcessTerminated` |
-| **EventResult** | Mandatory | Enumerated | Describes the result of the event, normalized to one of the following supported values: <br><br>- `Success`<br>- `Partial`<br>- `Failure`<br>- `NA` (not applicable) <br><br>The source may provide only a value for the **EventResultDetails** field, which must be analyzed to get the **EventResult** value. |
+| **EventResult** | Mandatory | Enumerated | Describes the result of the event, normalized to one of the following supported values: <br><br>- `Success`<br>- `Partial`<br>- `Failure`<br>- `NA` (not applicable) <br><br>The source may provide only a value for the **EventResultDetails** field, which must be analyzed to get the **EventResult** value.<br><br>Note that Process Events commonly report only success. |
| **EventOriginalUid** | Optional | String | A unique ID of the original record, if provided by the source.<br><br>Example: `69f37748-ddcd-4331-bf0f-b137f1ea83b` |
| **EventOriginalType** | Optional | String | The original event type or ID, if provided by the source.<br><br>Example: `4688` |
| <a name ="eventproduct"></a>**EventProduct** | Mandatory | String | The product generating the event. <br><br>Example: `Sysmon`<br><br>**Note**: This field may not be available in the source record. In such cases, this field must be set by the parser. |
The process event schema references the following entities, which are central to
| **TargetProcessFileSize** | Optional | String | Size of the file that ran the process responsible for the event. |
| **TargetProcessFileVersion** | Optional | String | The product version from the version information in the target process image file. <br><br>Example: `7.9.5.0` |
| **TargetProcessFileInternalName** | Optional | String | The product internal file name from the version information of the image file of the target process. |
-| **TargetProcessFileOriginallName** | Optional | String | The product original file name from the version information of the image file of the target process. |
+| **TargetProcessFileOriginalName** | Optional | String | The product original file name from the version information of the image file of the target process. |
| **TargetProcessIsHidden** | Optional | Boolean | An indication of whether the target process is in hidden mode. |
| **TargetProcessInjectedAddress** | Optional | String | The memory address in which the responsible target process is stored. |
| **TargetProcessMD5** | Optional | MD5 | The MD5 hash of the target process image file. <br><br>Example: `75a599802f1fa166cdadb360960b1dd0` |
The process event schema references the following entities, which are central to
| **TargetProcessSHA512** | Optional | SHA512 | The SHA-512 hash of the target process image file. |
| **TargetProcessIMPHASH** | Optional | String | The Import Hash of all the library DLLs that are used by the target process. |
| <a name="targetprocesscommandline"></a> **TargetProcessCommandLine** | Mandatory | String | The command line used to run the target process. <br><br>Example: `"choco.exe" -v` |
+| <a name="targetprocesscurrentdirectory"></a> **TargetProcessCurrentDirectory** | Optional | String | The current directory in which the target process is executed. <br><br> Example: `c:\windows\system32` |
| **TargetProcessCreationTime** | Mandatory | DateTime | The date and time when the target process was created. |
| **TargetProcessId** | Mandatory | String | The process ID (PID) of the target process. <br><br>Example: `48610176`<br><br>**Note**: The type is defined as *string* to support varying systems, but on Windows and Linux this value must be numeric. <br><br>If you are using a Windows or Linux machine and used a different type, make sure to convert the values. For example, if you used a hexadecimal value, convert it to a decimal value. |
| **TargetProcessGuid** | Optional | String | A generated unique identifier (GUID) of the target process. Enables identifying the process across systems. <br><br>Example: `EF3BD0BD-2B74-60C5-AF5C-010000001E00` |
sentinel Sap Solution Detailed Requirements https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sentinel/sap-solution-detailed-requirements.md
The following table describes the recommended sizing for your virtual machine, d
The following SAP log change requests are required for the SAP solution, depending on your SAP Basis version:
-- **SAP Basis versions 7.50 and higher**, install NPLK900131
-- **For lower versions**, install NPLK900132
-- **To create an SAP role with the required authorizations**, for any supported SAP Basis version, install NPLK900114. For more information, see [Configure your SAP system](sap-deploy-solution.md#configure-your-sap-system) and [Required ABAP authorizations](#required-abap-authorizations).
+- **SAP Basis versions 7.50 and higher**, install NPLK900144
+- **For lower versions**, install NPLK900146
+- **To create an SAP role with the required authorizations**, for any supported SAP Basis version, install NPLK900140. For more information, see [Configure your SAP system](sap-deploy-solution.md#configure-your-sap-system) and [Required ABAP authorizations](#required-abap-authorizations).
> [!NOTE]
> The required SAP log change requests expose custom RFC FMs that are required for the connector, and do not change any standard or custom objects.
service-bus-messaging Service Bus Outages Disasters https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/service-bus-messaging/service-bus-outages-disasters.md
To learn more about disaster recovery, see these articles:
[BrokeredMessage.Label]: /dotnet/api/microsoft.servicebus.messaging.brokeredmessage
[Geo-replication with Service Bus Standard Tier]: https://github.com/Azure/azure-service-bus/tree/master/samples/DotNet/Microsoft.ServiceBus.Messaging/GeoReplication
[Azure SQL Database Business Continuity]:../azure-sql/database/business-continuity-high-availability-disaster-recover-hadr-overview.md
-[Azure resiliency technical guidance]: /azure/architecture/resiliency
+[Azure resiliency technical guidance]: /azure/architecture/framework/resiliency/app-design
-[1]: ./media/service-bus-outages-disasters/az.png
+[1]: ./media/service-bus-outages-disasters/az.png
service-fabric Service Fabric Versions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/service-fabric/service-fabric-versions.md
Support for Service Fabric on a specific OS ends when support for the OS version
| OS version | Service Fabric support end date | OS Lifecycle link |
| --- | --- | --- |
-| Windows 10 2019 LTSC | 1/9/2029 | <a href="/lifecycle/products/windows-10-2019-ltsc">Windows 10 2019 LTSC - Microsoft Lifecycle</a> |
+| Windows 10 2019 LTSC | 1/9/2029 | <a href="/lifecycle/products/windows-10-ltsc-2019">Windows 10 2019 LTSC - Microsoft Lifecycle</a> |
| Version 20H2 | 5/9/2023 | <a href="/lifecycle/products/windows-10-enterprise-and-education">Windows 10 Enterprise and Education - Microsoft Lifecycle</a> |
| Version 2004 | 12/14/2021 | <a href="/lifecycle/products/windows-10-enterprise-and-education">Windows 10 Enterprise and Education - Microsoft Lifecycle</a> |
| Version 1909 | 5/10/2022 | <a href="/lifecycle/products/windows-10-enterprise-and-education">Windows 10 Enterprise and Education - Microsoft Lifecycle</a> |
static-web-apps Custom Domain https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/static-web-apps/custom-domain.md
You'll need to configure a TXT record with your domain provider. Azure DNS is re
| Setting | Value |
| -- | - |
- | Name | `@` for root domain, or enter the subdomain |
+ | Name | `_dnsauth.<your_subdomain>` |
| Type | TXT |
| TTL | Leave as default value |
| TTL Unit | Leave as default value |
static-web-apps Key Vault Secrets https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/static-web-apps/key-vault-secrets.md
Security secrets require the following items to be in place.
- Grant the identity access to the Key Vault secret.
- Reference the Key Vault secret from the Static Web Apps application settings.
-This article demonstrates how to set up each of these items in your application.
+This article demonstrates how to set up each of these items in production for [bring your own functions applications](./functions-bring-your-own.md).
-> [!NOTE]
-> This functionality is only available in production environments and does not work with [staging versions of your static web app](./review-publish-pull-requests.md).
+Key Vault integration is not available for:
+
+- [Staging versions of your static web app](./review-publish-pull-requests.md). Key Vault integration is only supported in the production environment.
+- [Static web apps using managed functions](./apis.md).
## Prerequisites
-- Existing Azure Static Web Apps site.
+- Existing Azure Static Web Apps site using [bring your own functions](./functions-bring-your-own.md).
- Existing Key Vault resource with a secret value.

## Create identity
storage Anonymous Read Access Configure https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/blobs/anonymous-read-access-configure.md
$location = "<location>"
# Create a storage account with AllowBlobPublicAccess set to true (or null).
New-AzStorageAccount -ResourceGroupName $rgName `
- -AccountName $accountName `
+ -Name $accountName `
    -Location $location `
    -SkuName Standard_GRS `
    -AllowBlobPublicAccess $false
New-AzStorageAccount -ResourceGroupName $rgName `
# Set AllowBlobPublicAccess to false
Set-AzStorageAccount -ResourceGroupName $rgName `
- -AccountName $accountName `
+ -Name $accountName `
    -AllowBlobPublicAccess $false

# Read the AllowBlobPublicAccess property.
storage Assign Azure Role Data Access https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/blobs/assign-azure-role-data-access.md
Keep in mind the following points about Azure role assignments in Azure Storage:
- When you create an Azure Storage account, you are not automatically assigned permissions to access data via Azure AD. You must explicitly assign yourself an Azure role for Azure Storage. You can assign it at the level of your subscription, resource group, storage account, or container.
- If the storage account is locked with an Azure Resource Manager read-only lock, then the lock prevents the assignment of Azure roles that are scoped to the storage account or a container.
+- If you have set the appropriate allow permissions to access data via Azure AD but are unable to access the data (for example, you are getting an "AuthorizationPermissionMismatch" error), be sure to allow enough time for the permissions changes you made in Azure AD to replicate, and be sure that you do not have any deny assignments that block your access. For more information, see [Understand Azure deny assignments](../../role-based-access-control/deny-assignments.md).
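+
+To verify what access you actually have, you can list both the role assignments and any deny assignments that apply at the storage account scope. The following is a minimal PowerShell sketch, not part of the original article, assuming the Az.Resources module and placeholder values in angle brackets:
+
+```azurepowershell
+# Resource ID scope for the storage account (replace the placeholders).
+$scope = "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Storage/storageAccounts/<storage-account>"
+
+# List role assignments that grant access at the storage account scope.
+Get-AzRoleAssignment -Scope $scope
+
+# List deny assignments that could block access at the same scope.
+Get-AzDenyAssignment -Scope $scope
+```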
## Next steps
storage Data Lake Storage Supported Blob Storage Features https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/blobs/data-lake-storage-supported-blob-storage-features.md
The following table shows how each Blob storage feature is supported with Data L
|Logging in Azure Monitor|Preview|Preview|[Monitoring Azure Storage](./monitor-blob-storage.md)|
|Snapshots|Preview|Preview|[Blob snapshots](snapshots-overview.md)|
|Static websites|Generally Available<div role="complementary" aria-labelledby="preview-form"></div>|Generally Available<div role="complementary" aria-labelledby="preview-form"></div>|[Static website hosting in Azure Storage](storage-blob-static-website.md)|
-|Immutable storage|Preview<div role="complementary" aria-labelledby="preview-form">|Preview|[Store business-critical blob data with immutable storage](storage-blob-immutable-storage.md)|
+|Immutable storage|Preview<div role="complementary" aria-labelledby="preview-form"><sup>1</sup></div>|Preview<div role="complementary" aria-labelledby="preview-form"><sup>1</sup></div>|[Store business-critical blob data with immutable storage](immutable-storage-overview.md)|
|Container soft delete|Preview|Preview|[Soft delete for containers](soft-delete-container-overview.md)|
|Azure Storage inventory|Preview|Preview|[Use Azure Storage inventory to manage blob data (preview)](blob-inventory.md)|
|Custom domains|Preview<div role="complementary" aria-labelledby="preview-form-1"><sup>1</sup></div>|Preview<div role="complementary" aria-labelledby="preview-form-1"><sup>1</sup></div>|[Map a custom domain to an Azure Blob storage endpoint](storage-custom-domain-name.md)|
storage Data Protection Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/blobs/data-protection-overview.md
Previously updated : 05/10/2021 Last updated : 07/22/2021
The following table summarizes the options available in Azure Storage for common
| Scenario | Data protection option | Recommendations | Protection benefit | Available for Data Lake Storage |
|--|--|--|--|--|
| Prevent a storage account from being deleted or modified. | Azure Resource Manager lock<br />[Learn more...](../common/lock-account-resource.md) | Lock all of your storage accounts with an Azure Resource Manager lock to prevent deletion of the storage account. | Protects the storage account against deletion or configuration changes.<br /><br />Does not protect containers or blobs in the account from being deleted or overwritten. | Yes |
-| Prevent a container and its blobs from being deleted or modified for an interval that you control. | Immutability policy on a container<br />[Learn more...](storage-blob-immutable-storage.md) | Set an immutability policy on a container to protect business-critical documents, for example, in order to meet legal or regulatory compliance requirements. | Protects a container and its blobs from all deletes and overwrites.<br /><br />When a legal hold or a locked time-based retention policy is in effect, the storage account is also protected from deletion. Containers for which no immutability policy has been set are not protected from deletion. | Yes, in preview |
+| Prevent a container and its blobs from being deleted or modified for an interval that you control. | Immutability policy on a container<br />[Learn more...](immutable-storage-overview.md) | Set an immutability policy on a container to protect business-critical documents, for example, in order to meet legal or regulatory compliance requirements. | Protects a container and its blobs from all deletes and overwrites.<br /><br />When a legal hold or a locked time-based retention policy is in effect, the storage account is also protected from deletion. Containers for which no immutability policy has been set are not protected from deletion. | Yes, in preview |
| Restore a deleted container within a specified interval. | Container soft delete<br />[Learn more...](soft-delete-container-overview.md) | Enable container soft delete for all storage accounts, with a minimum retention interval of 7 days.<br /><br />Enable blob versioning and blob soft delete together with container soft delete to protect individual blobs in a container.<br /><br />Store containers that require different retention periods in separate storage accounts. | A deleted container and its contents may be restored within the retention period.<br /><br />Only container-level operations (e.g., [Delete Container](/rest/api/storageservices/delete-container)) can be restored. Container soft delete does not enable you to restore an individual blob in the container if that blob is deleted. | Yes, in preview |
| Automatically save the state of a blob in a previous version when it is overwritten. | Blob versioning<br />[Learn more...](versioning-overview.md) | Enable blob versioning, together with container soft delete and blob soft delete, for storage accounts where you need optimal protection for blob data.<br /><br />Store blob data that does not require versioning in a separate account to limit costs. | Every blob write operation creates a new version. The current version of a blob may be restored from a previous version if the current version is deleted or overwritten. | No |
| Restore a deleted blob or blob version within a specified interval. | Blob soft delete<br />[Learn more...](soft-delete-blob-overview.md) | Enable blob soft delete for all storage accounts, with a minimum retention interval of 7 days.<br /><br />Enable blob versioning and container soft delete together with blob soft delete for optimal protection of blob data.<br /><br />Store blobs that require different retention periods in separate storage accounts. | A deleted blob or blob version may be restored within the retention period. | No |
storage Encryption Scope Manage https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/blobs/encryption-scope-manage.md
$accountName = "<storage-account>"
$scopeName1 = "customer1scope" New-AzStorageEncryptionScope -ResourceGroupName $rgName `
- -AccountName $accountName `
+ -StorageAccountName $accountName `
    -EncryptionScopeName $scopeName1 `
    -StorageEncryption
```
Remember to replace the placeholder values in the example with your own values:
```powershell
New-AzStorageEncryptionScope -ResourceGroupName $rgName `
- -AccountName $accountName `
+ -StorageAccountName $accountName `
    -EncryptionScopeName $scopeName2 `
    -KeyUri $keyUri `
    -KeyvaultEncryption
storage Immutable Legal Hold Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/blobs/immutable-legal-hold-overview.md
+
+ Title: Legal holds for immutable blob data
+
+description: A legal hold stores blob data in a Write-Once, Read-Many (WORM) format until it is explicitly cleared. Use a legal hold when the period of time that the data must be kept in a WORM state is unknown.
+++++ Last updated : 07/22/2021++++
+# Legal holds for immutable blob data
+
+A legal hold is a temporary immutability policy that can be applied for legal investigation purposes or general protection policies. A legal hold stores blob data in a Write-Once, Read-Many (WORM) format until it is explicitly cleared. When a legal hold is in effect, blobs can be created and read, but not modified or deleted. Use a legal hold when the period of time that the data must be kept in a WORM state is unknown.
+
+For more information about immutability policies for Blob Storage, see [Store business-critical blob data with immutable storage](immutable-storage-overview.md).
+
+## Legal hold scope
+
+A legal hold policy can be configured at either of the following scopes:
+
+- Version-level policy (preview): A legal hold can be configured on an individual blob version level for granular management of sensitive data.
+- Container-level policy: A legal hold that is configured at the container level applies to all blobs in that container. Individual blobs cannot be configured with their own immutability policies.
+
+### Version-level policy scope (preview)
+
+To configure a legal hold on a blob version, you must first enable version-level immutability on the parent container. Version-level immutability cannot be disabled after it is enabled. For more information, see [Enable support for version-level immutability on a container](immutable-policy-configure-version-scope.md#enable-support-for-version-level-immutability-on-a-container).
+
+After version-level immutability is enabled for a container, a legal hold can no longer be set at the container level. Legal holds must be applied to individual blob versions. A legal hold may be configured for the current version or a previous version of a blob.
+
+Version-level legal hold policies require that blob versioning is enabled for the storage account. To learn how to enable blob versioning, see [Enable and manage blob versioning](versioning-enable.md). Keep in mind that enabling versioning may have a billing impact. For more information, see the **Pricing and billing** section in [Blob versioning](versioning-overview.md#pricing-and-billing).
+
+To learn more about enabling a version-level legal hold, see [Configure or clear a legal hold](immutable-policy-configure-version-scope.md#configure-or-clear-a-legal-hold).
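+
+Version-level legal holds can also be managed programmatically. The following is a minimal sketch, assuming the preview cmdlet `Set-AzStorageBlobLegalHold` in the Az.Storage module and placeholder values in angle brackets:
+
+```azurepowershell
+# Build a data-plane context for the storage account.
+$ctx = (Get-AzStorageAccount -ResourceGroupName <resource-group> `
+    -Name <storage-account>).Context
+
+# Place a legal hold on the current version of a blob (preview cmdlet).
+Set-AzStorageBlobLegalHold -Container <container> -Blob <blob> `
+    -EnableLegalHold -Context $ctx
+
+# Clear the legal hold when it is no longer needed.
+Set-AzStorageBlobLegalHold -Container <container> -Blob <blob> `
+    -DisableLegalHold -Context $ctx
+```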
+
+### Container-level scope
+
+When you configure a legal hold for a container, that hold applies to all objects in the container. When the legal hold is cleared, clients can once again write and delete objects in the container, unless there is also a time-based retention policy in effect for the container.
+
+When a legal hold is applied to a container, all existing blobs move into an immutable WORM state in less than 30 seconds. All new blobs that are uploaded to that policy-protected container will also move into an immutable state. Once all blobs are in an immutable state, overwrite or delete operations in the immutable container are not allowed. In the case of an account with a hierarchical namespace, blobs cannot be renamed or moved to a different directory.
+
+To learn how to configure a legal hold with container-level scope, see [Configure or clear a legal hold](immutable-policy-configure-container-scope.md#configure-or-clear-a-legal-hold).
+
+#### Legal hold tags
+
+A container-level legal hold must be associated with one or more user-defined alphanumeric tags that serve as identifier strings. For example, a tag may include a case ID or event name.
+
+#### Audit logging
+
+Each container with a legal hold in effect provides a policy audit log. The log contains the user ID, command type, time stamps, and legal hold tags. The audit log is retained for the lifetime of the policy, in accordance with the SEC 17a-4(f) regulatory guidelines.
+
+The [Azure Activity log](../../azure-monitor/essentials/platform-logs-overview.md) provides a more comprehensive log of all management service activities. [Azure resource logs](../../azure-monitor/essentials/platform-logs-overview.md) retain information about data operations. It is the user's responsibility to store those logs persistently, as might be required for regulatory or other purposes.
+
+#### Limits
+
+The following limits apply to container-level legal holds:
+
+- For a storage account, the maximum number of containers with a legal hold setting is 10,000.
+- For a container, the maximum number of legal hold tags is ten.
+- The minimum length of a legal hold tag is three alphanumeric characters. The maximum length is 23 alphanumeric characters.
+- For a container, a maximum of ten legal hold policy audit logs are retained for the duration of the policy.
+
+## Next steps
+
+- [Data protection overview](data-protection-overview.md)
+- [Store business-critical blob data with immutable storage](immutable-storage-overview.md)
+- [Time-based retention policies for immutable blob data](immutable-time-based-retention-policy-overview.md)
+- [Configure immutability policies for blob versions (preview)](immutable-policy-configure-version-scope.md)
+- [Configure immutability policies for containers](immutable-policy-configure-container-scope.md)
storage Immutable Policy Configure Container Scope https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/blobs/immutable-policy-configure-container-scope.md
+
+ Title: Configure immutability policies for containers
+
+description: Learn how to configure an immutability policy that is scoped to a container. Immutability policies provide WORM (Write Once, Read Many) support for Blob Storage by storing data in a non-erasable, non-modifiable state.
+++++ Last updated : 07/22/2021+++++
+# Configure immutability policies for containers
+
+Immutable storage for Azure Blob Storage enables users to store business-critical data in a WORM (Write Once, Read Many) state. While in a WORM state, data cannot be modified or deleted for a user-specified interval. By configuring immutability policies for blob data, you can protect your data from overwrites and deletes. Immutability policies include time-based retention policies and legal holds. For more information about immutability policies for Blob Storage, see [Store business-critical blob data with immutable storage](immutable-storage-overview.md).
+
+An immutability policy may be scoped either to an individual blob version (preview) or to a container. This article describes how to configure a container-level immutability policy. To learn how to configure version-level immutability policies, see [Configure immutability policies for blob versions (preview)](immutable-policy-configure-version-scope.md).
+
+## Configure a retention policy on a container
+
+To configure a time-based retention policy on a container, use the Azure portal, PowerShell, or Azure CLI. You can configure a container-level retention policy for between 1 and 146000 days.
+
+### [Portal](#tab/azure-portal)
+
+To configure a time-based retention policy on a container with the Azure portal, follow these steps:
+
+1. Navigate to the desired container.
+1. Select the **More** button on the right, then select **Access policy**.
+1. In the **Immutable blob storage** section, select **Add policy**.
+1. In the **Policy type** field, select **Time-based retention**, and specify the retention period in days.
+1. To create a policy with container scope, do not check the box for **Enable version-level immutability**.
+1. If desired, select **Allow additional protected appends** to enable writes to append blobs that are protected by an immutability policy. For more information, see [Allow protected append blobs writes](immutable-time-based-retention-policy-overview.md#allow-protected-append-blobs-writes).
+
+ :::image type="content" source="media/immutable-policy-configure-container-scope/configure-retention-policy-container-scope.png" alt-text="Screenshot showing how to configure immutability policy scoped to container":::
+
+After you've configured the immutability policy, you will see that it is scoped to the container:
++
+### [PowerShell](#tab/azure-powershell)
+
+To configure a time-based retention policy on a container with PowerShell, call the [Set-AzRmStorageContainerImmutabilityPolicy](/powershell/module/az.storage/set-azrmstoragecontainerimmutabilitypolicy) command, providing the retention interval in days. Remember to replace placeholder values in angle brackets with your own values:
+
+```azurepowershell
+Set-AzRmStorageContainerImmutabilityPolicy -ResourceGroupName <resource-group> `
+ -StorageAccountName <storage-account> `
+ -ContainerName <container> `
+ -ImmutabilityPeriod 10
+```
+
+### [Azure CLI](#tab/azure-cli)
+
+To configure a time-based retention policy on a container with Azure CLI, call the [az storage container immutability-policy create](/cli/azure/storage/container/immutability-policy#az_storage_container_immutability_policy_create) command, providing the retention interval in days. Remember to replace placeholder values in angle brackets with your own values:
+
+```azurecli
+az storage container immutability-policy create \
+    --resource-group <resource-group> \
+    --account-name <storage-account> \
+    --container-name <container> \
+    --period 10
+```
+++
+## Modify an unlocked retention policy
+
+You can modify an unlocked time-based retention policy to shorten or lengthen the retention interval and to allow additional writes to append blobs in the container. You can also delete an unlocked policy.
+
+### [Portal](#tab/azure-portal)
+
+To modify an unlocked time-based retention policy in the Azure portal, follow these steps:
+
+1. Navigate to the desired container.
+1. Select the **More** button and choose **Access policy**.
+1. Under the **Immutable blob versions** section, locate the existing unlocked policy. Select the **More** button, then select **Edit** from the menu.
+1. Provide a new retention interval for the policy. You can also select **Allow additional protected appends** to permit writes to protected append blobs.
+
+ :::image type="content" source="media/immutable-policy-configure-container-scope/modify-retention-policy-container-scope.png" alt-text="Screenshot showing how to modify an unlocked time-based retention policy":::
+
+To delete an unlocked policy, select the **More** button, then **Delete**.
+
+> [!NOTE]
+> You can enable version-level immutability policies (preview) by selecting the Enable version-level immutability checkbox. For more information about enabling version-level immutability policies, see [Configure immutability policies for blob versions (preview)](immutable-policy-configure-version-scope.md).
+
+### [PowerShell](#tab/azure-powershell)
+
+To modify an unlocked policy, first retrieve the policy by calling the [Get-AzRmStorageContainerImmutabilityPolicy](/powershell/module/az.storage/get-azrmstoragecontainerimmutabilitypolicy) command. Next, call the [Set-AzRmStorageContainerImmutabilityPolicy](/powershell/module/az.storage/set-azrmstoragecontainerimmutabilitypolicy) command to update the policy. Include the new retention interval in days and the `-ExtendPolicy` parameter. Remember to replace placeholder values in angle brackets with your own values:
+
+```azurepowershell
+$policy = Get-AzRmStorageContainerImmutabilityPolicy -ResourceGroupName <resource-group> `
+ -AccountName <storage-account> `
+ -ContainerName <container>
+
+Set-AzRmStorageContainerImmutabilityPolicy -ResourceGroupName <resource-group> `
+ -StorageAccountName <storage-account> `
+ -ContainerName <container> `
+ -ImmutabilityPeriod 21 `
+ -AllowProtectedAppendWrite true `
+ -Etag $policy.Etag `
+ -ExtendPolicy
+```
+
+To delete an unlocked policy, call the [Remove-AzRmStorageContainerImmutabilityPolicy](/powershell/module/az.storage/remove-azrmstoragecontainerimmutabilitypolicy) command.
+
+```azurepowershell
+Remove-AzRmStorageContainerImmutabilityPolicy -ResourceGroupName <resource-group> `
+ -AccountName <storage-account> `
+    -ContainerName <container> `
+    -Etag $policy.Etag
+```
+
+### [Azure CLI](#tab/azure-cli)
+
+To modify an unlocked time-based retention policy with Azure CLI, call the [az storage container immutability-policy extend](/cli/azure/storage/container/immutability-policy#az_storage_container_immutability_policy_extend) command, providing the new retention interval in days. Remember to replace placeholder values in angle brackets with your own values:
+
+```azurecli
+$etag=$(az storage container immutability-policy show \
+    --account-name <storage-account> \
+    --container-name <container> \
+    --query etag \
+    --output tsv)
+
+az storage container immutability-policy extend \
+    --resource-group <resource-group> \
+    --account-name <storage-account> \
+    --container-name <container> \
+    --period 21 \
+    --if-match $etag \
+    --allow-protected-append-writes true
+```
+
+To delete an unlocked policy, call the [az storage container immutability-policy delete](/cli/azure/storage/container/immutability-policy#az_storage_container_immutability_policy_delete) command.
+++
+## Lock a time-based retention policy
+
+When you have finished testing a time-based retention policy, you can lock the policy. A locked policy is compliant with SEC 17a-4(f) and other regulatory requirements. You can lengthen the retention interval for a locked policy up to five times, but you cannot shorten it.
+
+After a policy is locked, you cannot delete it. However, you can delete the blob after the retention interval has expired.
+
+### [Portal](#tab/azure-portal)
+
+To lock a policy with the Azure portal, follow these steps:
+
+1. Navigate to a container with an unlocked policy.
+1. Under the **Immutable blob versions** section, locate the existing unlocked policy. Select the **More** button, then select **Lock policy** from the menu.
+1. Confirm that you want to lock the policy.
++
+### [PowerShell](#tab/azure-powershell)
+
+To lock a policy with PowerShell, first call the [Get-AzRmStorageContainerImmutabilityPolicy](/powershell/module/az.storage/get-azrmstoragecontainerimmutabilitypolicy) command to retrieve the policy's ETag. Next, call the [Lock-AzRmStorageContainerImmutabilityPolicy](/powershell/module/az.storage/lock-azrmstoragecontainerimmutabilitypolicy) command and pass in the ETag value to lock the policy. Remember to replace placeholder values in angle brackets with your own values:
+
+```azurepowershell
+$policy = Get-AzRmStorageContainerImmutabilityPolicy -ResourceGroupName <resource-group> `
+ -AccountName <storage-account> `
+ -ContainerName <container>
+
+Lock-AzRmStorageContainerImmutabilityPolicy -ResourceGroupName <resource-group> `
+ -AccountName <storage-account> `
+ -ContainerName <container> `
+ -Etag $policy.Etag
+```
+
+### [Azure CLI](#tab/azure-cli)
+
+To lock a policy with Azure CLI, first call the [az storage container immutability-policy show](/cli/azure/storage/container/immutability-policy#az_storage_container_immutability_policy_show) command to retrieve the policy's ETag. Next, call the [az storage container immutability-policy lock](/cli/azure/storage/container/immutability-policy#az_storage_container_immutability_policy_lock) command and pass in the ETag value to lock the policy. Remember to replace placeholder values in angle brackets with your own values:
+
+```azurecli
+$etag=$(az storage container immutability-policy show \
+    --account-name <storage-account> \
+    --container-name <container> \
+    --query etag \
+    --output tsv)
+
+az storage container immutability-policy lock \
+    --resource-group <resource-group> \
+    --account-name <storage-account> \
+    --container-name <container> \
+    --if-match $etag
+```
+++
+## Configure or clear a legal hold
+
+A legal hold stores immutable data until the legal hold is explicitly cleared. To learn more about legal hold policies, see [Legal holds for immutable blob data](immutable-legal-hold-overview.md).
+
+### [Portal](#tab/azure-portal)
+
+To configure a legal hold on a container with the Azure portal, follow these steps:
+
+1. Navigate to the desired container.
+1. Select the **More** button and choose **Access policy**.
+1. Under the **Immutable blob versions** section, select **Add policy**.
+1. Choose **Legal hold** as the policy type, and select **OK** to apply it.
+
+The following image shows a container with both a time-based retention policy and legal hold configured.
++
+To clear a legal hold, navigate to the **Access policy** dialog, select the **More** button, and choose **Delete**.
+
+### [PowerShell](#tab/azure-powershell)
+
+To configure a legal hold on a container with PowerShell, call the [Add-AzRmStorageContainerLegalHold](/powershell/module/az.storage/add-azrmstoragecontainerlegalhold) command. Remember to replace placeholder values in angle brackets with your own values:
+
+```azurepowershell
+Add-AzRmStorageContainerLegalHold -ResourceGroupName <resource-group> `
+ -StorageAccountName <storage-account> `
+ -Name <container> `
+ -Tag <tag1>,<tag2>,...
+```
+
+To clear a legal hold, call the [Remove-AzRmStorageContainerLegalHold](/powershell/module/az.storage/remove-azrmstoragecontainerlegalhold) command:
+
+```azurepowershell
+Remove-AzRmStorageContainerLegalHold -ResourceGroupName <resource-group> `
+ -StorageAccountName <storage-account> `
+ -Name <container> `
+ -Tag <tag1>,<tag2>,...
+```
+
+### [Azure CLI](#tab/azure-cli)
+
+To configure a legal hold on a container with Azure CLI, call the [az storage container legal-hold set](/cli/azure/storage/container/legal-hold#az_storage_container_legal_hold_set) command. Remember to replace placeholder values in angle brackets with your own values:
+
+```azurecli
+az storage container legal-hold set \
+    --tags tag1 tag2 \
+    --container-name <container> \
+    --account-name <storage-account> \
+    --resource-group <resource-group>
+```
+
+To clear a legal hold, call the [az storage container legal-hold clear](/cli/azure/storage/container/legal-hold#az_storage_container_legal_hold_clear) command:
+
+```azurecli
+az storage container legal-hold clear \
+    --tags tag1 tag2 \
+    --container-name <container> \
+    --account-name <storage-account> \
+    --resource-group <resource-group>
+```
+++
+## Next steps
+
+- [Store business-critical blob data with immutable storage](immutable-storage-overview.md)
+- [Time-based retention policies for immutable blob data](immutable-time-based-retention-policy-overview.md)
+- [Legal holds for immutable blob data](immutable-legal-hold-overview.md)
+- [Configure immutability policies for blob versions (preview)](immutable-policy-configure-version-scope.md)
storage Immutable Policy Configure Version Scope https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/blobs/immutable-policy-configure-version-scope.md
+
+ Title: Configure immutability policies for blob versions (preview)
+
+description: Learn how to configure an immutability policy that is scoped to a blob version (preview). Immutability policies provide WORM (Write Once, Read Many) support for Blob Storage by storing data in a non-erasable, non-modifiable state.
+++++ Last updated : 07/22/2021++++
+# Configure immutability policies for blob versions (preview)
+
+Immutable storage for Azure Blob Storage enables users to store business-critical data in a WORM (Write Once, Read Many) state. While in a WORM state, data cannot be modified or deleted for a user-specified interval. By configuring immutability policies for blob data, you can protect your data from overwrites and deletes. Immutability policies include time-based retention policies and legal holds. For more information about immutability policies for Blob Storage, see [Store business-critical blob data with immutable storage](immutable-storage-overview.md).
+
+An immutability policy may be scoped either to an individual blob version (preview) or to a container. This article describes how to configure a version-level immutability policy. To learn how to configure container-level immutability policies, see [Configure immutability policies for containers](immutable-policy-configure-container-scope.md).
+
+Configuring a version-level immutability policy is a two-step process:
+
+1. First, enable support for version-level immutability on a new or existing container. See [Enable support for version-level immutability on a container](#enable-support-for-version-level-immutability-on-a-container) for details.
+1. Next, configure a time-based retention policy or legal hold that applies to one or more blob versions in that container.
+
+> [!IMPORTANT]
+> Version-level immutability policies are currently in **PREVIEW**. See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+
+## Prerequisites
+
+To configure version-level time-based retention policies, blob versioning must be enabled for the storage account. To learn how to enable blob versioning, see [Enable and manage blob versioning](versioning-enable.md).
+
+For information about supported storage account configurations for version-level immutability policies, see [Supported account configurations](immutable-storage-overview.md#supported-account-configurations).
+
+## Enable support for version-level immutability on a container
+
+Before you can apply a time-based retention policy to a blob version, you must enable support for version-level immutability. Both new and existing containers can be configured to support version-level immutability. However, an existing container must undergo a migration process in order to enable support.
+
+Keep in mind that enabling version-level immutability support for a container does not make data in that container immutable. You must also configure either a default immutability policy for the container, or an immutability policy on a specific blob version.
+
+### Enable version-level immutability for a new container
+
+To use a version-level immutability policy, you must first explicitly enable support for version-level WORM on the container. You can enable support for version-level WORM either when you create the container, or when you add a version-level immutability policy to an existing container.
+
+To create a container that supports version-level immutability in the Azure portal, follow these steps:
+
+1. Navigate to the **Containers** page for your storage account in the Azure portal, and select **Add**.
+1. In the **New container** dialog, provide a name for your container, then expand the **Advanced** section.
+1. Select **Enable version-level immutability support** to enable version-level immutability for the container.
+
+ :::image type="content" source="media/immutable-policy-configure-version-scope/create-container-version-level-immutability.png" alt-text="Screenshot showing how to create a container with version-level immutability enabled":::
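+
+If you script container creation, the same option can be set from PowerShell. This is a sketch, assuming the preview `-EnableImmutableStorageWithVersioning` switch on `New-AzRmStorageContainer`; replace placeholder values in angle brackets:
+
+```azurepowershell
+# Create a container with version-level immutability support (preview).
+# The -EnableImmutableStorageWithVersioning switch is assumed to be
+# available in the preview release of the Az.Storage module.
+New-AzRmStorageContainer -ResourceGroupName <resource-group> `
+    -StorageAccountName <storage-account> `
+    -Name <container> `
+    -EnableImmutableStorageWithVersioning
+```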
+
+### Migrate an existing container to support version-level immutability
+
+To configure version-level immutability policies for an existing container, you must migrate the container to support version-level immutable storage. Container migration may take some time and cannot be reversed.
+
+An existing container must be migrated regardless of whether it has a container-level time-based retention policy configured. If the container has an existing container-level legal hold, then it cannot be migrated until the legal hold is removed.
+
+To migrate a container to support version-level immutable storage in the Azure portal, follow these steps:
+
+1. Navigate to the desired container.
+1. Select the **More** button on the right, then select **Access policy**.
+1. Under **Immutable blob storage**, select **Add policy**.
+1. For the **Policy type** field, choose *Time-based retention*, and specify the retention interval.
+1. Select **Enable version-level immutability**.
+1. Select **OK** to begin the migration.
+
+ :::image type="content" source="media/immutable-policy-configure-version-scope/migrate-existing-container.png" alt-text="Screenshot showing how to migrate an existing container to support version-level immutability":::
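+
+Migration can also be started outside the portal. The following is a sketch, assuming the preview migration cmdlet in the Az.Storage module; the operation is long-running and cannot be reversed:
+
+```azurepowershell
+# Begin migrating an existing container to support version-level
+# immutability (preview). The cmdlet name is assumed from the preview
+# Az.Storage module.
+Invoke-AzRmStorageContainerImmutableStorageWithVersioningMigration `
+    -ResourceGroupName <resource-group> `
+    -StorageAccountName <storage-account> `
+    -Name <container>
+```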
+
+## Configure a time-based retention policy on a container
+
+After a container is enabled for version-level immutability, you can specify a default version-level time-based retention policy for the container. The default policy applies to all blob versions in the container, unless you override the policy for an individual version.
+
+### Configure a default time-based retention policy on a container
+
+To apply a default version-level immutability policy to a container in the Azure portal, follow these steps:
+
+1. In the Azure portal, navigate to the **Containers** page, and locate the container to which you want to apply the policy.
+1. Select the **More** button to the right of the container name, and choose **Access policy**.
+1. In the **Access policy** dialog, under the **Immutable blob storage** section, choose **Add policy**.
+1. Select **Time-based retention policy** and specify the retention interval.
+1. If desired, select **Allow additional protected appends** to enable writes to append blobs that are protected by an immutability policy. For more information, see [Allow protected append blobs writes](immutable-time-based-retention-policy-overview.md#allow-protected-append-blobs-writes).
+1. Select **OK** to apply the default policy to the container.
+
+ :::image type="content" source="media/immutable-policy-configure-version-scope/configure-default-retention-policy-container.png" alt-text="Screenshot showing how to configure a default version-level retention policy for a container":::
+
+### Determine the scope of a retention policy on a container
+
+To determine the scope of a time-based retention policy in the Azure portal, follow these steps:
+
+1. Navigate to the desired container.
+1. Select the **More** button on the right, then select **Access policy**.
+1. Under **Immutable blob storage**, locate the **Scope** field. If the container is configured with a default version-level retention policy, then the scope is set to *Version*, as shown in the following image:
+
+ :::image type="content" source="media/immutable-policy-configure-version-scope/version-scoped-retention-policy.png" alt-text="Screenshot showing default version-level retention policy configured for container":::
+
+1. If the container is configured with a container-level retention policy, then the scope is set to *Container*, as shown in the following image:
+
+ :::image type="content" source="media/immutable-policy-configure-version-scope/container-scoped-retention-policy.png" alt-text="Screenshot showing container-level retention policy configured for container":::
+
+## Configure a time-based retention policy on an existing version
+
+Time-based retention policies maintain blob data in a WORM state for a specified interval. For more information about time-based retention policies, see [Time-based retention policies for immutable blob data](immutable-time-based-retention-policy-overview.md).
+
+You have three options for configuring a time-based retention policy for a blob version:
+
+- Option 1: You can configure a default policy that is scoped to the container and that applies to all objects in the container by default. Objects in the container will inherit the default policy unless you explicitly override it by configuring a policy on an individual blob version. For more details, see [Configure a default time-based retention policy on a container](#configure-a-default-time-based-retention-policy-on-a-container).
+- Option 2: You can configure a policy on the current version of the blob. This policy can override a default policy configured on the container, if one exists and it is unlocked. By default, any previous versions that are created after the policy is configured will inherit the policy on the current version of the blob. For more details, see [Configure a retention policy on the current version of a blob](#configure-a-retention-policy-on-the-current-version-of-a-blob).
+- Option 3: You can configure a policy on a previous version of a blob. This policy can override a default policy configured on the current version, if one exists and it is unlocked. For more details, see [Configure a retention policy on a previous version of a blob](#configure-a-retention-policy-on-a-previous-version-of-a-blob).
+
+### Configure a retention policy on the current version of a blob
+
+The Azure portal displays a list of blobs when you navigate to a container. Each blob displayed represents the current version of the blob. For more information on blob versioning, see [Blob versioning](versioning-overview.md).
+
+To configure a time-based retention policy on the current version of a blob, follow these steps:
+
+1. Navigate to the container that contains the target blob.
+1. Select the **More** button to the right of the blob name, and choose **Access policy**. If a time-based retention policy has already been configured for the previous version, it appears in the **Access policy** dialog.
+1. In the **Access policy** dialog, under the **Immutable blob versions** section, choose **Add policy**.
+1. Select **Time-based retention policy** and specify the retention interval.
+1. Select **OK** to apply the policy to the current version of the blob.
+
+ :::image type="content" source="media/immutable-policy-configure-version-scope/configure-retention-policy-version.png" alt-text="Screenshot showing how to configure a retention policy for the current version of a blob":::
+
+You can view the properties for a blob to see whether a policy is enabled on the current version. Select the blob, then navigate to the **Overview** tab and locate the **Version-level immutability policy** property. If a policy is enabled, the **Retention period** property will display the expiry date and time for the policy. Keep in mind that a policy may either be configured for the current version, or may be inherited from the blob's parent container if a default policy is in effect.
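+
+Setting a policy on the current version can also be scripted. A minimal sketch, assuming the preview cmdlet `Set-AzStorageBlobImmutabilityPolicy` in the Az.Storage module and placeholder values in angle brackets:
+
+```azurepowershell
+# Build a data-plane context for the storage account.
+$ctx = (Get-AzStorageAccount -ResourceGroupName <resource-group> `
+    -Name <storage-account>).Context
+
+# Set an unlocked 30-day retention policy on the current version of a blob.
+Set-AzStorageBlobImmutabilityPolicy -Container <container> -Blob <blob> `
+    -ExpiresOn (Get-Date).AddDays(30) `
+    -PolicyMode Unlocked `
+    -Context $ctx
+```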
++
+### Configure a retention policy on a previous version of a blob
+
+You can also configure a time-based retention policy on a previous version of a blob. A previous version is always immutable in that it cannot be modified. However, a previous version can be deleted. A time-based retention policy protects against deletion while it is in effect.
+
+To configure a time-based retention policy on a previous version of a blob, follow these steps:
+
+1. Navigate to the container that contains the target blob.
+1. Select the blob, then navigate to the **Versions** tab.
+1. Locate the target version, then select the **More** button and choose **Access policy**. If a time-based retention policy has already been configured for the previous version, it appears in the **Access policy** dialog.
+1. In the **Access policy** dialog, under the **Immutable blob versions** section, choose **Add policy**.
+1. Select **Time-based retention policy** and specify the retention interval.
+1. Select **OK** to apply the policy to the previous version of the blob.
+
+ :::image type="content" source="media/immutable-policy-configure-version-scope/configure-retention-policy-previous-version.png" alt-text="Screenshot showing how to configure retention policy for a previous blob version in Azure portal":::
+
+## Configure a time-based retention policy when uploading a blob
+
+When you use the Azure portal to upload a blob to a container that supports version-level immutability, you have several options for configuring a time-based retention policy for the new blob:
+
+- Option 1: If a default retention policy is configured for the container, you can upload the blob with the container's policy. This option is selected by default when there is a retention policy on the container.
+- Option 2: If a default retention policy is configured for the container, you can choose to override the default policy, either by defining a custom retention policy for the new blob, or by uploading the blob with no policy.
+- Option 3: If no default policy is configured for the container, then you can upload the blob with a custom policy, or with no policy.
+
+To configure a time-based retention policy when you upload a blob, follow these steps:
+
+1. Navigate to the desired container, and select **Upload**.
+1. In the **Upload** blob dialog, expand the **Advanced** section.
+1. Configure the time-based retention policy for the new blob in the **Retention policy** field. If there is a default policy configured for the container, that policy is selected by default. You can also specify a custom policy for the blob.
+
+ :::image type="content" source="media/immutable-policy-configure-version-scope/configure-retention-policy-blob-upload.png" alt-text="Screenshot showing options for configuring retention policy on blob upload in Azure portal":::
+
+## Modify an unlocked retention policy
+
+You can modify an unlocked time-based retention policy to shorten or lengthen the retention interval. You can also delete an unlocked policy. Editing or deleting an unlocked time-based retention policy for a blob version does not affect policies in effect for any other versions. If there is a default time-based retention policy in effect for the container, then the blob version with the modified or deleted policy will no longer inherit from the container.
+
+To modify an unlocked time-based retention policy, follow these steps:
+
+1. Locate the target version, which may be the current version or a previous version of a blob. Select the **More** button and choose **Access policy**.
+1. Under the **Immutable blob versions** section, locate the existing unlocked policy. Select the **More** button, then select **Edit** from the menu.
+
+ :::image type="content" source="media/immutable-policy-configure-version-scope/edit-existing-version-policy.png" alt-text="Screenshot showing how to edit an existing version-level time-based retention policy in Azure portal":::
+
+1. Provide the new date and time for the policy expiration.
+
+To delete an unlocked policy, follow the preceding steps, but select **Delete** from the menu instead of **Edit**.
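+
+Modifying or deleting an unlocked version-level policy can also be scripted. A sketch, again assuming the preview cmdlets `Set-AzStorageBlobImmutabilityPolicy` and `Remove-AzStorageBlobImmutabilityPolicy` and placeholder values in angle brackets:
+
+```azurepowershell
+# Build a data-plane context for the storage account.
+$ctx = (Get-AzStorageAccount -ResourceGroupName <resource-group> `
+    -Name <storage-account>).Context
+
+# Extend an unlocked policy by writing a new expiry date.
+Set-AzStorageBlobImmutabilityPolicy -Container <container> -Blob <blob> `
+    -ExpiresOn (Get-Date).AddDays(60) `
+    -PolicyMode Unlocked `
+    -Context $ctx
+
+# Delete an unlocked policy.
+Remove-AzStorageBlobImmutabilityPolicy -Container <container> -Blob <blob> `
+    -Context $ctx
+```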
+
+## Lock a time-based retention policy
+
+When you have finished testing a time-based retention policy, you can lock the policy. A locked policy is compliant with SEC 17a-4(f) and other regulatory requirements. You can lengthen the retention interval for a locked policy up to five times, but you cannot shorten it.
+
+After a policy is locked, you cannot delete it. However, you can delete the blob after the retention interval has expired.
+
+To lock a policy, follow these steps:
+
+1. Locate the target version, which may be the current version or a previous version of a blob. Select the **More** button and choose **Access policy**.
+1. Under the **Immutable blob versions** section, locate the existing unlocked policy. Select the **More** button, then select **Lock policy** from the menu.
+1. Confirm that you want to lock the policy.
+
+ :::image type="content" source="media/immutable-policy-configure-version-scope/lock-policy-portal.png" alt-text="Screenshot showing how to lock a time-based retention policy in Azure portal":::
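+
+Locking can be scripted as well. A sketch under the same preview-cmdlet assumption; remember that locking is irreversible:
+
+```azurepowershell
+# Build a data-plane context for the storage account.
+$ctx = (Get-AzStorageAccount -ResourceGroupName <resource-group> `
+    -Name <storage-account>).Context
+
+# Lock the policy by rewriting it with PolicyMode set to Locked.
+# After locking, the expiry date can only be extended, never shortened.
+Set-AzStorageBlobImmutabilityPolicy -Container <container> -Blob <blob> `
+    -ExpiresOn (Get-Date).AddDays(30) `
+    -PolicyMode Locked `
+    -Context $ctx
+```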
+
+## Configure or clear a legal hold
+
+A legal hold stores immutable data until the legal hold is explicitly cleared. To learn more about legal hold policies, see [Legal holds for immutable blob data](immutable-legal-hold-overview.md).
+
+To configure a legal hold on a blob version, follow these steps:
+
+1. Locate the target version, which may be the current version or a previous version of a blob. Select the **More** button and choose **Access policy**.
+1. Under the **Immutable blob versions** section, select **Add policy**.
+1. Choose **Legal hold** as the policy type, and select **OK** to apply it.
+
+The following image shows a current version of a blob with both a time-based retention policy and legal hold configured.
++
+To clear a legal hold, navigate to the **Access policy** dialog, select the **More** button, and choose **Delete**.
+
+## Next steps
+
+- [Store business-critical blob data with immutable storage](immutable-storage-overview.md)
+- [Time-based retention policies for immutable blob data](immutable-time-based-retention-policy-overview.md)
+- [Legal holds for immutable blob data](immutable-legal-hold-overview.md)
storage Immutable Storage Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/blobs/immutable-storage-overview.md
+
+ Title: Overview of immutable storage for Blob Storage
+
+description: Azure Storage offers WORM (Write Once, Read Many) support for Blob Storage that enables users to store data in a non-erasable, non-modifiable state. Time-based retention policies store blob data in a WORM state for a specified interval, while legal holds remain in effect until explicitly cleared.
+++++ Last updated : 07/22/2021+++++
+# Store business-critical blob data with immutable storage
+
+Immutable storage for Azure Blob Storage enables users to store business-critical data in a WORM (Write Once, Read Many) state. While in a WORM state, data cannot be modified or deleted for a user-specified interval. By configuring immutability policies for blob data, you can protect your data from overwrites and deletes.
+
+Immutable storage for Azure Blob storage supports two types of immutability policies:
+
+- **Time-based retention policies**: With a time-based retention policy, users can set policies to store data for a specified interval. When a time-based retention policy is set, objects can be created and read, but not modified or deleted. After the retention period has expired, objects can be deleted but not overwritten. To learn more about time-based retention policies, see [Time-based retention policies for immutable blob data](immutable-time-based-retention-policy-overview.md).
+
+- **Legal hold policies**: A legal hold stores immutable data until the legal hold is explicitly cleared. When a legal hold is set, objects can be created and read, but not modified or deleted. To learn more about legal hold policies, see [Legal holds for immutable blob data](immutable-legal-hold-overview.md).
+
+The following diagram shows how time-based retention policies and legal holds prevent write and delete operations while they are in effect.
++
+## About immutable storage for blobs
+
+Immutable storage helps healthcare organizations, financial institutions, and related industries&mdash;particularly broker-dealer organizations&mdash;store data securely. Immutable storage can be used in any scenario to protect critical data against modification or deletion.
+
+Typical applications include:
+
+- **Regulatory compliance**: Immutable storage for Azure Blob storage helps organizations address SEC 17a-4(f), CFTC 1.31(d), FINRA, and other regulations.
+
+- **Secure document retention**: Immutable storage for blobs ensures that data can't be modified or deleted by any user, not even by users with account administrative privileges.
+
+- **Legal hold**: Immutable storage for blobs enables users to store sensitive information that is critical to litigation or business use in a tamper-proof state for the desired duration until the hold is removed. This feature is not limited to legal use cases; it can also be thought of as an event-based hold or an enterprise lock, where data must be protected based on event triggers or corporate policy.
+
+## Regulatory compliance
+
+Microsoft retained a leading independent assessment firm that specializes in records management and information governance, Cohasset Associates, to evaluate immutable storage for blobs and its compliance with requirements specific to the financial services industry. Cohasset validated that immutable storage, when used to retain blobs in a WORM state, meets the relevant storage requirements of CFTC Rule 1.31(c)-(d), FINRA Rule 4511, and SEC Rule 17a-4(f). Microsoft targeted this set of rules because they represent the most prescriptive guidance globally for records retention for financial institutions.
+
+The Cohasset report is available in the [Microsoft Service Trust Portal](https://aka.ms/AzureWormStorage). The [Azure Trust Center](https://www.microsoft.com/trustcenter/compliance/compliance-overview) contains detailed information about Microsoft's compliance certifications. To request a letter of attestation from Microsoft regarding WORM immutability compliance, please contact [Azure Support](https://azure.microsoft.com/support/options/).
+
+## Immutability policy scope
+
+Immutability policies can be scoped to a blob version (preview) or to a container. How an object behaves under an immutability policy depends on the scope of the policy. For more information about policy scope for each type of immutability policy, see the following sections:
+
+- [Time-based retention policy scope](immutable-time-based-retention-policy-overview.md#time-based-retention-policy-scope)
+- [Legal hold scope](immutable-legal-hold-overview.md#legal-hold-scope)
+
+You can configure both a time-based retention policy and a legal hold for a resource (container or blob version), depending on the scope. The following table summarizes which immutability policies are supported for each resource scope:
+
+| Scope | Container is configured to support version-level immutability policies | Container is not configured to support version-level immutability policies |
+|--|--|--|
+| Container | Supports one default version-level immutability policy. Does not support legal hold. | Supports one container-level immutability policy and one legal hold. |
+| Blob version | Supports one version-level immutability policy and one legal hold. | N/A |
+
+### About the preview
+
+The version-level immutability policies preview is available in the following regions:
+
+- Canada Central
+- Canada East
+- France Central
+- France South
+
+> [!IMPORTANT]
+> Version-level immutability policies are currently in **PREVIEW**. See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+
+## Summary of immutability scenarios
+
+The protection afforded by an immutability policy depends on the scope of the immutability policy and, in the case of a time-based retention policy, whether it is locked or unlocked and whether it is active or expired.
+
+### Scenarios with version-level scope
+
+The following table provides a summary of protections provided by version-level immutability policies.
+
+| Scenario | Prohibited operations | Blob protection | Container protection | Account protection |
+|--|--|--|--|--|
+| A blob version is protected by an *active* retention policy and/or a legal hold is in effect | [Delete Blob](/rest/api/storageservices/delete-blob), [Set Blob Metadata](/rest/api/storageservices/set-blob-metadata), [Put Page](/rest/api/storageservices/put-page), and [Append Block](/rest/api/storageservices/append-block)<sup>1</sup> | The blob version cannot be deleted. User metadata cannot be written. <br /><br /> Overwriting a blob with [Put Blob](/rest/api/storageservices/put-blob), [Put Block List](/rest/api/storageservices/put-block-list), or [Copy Blob](/rest/api/storageservices/copy-blob) creates a new version.<sup>2</sup> | Container deletion fails if at least one blob exists in the container, regardless of whether the policy is locked or unlocked. | Storage account deletion fails if there is at least one container with version-level immutable storage enabled. |
+| A blob version is protected by an *expired* retention policy and no legal hold is in effect | [Set Blob Metadata](/rest/api/storageservices/set-blob-metadata), [Put Page](/rest/api/storageservices/put-page), and [Append Block](/rest/api/storageservices/append-block)<sup>1</sup> | The blob version can be deleted. User metadata cannot be written. <br /><br /> Overwriting a blob with [Put Blob](/rest/api/storageservices/put-blob), [Put Block List](/rest/api/storageservices/put-block-list), or [Copy Blob](/rest/api/storageservices/copy-blob) creates a new version.<sup>2</sup> | Container deletion fails if at least one blob exists in the container, regardless of whether the policy is locked or unlocked. | Storage account deletion fails if there is at least one container that contains a blob version with a locked time-based retention policy.<br /><br />Unlocked policies do not provide delete protection. |
+
+<sup>1</sup> The [Append Block](/rest/api/storageservices/append-block) operation is only permitted for time-based retention policies with the **allowProtectedAppendWrites** property enabled. For more information, see [Allow protected append blobs writes](immutable-time-based-retention-policy-overview.md#allow-protected-append-blobs-writes).
+<sup>2</sup> Blob versions are always immutable for content. If versioning is enabled for the storage account, then a write operation to a block blob creates a new version, with the exception of the [Put Block](/rest/api/storageservices/put-block) operation.
+
+### Scenarios with container-level scope
+
+The following table provides a summary of protections provided by container-level immutability policies.
+
+| Scenario | Prohibited operations | Blob protection | Container protection | Account protection |
+|--|--|--|--|--|
+| A container is protected by an *active* time-based retention policy with container scope and/or a legal hold is in effect | [Delete Blob](/rest/api/storageservices/delete-blob), [Put Blob](/rest/api/storageservices/put-blob)<sup>1</sup>, [Set Blob Metadata](/rest/api/storageservices/set-blob-metadata), [Put Page](/rest/api/storageservices/put-page), [Set Blob Properties](/rest/api/storageservices/set-blob-properties), [Snapshot Blob](/rest/api/storageservices/snapshot-blob), [Incremental Copy Blob](/rest/api/storageservices/incremental-copy-blob), [Append Block](/rest/api/storageservices/append-block)<sup>2</sup> | All blobs in the container are immutable for content and user metadata | Container deletion fails if a container-level policy is in effect. | Storage account deletion fails if there is a container with at least one blob present. |
+| A container is protected by an *expired* time-based retention policy with container scope and no legal hold is in effect | [Put Blob](/rest/api/storageservices/put-blob)<sup>1</sup>, [Set Blob Metadata](/rest/api/storageservices/set-blob-metadata), [Put Page](/rest/api/storageservices/put-page), [Set Blob Properties](/rest/api/storageservices/set-blob-properties), [Snapshot Blob](/rest/api/storageservices/snapshot-blob), [Incremental Copy Blob](/rest/api/storageservices/incremental-copy-blob), [Append Block](/rest/api/storageservices/append-block)<sup>2</sup> | Delete operations are allowed. Overwrite operations are not allowed. | Container deletion fails if at least one blob exists in the container, regardless of whether the policy is locked or unlocked. | Storage account deletion fails if there is at least one container with a locked time-based retention policy.<br /><br />Unlocked policies do not provide delete protection. |
+
+<sup>1</sup> Azure Storage permits the [Put Blob](/rest/api/storageservices/put-blob) operation to create a new blob. Subsequent overwrite operations on an existing blob path in an immutable container are not allowed.
+
+<sup>2</sup> The [Append Block](/rest/api/storageservices/append-block) operation is only permitted for time-based retention policies with the **allowProtectedAppendWrites** property enabled. For more information, see [Allow protected append blobs writes](immutable-time-based-retention-policy-overview.md#allow-protected-append-blobs-writes).
+
+> [!NOTE]
+> Some workloads, such as [SQL Backup to URL](/sql/relational-databases/backup-restore/sql-server-backup-to-url), create a blob and then add to it. If a container has an active time-based retention policy or legal hold in place, this pattern will not succeed.
+
+## Supported account configurations
+
+Immutability policies are supported for both new and existing storage accounts. The following table shows which types of storage accounts are supported for each type of policy:
+
+| Type of immutability policy | Scope of policy | Types of storage accounts supported | Supports hierarchical namespace (preview) |
+|--|--|--|--|
+| Time-based retention policy | Version-level scope (preview) | General-purpose v2<br />Premium block blob | No |
+| Time-based retention policy | Container-level scope | General-purpose v2<br />Premium block blob<br />General-purpose v1 (legacy)<sup>1</sup><br> Blob storage (legacy) | Yes |
+| Legal hold | Version-level scope (preview) | General-purpose v2<br />Premium block blob | No |
+| Legal hold | Container-level scope | General-purpose v2<br />Premium block blob<br />General-purpose v1 (legacy)<sup>1</sup><br> Blob storage (legacy) | Yes |
+
+<sup>1</sup> Microsoft recommends upgrading general-purpose v1 accounts to general-purpose v2 so that you can take advantage of more features. For information on upgrading an existing general-purpose v1 storage account, see [Upgrade a storage account](../common/storage-account-upgrade.md).
+
+### Access tiers
+
+All blob access tiers support immutable storage. You can change the access tier of a blob with the Set Blob Tier operation. For more information, see [Access tiers for Azure Blob Storage - hot, cool, and archive](storage-blob-storage-tiers.md).
+
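+As an illustration, here is a minimal PowerShell sketch for changing a blob's tier; the account, container, and blob names are placeholders, and the last line shows one way to invoke the operation through the blob object returned by `Get-AzStorageBlob`.
+
+```powershell
+# Minimal sketch: move a block blob to the cool tier with Set Blob Tier.
+$ctx = New-AzStorageContext -StorageAccountName "<storage-account>" -UseConnectedAccount
+$blob = Get-AzStorageBlob -Container "<container>" -Blob "<blob>" -Context $ctx
+
+# The Set Blob Tier operation is exposed on the underlying blob object.
+$blob.ICloudBlob.SetStandardBlobTier("Cool")
+```
+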
+### Redundancy configurations
+
+All redundancy configurations support immutable storage. For geo-redundant configurations, customer-managed failover is not supported. For more information about redundancy configurations, see [Azure Storage redundancy](../common/storage-redundancy.md).
+
+### Hierarchical namespace support
+
+Immutable storage support for accounts with a hierarchical namespace is in preview. To enroll in the preview, see [Preview Features on Azure Data Lake Storage](https://forms.microsoft.com/Pages/ResponsePage.aspx?id=v4j5cvGGr0GRqy180BHbR2EUNXd_ZNJCq_eDwZGaF5VUOUc3NTNQSUdOTjgzVUlVT1pDTzU4WlRKRy4u).
+
+Keep in mind that you cannot rename or move a blob when the blob is in the immutable state and the account has a hierarchical namespace enabled. Both the blob name and the directory structure provide essential container-level data that cannot be modified once the immutable policy is in place.
+
+> [!IMPORTANT]
+> Immutable storage for Azure Blob storage in accounts that have the hierarchical namespace feature enabled is currently in PREVIEW. See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+
+## Recommended blob types
+
+Microsoft recommends that you configure immutability policies mainly for block blobs and append blobs. Configuring an immutability policy for a page blob that stores a VHD disk for an active virtual machine is discouraged, because writes to the disk will be blocked. Microsoft recommends that you thoroughly review the documentation and test your scenarios before locking any time-based policies.
+
+## Immutable storage with blob soft delete
+
+When blob soft delete is configured for a storage account, it applies to all blobs within the account regardless of whether a legal hold or time-based retention policy is in effect. Microsoft recommends enabling soft delete for additional protection before any immutability policies are applied.
+
+If you enable blob soft delete and then configure an immutability policy, any blobs that have already been soft deleted will be permanently deleted once the soft delete retention policy has expired. Soft-deleted blobs can be restored during the soft delete retention period. A blob or version that has not yet been soft deleted is protected by the immutability policy and cannot be soft deleted until after the time-based retention policy has expired or the legal hold has been removed.
+
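+For example, the following minimal PowerShell sketch enables blob soft delete on an account before any immutability policies are applied; the account name and retention value are placeholders.
+
+```powershell
+# Minimal sketch: turn on blob soft delete before applying immutability policies.
+$ctx = New-AzStorageContext -StorageAccountName "<storage-account>" -UseConnectedAccount
+
+# Retain soft-deleted blobs for 14 days (placeholder value).
+Enable-AzStorageBlobDeleteRetentionPolicy -RetentionDays 14 -Context $ctx
+```
+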
+## Use blob inventory to track immutability policies
+
+Azure Storage blob inventory provides an overview of the containers in your storage accounts and the blobs, snapshots, and blob versions within them. You can use the blob inventory report to understand the attributes of blobs and containers, including whether a resource has an immutability policy configured.
+
+When you enable blob inventory, Azure Storage generates an inventory report on a daily basis. The report provides an overview of your data for business and compliance requirements.
+
+For more information about blob inventory, see [Azure Storage blob inventory (preview)](blob-inventory.md).
+
+## Pricing
+
+There is no additional capacity charge for using immutable storage. Immutable data is priced in the same way as mutable data. For pricing details on Azure Blob storage, see the [Azure Storage pricing page](https://azure.microsoft.com/pricing/details/storage/blobs/).
+
+Creating, modifying, or deleting a time-based retention policy or legal hold on a blob version results in a write transaction charge.
+
+If you fail to pay your bill and your account has an active time-based retention policy in effect, normal data retention policies will apply as stipulated in the terms and conditions of your contract with Microsoft. For general information, see [Data management at Microsoft](https://www.microsoft.com/trust-center/privacy/data-management).
+
+## Next steps
+
+- [Time-based retention policies for immutable blob data](immutable-time-based-retention-policy-overview.md)
+- [Legal holds for immutable blob data](immutable-legal-hold-overview.md)
+- [Data protection overview](data-protection-overview.md)
storage Immutable Time Based Retention Policy Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/blobs/immutable-time-based-retention-policy-overview.md
+
+ Title: Time-based retention policies for immutable blob data
+
+description: Time-based retention policies store blob data in a Write-Once, Read-Many (WORM) state for a specified interval. You can configure a time-based retention policy that is scoped to a blob version (preview) or to a container.
+++++ Last updated : 07/22/2021++++
+# Time-based retention policies for immutable blob data
+
+A time-based retention policy stores blob data in a Write-Once, Read-Many (WORM) format for a specified interval. When a time-based retention policy is set, clients can create and read blobs, but cannot modify or delete them. After the retention interval has expired, blobs can be deleted but not overwritten.
+
+For more information about immutability policies for Blob Storage, see [Store business-critical blob data with immutable storage](immutable-storage-overview.md).
+
+## Retention interval for a time-based policy
+
+The minimum retention interval for a time-based retention policy is one day, and the maximum is 146,000 days (400 years).
+
+When you configure a time-based retention policy, the affected objects stay in the immutable state for the duration of the *effective* retention period. The effective retention period for an object is equal to the user-specified retention interval minus the time that has elapsed since the object was created. Because a policy's retention interval can be extended, immutable storage uses the most recent value of the user-specified retention interval to calculate the effective retention period.
+
+For example, suppose that a user creates a time-based retention policy with a retention interval of five years. An existing blob in that container, _testblob1_, was created one year ago, so the effective retention period for _testblob1_ is four years. When a new blob, _testblob2_, is uploaded to the container, the effective retention period for _testblob2_ is five years from the time of its creation.
+
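+The calculation can be expressed directly. The following PowerShell sketch reproduces the _testblob1_ example above.
+
+```powershell
+# Effective retention = retention interval minus the time elapsed since creation.
+$retentionInterval = New-TimeSpan -Days (5 * 365)   # user-specified interval: five years
+$blobAge           = New-TimeSpan -Days 365         # testblob1 was created one year ago
+
+$effectiveRetention = $retentionInterval - $blobAge
+Write-Output "Effective retention: $($effectiveRetention.Days / 365) years"   # 4 years
+```
+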
+## Locked versus unlocked policies
+
+When you first configure a time-based retention policy, the policy is unlocked for testing purposes. When you have finished testing, you can lock the policy so that it fully complies with SEC 17a-4(f) and other regulatory requirements.
+
+Both locked and unlocked policies protect against deletes and overwrites. However, you can modify an unlocked policy by shortening or extending the retention period. You can also delete an unlocked policy.
+
+You cannot delete a locked time-based retention policy. You can extend the retention period, but you cannot decrease it. A maximum of five increases to the effective retention period is allowed over the lifetime of a locked policy that is defined at the container level. For a policy configured for a blob version, there is no limit to the number of increases to the effective retention period.
+
+> [!IMPORTANT]
+> A time-based retention policy must be locked for the blob to be in a compliant immutable (write and delete protected) state for SEC 17a-4(f) and other regulatory compliance. Microsoft recommends that you lock the policy within a reasonable amount of time, typically less than 24 hours. While the unlocked state provides immutability protection, using the unlocked state for any purpose other than short-term testing is not recommended.
+
+## Time-based retention policy scope
+
+A time-based retention policy can be configured at either of the following scopes:
+
+- Version-level policy (preview): A time-based retention policy can be configured to apply to a blob version for granular management of sensitive data. You can apply the policy to an individual version, or configure a default policy for a container that will apply by default to all blobs uploaded to that container.
+- Container-level policy: A time-based retention policy that is configured at the container level applies to all objects in that container. Individual objects cannot be configured with their own immutability policies.
+
+Audit logs are available on the container for both container-level time-based retention policies and default version-level policies configured on the container. Audit logs are not available for a policy that is scoped to an individual blob version.
+
+### Version-level policy scope (preview)
+
+To configure version-level retention policies, you must first enable version-level immutability on the parent container. Version-level immutability cannot be disabled after it is enabled, although unlocked policies can be deleted. For more information, see [Enable support for version-level immutability on a container](immutable-policy-configure-version-scope.md#enable-support-for-version-level-immutability-on-a-container).
+
+You can enable support for version-level immutability at the time that you create a container. Existing containers can also support version-level immutability, but must undergo a migration process first. This process may take some time and is not reversible. For more information about migrating a container to support version-level immutability, see [Migrate an existing container to support version-level immutability](immutable-policy-configure-version-scope.md#migrate-an-existing-container-to-support-version-level-immutability).
+
+Version-level time-based retention policies require that [blob versioning](versioning-overview.md) is enabled for the storage account. To learn how to enable blob versioning, see [Enable and manage blob versioning](versioning-enable.md). Keep in mind that enabling versioning may have a billing impact. For more information, see the **Pricing and billing** section in [Blob versioning](versioning-overview.md#pricing-and-billing).
+
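+For reference, a minimal PowerShell sketch for enabling versioning on an account is shown below; the resource group and account names are placeholders.
+
+```powershell
+# Minimal sketch: enable blob versioning, a prerequisite for
+# version-level time-based retention policies.
+Update-AzStorageBlobServiceProperty -ResourceGroupName "<resource-group>" `
+    -StorageAccountName "<storage-account>" `
+    -IsVersioningEnabled $true
+```
+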
+After versioning is enabled, when a blob is first uploaded, that version of the blob is the current version. Each time the blob is overwritten, a new version is created that stores the previous state of the blob. When you delete the current version of a blob, the current version becomes a previous version and is retained until explicitly deleted. A previous blob version possesses the time-based retention policy that was in effect when the current version became a previous version.
+
+If a default policy is in effect for the container, then when an overwrite operation creates a previous version, the new current version inherits the default policy for the container.
+
+Each version may have only one time-based retention policy configured. A version may also have one legal hold configured. For more details about supported immutability policy configurations based on scope, see [Immutability policy scope](immutable-storage-overview.md#immutability-policy-scope).
+
+To learn how to configure version-level time-based retention policies, see [Configure immutability policies for blob versions (preview)](immutable-policy-configure-version-scope.md).
+
+> [!IMPORTANT]
+> Version-level time-based retention policies are currently in **PREVIEW**. See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+> It may take up to 30 seconds after version-level immutability is enabled before you can configure version-level time-based retention policies.
+
+#### Configure a policy on the current version
+
+After you enable support for version-level immutability for a container, you have the option to configure a default time-based retention policy for the container. When you configure a default time-based retention policy for the container and then upload a blob, the blob inherits that policy by default. You can also choose to override the default policy for any blob on upload by configuring a custom policy for that blob.
+
+If the default time-based retention policy for the container is unlocked, then the current version of a blob that inherits the default policy will also have an unlocked policy. After an individual blob is uploaded, you can shorten or extend the retention period for the policy on the current version of the blob, or delete the current version. You can also lock the policy for the current version, even if the default policy on the container remains unlocked.
+
+If the default time-based retention policy for the container is locked, then the current version of a blob that inherits the default policy will also have a locked policy. However, if you override the default policy when you upload a blob by setting a policy only for that blob, then that blob's policy will remain unlocked until you explicitly lock it. When the policy on the current version is locked, you can extend the retention interval, but you cannot delete the policy or shorten the retention interval.
+
+If there is no default policy configured for a container, then you can upload a blob either with a custom policy or with no policy.
+
+If the default policy on a container is modified, policies on objects within that container remain unchanged, even if those policies were inherited from the default policy.
+
+The following table shows the various options available for setting a time-based retention policy on a blob on upload:
+
+| Default policy status on container | Upload a blob with the default policy | Upload a blob with a custom policy | Upload a blob with no policy |
+|--|--|--|--|
+| Default policy on container (unlocked) | Blob is uploaded with default unlocked policy | Blob is uploaded with custom unlocked policy | Blob is uploaded with no policy |
+| Default policy on container (locked) | Blob is uploaded with default locked policy | Blob is uploaded with custom unlocked policy | Blob is uploaded with no policy |
+| No default policy on container | N/A | Blob is uploaded with custom unlocked policy | Blob is uploaded with no policy |
+
+#### Configure a policy on a previous version
+
+When versioning is enabled, a write or delete operation to a blob creates a new previous version of that blob that saves the blob's state before the operation. By default, a previous version possesses the time-based retention policy that was in effect for the current version, if any, when the current version became a previous version. The new current version inherits the policy on the container, if there is one.
+
+If the policy inherited by a previous version is unlocked, then the retention interval can be shortened or lengthened, or the policy can be deleted. The policy on a previous version can also be locked for that version, even if the policy on the current version is unlocked.
+
+If the policy inherited by a previous version is locked, then the retention interval can be lengthened. The policy cannot be deleted, nor can the retention interval be shortened.
+
+If there is no policy configured on the current version, then the previous version does not inherit any policy. You can configure a custom policy for the version.
+
+If the policy on a current version is modified, the policies on existing previous versions remain unchanged, even if the policy was inherited from a current version.
+
+### Container-level policy scope
+
+A container-level time-based retention policy applies to all objects in a container, both new and existing. For an account with a hierarchical namespace, a container-level policy also applies to all directories in the container.
+
+When a time-based retention policy is applied to a container, all existing blobs move into an immutable WORM state in less than 30 seconds. All new blobs that are uploaded to that policy-protected container will also move into an immutable state. Once all blobs are in an immutable state, overwrite or delete operations in the immutable container are not allowed. In the case of an account with a hierarchical namespace, blobs cannot be renamed or moved to a different directory.
+
+The following limits apply to container-level retention policies:
+
+- For a storage account, the maximum number of containers with locked time-based immutable policies is 10,000.
+- For a container, the maximum number of edits to extend the retention interval for a locked time-based policy is five.
+- For a container, a maximum of seven time-based retention policy audit logs are retained for a locked policy.
+
+To learn how to configure a time-based retention policy on a container, see [Configure immutability policies for containers](immutable-policy-configure-container-scope.md).
+
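+As a quick illustration, the following minimal PowerShell sketch creates and then locks a container-level policy, using the same Az.Storage cmdlets shown in the management article in this set; the resource group, account, and container names are placeholders.
+
+```powershell
+# Minimal sketch: create a container-level time-based retention policy, then lock it.
+$rg = "<resource-group>"
+$account = "<storage-account>"
+$containerName = "<container>"
+
+# Create an unlocked policy with a 10-day retention interval.
+Set-AzRmStorageContainerImmutabilityPolicy -ResourceGroupName $rg `
+    -StorageAccountName $account -ContainerName $containerName -ImmutabilityPeriod 10
+
+# Lock the policy. After this, the interval can only be extended, at most five times.
+$policy = Get-AzRmStorageContainerImmutabilityPolicy -ResourceGroupName $rg `
+    -StorageAccountName $account -ContainerName $containerName
+Lock-AzRmStorageContainerImmutabilityPolicy -ResourceGroupName $rg `
+    -StorageAccountName $account -ContainerName $containerName -Etag $policy.Etag -Force
+```
+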
+## Allow protected append blobs writes
+
+Append blobs are composed of blocks of data and are optimized for the data append operations required by auditing and logging scenarios. By design, append blobs allow only the addition of new blocks to the end of the blob. Regardless of immutability, modification or deletion of existing blocks in an append blob is not allowed. To learn more about append blobs, see [About Append Blobs](/rest/api/storageservices/understanding-block-blobs--append-blobs--and-page-blobs#about-append-blobs).
+
+Only time-based retention policies have the **AllowProtectedAppendWrites** property setting, which allows writing new blocks to an append blob while maintaining immutability protection and compliance. If this setting is enabled, you can create an append blob directly in the policy-protected container and then continue to add new blocks of data to the end of the append blob with the Append Block operation. Only new blocks can be added; existing blocks cannot be modified or deleted. Time-retention immutability protection still applies, preventing deletion of the append blob until the effective retention period has elapsed. Enabling this setting does not affect the immutability behavior of block blobs or page blobs.
+
+Because this setting is part of a time-based retention policy, the append blob remains in the immutable state for the duration of the *effective* retention period. Since new data can be appended after the append blob is created, the effective retention period is determined slightly differently: the blob remains immutable until the user-specified retention interval has elapsed since the append blob's last modification. Similarly, when the retention interval is extended, immutable storage uses the most recent value of the user-specified retention interval to calculate the effective retention period.
+
+For example, suppose that a user creates a time-based retention policy with the **AllowProtectedAppendWrites** property enabled and a retention interval of 90 days. An append blob, _logblob1_, is created in the container today, and new logs continue to be added to it for the next 10 days, so the effective retention period for _logblob1_ is 100 days from today (the time of its last append plus 90 days).
+
+Unlocked time-based retention policies allow the **AllowProtectedAppendWrites** property setting to be enabled and disabled at any time. Once the time-based retention policy is locked, the **AllowProtectedAppendWrites** property setting cannot be changed.
+
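+For example, the following minimal PowerShell sketch creates an unlocked policy with protected append writes allowed, using the `-AllowProtectedAppendWrite` parameter shown in the Az.Storage examples elsewhere in this article set; the names are placeholders.
+
+```powershell
+# Minimal sketch: a 90-day retention policy that still permits Append Block writes.
+Set-AzRmStorageContainerImmutabilityPolicy -ResourceGroupName "<resource-group>" `
+    -StorageAccountName "<storage-account>" -ContainerName "<container>" `
+    -ImmutabilityPeriod 90 -AllowProtectedAppendWrite $true
+```
+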
+## Audit logging
+
+Each container with a time-based retention policy enabled provides a policy audit log. The audit log includes up to seven time-based retention commands for locked time-based retention policies. Log entries include the user ID, command type, time stamps, and retention interval. The audit log is retained for the lifetime of the policy, in accordance with the SEC 17a-4(f) regulatory guidelines.
+
+The [Azure Activity log](../../azure-monitor/essentials/platform-logs-overview.md) provides a more comprehensive log of all management service activities. [Azure resource logs](../../azure-monitor/essentials/platform-logs-overview.md) retain information about data operations. It is the user's responsibility to store those logs persistently, as might be required for regulatory or other purposes.
+
+Changes to time-based retention policies at the version level are not audited.
+
+## Next steps
+
+- [Data protection overview](data-protection-overview.md)
+- [Store business-critical blob data with immutable storage](immutable-storage-overview.md)
+- [Legal holds for immutable blob data](immutable-legal-hold-overview.md)
+- [Configure immutability policies for blob versions (preview)](immutable-policy-configure-version-scope.md)
+- [Configure immutability policies for containers](immutable-policy-configure-container-scope.md)
storage Monitor Blob Storage https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/blobs/monitor-blob-storage.md
Enable logs by using the [Set-AzDiagnosticSetting](/powershell/module/az.monitor
Set-AzDiagnosticSetting -ResourceId <storage-service-resource-id> -StorageAccountId <storage-account-resource-id> -Enabled $true -Category <operations-to-log>
```
-Replace the `<storage-service-resource--id>` placeholder in this snippet with the resource ID of the blob service. You can find the resource ID in the Azure portal by opening the **Properties** page of your storage account.
+Replace the `<storage-service-resource-id>` placeholder in this snippet with the resource ID of the blob service. You can find the resource ID in the Azure portal by opening the **Endpoints** page of your storage account.
You can use `StorageRead`, `StorageWrite`, and `StorageDelete` for the value of the **Category** parameter.
Enable logs by using the [az monitor diagnostic-settings create](/cli/azure/moni
az monitor diagnostic-settings create --name <setting-name> --storage-account <storage-account-name> --resource <storage-service-resource-id> --resource-group <resource-group> --logs '[{"category": <operations>, "enabled": true }]'
```
-Replace the `<storage-service-resource--id>` placeholder in this snippet with the resource ID Blob storage service. You can find the resource ID in the Azure portal by opening the **Properties** page of your storage account.
+Replace the `<storage-service-resource-id>` placeholder in this snippet with the resource ID of the Blob storage service. You can find the resource ID in the Azure portal by opening the **Endpoints** page of your storage account.
You can use `StorageRead`, `StorageWrite`, and `StorageDelete` for the value of the **category** parameter.
For a list of all Azure Monitor support metrics, which includes Azure Blob Stora
Azure Monitor provides the [.NET SDK](https://www.nuget.org/packages/Microsoft.Azure.Management.Monitor/) to read metric definition and values. The [sample code](https://azure.microsoft.com/resources/samples/monitor-dotnet-metrics-api/) shows how to use the SDK with different parameters. You need to use `0.18.0-preview` or a later version for storage metrics.
-In these examples, replace the `<resource-ID>` placeholder with the resource ID of the entire storage account or the Blob storage service. You can find these resource IDs on the **Properties** pages of your storage account in the Azure portal.
+In these examples, replace the `<resource-ID>` placeholder with the resource ID of the entire storage account or the Blob storage service. You can find these resource IDs on the **Endpoints** pages of your storage account in the Azure portal.
Replace the `<subscription-ID>` variable with the ID of your subscription. For guidance on how to obtain values for `<tenant-ID>`, `<application-ID>`, and `<AccessKey>`, see [Use the portal to create an Azure AD application and service principal that can access resources](../../active-directory/develop/howto-create-service-principal-portal.md).
The following example shows how to read metric data on the metric supporting mul
You can list the metric definition of your storage account or the Blob storage service. Use the [Get-AzMetricDefinition](/powershell/module/az.monitor/get-azmetricdefinition) cmdlet.
-In this example, replace the `<resource-ID>` placeholder with the resource ID of the entire storage account or the resource ID of the Blob storage service. You can find these resource IDs on the **Properties** pages of your storage account in the Azure portal.
+In this example, replace the `<resource-ID>` placeholder with the resource ID of the entire storage account or the resource ID of the Blob storage service. You can find these resource IDs on the **Endpoints** pages of your storage account in the Azure portal.
```powershell
$resourceId = "<resource-ID>"
You can read account-level metric values of your storage account or the Blob sto
You can list the metric definition of your storage account or the Blob storage service. Use the [az monitor metrics list-definitions](/cli/azure/monitor/metrics#az_monitor_metrics_list_definitions) command.
-In this example, replace the `<resource-ID>` placeholder with the resource ID of the entire storage account or the resource ID of the Blob storage service. You can find these resource IDs on the **Properties** pages of your storage account in the Azure portal.
+In this example, replace the `<resource-ID>` placeholder with the resource ID of the entire storage account or the resource ID of the Blob storage service. You can find these resource IDs on the **Endpoints** pages of your storage account in the Azure portal.
```azurecli-interactive
az monitor metrics list-definitions --resource <resource-ID>
storage Object Replication Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/blobs/object-replication-overview.md
Object replication is supported when the source and destination accounts are in
### Immutable blobs
-Object replication does not support immutable blobs. If a source or destination container has a time-based retention policy or legal hold, then object replication fails. For more information about immutable blobs, see [Store business-critical blob data with immutable storage](storage-blob-immutable-storage.md).
+Object replication does not support immutable blobs. If a source or destination container has a time-based retention policy or legal hold, then object replication fails. For more information about immutable blobs, see [Store business-critical blob data with immutable storage](immutable-storage-overview.md).
## Object replication policies and rules
storage Security Recommendations https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/blobs/security-recommendations.md
Azure Security Center periodically analyzes the security state of your Azure res
| Turn on soft delete for blobs | Soft delete for blobs enables you to recover blob data after it has been deleted. For more information on soft delete for blobs, see [Soft delete for Azure Storage blobs](./soft-delete-blob-overview.md). | - |
| Turn on soft delete for containers | Soft delete for containers enables you to recover a container after it has been deleted. For more information on soft delete for containers, see [Soft delete for containers](./soft-delete-container-overview.md). | - |
| Lock storage account to prevent accidental or malicious deletion or configuration changes | Apply an Azure Resource Manager lock to your storage account to protect the account from accidental or malicious deletion or configuration change. Locking a storage account does not prevent data within that account from being deleted. It only prevents the account itself from being deleted. For more information, see [Apply an Azure Resource Manager lock to a storage account](../common/lock-account-resource.md). | - |
-| Store business-critical data in immutable blobs | Configure legal holds and time-based retention policies to store blob data in a WORM (Write Once, Read Many) state. Blobs stored immutably can be read, but cannot be modified or deleted for the duration of the retention interval. For more information, see [Store business-critical blob data with immutable storage](storage-blob-immutable-storage.md). | - |
+| Store business-critical data in immutable blobs | Configure legal holds and time-based retention policies to store blob data in a WORM (Write Once, Read Many) state. Blobs stored immutably can be read, but cannot be modified or deleted for the duration of the retention interval. For more information, see [Store business-critical blob data with immutable storage](immutable-storage-overview.md). | - |
| Require secure transfer (HTTPS) to the storage account | When you require secure transfer for a storage account, all requests to the storage account must be made over HTTPS. Any requests made over HTTP are rejected. Microsoft recommends that you always require secure transfer for all of your storage accounts. For more information, see [Require secure transfer to ensure secure connections](../common/storage-require-secure-transfer.md). | - |
| Limit shared access signature (SAS) tokens to HTTPS connections only | Requiring HTTPS when a client uses a SAS token to access blob data helps to minimize the risk of eavesdropping. For more information, see [Grant limited access to Azure Storage resources using shared access signatures (SAS)](../common/storage-sas-overview.md). | - |
storage Storage Blob Immutability Policies Manage https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/blobs/storage-blob-immutability-policies-manage.md
- Title: Set and manage immutability policies for Blob storage - Azure Storage
-description: Learn how to use WORM (Write Once, Read Many) support for Blob (object) storage to store data in a non-erasable, non-modifiable state for a specified interval.
----- Previously updated : 11/26/2019-----
-# Set and manage immutability policies for Blob storage
-
-Immutable storage for Azure Blob storage enables users to store business-critical data objects in a WORM (Write Once, Read Many) state. This state makes the data non-erasable and non-modifiable for a user-specified interval. For the duration of the retention interval, blobs can be created and read, but cannot be modified or deleted. Immutable storage is available for general-purpose v2 and Blob storage accounts in all Azure regions.
-
-This article shows how to set and manage immutability policies and legal holds for data in Blob storage using the Azure portal, PowerShell, or Azure CLI. For more information about immutable storage, see [Store business-critical blob data with immutable storage](storage-blob-immutable-storage.md).
-
-## Set retention policies and legal holds
-
-### [Portal](#tab/azure-portal)
-
-1. Create a new container or select an existing container to store the blobs that need to be kept in the immutable state. The container must be in a general-purpose v2 or Blob storage account.
-
-2. Select **Access policy** in the container settings. Then select **Add policy** under **Immutable blob storage**.
-
- ![Container settings in the portal](media/storage-blob-immutability-policies-manage/portal-image-1.png)
-
-3. To enable time-based retention, select **Time-based retention** from the drop-down menu.
-
- !["Time-based retention" selected under "Policy type"](media/storage-blob-immutability-policies-manage/portal-image-2.png)
-
-4. Enter the retention interval in days (acceptable values are 1 to 146000 days).
-
- !["Update retention period to" box](media/storage-blob-immutability-policies-manage/portal-image-5-retention-interval.png)
-
- The initial state of the policy is unlocked allowing you to test the feature and make changes to the policy before you lock it. Locking the policy is essential for compliance with regulations like SEC 17a-4.
-
-5. Lock the policy. Right-click the ellipsis (**...**), and the following menu appears with additional actions:
-
- !["Lock policy" on the menu](media/storage-blob-immutability-policies-manage/portal-image-4-lock-policy.png)
-
-6. Select **Lock Policy** and confirm the lock. The policy is now locked and cannot be deleted; only extensions of the retention interval are allowed. Blob deletes and overwrites are not permitted.
-
- ![Confirm "Lock policy" on the menu](media/storage-blob-immutability-policies-manage/portal-image-5-lock-policy.png)
-
-7. To enable legal holds, select **Add Policy**. Select **Legal hold** from the drop-down menu.
-
- !["Legal hold" on the menu under "Policy type"](media/storage-blob-immutability-policies-manage/portal-image-legal-hold-selection-7.png)
-
-8. Create a legal hold with one or more tags.
-
- !["Tag name" box under the policy type](media/storage-blob-immutability-policies-manage/portal-image-set-legal-hold-tags.png)
-
-9. To clear a legal hold, remove the applied legal hold identifier tag.
-
-### [Azure CLI](#tab/azure-cli)
-
-The feature is included in the following command groups:
-`az storage container immutability-policy` and `az storage container legal-hold`. Run each command group with `-h` to see the available commands.
-
-### [PowerShell](#tab/azure-powershell)
--
-The Az.Storage module supports immutable storage. To enable the feature, follow these steps:
-
-1. Ensure that you have the latest version of PowerShellGet installed: `Install-Module PowerShellGet -Repository PSGallery -Force`.
-2. Remove any previous installation of Azure PowerShell.
-3. Install Azure PowerShell: `Install-Module Az -Repository PSGallery -AllowClobber`.
-
-The following sample PowerShell script is for reference. This script creates a new storage account and container. It then shows you how to set and clear legal holds, create, and lock a time-based retention policy (also known as an immutability policy), and extend the retention interval.
-
-First, create an Azure Storage account:
-
-```powershell
-$resourceGroup = "<Enter your resource group>"
-$storageAccount = "<Enter your storage account name>"
-$container = "<Enter your container name>"
-$location = "<Enter the storage account location>"
-
-# Log in to Azure
-Connect-AzAccount
-Register-AzResourceProvider -ProviderNamespace "Microsoft.Storage"
-
-# Create your Azure resource group
-New-AzResourceGroup -Name $resourceGroup -Location $location
-
-# Create your Azure storage account
-$account = New-AzStorageAccount -ResourceGroupName $resourceGroup -StorageAccountName `
- $storageAccount -SkuName Standard_ZRS -Location $location -Kind StorageV2
-
-# Create a new container; keep $container as the name string for the later commands
-New-AzStorageContainer -Name $container -Context $account.Context
-
-# List the containers in the account
-Get-AzStorageContainer -Context $account.Context
-
-# Remove a container
-Remove-AzStorageContainer -Name $container -Context $account.Context
-```
-
-Set and clear legal holds:
-
-```powershell
-# Set a legal hold
-Add-AzRmStorageContainerLegalHold -ResourceGroupName $resourceGroup `
- -StorageAccountName $storageAccount -Name $container -Tag <tag1>,<tag2>,...
-
-# Clear a legal hold
-Remove-AzRmStorageContainerLegalHold -ResourceGroupName $resourceGroup `
- -StorageAccountName $storageAccount -Name $container -Tag <tag3>
-```
-
-Create or update time-based immutability policies:
-
-```powershell
-# Create a time-based immutability policy
-Set-AzRmStorageContainerImmutabilityPolicy -ResourceGroupName $resourceGroup `
- -StorageAccountName $storageAccount -ContainerName $container -ImmutabilityPeriod 10
-```
-
-Retrieve immutability policies:
-
-```powershell
-# Get an immutability policy
-Get-AzRmStorageContainerImmutabilityPolicy -ResourceGroupName $resourceGroup `
- -StorageAccountName $storageAccount -ContainerName $container
-```
-
-Lock immutability policies (add `-Force` to dismiss the prompt):
-
-```powershell
-# Lock immutability policies
-$policy = Get-AzRmStorageContainerImmutabilityPolicy -ResourceGroupName `
- $resourceGroup -StorageAccountName $storageAccount -ContainerName $container
-Lock-AzRmStorageContainerImmutabilityPolicy -ResourceGroupName `
- $resourceGroup -StorageAccountName $storageAccount -ContainerName $container `
- -Etag $policy.Etag
-```
-
-Extend immutability policies:
-
-```powershell
-# Extend immutability policies
-$policy = Get-AzRmStorageContainerImmutabilityPolicy -ResourceGroupName `
- $resourceGroup -StorageAccountName $storageAccount -ContainerName $container
-
-Set-AzRmStorageContainerImmutabilityPolicy -ImmutabilityPolicy `
- $policy -ImmutabilityPeriod 11 -ExtendPolicy
-```
-
-Remove an unlocked immutability policy (add `-Force` to dismiss the prompt):
-
-```powershell
-# Remove an unlocked immutability policy
-$policy = Get-AzRmStorageContainerImmutabilityPolicy -ResourceGroupName `
- $resourceGroup -StorageAccountName $storageAccount -ContainerName $container
-
-Remove-AzRmStorageContainerImmutabilityPolicy -ImmutabilityPolicy $policy
-```
---
-## Enable protected append blob writes
-
-### [Portal](#tab/azure-portal)
-
-![Allow additional append writes](media/storage-blob-immutability-policies-manage/immutable-allow-additional-append-writes.png)
-
-### [Azure CLI](#tab/azure-cli)
-
-The feature is included in the following command groups:
-`az storage container immutability-policy` and `az storage container legal-hold`. Run each command group with `-h` to see the available commands.
-
-### [PowerShell](#tab/azure-powershell)
-
-```powershell
-# Create an immutability policy with appends allowed
-Set-AzRmStorageContainerImmutabilityPolicy -ResourceGroupName $resourceGroup `
- -StorageAccountName $storageAccount -ContainerName $container -ImmutabilityPeriod 10 -AllowProtectedAppendWrite $true
-```
---
-## Next steps
-
-[Store business-critical blob data with immutable storage](storage-blob-immutable-storage.md)
storage Storage Blob Immutable Storage https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/blobs/storage-blob-immutable-storage.md
- Title: Immutable blob storage - Azure Storage
-description: Azure Storage offers WORM (Write Once, Read Many) support for Blob (object) storage that enables users to store data in a non-erasable, non-modifiable state for a specified interval.
----- Previously updated : 06/18/2021-----
-# Store business-critical blob data with immutable storage
-
-Immutable storage for Azure Blob storage enables users to store business-critical data objects in a WORM (Write Once, Read Many) state. This state makes the data non-erasable and non-modifiable for a user-specified interval. For the duration of the retention interval, blobs can be created and read, but cannot be modified or deleted. Immutable storage is available for general-purpose v1, general-purpose v2, premium block blob, and legacy blob accounts in all Azure regions.
-
-For information about how to set and clear legal holds or create a time-based retention policy using the Azure portal, PowerShell, or Azure CLI, see [Set and manage immutability policies for Blob storage](storage-blob-immutability-policies-manage.md).
-
-> [!IMPORTANT]
-> Immutable storage for Azure Blob storage in accounts that have the hierarchical namespace feature enabled is currently in PREVIEW.
-> See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
->
->
-> To enroll in the preview, see [this form](https://forms.microsoft.com/Pages/ResponsePage.aspx?id=v4j5cvGGr0GRqy180BHbR2EUNXd_ZNJCq_eDwZGaF5VUOUc3NTNQSUdOTjgzVUlVT1pDTzU4WlRKRy4u).
-
-## About immutable Blob storage
-
-Immutable storage helps healthcare organizations, financial institutions, and related industries&mdash;particularly broker-dealer organizations&mdash;store data securely. Immutable storage can also be used in any scenario to protect critical data against modification or deletion.
-
-Typical applications include:
-
-- **Regulatory compliance**: Immutable storage for Azure Blob storage helps organizations address SEC 17a-4(f), CFTC 1.31(d), FINRA, and other regulations. A technical whitepaper by Cohasset Associates that details how immutable storage addresses these regulatory requirements can be downloaded from the [Microsoft Service Trust Portal](https://aka.ms/AzureWormStorage). The [Azure Trust Center](https://www.microsoft.com/trustcenter/compliance/compliance-overview) contains detailed information about our compliance certifications.
-
-- **Secure document retention**: Immutable storage for Azure Blob storage ensures that data can't be modified or deleted by any user, including users with account administrative privileges.
-
-- **Legal hold**: Immutable storage for Azure Blob storage enables users to store sensitive information that is critical to litigation or business use in a tamper-proof state for the desired duration until the hold is removed. This feature is not limited to legal use cases; it can also be thought of as an event-based hold or an enterprise lock, where data must be protected based on event triggers or corporate policy.