Updates from: 05/08/2021 03:04:47
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-b2c Microsoft Graph Operations https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/microsoft-graph-operations.md
Microsoft Graph allows you to manage resources in your Azure AD B2C directory. The following Microsoft Graph API operations are supported for the management of Azure AD B2C resources, including users, identity providers, user flows, custom policies, and policy keys. Each link in the following sections targets the corresponding page within the Microsoft Graph API reference for that operation.
+> [!NOTE]
+> You can also programmatically create an Azure AD B2C directory itself, along with the corresponding Azure resource linked to an Azure subscription. This functionality isn't exposed through the Microsoft Graph API, but through the Azure REST API. For more information, see [B2C Tenants - Create](https://docs.microsoft.com/rest/api/activedirectory/b2ctenants/create).
+ ## Prerequisites
+
+ To use the MS Graph API and interact with resources in your Azure AD B2C tenant, you need an application registration that grants the permissions to do so. Follow the steps in the [Manage Azure AD B2C with Microsoft Graph](microsoft-graph-get-started.md) article to create an application registration that your management application can use.
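+
+For illustration, here's a minimal Node.js sketch of that management pattern: it acquires an app-only token through the client credentials flow with [MSAL Node](https://github.com/AzureAD/microsoft-authentication-library-for-js/tree/dev/lib/msal-node) and lists the users in the tenant. The tenant name, client ID, and client secret shown are placeholder assumptions; replace them with the values from your own app registration.
+
+```javascript
+const msal = require('@azure/msal-node'); // npm install @azure/msal-node
+
+// Placeholder values -- replace with your B2C tenant and app registration details.
+const cca = new msal.ConfidentialClientApplication({
+  auth: {
+    clientId: '00000000-0000-0000-0000-000000000000',
+    authority: 'https://login.microsoftonline.com/contoso.onmicrosoft.com',
+    clientSecret: 'your-client-secret'
+  }
+});
+
+async function listUsers() {
+  // Acquire an app-only access token for Microsoft Graph.
+  const result = await cca.acquireTokenByClientCredential({
+    scopes: ['https://graph.microsoft.com/.default']
+  });
+
+  // Call the Microsoft Graph users endpoint with the token.
+  // Requires Node.js 18+ for the global fetch API.
+  const response = await fetch('https://graph.microsoft.com/v1.0/users', {
+    headers: { Authorization: `Bearer ${result.accessToken}` }
+  });
+  console.log(await response.json());
+}
+
+listUsers().catch(console.error);
+```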
active-directory-b2c Partner Cloudflare https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/partner-cloudflare.md
-# Tutorial: Configure Clouldflare with Azure Active Directory B2C
+# Tutorial: Configure Cloudflare with Azure Active Directory B2C
-In this sample tutorial, learn how to enable [Cloudflare Web Application Firewall (WAF)](https://www.cloudflare.com/waf/) solution for Azure Active Directory (AD) B2C tenant with custom domain. Clouldflare WAF helps organization protect against malicious attacks that aim to exploit vulnerabilities such as SQLi, and XSS.
+In this sample tutorial, learn how to enable the [Cloudflare Web Application Firewall (WAF)](https://www.cloudflare.com/waf/) solution for an Azure Active Directory (AD) B2C tenant with a custom domain. Cloudflare WAF helps organizations protect against malicious attacks that aim to exploit vulnerabilities such as SQL injection (SQLi) and cross-site scripting (XSS).
## Prerequisites
active-directory What Is Application Proxy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/app-proxy/what-is-application-proxy.md
Many organizations believe they are in control and protected when resources exis
Perhaps you're already using Azure AD to manage users in the cloud who need to access Microsoft 365 and other SaaS applications, as well as web apps hosted on-premises. If you already have Azure AD, you can leverage it as one control plane to allow seamless and secure access to your on-premises applications. Or, maybe you're still contemplating a move to the cloud. If so, you can begin your journey to the cloud by implementing Application Proxy and taking the first step towards building a strong identity foundation.
-While not comprehensive, the list below illustrates some of the things you can enable by implementing App Proxy in a hybrid coexistence scenario:
+While not comprehensive, the list below illustrates some of the things you can enable by implementing Application Proxy in a hybrid coexistence scenario:
* Publish on-premises web apps externally in a simplified way without a DMZ
* Support single sign-on (SSO) across devices, resources, and apps in the cloud and on-premises
active-directory Concept Authentication Passwordless https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/authentication/concept-authentication-passwordless.md
Previously updated : 04/29/2021 Last updated : 05/07/2021
To get started with FIDO2 security keys, complete the following how-to:
The following considerations apply: 
-- Administrators can enable passwordless authentication methods for their tenant
-- Administrators can target all users or select users/groups within their tenant for each method
-- End users can register and manage these passwordless authentication methods in their account portal
-- End users can sign in with these passwordless authentication methods:
+- Administrators can enable passwordless authentication methods for their tenant.
+
+- Administrators can target all users or select users/groups within their tenant for each method.
+
+- Users can register and manage these passwordless authentication methods in their account portal.
+
+- Users can sign in with these passwordless authentication methods:
- Microsoft Authenticator App: Works in scenarios where Azure AD authentication is used, including across all browsers, during Windows 10 setup, and with integrated mobile apps on any operating system.
- - Security keys: Work in Windows 10 setup in OOBE with or without Windows Autopilot, on lock screen for Windows 10 and the web in supported browsers like Microsoft Edge (both legacy and new Edge).
+ - Security keys: Work on lock screen for Windows 10 and the web in supported browsers like Microsoft Edge (both legacy and new Edge).
+
+- Users can use passwordless credentials to access resources in tenants where they are a guest, but they may still be required to perform MFA in that resource tenant. For more information, see [Possible double multi-factor authentication](https://docs.microsoft.com/azure/active-directory/external-identities/current-limitations#possible-double-multi-factor-authentication).
+
+- Users may not register passwordless credentials within a tenant where they are a guest, the same way that they do not have a password managed in that tenant.
+ ## Choose a passwordless method
active-directory Howto Authentication Use Email Signin https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/authentication/howto-authentication-use-email-signin.md
One of the user attributes that's automatically synchronized by Azure AD Connect
## Enable user sign-in with an email address

> [!NOTE]
-> This configuration option uses HRD policy. For more information, see [homeRealmDiscoveryPolicy resource type](https://docs.microsoft.com/graph/api/resources/homeRealmDiscoveryPolicy?view=graph-rest-1.0).
+> This configuration option uses HRD policy. For more information, see [homeRealmDiscoveryPolicy resource type](/graph/api/resources/homeRealmDiscoveryPolicy?view=graph-rest-1.0&preserve-view=true).
Once users with the *ProxyAddresses* attribute applied are synchronized to Azure AD using Azure AD Connect, you need to enable the feature for users to sign in with email as an alternate login ID for your tenant. This feature tells the Azure AD login servers to not only check the sign-in identifier against UPN values, but also against *ProxyAddresses* values for the email address.
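
The note above points at the Graph `homeRealmDiscoveryPolicy` resource. As a rough sketch of what enabling the feature programmatically could look like (the access token, policy name, and exact definition string are assumptions for illustration; the article itself walks through the supported configuration steps):

```javascript
// Sketch: create a tenant-wide HRD policy that turns on email as an alternate login ID.
// Assumes `accessToken` carries the Policy.ReadWrite.ApplicationConfiguration permission
// and that your runtime provides fetch (Node.js 18+ or a browser).
const accessToken = 'eyJ0eXAiOi...'; // placeholder

async function enableAlternateIdLogin() {
  const response = await fetch('https://graph.microsoft.com/v1.0/policies/homeRealmDiscoveryPolicies', {
    method: 'POST',
    headers: {
      Authorization: `Bearer ${accessToken}`,
      'Content-Type': 'application/json'
    },
    body: JSON.stringify({
      displayName: 'EnableEmailSignIn', // hypothetical name
      isOrganizationDefault: true,
      definition: ['{"HomeRealmDiscoveryPolicy":{"AlternateIdLogin":{"Enabled":true}}}']
    })
  });
  console.log(await response.json());
}

enableAlternateIdLogin().catch(console.error);
```
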
With the policy applied, it can take up to 1 hour to propagate and for users to
## Enable staged rollout to test user sign-in with an email address

> [!NOTE]
->This configuration option uses staged rollout policy. For more information, see [featureRolloutPolicy resource type](https://docs.microsoft.com/graph/api/resources/featurerolloutpolicy?view=graph-rest-1.0).
+>This configuration option uses staged rollout policy. For more information, see [featureRolloutPolicy resource type](https://docs.microsoft.com/graph/api/resources/featurerolloutpolicy?view=graph-rest-1.0&preserve-view=true).
Staged rollout policy allows tenant administrators to enable features for specific Azure AD groups. It is recommended that tenant administrators use staged rollout to test user sign-in with an email address. When administrators are ready to deploy this feature to their entire tenant, they should use [HRD policy](#enable-user-sign-in-with-an-email-address).
active-directory Msal Net Migration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/msal-net-migration.md
If you are already familiar with the Azure AD for developers (v1.0) endpoint (an
However, you still need to use ADAL.NET if your application needs to sign in users with earlier versions of [Active Directory Federation Services (ADFS)](/windows-server/identity/active-directory-federation-services). For more information, see [ADFS support](https://aka.ms/msal-net-adfs-support). The following picture summarizes some of the differences between ADAL.NET and MSAL.NET for a public client application
-[![Side-by-side code for public client applications](./media/msal-compare-msaldotnet-and-adaldotnet/differences.png)](./media/msal-compare-msaldotnet-and-adaldotnet/differences.png#lightbox)
+[![Screenshot showing some of the differences between ADAL.NET and MSAL.NET for a public client application.](./media/msal-compare-msaldotnet-and-adaldotnet/differences.png)](./media/msal-compare-msaldotnet-and-adaldotnet/differences.png#lightbox)
And the following picture summarizes some of the differences between ADAL.NET and MSAL.NET for a confidential client application
-[![Side-by-side code for confidential client applications](./media/msal-net-migration/confidential-client-application.png)](./media/msal-net-migration/confidential-client-application.png#lightbox)
+[![Screenshot showing some of the differences between ADAL.NET and MSAL.NET for a confidential client application.](./media/msal-net-migration/confidential-client-application.png)](./media/msal-net-migration/confidential-client-application.png#lightbox)
### NuGet packages and Namespaces
active-directory Quickstart V2 Javascript Auth Code Angular https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/quickstart-v2-javascript-auth-code-angular.md
See [How the sample works](#how-the-sample-works) for an illustration.
This quickstart uses MSAL Angular v2 with the authorization code flow. For a similar quickstart that uses MSAL Angular 1.x with the implicit flow, see [Quickstart: Sign in users in JavaScript single-page apps](./quickstart-v2-angular.md).
-> [!IMPORTANT]
-> MSAL Angular v2 [!INCLUDE [PREVIEW BOILERPLATE](../../../includes/active-directory-develop-preview.md)]
-
## Prerequisites

* Azure subscription - [Create an Azure subscription for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F)
active-directory Tutorial V2 Angular Auth Code https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/tutorial-v2-angular-auth-code.md
+
+ Title: "Tutorial: Create an Angular app that uses the Microsoft identity platform for authentication using auth code flow | Azure"
+
+description: In this tutorial, you build an Angular single-page app (SPA) using auth code flow that uses the Microsoft identity platform to sign in users and get an access token to call the Microsoft Graph API on their behalf.
+
+ Last updated : 04/14/2021
+
+# Tutorial: Sign in users and call the Microsoft Graph API from an Angular single-page application (SPA) using auth code flow
+
+In this tutorial, you build an Angular single-page application (SPA) that signs in users and calls the Microsoft Graph API by using the authorization code flow with PKCE. The SPA you build uses the Microsoft Authentication Library (MSAL) for Angular v2.
+
+In this tutorial:
+
+> [!div class="checklist"]
+> * Create an Angular project with `npm`
+> * Register the application in the Azure portal
+> * Add code to support user sign-in and sign-out
+> * Add code to call Microsoft Graph API
+> * Test the app
+
+MSAL Angular v2 improves on MSAL Angular v1 by supporting the authorization code flow in the browser instead of the implicit grant flow. MSAL Angular v2 does **NOT** support the implicit flow.
+
+## Prerequisites
+
+* [Node.js](https://nodejs.org/en/download/) for running a local web server.
+* [Visual Studio Code](https://code.visualstudio.com/download) or other editor for modifying project files.
+
+## How the sample app works
++
+The sample application created in this tutorial enables an Angular SPA to query the Microsoft Graph API or a web API that accepts tokens issued by the Microsoft identity platform. It uses the Microsoft Authentication Library (MSAL) for Angular v2, a wrapper of the MSAL.js v2 library. MSAL Angular enables Angular 9+ applications to authenticate enterprise users by using Azure Active Directory (Azure AD), and also users with Microsoft accounts and social identities like Facebook, Google, and LinkedIn. The library also enables applications to get access to Microsoft cloud services and Microsoft Graph.
+
+In this scenario, after a user signs in, an access token is requested and added to HTTP requests through the authorization header. Token acquisition and renewal are handled by MSAL.
+
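+The following minimal sketch (the service shown is hypothetical and not one of this tutorial's files) illustrates that flow: `MsalService.acquireTokenSilent` serves a cached token when possible and silently renews it when it has expired. In this tutorial, the `MsalInterceptor` configured later performs this step for you on every outgoing HTTP request.
+
+```javascript
+import { Injectable } from '@angular/core';
+import { MsalService } from '@azure/msal-angular';
+
+@Injectable({ providedIn: 'root' })
+export class TokenExampleService {
+  constructor(private authService: MsalService) { }
+
+  logAccessToken() {
+    // Use the first signed-in account for the silent request.
+    const account = this.authService.instance.getAllAccounts()[0];
+    this.authService.acquireTokenSilent({ scopes: ['user.read'], account: account })
+      .subscribe(result => {
+        // This token is what ends up in the "Authorization: Bearer <token>" header.
+        console.log(result.accessToken);
+      });
+  }
+}
+```
+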
+### Libraries
+
+This tutorial uses the following libraries:
+
+|Library|Description|
+|||
+|[MSAL Angular](https://github.com/AzureAD/microsoft-authentication-library-for-js/tree/dev/lib/msal-angular)|Microsoft Authentication Library for JavaScript Angular Wrapper|
+|[MSAL Browser](https://github.com/AzureAD/microsoft-authentication-library-for-js/tree/dev/lib/msal-browser)|Microsoft Authentication Library for JavaScript v2 browser package |
+
+You can find the source code for all of the MSAL.js libraries in the [AzureAD/microsoft-authentication-library-for-js](https://github.com/AzureAD/microsoft-authentication-library-for-js) repository on GitHub.
+
+## Create your project
+
+Once you have [Node.js](https://nodejs.org/en/download/) installed, open a terminal window and run the following commands to generate a new Angular application:
+
+```bash
+npm install -g @angular/cli # Install the Angular CLI
+ng new msal-angular-tutorial --routing=true --style=css --strict=false # Generate a new Angular app
+cd msal-angular-tutorial # Change to the app directory
+npm install @angular/material @angular/cdk # Install the Angular Material component library (optional, for UI)
+npm install @azure/msal-browser @azure/msal-angular # Install MSAL Browser and MSAL Angular in your application
+ng generate component home # To add a home page
+ng generate component profile # To add a profile page
+```
+
+## Register your application
+
+Follow the [instructions to register a single-page application](./scenario-spa-app-registration.md) in the Azure portal.
+
+On the app **Overview** page of your registration, note the **Application (client) ID** value for later use.
+
+Register the **Redirect URI** value as **http://localhost:4200/** and the type as **SPA**.
+
+## Configure the application
+
+1. In the *src/app* folder, edit *app.module.ts*, add `MsalModule` to `imports`, and add the `isIE` constant. Your code should look like this:
+
+ ```javascript
+ import { BrowserModule } from '@angular/platform-browser';
+ import { NgModule } from '@angular/core';
+
+ import { AppRoutingModule } from './app-routing.module';
+ import { AppComponent } from './app.component';
+ import { HomeComponent } from './home/home.component';
+ import { ProfileComponent } from './profile/profile.component';
+
+ import { MsalModule } from '@azure/msal-angular';
+ import { PublicClientApplication } from '@azure/msal-browser';
+
+ const isIE = window.navigator.userAgent.indexOf('MSIE ') > -1 || window.navigator.userAgent.indexOf('Trident/') > -1;
+
+ @NgModule({
+ declarations: [
+ AppComponent,
+ HomeComponent,
+ ProfileComponent
+ ],
+ imports: [
+ BrowserModule,
+ AppRoutingModule,
+ MsalModule.forRoot( new PublicClientApplication({
+ auth: {
+ clientId: 'Enter_the_Application_Id_Here', // This is your client ID
+ authority: 'Enter_the_Cloud_Instance_Id_Here/Enter_the_Tenant_Info_Here', // This is your tenant info
+ redirectUri: 'Enter_the_Redirect_Uri_Here' // This is your redirect URI
+ },
+ cache: {
+ cacheLocation: 'localStorage',
+ storeAuthStateInCookie: isIE, // Set to true for Internet Explorer 11
+ }
+ }), null, null)
+ ],
+ providers: [],
+ bootstrap: [AppComponent]
+ })
+ export class AppModule { }
+ ```
+
+ Replace these values:
+
+ |Value name|About|
+ |||
+ |Enter_the_Application_Id_Here|On the **Overview** page of your application registration, this is your **Application (client) ID** value. |
+ |Enter_the_Cloud_Instance_Id_Here|This is the instance of the Azure cloud. For the main or global Azure cloud, enter **https://login.microsoftonline.com**. For national clouds (for example, China), see [National clouds](./authentication-national-cloud.md).|
+ |Enter_the_Tenant_Info_Here| Set to one of the following options: If your application supports *accounts in this organizational directory*, replace this value with the directory (tenant) ID or tenant name (for example, **contoso.microsoft.com**). If your application supports *accounts in any organizational directory*, replace this value with **organizations**. If your application supports *accounts in any organizational directory and personal Microsoft accounts*, replace this value with **common**. To restrict support to *personal Microsoft accounts only*, replace this value with **consumers**. |
+ |Enter_the_Redirect_Uri_Here|Replace with **http://localhost:4200**.|
+
+ For more information about available configurable options, see [Initialize client applications](msal-js-initializing-client-applications.md).
+
+2. Add routes to the home and profile components in *src/app/app-routing.module.ts*. Your code should look like the following:
+
+ ```javascript
+ import { NgModule } from '@angular/core';
+ import { Routes, RouterModule } from '@angular/router';
+ import { HomeComponent } from './home/home.component';
+ import { ProfileComponent } from './profile/profile.component';
+
+ const routes: Routes = [
+ {
+ path: 'profile',
+ component: ProfileComponent,
+ },
+ {
+ path: '',
+ component: HomeComponent
+ },
+ ];
+
+ const isIframe = window !== window.parent && !window.opener;
+
+ @NgModule({
+ imports: [RouterModule.forRoot(routes, {
+ initialNavigation: !isIframe ? 'enabled' : 'disabled' // Don't perform initial navigation in iframes
+ })],
+ exports: [RouterModule]
+ })
+ export class AppRoutingModule { }
+ ```
+
+## Replace base UI
+
+1. Replace the placeholder code in *src/app/app.component.html* with the following:
+
+ ```HTML
+ <mat-toolbar color="primary">
+ <a class="title" href="/">{{ title }}</a>
+
+ <div class="toolbar-spacer"></div>
+
+ <a mat-button [routerLink]="['profile']">Profile</a>
+
+ <button mat-raised-button *ngIf="!loginDisplay" (click)="login()">Login</button>
+
+ </mat-toolbar>
+ <div class="container">
+ <!--This is to avoid reload during acquireTokenSilent() because of hidden iframe -->
+ <router-outlet *ngIf="!isIframe"></router-outlet>
+ </div>
+ ```
+
+2. Add material modules to *src/app/app.module.ts*. Your `AppModule` should look like this:
+
+ ```javascript
+ import { BrowserModule } from '@angular/platform-browser';
+ import { BrowserAnimationsModule } from '@angular/platform-browser/animations';
+ import { NgModule } from '@angular/core';
+
+ import { MatButtonModule } from '@angular/material/button';
+ import { MatToolbarModule } from '@angular/material/toolbar';
+ import { MatListModule } from '@angular/material/list';
+
+ import { AppRoutingModule } from './app-routing.module';
+ import { AppComponent } from './app.component';
+ import { HomeComponent } from './home/home.component';
+ import { ProfileComponent } from './profile/profile.component';
+
+ import { MsalModule } from '@azure/msal-angular';
+ import { PublicClientApplication } from '@azure/msal-browser';
+
+ const isIE = window.navigator.userAgent.indexOf('MSIE ') > -1 || window.navigator.userAgent.indexOf('Trident/') > -1;
+
+ @NgModule({
+ declarations: [
+ AppComponent,
+ HomeComponent,
+ ProfileComponent
+ ],
+ imports: [
+ BrowserModule,
+ BrowserAnimationsModule,
+ AppRoutingModule,
+ MatButtonModule,
+ MatToolbarModule,
+ MatListModule,
+ MsalModule.forRoot( new PublicClientApplication({
+ auth: {
+ clientId: 'Enter_the_Application_Id_Here',
+ authority: 'Enter_the_Cloud_Instance_Id_Here/Enter_the_Tenant_Info_Here',
+ redirectUri: 'Enter_the_Redirect_Uri_Here'
+ },
+ cache: {
+ cacheLocation: 'localStorage',
+ storeAuthStateInCookie: isIE,
+ }
+ }), null, null)
+ ],
+ providers: [],
+ bootstrap: [AppComponent]
+ })
+ export class AppModule { }
+ ```
+
+3. (OPTIONAL) Add CSS to *src/styles.css*:
+
+ ```css
+ @import '~@angular/material/prebuilt-themes/deeppurple-amber.css';
+
+ html, body { height: 100%; }
+ body { margin: 0; font-family: Roboto, "Helvetica Neue", sans-serif; }
+ .container { margin: 1%; }
+ ```
+
+4. (OPTIONAL) Add CSS to *src/app/app.component.css*:
+
+ ```css
+ .toolbar-spacer {
+ flex: 1 1 auto;
+ }
+
+ a.title {
+ color: white;
+ }
+ ```
+
+## Sign in a user
+
+Add the code from the following sections to invoke login using a popup window or a full-frame redirect:
+
+### Sign in using popups
+
+1. Change the code in *src/app/app.component.ts* to the following to sign in a user using a popup window:
+
+ ```javascript
+ import { MsalService } from '@azure/msal-angular';
+ import { Component, OnInit } from '@angular/core';
+
+ @Component({
+ selector: 'app-root',
+ templateUrl: './app.component.html',
+ styleUrls: ['./app.component.css']
+ })
+ export class AppComponent implements OnInit {
+ title = 'msal-angular-tutorial';
+ isIframe = false;
+ loginDisplay = false;
+
+ constructor(private authService: MsalService) { }
+
+ ngOnInit() {
+ this.isIframe = window !== window.parent && !window.opener;
+ }
+
+ login() {
+ this.authService.loginPopup()
+ .subscribe({
+ next: (result) => {
+ console.log(result);
+ this.setLoginDisplay();
+ },
+ error: (error) => console.log(error)
+ });
+ }
+
+ setLoginDisplay() {
+ this.loginDisplay = this.authService.instance.getAllAccounts().length > 0;
+ }
+ }
+ ```
+
+> [!NOTE]
+> The rest of this tutorial uses the `loginRedirect` method with Microsoft Internet Explorer because of a [known issue](https://github.com/AzureAD/microsoft-authentication-library-for-js/blob/dev/lib/msal-browser/docs/internet-explorer.md) related to the handling of pop-up windows by Internet Explorer.
+
+### Sign in using redirects
+
+1. Update *src/app/app.module.ts* to bootstrap the `MsalRedirectComponent`, a dedicated component that handles redirects. Your code should now look like this:
+
+ ```javascript
+ import { BrowserModule } from '@angular/platform-browser';
+ import { BrowserAnimationsModule } from '@angular/platform-browser/animations';
+ import { NgModule } from '@angular/core';
+
+ import { MatButtonModule } from '@angular/material/button';
+ import { MatToolbarModule } from '@angular/material/toolbar';
+ import { MatListModule } from '@angular/material/list';
+
+ import { AppRoutingModule } from './app-routing.module';
+ import { AppComponent } from './app.component';
+ import { HomeComponent } from './home/home.component';
+ import { ProfileComponent } from './profile/profile.component';
+
+ import { MsalModule, MsalRedirectComponent } from '@azure/msal-angular'; // Updated import
+ import { PublicClientApplication } from '@azure/msal-browser';
+
+ const isIE = window.navigator.userAgent.indexOf('MSIE ') > -1 || window.navigator.userAgent.indexOf('Trident/') > -1;
+
+ @NgModule({
+ declarations: [
+ AppComponent,
+ HomeComponent,
+ ProfileComponent
+ ],
+ imports: [
+ BrowserModule,
+ BrowserAnimationsModule,
+ AppRoutingModule,
+ MatButtonModule,
+ MatToolbarModule,
+ MatListModule,
+ MsalModule.forRoot( new PublicClientApplication({
+ auth: {
+ clientId: 'Enter_the_Application_Id_Here',
+ authority: 'Enter_the_Cloud_Instance_Id_Here/Enter_the_Tenant_Info_Here',
+ redirectUri: 'Enter_the_Redirect_Uri_Here'
+ },
+ cache: {
+ cacheLocation: 'localStorage',
+ storeAuthStateInCookie: isIE,
+ }
+ }), null, null)
+ ],
+ providers: [],
+ bootstrap: [AppComponent, MsalRedirectComponent] // MsalRedirectComponent bootstrapped here
+ })
+ export class AppModule { }
+ ```
+
+2. Add the `<app-redirect>` selector to *src/index.html*. This selector is used by the `MsalRedirectComponent`. Your *src/index.html* should look like this:
+
+ ```HTML
+ <!doctype html>
+ <html lang="en">
+ <head>
+ <meta charset="utf-8">
+ <title>msal-angular-tutorial</title>
+ <base href="/">
+ <meta name="viewport" content="width=device-width, initial-scale=1">
+ <link rel="icon" type="image/x-icon" href="favicon.ico">
+ </head>
+ <body>
+ <app-root></app-root>
+ <app-redirect></app-redirect>
+ </body>
+ </html>
+ ```
+
+3. Replace the code in *src/app/app.component.ts* with the following to sign in a user using a full-frame redirect:
+
+ ```javascript
+ import { MsalService } from '@azure/msal-angular';
+ import { Component, OnInit } from '@angular/core';
+
+ @Component({
+ selector: 'app-root',
+ templateUrl: './app.component.html',
+ styleUrls: ['./app.component.css']
+ })
+ export class AppComponent implements OnInit {
+ title = 'msal-angular-tutorial';
+ isIframe = false;
+ loginDisplay = false;
+
+ constructor(private authService: MsalService) { }
+
+ ngOnInit() {
+ this.isIframe = window !== window.parent && !window.opener;
+ }
+
+ login() {
+ this.authService.loginRedirect();
+ }
+
+ setLoginDisplay() {
+ this.loginDisplay = this.authService.instance.getAllAccounts().length > 0;
+ }
+ }
+ ```
+
+4. Replace the existing code in *src/app/home/home.component.ts* to subscribe to the `LOGIN_SUCCESS` event, which lets you access the result of a successful login with redirect. Your code should look like this:
+
+ ```javascript
+ import { Component, OnInit } from '@angular/core';
+ import { MsalBroadcastService, MsalService } from '@azure/msal-angular';
+ import { EventMessage, EventType, InteractionStatus } from '@azure/msal-browser';
+ import { filter } from 'rxjs/operators';
+
+ @Component({
+ selector: 'app-home',
+ templateUrl: './home.component.html',
+ styleUrls: ['./home.component.css']
+ })
+ export class HomeComponent implements OnInit {
+ constructor(private authService: MsalService, private msalBroadcastService: MsalBroadcastService) { }
+
+ ngOnInit(): void {
+ this.msalBroadcastService.msalSubject$
+ .pipe(
+ filter((msg: EventMessage) => msg.eventType === EventType.LOGIN_SUCCESS),
+ )
+ .subscribe((result: EventMessage) => {
+ console.log(result);
+ });
+ }
+ }
+ ```
+
+## Conditional rendering
+
+To render certain UI only for authenticated users, components must subscribe to the `MsalBroadcastService` to see whether users have been signed in and interaction has completed.
+
+1. Add the `MsalBroadcastService` to *src/app/app.component.ts* and subscribe to the `inProgress$` observable to check if interaction is complete and an account is signed in before rendering UI. Your code should now look like this:
+
+ ```javascript
+ import { Component, OnInit, OnDestroy } from '@angular/core';
+ import { MsalService, MsalBroadcastService } from '@azure/msal-angular';
+ import { InteractionStatus } from '@azure/msal-browser';
+ import { Subject } from 'rxjs';
+ import { filter, takeUntil } from 'rxjs/operators';
+
+ @Component({
+ selector: 'app-root',
+ templateUrl: './app.component.html',
+ styleUrls: ['./app.component.css']
+ })
+ export class AppComponent implements OnInit, OnDestroy {
+ title = 'msal-angular-tutorial';
+ isIframe = false;
+ loginDisplay = false;
+ private readonly _destroying$ = new Subject<void>();
+
+ constructor(private broadcastService: MsalBroadcastService, private authService: MsalService) { }
+
+ ngOnInit() {
+ this.isIframe = window !== window.parent && !window.opener;
+
+ this.broadcastService.inProgress$
+ .pipe(
+ filter((status: InteractionStatus) => status === InteractionStatus.None),
+ takeUntil(this._destroying$)
+ )
+ .subscribe(() => {
+ this.setLoginDisplay();
+ })
+ }
+
+ login() {
+ this.authService.loginRedirect();
+ }
+
+ setLoginDisplay() {
+ this.loginDisplay = this.authService.instance.getAllAccounts().length > 0;
+ }
+
+ ngOnDestroy(): void {
+ this._destroying$.next(undefined);
+ this._destroying$.complete();
+ }
+ }
+ ```
+
+2. Update the code in *src/app/home/home.component.ts* to also check for interaction to be completed before updating UI. Your code should now look like this:
+
+ ```javascript
+ import { Component, OnInit } from '@angular/core';
+ import { MsalBroadcastService, MsalService } from '@azure/msal-angular';
+ import { EventMessage, EventType, InteractionStatus } from '@azure/msal-browser';
+ import { filter } from 'rxjs/operators';
+
+ @Component({
+ selector: 'app-home',
+ templateUrl: './home.component.html',
+ styleUrls: ['./home.component.css']
+ })
+ export class HomeComponent implements OnInit {
+ loginDisplay = false;
+
+ constructor(private authService: MsalService, private msalBroadcastService: MsalBroadcastService) { }
+
+ ngOnInit(): void {
+ this.msalBroadcastService.msalSubject$
+ .pipe(
+ filter((msg: EventMessage) => msg.eventType === EventType.LOGIN_SUCCESS),
+ )
+ .subscribe((result: EventMessage) => {
+ console.log(result);
+ });
+
+ this.msalBroadcastService.inProgress$
+ .pipe(
+ filter((status: InteractionStatus) => status === InteractionStatus.None)
+ )
+ .subscribe(() => {
+ this.setLoginDisplay();
+ })
+ }
+
+ setLoginDisplay() {
+ this.loginDisplay = this.authService.instance.getAllAccounts().length > 0;
+ }
+ }
+ ```
+
+3. Replace the code in *src/app/home/home.component.html* with the following conditional displays:
+
+ ```HTML
+ <div *ngIf="!loginDisplay">
+ <p>Please sign-in to see your profile information.</p>
+ </div>
+
+ <div *ngIf="loginDisplay">
+ <p>Login successful!</p>
+ <p>Request your profile information by clicking Profile above.</p>
+ </div>
+ ```
+
+## Guarding routes
+
+### Angular Guard
+
+MSAL Angular provides `MsalGuard`, a class you can use to protect routes and require authentication before accessing the protected route. The steps below add `MsalGuard` to the `Profile` route. Protecting the `Profile` route means that even if a user doesn't sign in by using the **Login** button, `MsalGuard` prompts them to authenticate via popup or redirect before showing the `Profile` page whenever they try to access the `Profile` route or select the **Profile** button.
+
+`MsalGuard` is a convenience class you can use to improve the user experience, but it shouldn't be relied upon for security. Attackers can potentially get around client-side guards, and you should ensure that the server doesn't return any data the user shouldn't access.
+
+1. Add the `MsalGuard` class as a provider in your application in *src/app/app.module.ts*, and add the configurations for the `MsalGuard`. Scopes needed for acquiring tokens later can be provided in the `authRequest`, and the type of interaction for the Guard can be set to `Redirect` or `Popup`. Your code should look like the following:
+
+ ```javascript
+ import { BrowserModule } from '@angular/platform-browser';
+ import { BrowserAnimationsModule } from '@angular/platform-browser/animations';
+ import { NgModule } from '@angular/core';
+
+ import { MatButtonModule } from '@angular/material/button';
+ import { MatToolbarModule } from '@angular/material/toolbar';
+ import { MatListModule } from '@angular/material/list';
+
+ import { AppRoutingModule } from './app-routing.module';
+ import { AppComponent } from './app.component';
+ import { HomeComponent } from './home/home.component';
+ import { ProfileComponent } from './profile/profile.component';
+
+ import { MsalModule, MsalRedirectComponent, MsalGuard } from '@azure/msal-angular'; // MsalGuard added to imports
+ import { PublicClientApplication, InteractionType } from '@azure/msal-browser'; // InteractionType added to imports
+
+ const isIE = window.navigator.userAgent.indexOf('MSIE ') > -1 || window.navigator.userAgent.indexOf('Trident/') > -1;
+
+ @NgModule({
+ declarations: [
+ AppComponent,
+ HomeComponent,
+ ProfileComponent
+ ],
+ imports: [
+ BrowserModule,
+ BrowserAnimationsModule,
+ AppRoutingModule,
+ MatButtonModule,
+ MatToolbarModule,
+ MatListModule,
+ MsalModule.forRoot( new PublicClientApplication({
+ auth: {
+ clientId: 'Enter_the_Application_Id_Here',
+ authority: 'Enter_the_Cloud_Instance_Id_Here/Enter_the_Tenant_Info_Here',
+ redirectUri: 'Enter_the_Redirect_Uri_Here'
+ },
+ cache: {
+ cacheLocation: 'localStorage',
+ storeAuthStateInCookie: isIE,
+ }
+ }), {
+ interactionType: InteractionType.Redirect, // MSAL Guard Configuration
+ authRequest: {
+ scopes: ['user.read']
+ }
+ }, null)
+ ],
+ providers: [
+ MsalGuard // MsalGuard added as provider here
+ ],
+ bootstrap: [AppComponent, MsalRedirectComponent]
+ })
+ export class AppModule { }
+ ```
+
+2. Set the `MsalGuard` on the routes you wish to protect in *src/app/app-routing.module.ts*:
+
+ ```javascript
+ import { NgModule } from '@angular/core';
+ import { Routes, RouterModule } from '@angular/router';
+ import { HomeComponent } from './home/home.component';
+ import { ProfileComponent } from './profile/profile.component';
+ import { MsalGuard } from '@azure/msal-angular';
+
+ const routes: Routes = [
+ {
+ path: 'profile',
+ component: ProfileComponent,
+ canActivate: [MsalGuard]
+ },
+ {
+ path: '',
+ component: HomeComponent
+ },
+ ];
+
+ const isIframe = window !== window.parent && !window.opener;
+
+ @NgModule({
+ imports: [RouterModule.forRoot(routes, {
+ initialNavigation: !isIframe ? 'enabled' : 'disabled' // Don't perform initial navigation in iframes
+ })],
+ exports: [RouterModule]
+ })
+ export class AppRoutingModule { }
+ ```
+
+3. Adjust the login calls in *src/app/app.component.ts* to take the `authRequest` set in the guard configurations into account. Your code should now look like the following:
+
+ ```javascript
+ import { Component, OnInit, OnDestroy, Inject } from '@angular/core';
+ import { MsalService, MsalBroadcastService, MSAL_GUARD_CONFIG, MsalGuardConfiguration } from '@azure/msal-angular';
+ import { InteractionStatus, RedirectRequest } from '@azure/msal-browser';
+ import { Subject } from 'rxjs';
+ import { filter, takeUntil } from 'rxjs/operators';
+
+ @Component({
+ selector: 'app-root',
+ templateUrl: './app.component.html',
+ styleUrls: ['./app.component.css']
+ })
+ export class AppComponent implements OnInit, OnDestroy {
+ title = 'msal-angular-tutorial';
+ isIframe = false;
+ loginDisplay = false;
+ private readonly _destroying$ = new Subject<void>();
+
+ constructor(@Inject(MSAL_GUARD_CONFIG) private msalGuardConfig: MsalGuardConfiguration, private broadcastService: MsalBroadcastService, private authService: MsalService) { }
+
+ ngOnInit() {
+ this.isIframe = window !== window.parent && !window.opener;
+
+ this.broadcastService.inProgress$
+ .pipe(
+ filter((status: InteractionStatus) => status === InteractionStatus.None),
+ takeUntil(this._destroying$)
+ )
+ .subscribe(() => {
+ this.setLoginDisplay();
+ })
+ }
+
+ login() {
+ if (this.msalGuardConfig.authRequest){
+ this.authService.loginRedirect({...this.msalGuardConfig.authRequest} as RedirectRequest);
+ } else {
+ this.authService.loginRedirect();
+ }
+ }
+
+ setLoginDisplay() {
+ this.loginDisplay = this.authService.instance.getAllAccounts().length > 0;
+ }
+
+ ngOnDestroy(): void {
+ this._destroying$.next(undefined);
+ this._destroying$.complete();
+ }
+ }
+ ```
+
+## Acquire a token
+
+### Angular Interceptor
+
+MSAL Angular provides `MsalInterceptor`, an interceptor class that automatically acquires tokens for outgoing requests that use the Angular `HttpClient` to known protected resources.
+
+1. Add the `MsalInterceptor` class as a provider to your application in *src/app/app.module.ts*, with its configurations. Your code should now look like this:
+
+ ```javascript
+ import { BrowserModule } from '@angular/platform-browser';
+ import { BrowserAnimationsModule } from '@angular/platform-browser/animations';
+ import { NgModule } from '@angular/core';
+ import { HTTP_INTERCEPTORS, HttpClientModule } from "@angular/common/http"; // Import
+
+ import { MatButtonModule } from '@angular/material/button';
+ import { MatToolbarModule } from '@angular/material/toolbar';
+ import { MatListModule } from '@angular/material/list';
+
+ import { AppRoutingModule } from './app-routing.module';
+ import { AppComponent } from './app.component';
+ import { HomeComponent } from './home/home.component';
+ import { ProfileComponent } from './profile/profile.component';
+
+ import { MsalModule, MsalRedirectComponent, MsalGuard, MsalInterceptor } from '@azure/msal-angular'; // Import MsalInterceptor
+ import { InteractionType, PublicClientApplication } from '@azure/msal-browser';
+
+ const isIE = window.navigator.userAgent.indexOf('MSIE ') > -1 || window.navigator.userAgent.indexOf('Trident/') > -1;
+
+ @NgModule({
+ declarations: [
+ AppComponent,
+ HomeComponent,
+ ProfileComponent
+ ],
+ imports: [
+ BrowserModule,
+ BrowserAnimationsModule,
+ AppRoutingModule,
+ MatButtonModule,
+ MatToolbarModule,
+ MatListModule,
+ HttpClientModule,
+ MsalModule.forRoot( new PublicClientApplication({
+ auth: {
+ clientId: 'Enter_the_Application_Id_Here',
+ authority: 'Enter_the_Cloud_Instance_Id_Here/Enter_the_Tenant_Info_Here',
+ redirectUri: 'Enter_the_Redirect_Uri_Here',
+ },
+ cache: {
+ cacheLocation: 'localStorage',
+ storeAuthStateInCookie: isIE,
+ }
+ }), {
+ interactionType: InteractionType.Redirect,
+ authRequest: {
+ scopes: ['user.read']
+ }
+ }, {
+ interactionType: InteractionType.Redirect, // MSAL Interceptor Configuration
+ protectedResourceMap: new Map([
+ ['Enter_the_Graph_Endpoint_Here/v1.0/me', ['user.read']]
+ ])
+ })
+ ],
+ providers: [
+ {
+ provide: HTTP_INTERCEPTORS,
+ useClass: MsalInterceptor,
+ multi: true
+ },
+ MsalGuard
+ ],
+ bootstrap: [AppComponent, MsalRedirectComponent]
+ })
+ export class AppModule { }
+
+ ```
+
+ The protected resources are provided as a `protectedResourceMap`. The URLs you provide in the `protectedResourceMap` collection are case-sensitive. For each resource, add the scopes that should be requested in the access token.
+
+ For example:
+
+ * `["user.read"]` for Microsoft Graph
+ * `["<Application ID URL>/scope"]` for custom web APIs (that is, `api://<Application ID>/access_as_user`)
+
+ Modify the values in the `protectedResourceMap` as described here:
+
+ |Value name| About|
+ |-||
+ |`Enter_the_Graph_Endpoint_Here`| The instance of the Microsoft Graph API the application should communicate with. For the **global** Microsoft Graph API endpoint, replace both instances of this string with `https://graph.microsoft.com`. For endpoints in **national** cloud deployments, see [National cloud deployments](/graph/deployments) in the Microsoft Graph documentation.|
+
+2. Replace the code in *src/app/profile/profile.component.ts* to retrieve a user's profile with an HTTP request:
+
+ ```JavaScript
+ import { Component, OnInit } from '@angular/core';
+ import { HttpClient } from '@angular/common/http';
+
+ const GRAPH_ENDPOINT = 'Enter_the_Graph_Endpoint_Here/v1.0/me';
+
+ type ProfileType = {
+ givenName?: string,
+ surname?: string,
+ userPrincipalName?: string,
+ id?: string
+ };
+
+ @Component({
+ selector: 'app-profile',
+ templateUrl: './profile.component.html',
+ styleUrls: ['./profile.component.css']
+ })
+ export class ProfileComponent implements OnInit {
+ profile!: ProfileType;
+
+ constructor(
+ private http: HttpClient
+ ) { }
+
+ ngOnInit() {
+ this.getProfile();
+ }
+
+ getProfile() {
+ this.http.get(GRAPH_ENDPOINT)
+ .subscribe(profile => {
+ this.profile = profile;
+ });
+ }
+ }
+ ```
+
+3. Replace the UI in *src/app/profile/profile.component.html* to display profile information:
+
+ ```HTML
+ <div>
+ <p><strong>First Name: </strong> {{profile?.givenName}}</p>
+ <p><strong>Last Name: </strong> {{profile?.surname}}</p>
+ <p><strong>Email: </strong> {{profile?.userPrincipalName}}</p>
+ <p><strong>Id: </strong> {{profile?.id}}</p>
+ </div>
+ ```
+
+## Sign out
+
+Update the code in *src/app/app.component.html* to conditionally display a `Logout` button:
+
+```HTML
+<mat-toolbar color="primary">
+ <a class="title" href="/">{{ title }}</a>
+
+ <div class="toolbar-spacer"></div>
+
+ <a mat-button [routerLink]="['profile']">Profile</a>
+
+ <button mat-raised-button *ngIf="!loginDisplay" (click)="login()">Login</button>
+ <button mat-raised-button *ngIf="loginDisplay" (click)="logout()">Logout</button>
+
+</mat-toolbar>
+<div class="container">
+ <!--This is to avoid reload during acquireTokenSilent() because of hidden iframe -->
+ <router-outlet *ngIf="!isIframe"></router-outlet>
+</div>
+```
+
+### Sign out using redirects
+
+Update the code in *src/app/app.component.ts* to sign out a user using redirects:
+
+```javascript
+import { Component, OnInit, OnDestroy, Inject } from '@angular/core';
+import { MsalService, MsalBroadcastService, MSAL_GUARD_CONFIG, MsalGuardConfiguration } from '@azure/msal-angular';
+import { InteractionStatus, RedirectRequest } from '@azure/msal-browser';
+import { Subject } from 'rxjs';
+import { filter, takeUntil } from 'rxjs/operators';
+
+@Component({
+ selector: 'app-root',
+ templateUrl: './app.component.html',
+ styleUrls: ['./app.component.css']
+})
+export class AppComponent implements OnInit, OnDestroy {
+ title = 'msal-angular-tutorial';
+ isIframe = false;
+ loginDisplay = false;
+ private readonly _destroying$ = new Subject<void>();
+
+ constructor(@Inject(MSAL_GUARD_CONFIG) private msalGuardConfig: MsalGuardConfiguration, private broadcastService: MsalBroadcastService, private authService: MsalService) { }
+
+ ngOnInit() {
+ this.isIframe = window !== window.parent && !window.opener;
+
+ this.broadcastService.inProgress$
+ .pipe(
+ filter((status: InteractionStatus) => status === InteractionStatus.None),
+ takeUntil(this._destroying$)
+ )
+ .subscribe(() => {
+ this.setLoginDisplay();
+ })
+ }
+
+ login() {
+ if (this.msalGuardConfig.authRequest){
+ this.authService.loginRedirect({...this.msalGuardConfig.authRequest} as RedirectRequest);
+ } else {
+ this.authService.loginRedirect();
+ }
+ }
+
+ logout() { // Add log out function here
+ this.authService.logoutRedirect({
+ postLogoutRedirectUri: 'http://localhost:4200'
+ });
+ }
+
+ setLoginDisplay() {
+ this.loginDisplay = this.authService.instance.getAllAccounts().length > 0;
+ }
+
+ ngOnDestroy(): void {
+ this._destroying$.next(undefined);
+ this._destroying$.complete();
+ }
+}
+```
+
+### Sign out using popups
+
+Update the code in *src/app/app.component.ts* to sign out a user using popups:
+
+```javascript
+import { Component, OnInit, OnDestroy, Inject } from '@angular/core';
+import { MsalService, MsalBroadcastService, MSAL_GUARD_CONFIG, MsalGuardConfiguration } from '@azure/msal-angular';
+import { InteractionStatus, PopupRequest } from '@azure/msal-browser';
+import { Subject } from 'rxjs';
+import { filter, takeUntil } from 'rxjs/operators';
+
+@Component({
+ selector: 'app-root',
+ templateUrl: './app.component.html',
+ styleUrls: ['./app.component.css']
+})
+export class AppComponent implements OnInit, OnDestroy {
+ title = 'msal-angular-tutorial';
+ isIframe = false;
+ loginDisplay = false;
+ private readonly _destroying$ = new Subject<void>();
+
+ constructor(@Inject(MSAL_GUARD_CONFIG) private msalGuardConfig: MsalGuardConfiguration, private broadcastService: MsalBroadcastService, private authService: MsalService) { }
+
+ ngOnInit() {
+ this.isIframe = window !== window.parent && !window.opener;
+
+ this.broadcastService.inProgress$
+ .pipe(
+ filter((status: InteractionStatus) => status === InteractionStatus.None),
+ takeUntil(this._destroying$)
+ )
+ .subscribe(() => {
+ this.setLoginDisplay();
+ })
+ }
+
+ login() {
+ if (this.msalGuardConfig.authRequest){
+ this.authService.loginPopup({...this.msalGuardConfig.authRequest} as PopupRequest)
+ .subscribe({
+ next: (result) => {
+ console.log(result);
+ this.setLoginDisplay();
+ },
+ error: (error) => console.log(error)
+ });
+ } else {
+ this.authService.loginPopup()
+ .subscribe({
+ next: (result) => {
+ console.log(result);
+ this.setLoginDisplay();
+ },
+ error: (error) => console.log(error)
+ });
+ }
+ }
+
+ logout() { // Add log out function here
+ this.authService.logoutPopup({
+ mainWindowRedirectUri: "/"
+ });
+ }
+
+ setLoginDisplay() {
+ this.loginDisplay = this.authService.instance.getAllAccounts().length > 0;
+ }
+
+ ngOnDestroy(): void {
+ this._destroying$.next(undefined);
+ this._destroying$.complete();
+ }
+}
+```
+
+## Test your code
+
+1. Start the web server by running the following commands at a command-line prompt from the application folder:
+
+ ```bash
+ npm install
+ npm start
+ ```
+1. In your browser, enter **http://localhost:4200** or **http://localhost:{port}**, where *port* is the port that your web server is listening on. You should see a page that looks like the one below.
+
+ :::image type="content" source="media/tutorial-v2-angular-auth-code/angular-01-not-signed-in.png" alt-text="Web browser displaying sign-in dialog":::
++
+### Provide consent for application access
+
+The first time that you start to sign in to your application, you're prompted to grant it access to your profile and allow it to sign you in:
++
+If you consent to the requested permissions, the web application shows a successful login page:
++
+### Call the Graph API
+
+After you sign in, select **Profile** to view the user profile information returned in the response from the call to the Microsoft Graph API:
++
+## Add scopes and delegated permissions
+
+The Microsoft Graph API requires the _User.Read_ scope to read a user's profile. The _User.Read_ scope is added automatically to every app registration you create in the Azure portal. Other APIs for Microsoft Graph, as well as custom APIs for your back-end server, might require additional scopes. For example, the Microsoft Graph API requires the _Mail.Read_ scope in order to list the user's email.
+
+As you add scopes, your users might be prompted to provide additional consent for the added scopes.
+
+>[!NOTE]
+>The user might be prompted for additional consents as you increase the number of scopes.
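+
+As a sketch of what such an addition could look like (the messages URL and scope are illustrative assumptions, not a step in this tutorial), the `MsalInterceptor` configuration in *src/app/app.module.ts* could be extended so that calls to the mail endpoint request _Mail.Read_:
+
+```javascript
+import { InteractionType } from '@azure/msal-browser';
+
+// Sketch of an extended MsalInterceptor configuration: requests to the
+// messages endpoint would carry the Mail.Read scope in the access token.
+export const msalInterceptorConfig = {
+  interactionType: InteractionType.Redirect,
+  protectedResourceMap: new Map([
+    ['Enter_the_Graph_Endpoint_Here/v1.0/me', ['user.read']],
+    ['Enter_the_Graph_Endpoint_Here/v1.0/me/messages', ['mail.read']] // hypothetical addition
+  ])
+};
+```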
++
+## Next steps
+
+Delve deeper into single-page application (SPA) development on the Microsoft identity platform in the following multi-part article series.
+
+> [!div class="nextstepaction"]
+> [Scenario: Single-page application](scenario-spa-overview.md)
active-directory V2 Permissions And Consent https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/v2-permissions-and-consent.md
An app most commonly requests these permissions by specifying the scopes in requ
The Microsoft identity platform supports two types of permissions: *delegated permissions* and *application permissions*.
-* **Delegated permissions** are used by apps that have a signed-in user present. For these apps, either the user or an administrator consents to the permissions that the app requests. The app is delegated permission to act as the signed-in user when it makes calls to the target resource.
+* **Delegated permissions** are used by apps that have a signed-in user present. For these apps, either the user or an administrator consents to the permissions that the app requests. The app is delegated with the permission to act as a signed-in user when it makes calls to the target resource.
Some delegated permissions can be consented to by nonadministrators. But some high-privileged permissions require [administrator consent](#admin-restricted-permissions). To learn which administrator roles can consent to delegated permissions, see [Administrator role permissions in Azure Active Directory (Azure AD)](../roles/permissions-reference.md).
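
For example, here's a minimal sketch of how an app built on MSAL.js might request delegated permissions at sign-in (the client ID is a placeholder assumption; *User.Read* and *Mail.Read* are Microsoft Graph scopes):

```javascript
import { PublicClientApplication } from '@azure/msal-browser';

// Placeholder registration value -- replace with your app's client ID.
const pca = new PublicClientApplication({
  auth: { clientId: '00000000-0000-0000-0000-000000000000' }
});

// The delegated permissions the app wants appear as scopes in the request.
// The user (or an admin) is asked to consent to any that haven't been granted yet.
pca.loginRedirect({ scopes: ['User.Read', 'Mail.Read'] }).catch(console.error);
```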
active-directory Users Revoke Access https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/enterprise-users/users-revoke-access.md
Most browser-based applications use session tokens instead of access and refresh
- When a user opens a browser and authenticates to an application via Azure AD, the user receives two session tokens. One from Azure AD and another from the application. 
-- Once an application issues its own session token, access to the application is governed by the applicationΓÇÖs session. At this point, the user is affected by only the authorization policies that the application is aware of.
+- Once an application issues its own session token, access to the application is governed by the application's session. At this point, the user is affected by only the authorization policies that the application is aware of.
- The authorization policies of Azure AD are reevaluated as often as the application sends the user back to Azure AD. Reevaluation usually happens silently, though the frequency depends on how the application is configured. It's possible that the app may never send the user back to Azure AD as long as the session token is valid. 
-- For a session token to be revoked, the application must revoke access based on its own authorization policies. Azure AD canΓÇÖt directly revoke a session token issued by an application.
+- For a session token to be revoked, the application must revoke access based on its own authorization policies. Azure AD can't directly revoke a session token issued by an application.
## Revoke access for a user in the hybrid environment
For a hybrid environment with on-premises Active Directory synchronized with Azu
As an admin in the Active Directory, connect to your on-premises network, open PowerShell, and take the following actions:
-1. Disable the user in Active Directory. Refer to [Disable-ADAccount](/powershell/module/activedirectory/disable-adaccount?view=win10-ps).
+1. Disable the user in Active Directory. Refer to [Disable-ADAccount](/powershell/module/activedirectory/disable-adaccount).
```PowerShell
Disable-ADAccount -Identity johndoe
```
-2. Reset the userΓÇÖs password twice in the Active Directory. Refer to [Set-ADAccountPassword](/powershell/module/activedirectory/set-adaccountpassword?view=win10-ps).
+2. Reset the user's password twice in the Active Directory. Refer to [Set-ADAccountPassword](/powershell/module/activedirectory/set-adaccountpassword).
> [!NOTE]
- > The reason for changing a userΓÇÖs password twice is to mitigate the risk of pass-the-hash, especially if there are delays in on-premises password replication. If you can safely assume this account isn't compromised, you may reset the password only once.
+ > The reason for changing a user's password twice is to mitigate the risk of pass-the-hash, especially if there are delays in on-premises password replication. If you can safely assume this account isn't compromised, you may reset the password only once.
> [!IMPORTANT] 
> Don't use the example passwords in the following cmdlets. Be sure to change the passwords to a random string.
As an admin in the Active Directory, connect to your on-premises network, open P
As an administrator in Azure Active Directory, open PowerShell, run ``Connect-AzureAD``, and take the following actions:
-1. Disable the user in Azure AD. Refer to [Set-AzureADUser](/powershell/module/azuread/Set-AzureADUser?view=azureadps-2.0).
+1. Disable the user in Azure AD. Refer to [Set-AzureADUser](/powershell/module/azuread/Set-AzureADUser).
```PowerShell
Set-AzureADUser -ObjectId johndoe@contoso.com -AccountEnabled $false
```
-2. Revoke the userΓÇÖs Azure AD refresh tokens. Refer to [Revoke-AzureADUserAllRefreshToken](/powershell/module/azuread/revoke-azureaduserallrefreshtoken?view=azureadps-2.0).
+2. Revoke the user's Azure AD refresh tokens. Refer to [Revoke-AzureADUserAllRefreshToken](/powershell/module/azuread/revoke-azureaduserallrefreshtoken).
```PowerShell
Revoke-AzureADUserAllRefreshToken -ObjectId johndoe@contoso.com
```
-3. Disable the userΓÇÖs devices. Refer to [Get-AzureADUserRegisteredDevice](/powershell/module/azuread/get-azureaduserregistereddevice?view=azureadps-2.0).
+3. Disable the user's devices. Refer to [Get-AzureADUserRegisteredDevice](/powershell/module/azuread/get-azureaduserregistereddevice).
```PowerShell
Get-AzureADUserRegisteredDevice -ObjectId johndoe@contoso.com | Set-AzureADDevice -AccountEnabled $false
```
Once admins have taken the above steps, the user can't gain new tokens for any a
- For **applications using access tokens**, the user loses access when the access token expires. 
-- For **applications that use session tokens**, the existing sessions end as soon as the token expires. If the disabled state of the user is synchronized to the application, the application can automatically revoke the userΓÇÖs existing sessions if it's configured to do so. The time it takes depends on the frequency of synchronization between the application and Azure AD.
+- For **applications that use session tokens**, the existing sessions end as soon as the token expires. If the disabled state of the user is synchronized to the application, the application can automatically revoke the user's existing sessions if it's configured to do so. The time it takes depends on the frequency of synchronization between the application and Azure AD.
## Best practices 

-- Deploy an automated provisioning and deprovisioning solution. Deprovisioning users from applications is an effective way of revoking access, especially for applications that use sessions tokens. Develop a process to deprovision users to apps that donΓÇÖt support automatic provisioning and deprovisioning. Ensure applications revoke their own session tokens and stop accepting Azure AD access tokens even if theyΓÇÖre still valid.
+- Deploy an automated provisioning and deprovisioning solution. Deprovisioning users from applications is an effective way of revoking access, especially for applications that use sessions tokens. Develop a process to deprovision users to apps that don't support automatic provisioning and deprovisioning. Ensure applications revoke their own session tokens and stop accepting Azure AD access tokens even if they're still valid.
- Use [Azure AD SaaS App Provisioning](../app-provisioning/user-provisioning.md). Azure AD SaaS App Provisioning typically runs automatically every 20-40 minutes. [Configure Azure AD provisioning](../saas-apps/tutorial-list.md) to deprovision or deactivate disabled users in applications.
- - For applications that donΓÇÖt use Azure AD SaaS App Provisioning, use [Identity Manager (MIM)](/microsoft-identity-manager/mim-how-provision-users-adds) or a 3rd party solution to automate the deprovisioning of users.
+ - For applications that don't use Azure AD SaaS App Provisioning, use [Identity Manager (MIM)](/microsoft-identity-manager/mim-how-provision-users-adds) or a 3rd party solution to automate the deprovisioning of users.
- Identify and develop a process for applications that requires manual deprovisioning. Ensure admins can quickly run the required manual tasks to deprovision the user from these apps when needed. 
-- [Manage your devices and applications with Microsoft Intune](/mem/intune/remote-actions/device-management). Intune-managed [devices can be reset to factory settings](/mem/intune/remote-actions/devices-wipe). If the device is unmanaged, you can [wipe the corporate data from managed apps](/mem/intune/apps/apps-selective-wipe). These processes are effective for removing potentially sensitive data from end usersΓÇÖ devices. However, for either process to be triggered, the device must be connected to the internet. If the device is offline, the device will still have access to any locally stored data.
+- [Manage your devices and applications with Microsoft Intune](/mem/intune/remote-actions/device-management). Intune-managed [devices can be reset to factory settings](/mem/intune/remote-actions/devices-wipe). If the device is unmanaged, you can [wipe the corporate data from managed apps](/mem/intune/apps/apps-selective-wipe). These processes are effective for removing potentially sensitive data from end users' devices. However, for either process to be triggered, the device must be connected to the internet. If the device is offline, the device will still have access to any locally stored data.
> [!NOTE] 
> Data on the device cannot be recovered after a wipe.
Once admins have taken the above steps, the user can't gain new tokens for any a
- [Secure access practices for Azure AD administrators](../roles/security-planning.md) 
- [Add or update user profile information](../fundamentals/active-directory-users-profile-azure-portal.md)
-- [Remove or Delete a former employee](https://docs.microsoft.com/microsoft-365/admin/add-users/remove-former-employee?view=o365-worldwide)
+- [Remove or Delete a former employee](/microsoft-365/admin/add-users/remove-former-employee)
active-directory Add Guest To Role https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/external-identities/add-guest-to-role.md
-- Title: Add a B2B collaboration user to a role - Azure Active Directory
-description: Add a guest user to a role in Azure Active Directory
- Previously updated : 05/08/2018
-# Grant permissions to users from partner organizations in your Azure Active Directory tenant
-
-Azure Active Directory (Azure AD) B2B collaboration users are added as guest users to the directory, and guest permissions in the directory are restricted by default. Your business may need some guest users to fill higher-privilege roles in your organization. To support defining higher-privilege roles, guest users can be added to any roles you desire, based on your organization's needs.
-
-If a directory role is assigned to a guest user, the guest user will be granted with additional permissions that come with the role, including basic read permissions. See [Azure AD built-in roles](../roles/permissions-reference.md).
-
-## Default role
-
-![Screenshot showing the default directory role](./media/add-guest-to-role/default-role.png)
-
-## Global Administrator Role
-
-![Screenshot showing the global administrator role](./media/add-guest-to-role/global-admin-role.png)
-
-## Limited Administrator Role
-
-![Screenshot showing the limited administrator role](./media/add-guest-to-role/limited-admin-role.png)
-
-## Next steps
--- [What is Azure AD B2B collaboration?](what-is-b2b.md)-- [B2B collaboration user properties](user-properties.md)
active-directory Add Users Administrator https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/external-identities/add-users-administrator.md
Previously updated : 05/19/2020 Last updated : 04/23/2021
As a user who is assigned any of the limited administrator directory roles, you can use the Azure portal to invite B2B collaboration users. You can invite guest users to the directory, to a group, or to an application. After you invite a user through any of these methods, the invited user's account is added to Azure Active Directory (Azure AD), with a user type of *Guest*. The guest user must then redeem their invitation to access resources. Invitations do not expire.
-After you add a guest user to the directory, you can either send the guest user a direct link to a shared app, or the guest user can click the redemption URL in the invitation email. For more information about the redemption process, see [B2B collaboration invitation redemption](redemption-experience.md).
+After you add a guest user to the directory, you can either send the guest user a direct link to a shared app, or the guest user can select the redemption URL in the invitation email. For more information about the redemption process, see [B2B collaboration invitation redemption](redemption-experience.md).
> [!IMPORTANT] > You should follow the steps in [How-to: Add your organization's privacy info in Azure Active Directory](../fundamentals/active-directory-properties-area.md) to add the URL of your organization's privacy statement. As part of the first time invitation redemption process, an invited user must consent to your privacy terms to continue.
To add B2B collaboration users to the directory, follow these steps:
2. Search for and select **Azure Active Directory** from any page. 3. Under **Manage**, select **Users**. 4. Select **New guest user**.
+5. On the **New user** page, select **Invite user** and then add the guest user's information.
- ![Shows where New guest user is in the UI](./media/add-users-administrator/new-guest-user-in-all-users.png)
-
-5. On the **New user** page, select **Invite user** and then add the guest user's information.
-
- > [!NOTE]
- > Group email addresses aren’t supported; enter the email address for an individual. Also, some email providers allow users to add a plus symbol (+) and additional text to their email addresses to help with things like inbox filtering. However, Azure AD doesn’t currently support plus symbols in email addresses. To avoid delivery issues, omit the plus symbol and any characters following it up to the @ symbol.
+ ![Guest user type image](media/add-users-administrator/invite-user.png)
- - **Name.** The first and last name of the guest user.
+ - **Name.** The first and last name of the guest user.
- **Email address (required)**. The email address of the guest user. - **Personal message (optional)** Include a personal welcome message to the guest user. - **Groups**: You can add the guest user to one or more existing groups, or you can do it later.
- - **Directory role**: If you require Azure AD administrative permissions for the user, you can add them to an Azure AD role.
+ - **Roles**: If you require Azure AD administrative permissions for the user, you can add them to an Azure AD role by selecting **User** next to **Roles**.
-7. Select **Invite** to automatically send the invitation to the guest user.
+ > [!NOTE]
+ > Group email addresses aren’t supported; enter the email address for an individual. Also, some email providers allow users to add a plus symbol (+) and additional text to their email addresses to help with things like inbox filtering. However, Azure AD doesn’t currently support plus symbols in email addresses. To avoid delivery issues, omit the plus symbol and any characters following it up to the @ symbol.
+6. Select **Invite** to automatically send the invitation to the guest user.
After you send the invitation, the user account is automatically added to the directory as a guest. -
-![Shows B2B user with Guest user type](./media/add-users-administrator/GuestUserType.png)
+  ![Guest user image](media/add-users-administrator/guest-user-type.png)
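If you prefer automation over the portal steps above, the same invitation can be sent with the Microsoft Graph invitations API; a minimal sketch, with the email address, redirect URL, and token as placeholders:

```bash
# Hedged sketch: invite a guest user programmatically via Microsoft Graph.
curl -X POST 'https://graph.microsoft.com/v1.0/invitations' \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
        "invitedUserEmailAddress": "guest@fabrikam.com",
        "inviteRedirectUrl": "https://myapps.microsoft.com",
        "sendInvitationMessage": true
      }'
```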
## Add guest users to a group If you need to manually add B2B collaboration users to a group, follow these steps:
If you need to manually add B2B collaboration users to a group, follow these ste
1. Sign in to the [Azure portal](https://portal.azure.com) as an Azure AD administrator. 2. Search for and select **Azure Active Directory** from any page. 3. Under **Manage**, select **Groups**.
-4. Select a group (or click **New group** to create a new one). It's a good idea to include in the group description that the group contains B2B guest users.
-5. Select **Members**.
+4. Select a group (or select **New group** to create a new one). It's a good idea to include in the group description that the group contains B2B guest users.
+5. Select the link under **Members**.
6. Do one of the following:
- - If the guest user already exists in the directory, search for the B2B user. Select the user, and then click **Select** to add the user to the group.
- - If the guest user does not already exist in the directory, invite them to the group by typing their email address in the search box, typing an optional personal message, and then clicking **Select**. The invitation automatically goes out to the invited user.
-
- ![Add invite button to add guest members](./media/add-users-administrator/GroupInvite.png)
+
+ - If the guest user already exists in the directory, search for the B2B user. Select the user, and then select **Select** to add the user to the group.
+ - If the guest user does not already exist in the directory, invite them to the group by typing their email address in the search box, typing an optional personal message, and then choosing **Invite**. The invitation automatically goes out to the invited user.
+![Add invite button to add guest members](./media/add-users-administrator/group-invite.png)
+
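As a programmatic alternative to the portal steps above, a minimal sketch that adds an existing guest user to a group through Microsoft Graph (both IDs are placeholders):

```bash
# Hedged sketch: add a directory object (the guest user) to a group's members.
curl -X POST 'https://graph.microsoft.com/v1.0/groups/{group-id}/members/$ref' \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/json" \
  -d '{ "@odata.id": "https://graph.microsoft.com/v1.0/directoryObjects/{user-id}" }'
```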
You can also use dynamic groups with Azure AD B2B collaboration. For more information, see [Dynamic groups and Azure Active Directory B2B collaboration](use-dynamic-groups.md). ## Add guest users to an application
To add B2B collaboration users to an application, follow these steps:
1. Sign in to the [Azure portal](https://portal.azure.com) as an Azure AD administrator. 2. Search for and select **Azure Active Directory** from any page.
-3. Under **Manage**, select **Enterprise applications** > **All applications**.
-4. Select the application to which you want to add guest users.
-5. On the application's dashboard, select **Total Users** to open the **Users and groups** pane.
-
- ![Total Users button to add open Users and Groups](./media/add-users-administrator/AppUsersAndGroups.png)
-
+3. Under **Manage**, select **Enterprise applications**.
+4. On the **All applications** page, select the application to which you want to add guest users.
+5. Under **Manage**, select **Users and groups**.
6. Select **Add user**. 7. Under **Add Assignment**, select **Users and groups**. 8. Do one of the following:
- - If the guest user already exists in the directory, search for the B2B user. Select the user, click **Select**, and then click **Assign** to add the user to the app.
- - If the guest user does not already exist in the directory, under **Select member or invite an external user**, type the user's email address. In the message box, type an optional personal message. Under the message box, click **Invite**.
-
- ![Screenshot that highlights where to add the user's email address, the personalized message, and also highlights the Invite button.](./media/add-users-administrator/AppInviteUsers.png)
-
- Click **Select**, and then click **Assign** to add the user to the app. An invitation automatically goes out to the invited user.
-
-9. The guest user appears in the application's **Users and groups** list with the assigned role of **Default Access**. If you want to change the role, do the following:
- - Select the guest user, and then select **Edit**.
- - Under **Edit Assignment**, click **Select Role**, and select the role you want to assign to the selected user.
- - Click **Select**.
- - Click **Assign**.
+ - If the guest user already exists in the directory, search for the B2B user. Select the user, choose **Select**, and then select **Assign** to add the user to the app.
+ - If the guest user does not already exist in the directory, under **Select member or invite an external user**, type the user's email address. In the message box, type an optional personal message. Under the message box, select **Invite**.
+ ![Screenshot that highlights where to add the user's email address, the personalized message, and also highlights the Invite button.](./media/add-users-administrator/app-invite-users.png)
+
+10. The guest user appears in the application's **Users and groups** list with the assigned role of **Default Access**. If the application provides different roles and you want to change the user's role, do the following:
+ - Select the check box next to the guest user, and then select the **Edit** button.
+ - On the **Edit Assignment** page, choose the link under **Select a role**, and select the role you want to assign to the user.
+ - Choose **Select**.
+ - Select **Assign**.
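For scripting the assignment above, a hedged sketch using the Microsoft Graph app role assignment API; the IDs are placeholders, and the all-zero `appRoleId` denotes the default access role when an application defines no app roles:

```bash
# Hedged sketch: assign a user to an application (service principal).
curl -X POST 'https://graph.microsoft.com/v1.0/users/{user-id}/appRoleAssignments' \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
        "principalId": "{user-id}",
        "resourceId": "{service-principal-object-id}",
        "appRoleId": "00000000-0000-0000-0000-000000000000"
      }'
```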
## Resend invitations to guest users
If a guest user has not yet redeemed their invitation, you can resend the invita
1. Sign in to the [Azure portal](https://portal.azure.com) as an Azure AD administrator. 2. Search for and select **Azure Active Directory** from any page. 3. Under **Manage**, select **Users**.
-5. Select the user account.
-6. Under **Manage**, select **Profile**.
-7. If the user has not yet accepted the invitation, in the **Identity** section, **Invitation accepted** will be set to **No**. To resend the invitation, select **(manage)**. Then in the **Manage invitations** page, next to **Resend invite?**, select **Yes**, and select **Done**.
+4. Select the user account.
+5. In the **Identity** section, under **Invitation accepted**, select the **(manage)** link.
+6. If the user has not yet accepted the invitation, select **Yes** to resend it.
+
+ ![Resend Invitation](./media/add-users-administrator/resend-invitation.png)
> [!NOTE]
-> If you resend an invitation that originally directed the user to a specific app, understand that the link in the new invitation takes the user to the top-level Access Panel instead.
-> Additionally, only users with inviting permissions will be able to resend invitations.
+> An invitation URL will be generated.
## Next steps - To learn how non-Azure AD admins can add B2B guest users, see [How do information workers add B2B collaboration users?](add-users-information-worker.md)-- For information about the invitation email, see [The elements of the B2B collaboration invitation email](invitation-email-elements.md).
+- For information about the invitation email, see [The elements of the B2B collaboration invitation email](invitation-email-elements.md).
active-directory Active Directory Access Create New Tenant https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/fundamentals/active-directory-access-create-new-tenant.md
Your new tenant is created with the domain contoso.onmicrosoft.com.
When you create a new Azure AD tenant, you become the first user of that tenant. As the first user, you're automatically assigned the [Global Admin](../roles/permissions-reference.md#global-administrator) role. Check out your user account by navigating to the [**Users**](https://portal.azure.com/#blade/Microsoft_AAD_IAM/UsersManagementMenuBlade/MsGraphUsers) page.
-By default, you're also listed as the [technical contact](/microsoft-365/admin/manage/change-address-contact-and-more?view=o365-worldwide#what-do-these-fields-mean) for the tenant. Technical contact information is something you can change in [**Properties**](https://portal.azure.com/#blade/Microsoft_AAD_IAM/ActiveDirectoryMenuBlade/Properties).
+By default, you're also listed as the [technical contact](/microsoft-365/admin/manage/change-address-contact-and-more#what-do-these-fields-mean) for the tenant. Technical contact information is something you can change in [**Properties**](https://portal.azure.com/#blade/Microsoft_AAD_IAM/ActiveDirectoryMenuBlade/Properties).
> [!WARNING] > Ensure your directory has at least two accounts with global administrator privileges assigned to them. This will help in the case that one global administrator is locked out. For more detail see the article, [Manage emergency access accounts in Azure AD](../roles/security-emergency-access.md).
active-directory Create Access Review https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/governance/create-access-review.md
na
ms.devlang: na Previously updated : 4/27/2021 Last updated : 5/6/2021
# Create an access review of groups and applications in Azure AD access reviews
-Access to groups and applications for employees and guests changes over time. To reduce the risk associated with stale access assignments, administrators can use Azure Active Directory (Azure AD) to create access reviews for group members or application access. If you need to routinely review access, you can also create recurring access reviews. For more information about these scenarios, see [Manage user access](manage-user-access-with-access-reviews.md) and [Manage guest access](manage-guest-access-with-access-reviews.md).
+Access to groups and applications for employees and guests changes over time. To reduce the risk associated with stale access assignments, administrators can use Azure Active Directory (Azure AD) to create access reviews for group members or application access. Microsoft 365 and Security group owners can also use Azure AD to create access reviews for group members (preview), as long as a Global administrator or User administrator enables the setting on the Access Reviews Settings blade. If you need to routinely review access, you can also create recurring access reviews. For more information about these scenarios, see [Manage user access](manage-user-access-with-access-reviews.md) and [Manage guest access](manage-guest-access-with-access-reviews.md).
You can watch a quick video talking about enabling Access Reviews:
This article describes how to create one or more access reviews for group member
- Azure AD Premium P2 - Global administrator or User administrator
+- Microsoft 365 and Security group owner (Preview)
For more information, see [License requirements](access-reviews-overview.md#license-requirements).
For more information, see [License requirements](access-reviews-overview.md#lice
>[!NOTE] > If you selected All Microsoft 365 groups with guest users in Step 2, then your only option is to review Guest users in Step 3
-8. Click on Next: Reviews
+8. Click on **Next: Reviews**.
+ 9. In the **Select reviewers** section, select one or more people to perform the access reviews. You can choose from: - **Group owner(s)** (Only available when performing a review on a Team or group) - **Selected user(s) or group(s)**
For more information, see [License requirements](access-reviews-overview.md#lice
![Choose how often the review should happen](./media/create-access-review/frequency.png)
-11. Click the **Next: Settings** button at the bottom of the page
+11. Click the **Next: Settings** button at the bottom of the page.
+ 12. In the **Upon completion settings** you can specify what happens after the review completes ![Create an access review - upon completion settings](./media/create-access-review/upon-completion-settings-new.png)
For more information, see [License requirements](access-reviews-overview.md#lice
To learn more about best practices for removing guest users who no longer have access to resources in your organization read the article titled [Use Azure AD Identity Governance to review and remove external users who no longer have resource access.](access-reviews-external-users.md)
- > [!NOTE]
- > Action to apply on denied guest users is not configurable on reviews scoped to more than guest users. It is also not configurable for reviews of **All M365 groups with guest users.** When not configurable, the default option of removing user's membership from the resource is used on denied users.
-13. You can send notifications to additional users or groups (Preview) to receive review completion updates. This feature allows for stakeholders other than the review creator to be updated on the progress of the review. To use this feature, select **Select User(s) or Group(s)** and add an additional user or group upon you want to receive the status of completion.
+ > [!NOTE]
+ > Action to apply on denied guest users isn't configurable on reviews scoped to more than guest users. It's also not configurable for reviews of **All Microsoft 365 groups with guest users.** When not configurable, the default option of removing user's membership from the resource is used on denied users.
- ![Upon completion settings - Add additional users to receive notifications](./media/create-access-review/upon-completion-settings-additional-receivers.png)
+13. You can send notifications to additional users or groups (Preview) to receive review completion updates. This feature allows stakeholders other than the review creator to be updated on the progress of the review. To use this feature, select **Select User(s) or Group(s)** and add the users or groups that you want to receive the completion status.
14. In the **Enable review decision helpers** section, choose whether you would like your reviewer to receive recommendations during the review process.
For more information, see [License requirements](access-reviews-overview.md#lice
![additional content for reviewer](./media/create-access-review/additional-content-reviewer.png) 16. Click on **Next: Review + Create** to move to the next page+ 17. Name the access review. Optionally, give the review a description. The name and description are shown to the reviewers.
-18. Review the information and select **Create**
+
+18. Review the information and select **Create**.
![create review screen](./media/create-access-review/create-review.png)
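Reviews like the one above can also be created programmatically. The following is a hedged sketch against the Microsoft Graph access reviews API; the endpoint and payload shape reflect the `accessReviewScheduleDefinition` resource, recurrence and other settings are omitted for brevity, and the group ID is a placeholder:

```bash
# Hedged sketch: create an access review of a group's members,
# reviewed by the group's owners.
curl -X POST 'https://graph.microsoft.com/v1.0/identityGovernance/accessReviews/definitions' \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
        "displayName": "Quarterly review of guest access",
        "scope": {
          "query": "/groups/{group-id}/transitiveMembers",
          "queryType": "MicrosoftGraph"
        },
        "reviewers": [
          { "query": "/groups/{group-id}/owners", "queryType": "MicrosoftGraph" }
        ]
      }'
```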
+## Allow group owners to create and manage access reviews (Preview)
+
+Prerequisite role: Global or User Administrator
+
+1. Sign in to the Azure portal and open the [Identity Governance page](https://portal.azure.com/#blade/Microsoft_AAD_ERM/DashboardBlade/).
+
+1. In the left menu, under **Access reviews**, select **Settings**.
+
+1. On the **Delegate who can create and manage access reviews** page, set the **(Preview) Group owners can create and manage access reviews of groups they own** setting to **Yes**.
+
+ ![create reviews - Enable group owners to review](./media/create-access-review/group-owners-review-access.png)
+
+ > [!NOTE]
+ > By default, the setting is set to **No** so it must be updated to allow group owners to create and manage access reviews.
+ ## Start the access review Once you have specified the settings for an access review, click **Start**. The access review will appear in your list with an indicator of its status.
active-directory How To Connect Sso https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/hybrid/how-to-connect-sso.md
For more information on how SSO works with Windows 10 using PRT, see: [Primary R
|Windows 10|Yes\*|Yes|Yes|Yes\*\*\*|N/A
|Windows 8.1|Yes\*|Yes\*\*\*\*|Yes|Yes\*\*\*|N/A
|Windows 8|Yes\*|N/A|Yes|Yes\*\*\*|N/A
-|Windows 7|Yes\*|N/A|Yes|Yes\*\*\*|N/A
|Windows Server 2012 R2 or above|Yes\*\*|N/A|Yes|Yes\*\*\*|N/A
|Mac OS X|N/A|N/A|Yes\*\*\*|Yes\*\*\*|Yes\*\*\*
For more information on how SSO works with Windows 10 using PRT, see: [Primary R
>Microsoft Edge legacy is no longer supported
-\*Requires Internet Explorer version 11 or later.
+\*Requires Internet Explorer version 11 or later. ([Beginning August 17, 2021, Microsoft 365 apps and services will not support IE 11](https://techcommunity.microsoft.com/t5/microsoft-365-blog/microsoft-365-apps-say-farewell-to-internet-explorer-11-and/ba-p/1591666).)
\*\*Requires Internet Explorer version 11 or later. Disable Enhanced Protected Mode.
active-directory How To Connect Sync Feature Preferreddatalocation https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/hybrid/how-to-connect-sync-feature-preferreddatalocation.md
In general, full synchronization cycle is required. This is because you have add
1. Run **Full import** on the on-premises Active Directory Connector:
- 1. Go to the **Operations** tab in the Synchronization Service Manager.
+ 1. Go to the **Connectors** tab in the Synchronization Service Manager.
2. Right-click the **on-premises Active Directory Connector**, and select **Run**. 3. In the dialog box, select **Full Import**, and select **OK**. 4. Wait for the operation to complete.
Learn more about the configuration model in the sync engine:
Overview topics: * [Azure AD Connect sync: Understand and customize synchronization](how-to-connect-sync-whatis.md)
-* [Integrating your on-premises identities with Azure Active Directory](whatis-hybrid-identity.md)
+* [Integrating your on-premises identities with Azure Active Directory](whatis-hybrid-identity.md)
active-directory Tenant Restrictions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/tenant-restrictions.md
Large organizations that emphasize security want to move to cloud services like Microsoft 365, but need to know that their users can access only approved resources. Traditionally, companies restrict domain names or IP addresses when they want to manage access. This approach fails in a world where software as a service (or SaaS) apps are hosted in a public cloud, running on shared domain names like [outlook.office.com](https://outlook.office.com/) and [login.microsoftonline.com](https://login.microsoftonline.com/). Blocking these addresses would keep users from accessing Outlook on the web entirely, instead of merely restricting them to approved identities and resources.
-The Azure Active Directory (Azure AD) solution to this challenge is a feature called tenant restrictions. With tenant restrictions, organizations can control access to SaaS cloud applications, based on the Azure AD tenant the applications use for single sign-on. For example, you may want to allow access to your organization’s Microsoft 365 applications, while preventing access to other organizations’ instances of these same applications.  
+The Azure Active Directory (Azure AD) solution to this challenge is a feature called tenant restrictions. With tenant restrictions, organizations can control access to SaaS cloud applications, based on the Azure AD tenant the applications use for single sign-on. For example, you may want to allow access to your organization's Microsoft 365 applications, while preventing access to other organizations' instances of these same applications.  
With tenant restrictions, organizations can specify the list of tenants that their users are permitted to access. Azure AD then only grants access to these permitted tenants.
The overall solution comprises the following components:
3. **Client software**: To support tenant restrictions, client software must request tokens directly from Azure AD, so that the proxy infrastructure can intercept traffic. Browser-based Microsoft 365 applications currently support tenant restrictions, as do Office clients that use modern authentication (like OAuth 2.0).
-4. **Modern Authentication**: Cloud services must use modern authentication to use tenant restrictions and block access to all non-permitted tenants. You must configure Microsoft 365 cloud services to use modern authentication protocols by default. For the latest information on Microsoft 365 support for modern authentication, read [Updated Office 365 modern authentication](https://docs.microsoft.com/microsoft-365/enterprise/modern-auth-for-office-2013-and-2016?view=o365-worldwide).
+4. **Modern Authentication**: Cloud services must use modern authentication to use tenant restrictions and block access to all non-permitted tenants. You must configure Microsoft 365 cloud services to use modern authentication protocols by default. For the latest information on Microsoft 365 support for modern authentication, read [Updated Office 365 modern authentication](/microsoft-365/enterprise/modern-auth-for-office-2013-and-2016).
The following diagram illustrates the high-level traffic flow. Tenant restrictions requires TLS inspection only on traffic to Azure AD, not to the Microsoft 365 cloud services. This distinction is important, because the traffic volume for authentication to Azure AD is typically much lower than traffic volume to SaaS applications like Exchange Online and SharePoint Online.
To use tenant restrictions, your clients must be able to connect to the followin
### Proxy configuration and requirements
-The following configuration is required to enable tenant restrictions through your proxy infrastructure. This guidance is generic, so you should refer to your proxy vendor’s documentation for specific implementation steps.
+The following configuration is required to enable tenant restrictions through your proxy infrastructure. This guidance is generic, so you should refer to your proxy vendor's documentation for specific implementation steps.
#### Prerequisites
Fiddler is a free web debugging proxy that can be used to capture and modify HTT
1. [Download and install Fiddler](https://www.telerik.com/fiddler).
-2. Configure Fiddler to decrypt HTTPS traffic, per [Fiddler’s help documentation](https://docs.telerik.com/fiddler/Configure-Fiddler/Tasks/DecryptHTTPS).
+2. Configure Fiddler to decrypt HTTPS traffic, per [Fiddler's help documentation](https://docs.telerik.com/fiddler/Configure-Fiddler/Tasks/DecryptHTTPS).
3. Configure Fiddler to insert the *Restrict-Access-To-Tenants* and *Restrict-Access-Context* headers using custom rules:
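Whatever proxy or Fiddler rule you use, the end result on the wire should look like the following hedged sketch; the permitted tenant list and the directory ID are placeholders for your own values:

```bash
# Illustration only: requests to the Azure AD sign-in endpoints must carry
# the two tenant-restrictions headers inserted by the proxy.
curl 'https://login.microsoftonline.com/common/oauth2/authorize' \
  -H "Restrict-Access-To-Tenants: contoso.onmicrosoft.com,fabrikam.onmicrosoft.com" \
  -H "Restrict-Access-Context: {your-directory-id}"
```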
active-directory Tutorial Linux Vm Access Nonaad https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/managed-identities-azure-resources/tutorial-linux-vm-access-nonaad.md
You learn how to:
This section shows how to grant your VM access to a secret stored in a Key Vault. Using managed identities for Azure resources, your code can get access tokens to authenticate to resources that support Azure AD authentication.  However, not all Azure services support Azure AD authentication. To use managed identities for Azure resources with those services, store the service credentials in Azure Key Vault, and use the VM's managed identity to access Key Vault to retrieve the credentials.
-First, we need to create a Key Vault and grant our VM’s system-assigned managed identity access to the Key Vault.
+First, we need to create a Key Vault and grant our VM's system-assigned managed identity access to the Key Vault.
1. Open the Azure [portal](https://portal.azure.com/) 1. At the top of the left navigation bar, select **Create a resource**
To complete these steps, you need an SSH client.  If you are using Windows, you
>[!IMPORTANT] > All Azure SDKs support the Azure.Identity library that makes it easy to acquire Azure AD tokens to access target services. Learn more about [Azure SDKs](https://azure.microsoft.com/downloads/) and leverage the Azure.Identity library.
-> - [.NET](/dotnet/api/overview/azure/identity-readme?view=azure-dotnet)
-> - [JAVA](/java/api/overview/azure/identity-readme?view=azure-java-stable)
-> - [Javascript](/javascript/api/overview/azure/identity-readme?view=azure-node-latest)
-> - [Python](/python/api/overview/azure/identity-readme?view=azure-python)
+> - [.NET](/dotnet/api/overview/azure/identity-readme)
+> - [JAVA](/java/api/overview/azure/identity-readme)
+> - [Javascript](/javascript/api/overview/azure/identity-readme)
+> - [Python](/python/api/overview/azure/identity-readme)
1. In the portal, navigate to your Linux VM and in the **Overview**, click **Connect**. 
To complete these steps, you need an SSH client.  If you are using Windows, you
"token_type":"Bearer"}  ```
- You can use this access token to authenticate to Azure Key Vault.  The next CURL request shows how to read a secret from Key Vault using CURL and the Key Vault REST API.  You’ll need the URL of your Key Vault, which is in the **Essentials** section of the **Overview** page of the Key Vault.  You will also need the access token you obtained on the previous call. 
+ You can use this access token to authenticate to Azure Key Vault.  The next CURL request shows how to read a secret from Key Vault using CURL and the Key Vault REST API.  You'll need the URL of your Key Vault, which is in the **Essentials** section of the **Overview** page of the Key Vault.  You will also need the access token you obtained on the previous call. 
```bash curl 'https://<YOUR-KEY-VAULT-URL>/secrets/<secret-name>?api-version=2016-10-01' -H "Authorization: Bearer <ACCESS TOKEN>" 
To complete these steps, you need an SSH client.  If you are using Windows, you
{"value":"p@ssw0rd!","id":"https://mytestkeyvault.vault.azure.net/secrets/MyTestSecret/7c2204c6093c4d859bc5b9eff8f29050","attributes":{"enabled":true,"created":1505088747,"updated":1505088747,"recoveryLevel":"Purgeable"}}  ```
-Once you’ve retrieved the secret from the Key Vault, you can use it to authenticate to a service that requires a name and password.
+Once you've retrieved the secret from the Key Vault, you can use it to authenticate to a service that requires a name and password.
## Clean up resources
active-directory Azure Ad Roles Features https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/privileged-identity-management/azure-ad-roles-features.md
Both user-initiated actions require an approval from a Global administrator or P
## API changes
-When customers have the updated version rolled out to their Azure AD organization, the existing graph API will stop working. You must transition to use the [Graph API for Azure resource roles](/graph/api/resources/privilegedidentitymanagement-resources?view=graph-rest-beta). To manage Azure AD roles using that API, swap `/azureResources` with `/aadroles` in the signature and use the Directory ID for the `resourceId`.
+When customers have the updated version rolled out to their Azure AD organization, the existing graph API will stop working. You must transition to use the [Graph API for Azure resource roles](/graph/api/resources/privilegedidentitymanagement-resources?view=graph-rest-beta&preserve-view=true). To manage Azure AD roles using that API, swap `/azureResources` with `/aadroles` in the signature and use the Directory ID for the `resourceId`.
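As an illustration of that signature swap, a hedged sketch of a request against the beta endpoint; the directory ID and token are placeholders:

```bash
# Hedged sketch: list Azure AD role assignments via the PIM beta API,
# using /aadRoles in place of /azureResources and the directory ID as resourceId.
curl 'https://graph.microsoft.com/beta/privilegedAccess/aadRoles/resources/{directory-id}/roleAssignments' \
  -H "Authorization: Bearer $TOKEN"
```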
We have tried our best to reach out to all customers who are using the previous API to let them know about this change ahead of time. If your Azure AD organization was moved on to the new version and you still depend on the old API, reach out to the team at pim_preview@microsoft.com.
For customers who are using the Privileged Identity Management PowerShell module
- [Assign an Azure AD custom role](azure-ad-custom-roles-assign.md) - [Remove or update an Azure AD custom role assignment](azure-ad-custom-roles-update-remove.md) - [Configure an Azure AD custom role assignment](azure-ad-custom-roles-configure.md)-- [Role definitions in Azure AD](../roles/permissions-reference.md)
+- [Role definitions in Azure AD](../roles/permissions-reference.md)
active-directory Fedramp Access Controls https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/standards/fedramp-access-controls.md
The following list of controls and control enhancements in the Access Control fa
| AC-07| Unsuccessful logon attempts |
| AC-08| System use notification |
| AC-10| Concurrent session control |
-| AC-11| Session lock | pattern hiding displays |
+| AC-11| Session lock |
| AC-12| Session termination |
| AC-20| Use of external information systems |
-Each row in the table below provides prescriptive guidance to aid you in developing your organization’s response to any shared responsibilities for the control or control enhancement.
+Each row in the table below provides prescriptive guidance to aid you in developing your organization's response to any shared responsibilities for the control or control enhancement.
## Configurations

| Control ID | Customer responsibilities and guidance |
| - | - |
-| AC-02 | **Implement account lifecycle management for customer-controlled accounts. Monitor the use of accounts and notify account managers of account lifecycle events. Review accounts for compliance with account management requirements every month for privileged access and every six (6) months for non-privileged access**.<p>Use Azure AD to provision accounts from external HR systems, on-premises Active Directory, or directly in the cloud. All account lifecycle operations are audited within the Azure AD audit logs. Logs can be collected and analyzed by a Security Information and Event Management (SIEM) solution such as Azure Sentinel. Alternatively, you can use Azure Event Hub to integrate logs with third-party SIEM solutions to enable monitoring and notification. Use Azure AD entitlement management with access reviews to ensure compliance status of accounts.<p>Provision accounts<br>[Plan cloud HR application to Azure Active Directory user provisioning](https://docs.microsoft.com/azure/active-directory/app-provisioning/plan-cloud-hr-provision)<br>[Azure AD Connect sync: Understand and customize synchronization](https://docs.microsoft.com/azure/active-directory/hybrid/how-to-connect-sync-whatis)<br>[Add or delete users - Azure Active Directory](https://docs.microsoft.com/azure/active-directory/fundamentals/add-users-azure-active-directory)<p>Monitor accounts<br>[Audit activity reports in the Azure Active Directory portal](https://docs.microsoft.com/azure/active-directory/reports-monitoring/concept-audit-logs)<br>[Connect Azure Active Directory data to Azure Sentinel](https://docs.microsoft.com/azure/sentinel/connect-azure-active-directory) <br>[Tutorial - Stream logs to an Azure event hub](https://docs.microsoft.com/azure/active-directory/reports-monitoring/tutorial-azure-monitor-stream-logs-to-event-hub)<p>Review accounts<br>[What is entitlement management? - Azure AD](https://docs.microsoft.com/azure/active-directory/governance/entitlement-management-overview)<br>[Create an access review of an access package in Azure AD entitlement management ](https://docs.microsoft.com/azure/active-directory/governance/entitlement-management-access-reviews-create)<br>[Review access of an access package in Azure AD entitlement management](https://docs.microsoft.com/azure/active-directory/governance/entitlement-management-access-reviews-review-access)<p>Resources:<br>[Administrator role permissions in Azure Active Directory](https://docs.microsoft.com/azure/active-directory/roles/permissions-reference)<br>[Dynamic Groups in Azure AD](https://docs.microsoft.com/azure/active-directory/enterprise-users/groups-create-rule) |
-| AC-02(1)| **Employ automated mechanisms to support management of customer-controlled accounts.**<p>Configure automated provisioning of customer-controlled accounts from external HR systems or on-premises Active Directory. For applications that support application provisioning, configure Azure AD to automatically create user identities and roles in cloud (SaaS) applications that users need access to. In addition to creating user identities, automatic provisioning includes the maintenance and removal of user identities as status or roles change. Ease monitoring of account usage by streaming Identity Protection logs (risky users, risky sign-ins, and risk detections) and audit logs directly into Azure Sentinel or Azure Event Hub.<p>Provision<br>[Plan cloud HR application to Azure Active Directory user provisioning](https://docs.microsoft.com/azure/active-directory/app-provisioning/plan-cloud-hr-provision)<br>[Azure AD Connect sync: Understand and customize synchronization](https://docs.microsoft.com/azure/active-directory/hybrid/how-to-connect-sync-whatis)<br>[What is automated SaaS app user provisioning in Azure AD?](https://docs.microsoft.com/azure/active-directory/app-provisioning/user-provisioning)<br>[SaaS App Integration Tutorials for use with Azure AD](https://docs.microsoft.com/azure/active-directory/saas-apps/tutorial-list)<p>Monitor & Audit<br>[How To: Investigate risk](https://docs.microsoft.com/azure/active-directory/identity-protection/howto-identity-protection-investigate-risk)<br>[Audit activity reports in the Azure Active Directory portal](https://docs.microsoft.com/azure/active-directory/reports-monitoring/concept-audit-logs)<br>[What is Azure Sentinel?](https://docs.microsoft.com/azure/sentinel/overview)<br>[Azure Sentinel: Connect data from Azure Active Directory (Azure AD)](https://docs.microsoft.com/azure/sentinel/connect-azure-active-directory)<br>[Stream Azure Active Directory logs to an Azure event hub](https://docs.microsoft.com/azure/active-directory/reports-monitoring/tutorial-azure-monitor-stream-logs-to-event-hub)|
-| AC-02(2)<br>AC-02(3)| **Employ automated mechanisms to support automatically removing or disabling temporary and emergency accounts after 24 hours from last use and all customer-controlled accounts after 35 days of inactivity**.<p>Implement account management automation with Microsoft Graph and Microsoft Azure AD PowerShell. Use Microsoft Graph to monitor sign-in activity and Azure AD PowerShell to take action on accounts within the required timeframe. <p>Determine Inactivity<br>[How to manage inactive user accounts in Azure AD](https://docs.microsoft.com/azure/active-directory/reports-monitoring/howto-manage-inactive-user-accounts)<br>[How to manage stale devices in Azure AD](https://docs.microsoft.com/azure/active-directory/devices/manage-stale-devices)<p>Remove or Disable Accounts<br>[Working with users in Microsoft Graph](https://docs.microsoft.com/graph/api/resources/users?view=graph-rest-1.0)<br>[Get a user](https://docs.microsoft.com/graph/api/user-get?view=graph-rest-1.0&tabs=http)<br>[Update user](https://docs.microsoft.com/graph/api/user-update?view=graph-rest-1.0&tabs=http)<br>[Delete a user](https://docs.microsoft.com/graph/api/user-delete?view=graph-rest-1.0&tabs=http)<p>Working with devices in Microsoft Graph<br>[Get device](https://docs.microsoft.com/graph/api/device-get?view=graph-rest-1.0&tabs=http)<br>[Update device](https://docs.microsoft.com/graph/api/device-update?view=graph-rest-1.0&tabs=http)<br>[Delete device](https://docs.microsoft.com/graph/api/device-delete?view=graph-rest-1.0&tabs=http)<p>Using [Azure AD PowerShell](https://docs.microsoft.com/powershell/module/azuread/?view=azureadps-2.0)<br>[Get-AzureADUser](https://docs.microsoft.com/powershell/module/azuread/get-azureaduser?view=azureadps-2.0)<br>[Set-AzureADUser](https://docs.microsoft.com/powershell/module/azuread/set-azureaduser?view=azureadps-2.0)<br>[Get-AzureADDevice](https://docs.microsoft.com/powershell/module/azuread/get-azureaddevice?view=azureadps-2.0)<br>[Set-AzureADDevice](https://docs.microsoft.com/powershell/module/azuread/set-azureaddevice?view=azureadps-2.0) |
-| AC-02(4)| **Implement an automated audit and notification system for the lifecycle of managing customer-controlled accounts**.<p>All account lifecycle operations (account creation, modification, enabling, disabling, and removal actions) are audited within the Azure audit logs and can be streamed directly into Azure Sentinel or Azure Event Hub to facilitate notification.<p>Audit<br>[Audit activity reports in the Azure Active Directory portal](https://docs.microsoft.com/azure/active-directory/reports-monitoring/concept-audit-logs)<br>[Azure Sentinel: Connect data from Azure Active Directory (Azure AD)](https://docs.microsoft.com/azure/sentinel/connect-azure-active-directory)<P>Notification<br>[What is Azure Sentinel?](https://docs.microsoft.com/azure/sentinel/overview)<br>[Stream Azure Active Directory logs to an Azure event hub](https://docs.microsoft.com/azure/active-directory/reports-monitoring/tutorial-azure-monitor-stream-logs-to-event-hub) |
-| AC-02(5)| **Implement device log out after a 15-minute period of inactivity**.<p>Implement device lock using a Conditional Access policy that restricts access to compliant devices. Configure policy settings on the device to enforce device lock at the OS level with MDM solutions such as Microsoft Intune. Microsoft Endpoint Manager (MEM) or group policy objects (GPO) can also be considered in hybrid deployments. For unmanaged devices, configure the Sign-In Frequency setting to force users to reauthenticate.<P>Conditional Access<br>[Require device to be marked as compliant](https://docs.microsoft.com/azure/active-directory/conditional-access/require-managed-devices)<br>[User sign-in frequency](https://docs.microsoft.com/azure/active-directory/conditional-access/howto-conditional-access-session-lifetime)<p>MDM Policy<br>Configure devices for maximum minutes of inactivity until screen locks and requires password to unlock ([Android](https://docs.microsoft.com/mem/intune/configuration/device-restrictions-android), [iOS](https://docs.microsoft.com/mem/intune/configuration/device-restrictions-ios), [Windows 10](https://docs.microsoft.com/mem/intune/configuration/device-restrictions-windows-10)) |
-| AC-02(7)| **Administer and monitor privileged role assignments in accordance with a role-based access (RBAC) scheme for customer-controlled accounts including disabling or revoking privilege access for accounts when no longer appropriate**.<p>Implement Privileged Identity Management (PIM) with access reviews for privileged roles in Azure AD to monitor role assignments and remove role assignments when no longer appropriate. Audit logs can be streamed directly into Azure Sentinel or Azure Event Hub to facilitate monitoring.<p>Administer<br>[What is Azure AD Privileged Identity Management?](https://docs.microsoft.com/azure/active-directory/privileged-identity-management/pim-configure)<br>[Activation maximum duration](https://docs.microsoft.com/azure/active-directory/privileged-identity-management/pim-how-to-change-default-settings?tabs=new)<p>Monitor<br>[Create an access review of Azure AD roles in Privileged Identity Management](https://docs.microsoft.com/azure/active-directory/privileged-identity-management/pim-how-to-start-security-review)<br>[View audit history for Azure AD roles in Privileged Identity Management](https://docs.microsoft.com/azure/active-directory/privileged-identity-management/pim-how-to-use-audit-log?tabs=new)<br>[Audit activity reports in the Azure Active Directory portal](https://docs.microsoft.com/azure/active-directory/reports-monitoring/concept-audit-logs)<br>[What is Azure Sentinel?](https://docs.microsoft.com/azure/sentinel/overview)<br>[Connect data from Azure Active Directory (Azure AD)](https://docs.microsoft.com/azure/sentinel/connect-azure-active-directory)<br>[Tutorial: Stream Azure Active Directory logs to an Azure event hub](https://docs.microsoft.com/azure/active-directory/reports-monitoring/tutorial-azure-monitor-stream-logs-to-event-hub) |
-| AC-02(11)| **Enforce usage of customer-controlled accounts to meet customer defined conditions or circumstances**.<p>Create Conditional Access policies to enforce access control decisions across users and devices.<p>Conditional Access<br>[Create a Conditional Access policy](https://docs.microsoft.com/azure/active-directory/authentication/tutorial-enable-azure-mfa?bc=/azure/active-directory/conditional-access/breadcrumb/toc.json&toc=/azure/active-directory/conditional-access/toc.json)<br>[What is Conditional Access?](https://docs.microsoft.com/azure/active-directory/conditional-access/overview) |
-| AC-02(12)| **Monitor and report customer-controlled accounts with privileged access for atypical usage**.<p>Facilitate monitoring of atypical usage by streaming Identity Protection logs (for example, risky users, risky sign-ins, and risk detections) and audit logs (to facilitate correlation with privilege assignment) directly into a SIEM solution such as Azure Sentinel. You can also use Azure Event Hub to integrate logs with third-party SIEM solutions.<p>Identity Protection<br>[What is Identity Protection?](https://docs.microsoft.com/azure/active-directory/identity-protection/overview-identity-protection)<br>[How To: Investigate risk](https://docs.microsoft.com/azure/active-directory/identity-protection/howto-identity-protection-investigate-risk)<br>[Azure Active Directory Identity Protection notifications](https://docs.microsoft.com/azure/active-directory/identity-protection/howto-identity-protection-configure-notifications)<p>Monitor accounts<br>[What is Azure Sentinel?](https://docs.microsoft.com/azure/sentinel/overview)<br>[Audit activity reports in the Azure Active Directory portal](https://docs.microsoft.com/azure/active-directory/reports-monitoring/concept-audit-logs)<br>[Connect Azure Active Directory data to Azure Sentinel](https://docs.microsoft.com/azure/sentinel/connect-azure-active-directory) <br>[Tutorial - Stream logs to an Azure event hub](https://docs.microsoft.com/azure/active-directory/reports-monitoring/tutorial-azure-monitor-stream-logs-to-event-hub) |
-| AC-02(13)|**Disable customer-controlled accounts of users posing a significant risk within 1 hour**.<p>In Azure AD Identity Protection, configure and enable a user risk policy with the threshold set to High. Create Conditional Access policies to block access for risky users and risky sign-ins. Configure risk policies to allow users to self-remediate and unblock subsequent sign-in attempts.<p>Identity Protection<br>[What is Identity Protection?](https://docs.microsoft.com/azure/active-directory/identity-protection/overview-identity-protection)<p>Conditional Access<br>[What is Conditional Access?](https://docs.microsoft.com/azure/active-directory/conditional-access/overview)<br>[Create a Conditional Access policy](https://docs.microsoft.com/azure/active-directory/authentication/tutorial-enable-azure-mfa?bc=/azure/active-directory/conditional-access/breadcrumb/toc.json&toc=/azure/active-directory/conditional-access/toc.json)<br>[Conditional Access: User risk-based Conditional Access](https://docs.microsoft.com/azure/active-directory/conditional-access/howto-conditional-access-policy-risk-user)<br>[Conditional Access: Sign-in risk-based Conditional Access](https://docs.microsoft.com/azure/active-directory/conditional-access/howto-conditional-access-policy-risk-user)<br>[Self-remediation with risk policy](https://docs.microsoft.com/azure/active-directory/identity-protection/howto-identity-protection-remediate-unblock) |
-| AC-06(7)| **Review and validate all users with privileged access every year and ensure privileges are reassigned (or removed if necessary) to align with organizational mission and business requirements**.<p>Use Azure AD entitlement management with access reviews for privileged users to verify if privileged access is required. <p>Access Reviews<br>[What is Azure AD entitlement management?](https://docs.microsoft.com/azure/active-directory/governance/entitlement-management-overview)<br>[Create an access review of Azure AD roles in Privileged Identity Management](https://docs.microsoft.com/azure/active-directory/privileged-identity-management/pim-how-to-start-security-review)<br>[Review access of an access package in Azure AD entitlement management](https://docs.microsoft.com/azure/active-directory/governance/entitlement-management-access-reviews-review-access) |
-| AC-07| **Enforce a limit of no more than three consecutive failed login attempts on customer-deployed resources within a 15-minute period and lock the account for a minimum of three (3) hours or until unlocked by an administrator**.<p>Enable custom Smart Lockout settings. Configure lockout threshold and lockout duration in seconds to implement these requirements. <p>Smart Lockout<br>[Protect user accounts from attacks with Azure Active Directory smart lockout](https://docs.microsoft.com/azure/active-directory/authentication/howto-password-smart-lockout)<br>[Manage Azure AD smart lockout values](https://docs.microsoft.com/azure/active-directory/authentication/howto-password-smart-lockout) |
-| AC-08| **Display and require user acknowledgment of privacy and security notices before granting access to information systems**.<p>Azure AD provides administrators with the ability to deliver notification or banner messages for all apps that require and record acknowledgment before granting access. These terms of use policies can be granularly targeted to specific users (Member or Guest) and customized per application via Conditional Access policies.<p>Terms of Use<br>[Azure Active Directory terms of use](https://docs.microsoft.com/azure/active-directory/conditional-access/terms-of-use)<br>[View report of who has accepted and declined](https://docs.microsoft.com/azure/active-directory/conditional-access/terms-of-use) |
-| AC-10|**Limit concurrent sessions to three sessions for privileged access and two for non-privileged access**. <p>In today’s world where users connect from multiple devices (sometimes simultaneously), limiting concurrent sessions leads to a degraded user experience while providing limited security value. A better approach to address the intent behind this control is to adopt a zero trust security posture where the conditions are explicitly validated before a session is created, and continually throughout the life of a session. <p>Additionally, use the following compensating controls. <p>Use Conditional Access policies to restrict access to compliant devices. Configure policy settings on the device to enforce user sign on restrictions at the OS level with MDM solutions such as Microsoft Intune. Microsoft Endpoint Manager (MEM) or group policy objects (GPO) can also be considered in hybrid deployments.<p> Use Privileged Identity Management (PIM) to further restrict and control privileged accounts. <p> Configure Smart Account lockout for invalid sign in attempts.<p>**Implementation guidance** <p>Zero Trust<br> [Securing identity with Zero Trust](https://docs.microsoft.com/security/zero-trust/identity)<br>[Continuous access evaluation in Azure AD](https://docs.microsoft.com/azure/active-directory/conditional-access/concept-continuous-access-evaluation)<p>Conditional Access<br>[What is Conditional Access in Azure AD?](https://docs.microsoft.com/azure/active-directory/conditional-access/overview)<br>[Require device to be marked as compliant](https://docs.microsoft.com/azure/active-directory/conditional-access/require-managed-devices)<br>[User sign-in frequency](https://docs.microsoft.com/azure/active-directory/conditional-access/howto-conditional-access-session-lifetime)<p>Device Policies<br>[Use PowerShell scripts on Windows 10 devices in Intune](https://docs.microsoft.com/mem/intune/apps/intune-management-extension)<br>[Additional smart card Group Policy settings and registry keys](https://docs.microsoft.com/windows/security/identity-protection/smart-cards/smart-card-group-policy-and-registry-settings)<br>[Microsoft Endpoint Manager overview](https://docs.microsoft.com/mem/endpoint-manager-overview)<p>Resources<br>[What is Azure AD Privileged Identity Management?](https://docs.microsoft.com/azure/active-directory/privileged-identity-management/pim-configure)<br>[Protect user accounts from attacks with Azure Active Directory smart lockout](https://docs.microsoft.com/azure/active-directory/authentication/howto-password-smart-lockout)<p>See AC-12 for additional session re-evaluation & risk mitigation guidance. |
-| AC-11<br>AC-11(1)| **Implement a session lock after a 15-minute period of inactivity or upon receiving a request from a user and retain the session lock until the user reauthenticates. Conceal previously visible information when a session lock is initiated.**<p> Implement device lock using a Conditional Access policy to restrict access to compliant devices. Configure policy settings on the device to enforce device lock at the OS level with MDM solutions such as Microsoft Intune. Microsoft Endpoint Manager (MEM) or group policy objects (GPO) can also be considered in hybrid deployments. For unmanaged devices, configure the Sign-In Frequency setting to force users to reauthenticate.<p>Conditional Access<br>[Require device to be marked as compliant](https://docs.microsoft.com/azure/active-directory/conditional-access/require-managed-devices)<br>[User sign-in frequency](https://docs.microsoft.com/azure/active-directory/conditional-access/howto-conditional-access-session-lifetime)<p>MDM Policy<br>Configure devices for maximum minutes of inactivity until screen locks ([Android](https://docs.microsoft.com/mem/intune/configuration/device-restrictions-android), [iOS](https://docs.microsoft.com/mem/intune/configuration/device-restrictions-ios), [Windows 10](https://docs.microsoft.com/mem/intune/configuration/device-restrictions-windows-10)) |
-| AC-12| **Automatically terminate user sessions when organizational defined conditions or trigger events occur**.<p>Implement automatic user session re-evaluation with Azure AD features such as Risk-Based Conditional Access and Continuous Access Evaluation. Inactivity conditions can be implemented at a device level as described in AC-11.<br>[Sign-in risk-based Conditional Access](https://docs.microsoft.com/azure/active-directory/conditional-access/howto-conditional-access-policy-risk)<br>[User risk-based Conditional Access](https://docs.microsoft.com/azure/active-directory/conditional-access/howto-conditional-access-policy-risk-user)<br>[Continuous Access Evaluation](https://docs.microsoft.com/azure/active-directory/conditional-access/concept-continuous-access-evaluation)
-| AC-12(1)| **Provide a logout capability for all sessions and display an explicit logout message**. <p>All Azure AD surfaced web interfaces provide a logout capability for user-initiated communications sessions. When SAML applications are integrated with Azure AD, implement single sign-out. <p>Logout capability<br>When user selects “[Sign-out everywhere](https://aka.ms/mysignins)” all current issued tokens are revoked. <p>Display Message<br>Azure AD automatically displays a message after user-initiated logout.<br>![Image of access control message.](media/fedramp/fedramp-access-controls-image-1.png)<p>Additional Resources<br>[View and search your recent sign-in activity from the My Sign-ins page](https://docs.microsoft.com/azure/active-directory/user-help/my-account-portal-sign-ins-page)<br>[Single Sign-Out SAML Protocol](https://docs.microsoft.com/azure/active-directory/develop/single-sign-out-saml-protocol) |
-| AC-20<br>AC-20(1)| **Establish terms and conditions allowing authorized individuals to access the customer-deployed resources from external information systems such as unmanaged devices and external networks**.<p>Require terms of use acceptance for authorized users accessing resources from external systems. Implement Conditional Access policies to restrict access from external systems. Conditional Access policies may also be integrated with Microsoft Cloud App Security (MCAS) to provide additional controls for both cloud and on-premises applications from external systems. Mobile application management (MAM) in Intune can protect organization data at the application level, including custom apps and store apps, from managed devices interacting with external systems (for example, accessing cloud services). App management can be used on organization-owned devices, and personal devices.<P>Terms and Conditions<br>[Terms of use - Azure Active Directory](https://docs.microsoft.com/azure/active-directory/conditional-access/terms-of-use)<p>Conditional Access<br>[Require device to be marked as compliant](https://docs.microsoft.com/azure/active-directory/conditional-access/require-managed-devices)<br>[Conditions in Conditional Access policy - Device State (Preview)](https://docs.microsoft.com/azure/active-directory/conditional-access/concept-conditional-access-conditions)<br>[Protect with Microsoft Cloud App Security Conditional Access App Control](https://docs.microsoft.com/cloud-app-security/proxy-intro-aad)<br>[Location condition in Azure Active Directory Conditional Access](https://docs.microsoft.com/azure/active-directory/conditional-access/location-condition)<p>Mobile Device management<br>[What is Microsoft Intune?](https://docs.microsoft.com/mem/intune/fundamentals/what-is-intune)<br>[What is Cloud App Security?](https://docs.microsoft.com/cloud-app-security/what-is-cloud-app-security)<br>[What is app management in Microsoft Intune?](https://docs.microsoft.com/mem/intune/apps/app-management)<p>Resources<br>[Integrate on-premises apps with Cloud App Security](https://docs.microsoft.com/azure/active-directory/manage-apps/application-proxy-integrate-with-microsoft-cloud-application-security) |
+| AC-02 | **Implement account lifecycle management for customer-controlled accounts. Monitor the use of accounts and notify account managers of account lifecycle events. Review accounts for compliance with account management requirements every month for privileged access and every six (6) months for non-privileged access**.<p>Use Azure AD to provision accounts from external HR systems, on-premises Active Directory, or directly in the cloud. All account lifecycle operations are audited within the Azure AD audit logs. Logs can be collected and analyzed by a Security Information and Event Management (SIEM) solution such as Azure Sentinel. Alternatively, you can use Azure Event Hubs to integrate logs with third-party SIEM solutions to enable monitoring and notification. Use Azure AD entitlement management with access reviews to verify the compliance status of accounts.<p>Provision accounts<br>[Plan cloud HR application to Azure Active Directory user provisioning](/azure/active-directory/app-provisioning/plan-cloud-hr-provision)<br>[Azure AD Connect sync: Understand and customize synchronization](/azure/active-directory/hybrid/how-to-connect-sync-whatis)<br>[Add or delete users - Azure Active Directory](/azure/active-directory/fundamentals/add-users-azure-active-directory)<p>Monitor accounts<br>[Audit activity reports in the Azure Active Directory portal](/azure/active-directory/reports-monitoring/concept-audit-logs)<br>[Connect Azure Active Directory data to Azure Sentinel](/azure/sentinel/connect-azure-active-directory)<br>[Tutorial - Stream logs to an Azure event hub](/azure/active-directory/reports-monitoring/tutorial-azure-monitor-stream-logs-to-event-hub)<p>Review accounts<br>[What is entitlement management? - Azure AD](/azure/active-directory/governance/entitlement-management-overview)<br>[Create an access review of an access package in Azure AD entitlement management](/azure/active-directory/governance/entitlement-management-access-reviews-create)<br>[Review access of an access package in Azure AD entitlement management](/azure/active-directory/governance/entitlement-management-access-reviews-review-access)<p>Resources:<br>[Administrator role permissions in Azure Active Directory](/azure/active-directory/roles/permissions-reference)<br>[Dynamic Groups in Azure AD](/azure/active-directory/enterprise-users/groups-create-rule) |
+| AC-02(1)| **Employ automated mechanisms to support management of customer-controlled accounts.**<p>Configure automated provisioning of customer-controlled accounts from external HR systems or on-premises Active Directory. For applications that support application provisioning, configure Azure AD to automatically create user identities and roles in cloud (SaaS) applications that users need access to. In addition to creating user identities, automatic provisioning includes the maintenance and removal of user identities as status or roles change. Ease monitoring of account usage by streaming Identity Protection logs (risky users, risky sign-ins, and risk detections) and audit logs directly into Azure Sentinel or Azure Event Hubs.<p>Provision<br>[Plan cloud HR application to Azure Active Directory user provisioning](/azure/active-directory/app-provisioning/plan-cloud-hr-provision)<br>[Azure AD Connect sync: Understand and customize synchronization](/azure/active-directory/hybrid/how-to-connect-sync-whatis)<br>[What is automated SaaS app user provisioning in Azure AD?](/azure/active-directory/app-provisioning/user-provisioning)<br>[SaaS App Integration Tutorials for use with Azure AD](/azure/active-directory/saas-apps/tutorial-list)<p>Monitor & Audit<br>[How To: Investigate risk](/azure/active-directory/identity-protection/howto-identity-protection-investigate-risk)<br>[Audit activity reports in the Azure Active Directory portal](/azure/active-directory/reports-monitoring/concept-audit-logs)<br>[What is Azure Sentinel?](/azure/sentinel/overview)<br>[Azure Sentinel: Connect data from Azure Active Directory (Azure AD)](/azure/sentinel/connect-azure-active-directory)<br>[Stream Azure Active Directory logs to an Azure event hub](/azure/active-directory/reports-monitoring/tutorial-azure-monitor-stream-logs-to-event-hub) |
+| AC-02(2)<br>AC-02(3)| **Employ automated mechanisms to support automatically removing or disabling temporary and emergency accounts after 24 hours from last use and all customer-controlled accounts after 35 days of inactivity**.<p>Implement account management automation with Microsoft Graph and Azure AD PowerShell; a minimal sketch follows this table. Use Microsoft Graph to monitor sign-in activity and Azure AD PowerShell to take action on accounts within the required timeframe. <p>Determine Inactivity<br>[How to manage inactive user accounts in Azure AD](/azure/active-directory/reports-monitoring/howto-manage-inactive-user-accounts)<br>[How to manage stale devices in Azure AD](/azure/active-directory/devices/manage-stale-devices)<p>Remove or Disable Accounts<br>[Working with users in Microsoft Graph](/graph/api/resources/users)<br>[Get a user](/graph/api/user-get?tabs=http)<br>[Update user](/graph/api/user-update?tabs=http)<br>[Delete a user](/graph/api/user-delete?tabs=http)<p>Working with devices in Microsoft Graph<br>[Get device](/graph/api/device-get?tabs=http)<br>[Update device](/graph/api/device-update?tabs=http)<br>[Delete device](/graph/api/device-delete?tabs=http)<p>Using [Azure AD PowerShell](/powershell/module/azuread/)<br>[Get-AzureADUser](/powershell/module/azuread/get-azureaduser)<br>[Set-AzureADUser](/powershell/module/azuread/set-azureaduser)<br>[Get-AzureADDevice](/powershell/module/azuread/get-azureaddevice)<br>[Set-AzureADDevice](/powershell/module/azuread/set-azureaddevice) |
+| AC-02(4)| **Implement an automated audit and notification system for the lifecycle of managing customer-controlled accounts**.<p>All account lifecycle operations (account creation, modification, enabling, disabling, and removal actions) are audited within the Azure AD audit logs and can be streamed directly into Azure Sentinel or Azure Event Hubs to facilitate notification.<p>Audit<br>[Audit activity reports in the Azure Active Directory portal](/azure/active-directory/reports-monitoring/concept-audit-logs)<br>[Azure Sentinel: Connect data from Azure Active Directory (Azure AD)](/azure/sentinel/connect-azure-active-directory)<p>Notification<br>[What is Azure Sentinel?](/azure/sentinel/overview)<br>[Stream Azure Active Directory logs to an Azure event hub](/azure/active-directory/reports-monitoring/tutorial-azure-monitor-stream-logs-to-event-hub) |
+| AC-02(5)| **Implement device logout after a 15-minute period of inactivity**.<p>Implement device lock using a Conditional Access policy that restricts access to compliant devices. Configure policy settings on the device to enforce device lock at the OS level with MDM solutions such as Microsoft Intune. Microsoft Endpoint Manager (MEM) or group policy objects (GPO) can also be considered in hybrid deployments. For unmanaged devices, configure the Sign-In Frequency setting to force users to reauthenticate.<p>Conditional Access<br>[Require device to be marked as compliant](/azure/active-directory/conditional-access/require-managed-devices)<br>[User sign-in frequency](/azure/active-directory/conditional-access/howto-conditional-access-session-lifetime)<p>MDM Policy<br>Configure devices for the maximum minutes of inactivity until the screen locks and requires a password to unlock ([Android](/mem/intune/configuration/device-restrictions-android), [iOS](/mem/intune/configuration/device-restrictions-ios), [Windows 10](/mem/intune/configuration/device-restrictions-windows-10)) |
+| AC-02(7)| **Administer and monitor privileged role assignments in accordance with a role-based access control (RBAC) scheme for customer-controlled accounts, including disabling or revoking privileged access for accounts when no longer appropriate**.<p>Implement Privileged Identity Management (PIM) with access reviews for privileged roles in Azure AD to monitor role assignments and remove role assignments when no longer appropriate. Audit logs can be streamed directly into Azure Sentinel or Azure Event Hubs to facilitate monitoring.<p>Administer<br>[What is Azure AD Privileged Identity Management?](/azure/active-directory/privileged-identity-management/pim-configure)<br>[Activation maximum duration](/azure/active-directory/privileged-identity-management/pim-how-to-change-default-settings?tabs=new)<p>Monitor<br>[Create an access review of Azure AD roles in Privileged Identity Management](/azure/active-directory/privileged-identity-management/pim-how-to-start-security-review)<br>[View audit history for Azure AD roles in Privileged Identity Management](/azure/active-directory/privileged-identity-management/pim-how-to-use-audit-log?tabs=new)<br>[Audit activity reports in the Azure Active Directory portal](/azure/active-directory/reports-monitoring/concept-audit-logs)<br>[What is Azure Sentinel?](/azure/sentinel/overview)<br>[Connect data from Azure Active Directory (Azure AD)](/azure/sentinel/connect-azure-active-directory)<br>[Tutorial: Stream Azure Active Directory logs to an Azure event hub](/azure/active-directory/reports-monitoring/tutorial-azure-monitor-stream-logs-to-event-hub) |
+| AC-02(11)| **Enforce usage of customer-controlled accounts to meet customer-defined conditions or circumstances**.<p>Create Conditional Access policies to enforce access control decisions across users and devices.<p>Conditional Access<br>[Create a Conditional Access policy](/azure/active-directory/authentication/tutorial-enable-azure-mfa?bc=/azure/active-directory/conditional-access/breadcrumb/toc.json&toc=/azure/active-directory/conditional-access/toc.json)<br>[What is Conditional Access?](/azure/active-directory/conditional-access/overview) |
+| AC-02(12)| **Monitor and report customer-controlled accounts with privileged access for atypical usage**.<p>Facilitate monitoring of atypical usage by streaming Identity Protection logs (for example, risky users, risky sign-ins, and risk detections) and audit logs (to facilitate correlation with privilege assignment) directly into a SIEM solution such as Azure Sentinel. You can also use Azure Event Hubs to integrate logs with third-party SIEM solutions.<p>Identity Protection<br>[What is Identity Protection?](/azure/active-directory/identity-protection/overview-identity-protection)<br>[How To: Investigate risk](/azure/active-directory/identity-protection/howto-identity-protection-investigate-risk)<br>[Azure Active Directory Identity Protection notifications](/azure/active-directory/identity-protection/howto-identity-protection-configure-notifications)<p>Monitor accounts<br>[What is Azure Sentinel?](/azure/sentinel/overview)<br>[Audit activity reports in the Azure Active Directory portal](/azure/active-directory/reports-monitoring/concept-audit-logs)<br>[Connect Azure Active Directory data to Azure Sentinel](/azure/sentinel/connect-azure-active-directory)<br>[Tutorial - Stream logs to an Azure event hub](/azure/active-directory/reports-monitoring/tutorial-azure-monitor-stream-logs-to-event-hub) |
+| AC-02(13)| **Disable customer-controlled accounts of users posing a significant risk within 1 hour**.<p>In Azure AD Identity Protection, configure and enable a user risk policy with the threshold set to High. Create Conditional Access policies to block access for risky users and risky sign-ins. Configure risk policies to allow users to self-remediate and unblock subsequent sign-in attempts.<p>Identity Protection<br>[What is Identity Protection?](/azure/active-directory/identity-protection/overview-identity-protection)<p>Conditional Access<br>[What is Conditional Access?](/azure/active-directory/conditional-access/overview)<br>[Create a Conditional Access policy](/azure/active-directory/authentication/tutorial-enable-azure-mfa?bc=/azure/active-directory/conditional-access/breadcrumb/toc.json&toc=/azure/active-directory/conditional-access/toc.json)<br>[Conditional Access: User risk-based Conditional Access](/azure/active-directory/conditional-access/howto-conditional-access-policy-risk-user)<br>[Conditional Access: Sign-in risk-based Conditional Access](/azure/active-directory/conditional-access/howto-conditional-access-policy-risk)<br>[Self-remediation with risk policy](/azure/active-directory/identity-protection/howto-identity-protection-remediate-unblock) |
+| AC-06(7)| **Review and validate all users with privileged access every year and ensure privileges are reassigned (or removed if necessary) to align with organizational mission and business requirements**.<p>Use Azure AD entitlement management with access reviews for privileged users to verify whether privileged access is still required. <p>Access Reviews<br>[What is Azure AD entitlement management?](/azure/active-directory/governance/entitlement-management-overview)<br>[Create an access review of Azure AD roles in Privileged Identity Management](/azure/active-directory/privileged-identity-management/pim-how-to-start-security-review)<br>[Review access of an access package in Azure AD entitlement management](/azure/active-directory/governance/entitlement-management-access-reviews-review-access) |
+| AC-07| **Enforce a limit of no more than three consecutive failed login attempts on customer-deployed resources within a 15-minute period and lock the account for a minimum of three (3) hours or until unlocked by an administrator**.<p>Enable custom smart lockout settings. Configure the lockout threshold and lockout duration (in seconds) to implement these requirements. <p>Smart Lockout<br>[Protect user accounts from attacks with Azure Active Directory smart lockout](/azure/active-directory/authentication/howto-password-smart-lockout)<br>[Manage Azure AD smart lockout values](/azure/active-directory/authentication/howto-password-smart-lockout) |
+| AC-08| **Display and require user acknowledgment of privacy and security notices before granting access to information systems**.<p>Azure AD provides administrators with the ability to deliver notification or banner messages for all apps that require and record acknowledgment before granting access. These terms of use policies can be granularly targeted to specific users (Member or Guest) and customized per application via Conditional Access policies.<p>Terms of Use<br>[Azure Active Directory terms of use](/azure/active-directory/conditional-access/terms-of-use)<br>[View report of who has accepted and declined](/azure/active-directory/conditional-access/terms-of-use) |
+| AC-10|**Limit concurrent sessions to three sessions for privileged access and two for non-privileged access**. <p>In today's world where users connect from multiple devices (sometimes simultaneously), limiting concurrent sessions leads to a degraded user experience while providing limited security value. A better approach to address the intent behind this control is to adopt a zero trust security posture where conditions are explicitly validated before a session is created, and continually throughout the life of a session. <p>Additionally, use the following compensating controls. <p>Use Conditional Access policies to restrict access to compliant devices. Configure policy settings on the device to enforce user sign-on restrictions at the OS level with MDM solutions such as Microsoft Intune. Microsoft Endpoint Manager (MEM) or group policy objects (GPO) can also be considered in hybrid deployments.<p> Use Privileged Identity Management (PIM) to further restrict and control privileged accounts. <p> Configure smart lockout for invalid sign-in attempts.<p>**Implementation guidance** <p>Zero Trust<br> [Securing identity with Zero Trust](/security/zero-trust/identity)<br>[Continuous access evaluation in Azure AD](/azure/active-directory/conditional-access/concept-continuous-access-evaluation)<p>Conditional Access<br>[What is Conditional Access in Azure AD?](/azure/active-directory/conditional-access/overview)<br>[Require device to be marked as compliant](/azure/active-directory/conditional-access/require-managed-devices)<br>[User sign-in frequency](/azure/active-directory/conditional-access/howto-conditional-access-session-lifetime)<p>Device Policies<br>[Use PowerShell scripts on Windows 10 devices in Intune](/mem/intune/apps/intune-management-extension)<br>[Additional smart card Group Policy settings and registry keys](/windows/security/identity-protection/smart-cards/smart-card-group-policy-and-registry-settings)<br>[Microsoft Endpoint Manager overview](/mem/endpoint-manager-overview)<p>Resources<br>[What is Azure AD Privileged Identity Management?](/azure/active-directory/privileged-identity-management/pim-configure)<br>[Protect user accounts from attacks with Azure Active Directory smart lockout](/azure/active-directory/authentication/howto-password-smart-lockout)<p>See AC-12 for additional session re-evaluation & risk mitigation guidance. |
+| AC-11<br>AC-11(1)| **Implement a session lock after a 15-minute period of inactivity or upon receiving a request from a user and retain the session lock until the user reauthenticates. Conceal previously visible information when a session lock is initiated.**<p> Implement device lock using a Conditional Access policy to restrict access to compliant devices. Configure policy settings on the device to enforce device lock at the OS level with MDM solutions such as Microsoft Intune. Microsoft Endpoint Manager (MEM) or group policy objects (GPO) can also be considered in hybrid deployments. For unmanaged devices, configure the Sign-In Frequency setting to force users to reauthenticate.<p>Conditional Access<br>[Require device to be marked as compliant](/azure/active-directory/conditional-access/require-managed-devices)<br>[User sign-in frequency](/azure/active-directory/conditional-access/howto-conditional-access-session-lifetime)<p>MDM Policy<br>Configure devices for maximum minutes of inactivity until screen locks ([Android](/mem/intune/configuration/device-restrictions-android), [iOS](/mem/intune/configuration/device-restrictions-ios), [Windows 10](/mem/intune/configuration/device-restrictions-windows-10)) |
+| AC-12| **Automatically terminate user sessions when organization-defined conditions or trigger events occur**.<p>Implement automatic user session re-evaluation with Azure AD features such as Risk-Based Conditional Access and Continuous Access Evaluation. Inactivity conditions can be implemented at a device level as described in AC-11.<br>[Sign-in risk-based Conditional Access](/azure/active-directory/conditional-access/howto-conditional-access-policy-risk)<br>[User risk-based Conditional Access](/azure/active-directory/conditional-access/howto-conditional-access-policy-risk-user)<br>[Continuous Access Evaluation](/azure/active-directory/conditional-access/concept-continuous-access-evaluation) |
+| AC-12(1)| **Provide a logout capability for all sessions and display an explicit logout message**. <p>All Azure AD surfaced web interfaces provide a logout capability for user-initiated communications sessions. When SAML applications are integrated with Azure AD, implement single sign-out. <p>Logout capability<br>When a user selects "[Sign-out everywhere](https://aka.ms/mysignins)", all currently issued tokens are revoked. <p>Display Message<br>Azure AD automatically displays a message after user-initiated logout.<br>![Image of access control message.](media/fedramp/fedramp-access-controls-image-1.png)<p>Additional Resources<br>[View and search your recent sign-in activity from the My Sign-ins page](/azure/active-directory/user-help/my-account-portal-sign-ins-page)<br>[Single Sign-Out SAML Protocol](/azure/active-directory/develop/single-sign-out-saml-protocol) |
+| AC-20<br>AC-20(1)| **Establish terms and conditions allowing authorized individuals to access the customer-deployed resources from external information systems such as unmanaged devices and external networks**.<p>Require terms of use acceptance for authorized users accessing resources from external systems. Implement Conditional Access policies to restrict access from external systems. Conditional Access policies may also be integrated with Microsoft Cloud App Security (MCAS) to provide additional controls for both cloud and on-premises applications from external systems. Mobile application management (MAM) in Intune can protect organization data at the application level, including custom apps and store apps, from managed devices interacting with external systems (for example, accessing cloud services). App management can be used on both organization-owned and personal devices.<p>Terms and Conditions<br>[Terms of use - Azure Active Directory](/azure/active-directory/conditional-access/terms-of-use)<p>Conditional Access<br>[Require device to be marked as compliant](/azure/active-directory/conditional-access/require-managed-devices)<br>[Conditions in Conditional Access policy - Device State (Preview)](/azure/active-directory/conditional-access/concept-conditional-access-conditions)<br>[Protect with Microsoft Cloud App Security Conditional Access App Control](/cloud-app-security/proxy-intro-aad)<br>[Location condition in Azure Active Directory Conditional Access](/azure/active-directory/conditional-access/location-condition)<p>Mobile device management<br>[What is Microsoft Intune?](/mem/intune/fundamentals/what-is-intune)<br>[What is Cloud App Security?](/cloud-app-security/what-is-cloud-app-security)<br>[What is app management in Microsoft Intune?](/mem/intune/apps/app-management)<p>Resources<br>[Integrate on-premises apps with Cloud App Security](/azure/active-directory/manage-apps/application-proxy-integrate-with-microsoft-cloud-application-security) |
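The AC-02(2)/AC-02(3) row above calls for automating inactivity-based account actions with Microsoft Graph and Azure AD PowerShell. The following is a minimal sketch of one possible approach, not official guidance: it assumes a Microsoft Graph access token (acquired elsewhere, for an app registration with the `AuditLog.Read.All` and `User.Read.All` permissions) is held in a hypothetical `$graphToken` variable, that `Connect-AzureAD` has already been run, and that the `signInActivity` property is available (exposed on the Graph beta endpoint at the time of writing). Paging through `@odata.nextLink` and error handling are omitted for brevity.

```powershell
# Minimal sketch (AC-02(3)): disable customer-controlled accounts after 35 days of inactivity.
# Assumptions: $graphToken holds a valid Microsoft Graph access token; Connect-AzureAD has run.
$threshold = (Get-Date).ToUniversalTime().AddDays(-35)

# signInActivity is exposed on the beta endpoint; paging via @odata.nextLink is omitted here.
$uri = "https://graph.microsoft.com/beta/users?`$select=id,userPrincipalName,signInActivity"
$users = (Invoke-RestMethod -Uri $uri -Headers @{ Authorization = "Bearer $graphToken" }).value

foreach ($user in $users) {
    $lastSignIn = $user.signInActivity.lastSignInDateTime
    # Disable the account when the last recorded sign-in predates the 35-day window.
    # Users with no recorded sign-in activity are skipped in this sketch.
    if ($lastSignIn -and ([datetime]$lastSignIn -lt $threshold)) {
        Set-AzureADUser -ObjectId $user.id -AccountEnabled $false
    }
}
```

Before running anything like this in production, exclude emergency access (break-glass) accounts and follow the query pattern in the linked article "How to manage inactive user accounts in Azure AD".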
## Next steps [FedRAMP compliance overview](configure-azure-active-directory-for-fedramp-high-impact.md)
active-directory Fedramp Identification And Authentication Controls https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/standards/fedramp-identification-and-authentication-controls.md
The following list of controls (and control enhancements) in the Identification and authentication family may require configuration in your Azure AD tenant.
-Each row in the table below provides prescriptive guidance to aid you in developing your organization’s response to any shared responsibilities regarding the control and/or control enhancement.
+Each row in the table below provides prescriptive guidance to aid you in developing your organization's response to any shared responsibilities regarding the control and/or control enhancement.
IA-02 Identification and Authentication (Organizational Users)
IA-08 Identification and Authentication (Non-Organizational Users)
| Control ID and subpart | Customer responsibilities and guidance |
| - | - |
-| IA-02| **Uniquely identify and authenticate users or processes acting on behalf of users.**<p>Azure AD uniquely identifies user and service principal objects directly and provides multiple authentication methods, including methods adhering to NIST Authentication Assurance Level (AAL) 3, that can be configured.<p>Identifiers <br> Users - [Working with users in Microsoft Graph : ID Property](https://docs.microsoft.com/graph/api/resources/users?view=graph-rest-1.0)<br>Service Principals - [ServicePrincipal resource type : ID Property](https://docs.microsoft.com/graph/api/resources/serviceprincipal?view=graph-rest-1.0)<p>Authentication & Multi-Factor Authentication<br> [Achieving National Institute of Standards and Technology Authenticator Assurance Levels with the Microsoft Identity Platform.](nist-overview.md) |
-| IA-02(1)<br>IA-02(3)| **Multi-factor authentication (MFA) for all access to privileged accounts**. <p>Configure the following elements for a complete solution to ensure all access to privileged accounts requires MFA.<p>Configure Conditional Access policies to require MFA for all users.<br> Implement Privileged Identity Management (PIM) to require MFA for activation of privileged role assignment prior to use.<p>With the PIM activation requirement in place, privileged account activation is not possible without network access. Hence, local access is never privileged.<p>MFA & PIM<br> [Conditional Access - Require MFA for all users](https://docs.microsoft.com/azure/active-directory/conditional-access/howto-conditional-access-policy-all-users-mfa)<br> [Configure Azure AD role settings in PIM](https://docs.microsoft.com/azure/active-directory/privileged-identity-management/pim-how-to-change-default-settings?tabs=new) |
-| IA-02(2)<br>IA-02(4)| **Implement multi-factor authentication for all access to non-privileged accounts**<p>Configure the following elements as an overall solution to ensure all access to non-privileged accounts requires MFA.<p> Configure Conditional Access policies to require MFA for all users.<br> Configure device management policies via MDM (such as Microsoft Intune), Microsoft Endpoint Manager (MEM) or group policy objects (GPO) to enforce use of specific authentication methods.<br> Configure Conditional Access policies to enforce device compliance.<p>Microsoft recommends using a multi-factor cryptographic hardware authenticator (e.g., FIDO2 security keys, Windows Hello for Business (with hardware TPM), or smart card) to achieve AAL3. If your organization is completely cloud-based, we recommend using FIDO2 security keys or Windows Hello for Business.<p>FIDO2 keys and Windows Hello for Business have not been validated at the required FIPS 140 Security Level; as such, federal customers would need to conduct a risk assessment and evaluation before accepting these authenticators as AAL3. For additional details regarding FIDO2 and Windows Hello for Business FIPS 140 validation, please refer to [Microsoft NIST AALs](nist-overview.md).<p>Guidance regarding MDM policies differs slightly based on authentication method; it is broken out below. <p>Smart Card / Windows Hello for Business<br> [Passwordless Strategy - Require Windows Hello for Business or smart card](https://docs.microsoft.com/windows/security/identity-protection/hello-for-business/passwordless-strategy)<br> [Require device to be marked as compliant](https://docs.microsoft.com/azure/active-directory/conditional-access/require-managed-devices)<br> [Conditional Access - Require MFA for all users](https://docs.microsoft.com/azure/active-directory/conditional-access/howto-conditional-access-policy-all-users-mfa)<p> Hybrid Only<br> [Passwordless Strategy - Configure user accounts to disallow password authentication](https://docs.microsoft.com/windows/security/identity-protection/hello-for-business/passwordless-strategy)<p> Smart Card Only<br>[Create a Rule to Send an Authentication Method Claim](https://docs.microsoft.com/windows-server/identity/ad-fs/operations/create-a-rule-to-send-an-authentication-method-claim)<br>[Configure Authentication Policies](https://docs.microsoft.com/windows-server/identity/ad-fs/operations/configure-authentication-policies)<p>FIDO2 Security Key<br> [Passwordless Strategy - Excluding the password credential provider](https://docs.microsoft.com/windows/security/identity-protection/hello-for-business/passwordless-strategy)<br> [Require device to be marked as compliant](https://docs.microsoft.com/azure/active-directory/conditional-access/require-managed-devices)<br> [Conditional Access - Require MFA for all users](https://docs.microsoft.com/azure/active-directory/conditional-access/howto-conditional-access-policy-all-users-mfa)<p>Authentication Methods<br> [Azure Active Directory passwordless sign-in (preview) | FIDO2 security keys](https://docs.microsoft.com/azure/active-directory/authentication/concept-authentication-passwordless)<br> [Passwordless security key sign-in Windows - Azure Active Directory](https://docs.microsoft.com/azure/active-directory/authentication/howto-authentication-passwordless-security-key-windows)<br> [ADFS: Certificate Authentication with Azure AD & Office 365](https://docs.microsoft.com/archive/blogs/samueld/adfs-certauth-aad-o365)<br> [How Smart Card Sign-in Works in Windows (Windows 10)](https://docs.microsoft.com/windows/security/identity-protection/smart-cards/smart-card-how-smart-card-sign-in-works-in-windows)<br> [Windows Hello for Business Overview (Windows 10)](https://docs.microsoft.com/windows/security/identity-protection/hello-for-business/hello-overview)<p>Additional Resources:<br> [Policy CSP - Windows Client Management](https://docs.microsoft.com/windows/client-management/mdm/policy-configuration-service-provider)<br> [Use PowerShell scripts on Windows 10 devices in Intune](https://docs.microsoft.com/mem/intune/apps/intune-management-extension)<br> [Plan a passwordless authentication deployment with Azure AD](https://docs.microsoft.com/azure/active-directory/authentication/howto-authentication-passwordless-deployment) |
-| IA-02(5)| **When multiple users have access to a shared or group account password, require each user to first authenticate using an individual authenticator**<p>Use an individual account per user. If a shared account is required, Azure AD permits binding of multiple authenticators to an account such that each user has an individual authenticator. <p> [How it works: Azure multi-factor authentication](https://docs.microsoft.com/azure/active-directory/authentication/concept-mfa-howitworks)<br> [Manage authentication methods for Azure AD multi-factor authentication](https://docs.microsoft.com/azure/active-directory/authentication/howto-mfa-userdevicesettings) |
-| IA-02(8)| **Implement replay-resistant authentication mechanisms for network access to privileged accounts**<p>Configure Conditional Access policies to require MFA for all users. All Azure AD authentication methods at Authentication Assurance Level 2 & 3 use either nonces or challenges and are resistant to replay attacks.<p>References:<br> [Conditional Access - Require MFA for all users](https://docs.microsoft.com/azure/active-directory/conditional-access/howto-conditional-access-policy-all-users-mfa)<br> [Achieving National Institute of Standards and Technology Authenticator Assurance Levels with the Microsoft Identity Platform.](nist-overview.md) |
-| IA-02(11)| **Implement Azure multi-factor authentication to access customer-deployed resources remotely such that one of the factors is provided by a device separate from the system gaining access where the device meets FIPS-140-2, NIAP Certification, or NSA approval**<p>See guidance for IA-02(1-4). Azure AD authentication methods that meet the separate device requirement at AAL3 are:<p> FIDO2 Security Keys<br> Windows Hello for Business with Hardware TPM (TPM is recognized as a valid “something you have” factor by NIST 800-63B Section 5.1.7.1)<br> Smart Card<p>References:<br>[Achieving National Institute of Standards and Technology Authenticator Assurance Levels with the Microsoft Identity Platform.](nist-overview.md)<br> [NIST 800-63B Section 5.1.7.1](https://pages.nist.gov/800-63-3/sp800-63b.html) |
-| IA-02(12)| **Accept and verify Personal Identity Verification (PIV) credentials. This control is not applicable if the customer does not deploy PIV credentials.**<p>Configure federated authentication using Active Directory Federation Services (ADFS) to accept PIV (certificate authentication) as both primary and multi-factor authentication methods and issue the MFA (MultipleAuthN) claim when PIV is used. Configure the federated domain in Azure AD with SupportsMFA to direct MFA requests originating at Azure AD to the ADFS. Alternatively, PIV can be used for sign-in on Windows devices and subsequently leverage Integrated Windows Authentication (IWA) along with Seamless Single Sign-On (SSSO). Windows Server & Client verify certificates by default when used for authentication. <p> [What is federation with Azure AD?](https://docs.microsoft.com/azure/active-directory/hybrid/whatis-fed)<br> [Configure AD FS support for user certificate authentication](https://docs.microsoft.com/windows-server/identity/ad-fs/operations/configure-user-certificate-authentication)<br> [Configure Authentication Policies](https://docs.microsoft.com/windows-server/identity/ad-fs/operations/configure-authentication-policies)<br> [Secure resources with Azure AD MFA and ADFS](https://docs.microsoft.com/azure/active-directory/authentication/howto-mfa-adfs)<br>[Set-MsolDomainFederationSettings](https://docs.microsoft.com/powershell/module/msonline/set-msoldomainfederationsettings?view=azureadps-1.0)<br> [Azure AD Connect: Seamless Single Sign-On](https://docs.microsoft.com/azure/active-directory/hybrid/how-to-connect-sso) |
-| IA-03| **Implement device identification and authentication prior to establishing a connection.**<p>Configure Azure AD to identify and authenticate Azure AD Registered, Azure AD Joined, and Azure AD Hybrid joined devices.<p> [What is a device identity?](https://docs.microsoft.com/azure/active-directory/devices/overview)<br> [Plan an Azure AD devices deployment](https://docs.microsoft.com/azure/active-directory/devices/plan-device-deployment)<br>[How To: Require managed devices for cloud app access with Conditional Access](https://docs.microsoft.com/azure/active-directory/conditional-access/require-managed-devices) |
-| IA-04<br>IA-04(4)| **Disable account identifiers after 35 days of inactivity and prevent their reuse for 2 years. Manage individual identifiers by uniquely identifying each individual (e.g., contractors, foreign nationals, etc.).**<p>Assign and manage individual account identifiers and status in Azure AD in accordance with existing organizational policies defined in AC-02. Follow AC-02(3) to automatically disable user and device accounts after 35 days of inactivity. Ensure that organizational policy maintains all accounts that remain in the disabled state for at least 2 years, after which they can be removed. <p>Determine Inactivity<br> [How to manage inactive user accounts in Azure AD](https://docs.microsoft.com/azure/active-directory/reports-monitoring/howto-manage-inactive-user-accounts)<br> [How to manage stale devices in Azure AD](https://docs.microsoft.com/azure/active-directory/devices/manage-stale-devices)<br> [See AC-02 guidance](fedramp-access-controls.md) |
-| IA-05| **Configure and manage information system authenticators.**<p>Azure AD supports a wide variety of authentication methods, which can be managed using your existing organizational policies. See guidance for authenticator selection in IA-02(1-4). Enable users in combined registration for SSPR and Azure AD MFA and require users to register a minimum of two acceptable multi-factor authentication methods to facilitate self-remediation. Administrators can revoke user-configured authenticators at any time with the authentication methods API. <p>Authenticator Strength/Protect Authenticator Content<br> [Achieving National Institute of Standards and Technology Authenticator Assurance Levels with the Microsoft Identity Platform.](nist-overview.md)<p>Authentication Methods & Combined Registration<br> [What authentication and verification methods are available in Azure Active Directory?](https://docs.microsoft.com/azure/active-directory/authentication/concept-authentication-methods)<br> [Combined registration for SSPR and Azure AD multi-factor authentication](https://docs.microsoft.com/azure/active-directory/authentication/concept-registration-mfa-sspr-combined)<p>Authenticator Revoke<br> [Azure AD authentication methods API overview](https://docs.microsoft.com/graph/api/resources/authenticationmethods-overview?view=graph-rest-beta) |
-| IA-05(1)| **Implement password-based authentication requirements.**<p>Per NIST SP 800-63B Section 5.1.1: Maintain a list of commonly used, expected, or compromised passwords.<p>With Azure AD Password Protection, default global banned password lists are automatically applied to all users in an Azure AD tenant. To support your own business and security needs, you can define entries in a custom banned password list. When users change or reset their passwords, these banned password lists are checked to enforce the use of strong passwords.<p>Microsoft strongly encourages passwordless strategies. This control is only applicable to password authenticators. Therefore, removing passwords as an available authenticator renders this control not applicable.<p>NIST Reference Documents:<br>[NIST Special Publication 800-63B](https://pages.nist.gov/800-63-3/sp800-63b.html)<br>[NIST Special Publication 800-53 Revision 5](https://nvlpubs.nist.gov/nistpubs/SpecialPublications/NIST.SP.800-53r5.pdf) - IA-5 - Control Enhancement (1)<p>Additional Resources:<br>[Eliminate bad passwords using Azure Active Directory Password Protection](https://docs.microsoft.com/azure/active-directory/authentication/concept-password-ban-bad) |
-| IA-05(2)| **Implement PKI-Based authentication requirements.**<p>Federate Azure AD via ADFS to implement PKI-based authentication. By default, ADFS validates certificates, locally caches revocation data and maps users to the authenticated identity in Active Directory. <p> Additional Resources:<br> [What is federation with Azure AD?](https://docs.microsoft.com/azure/active-directory/hybrid/whatis-fed)<br> [Configure AD FS support for user certificate authentication](https://docs.microsoft.com/windows-server/identity/ad-fs/operations/configure-user-certificate-authentication) |
-| IA-05(4)| **Employ automated tools to validate password strength requirements.** <p>Azure AD implements automated mechanisms that enforce password authenticator strength at creation. This automated mechanism can also be extended to enforce password authenticator strength for on-premises Active Directory. Revision 5 of NIST 800-53 has withdrawn IA-05(4) and incorporated the requirement into IA-5(1).<p>Additional Resources:<br> [Eliminate bad passwords using Azure Active Directory Password Protection](https://docs.microsoft.com/azure/active-directory/authentication/concept-password-ban-bad)<br> [Azure AD Password Protection for Active Directory Domain Services](https://docs.microsoft.com/azure/active-directory/authentication/concept-password-ban-bad-on-premises)<br>[NIST Special Publication 800-53 Revision 5](https://nvlpubs.nist.gov/nistpubs/SpecialPublications/NIST.SP.800-53r5.pdf) - IA-5 - Control Enhancement (4) |
+| IA-02| **Uniquely identify and authenticate users or processes acting on behalf of users.**<p>Azure AD uniquely identifies user and service principal objects directly and provides multiple authentication methods, including methods adhering to NIST Authentication Assurance Level (AAL) 3, that can be configured.<p>Identifiers <br> Users - [Working with users in Microsoft Graph : ID Property](/graph/api/resources/users?view=graph-rest-1.0&preserve-view=true)<br>Service Principals - [ServicePrincipal resource type : ID Property](/graph/api/resources/serviceprincipal?view=graph-rest-1.0&preserve-view=true)<p>Authentication & Multi-Factor Authentication<br> [Achieving National Institute of Standards and Technology Authenticator Assurance Levels with the Microsoft Identity Platform.](nist-overview.md) |
+| IA-02(1)<br>IA-02(3)| **Multi-factor authentication (MFA) for all access to privileged accounts**. <p>Configure the following elements for a complete solution to ensure all access to privileged accounts requires MFA.<p>Configure Conditional Access policies to require MFA for all users.<br> Implement Privileged Identity Management (PIM) to require MFA for activation of privileged role assignment prior to use.<p>With the PIM activation requirement in place, privileged account activation is not possible without network access. Hence, local access is never privileged.<p>MFA & PIM<br> [Conditional Access - Require MFA for all users](../conditional-access/howto-conditional-access-policy-all-users-mfa.md)<br> [Configure Azure AD role settings in PIM](../privileged-identity-management/pim-how-to-change-default-settings.md?tabs=new) |
+| IA-02(2)<br>IA-02(4)| **Implement multi-factor authentication for all access to non-privileged accounts**<p>Configure the following elements as an overall solution to ensure all access to non-privileged accounts requires MFA.<p> Configure Conditional Access policies to require MFA for all users.<br> Configure device management policies via MDM (such as Microsoft Intune), Microsoft Endpoint Manager (MEM) or group policy objects (GPO) to enforce use of specific authentication methods.<br> Configure Conditional Access policies to enforce device compliance.<p>Microsoft recommends using a multi-factor cryptographic hardware authenticator (e.g., FIDO2 security keys, Windows Hello for Business (with hardware TPM), or smart card) to achieve AAL3. If your organization is completely cloud-based, we recommend using FIDO2 security keys or Windows Hello for Business.<p>FIDO2 keys and Windows Hello for Business have not been validated at the required FIPS 140 Security Level; as such, federal customers would need to conduct a risk assessment and evaluation before accepting these authenticators as AAL3. For additional details regarding FIDO2 and Windows Hello for Business FIPS 140 validation, please refer to [Microsoft NIST AALs](nist-overview.md).<p>Guidance regarding MDM policies differs slightly based on authentication method; it is broken out below. <p>Smart Card / Windows Hello for Business<br> [Passwordless Strategy - Require Windows Hello for Business or smart card](/windows/security/identity-protection/hello-for-business/passwordless-strategy)<br> [Require device to be marked as compliant](../conditional-access/require-managed-devices.md)<br> [Conditional Access - Require MFA for all users](../conditional-access/howto-conditional-access-policy-all-users-mfa.md)<p> Hybrid Only<br> [Passwordless Strategy - Configure user accounts to disallow password authentication](/windows/security/identity-protection/hello-for-business/passwordless-strategy)<p> Smart Card Only<br>[Create a Rule to Send an Authentication Method Claim](/windows-server/identity/ad-fs/operations/create-a-rule-to-send-an-authentication-method-claim)<br>[Configure Authentication Policies](/windows-server/identity/ad-fs/operations/configure-authentication-policies)<p>FIDO2 Security Key<br> [Passwordless Strategy - Excluding the password credential provider](/windows/security/identity-protection/hello-for-business/passwordless-strategy)<br> [Require device to be marked as compliant](../conditional-access/require-managed-devices.md)<br> [Conditional Access - Require MFA for all users](../conditional-access/howto-conditional-access-policy-all-users-mfa.md)<p>Authentication Methods<br> [Azure Active Directory passwordless sign-in (preview) | FIDO2 security keys](../authentication/concept-authentication-passwordless.md)<br> [Passwordless security key sign-in Windows - Azure Active Directory](../authentication/howto-authentication-passwordless-security-key-windows.md)<br> [ADFS: Certificate Authentication with Azure AD & Office 365](/archive/blogs/samueld/adfs-certauth-aad-o365)<br> [How Smart Card Sign-in Works in Windows (Windows 10)](/windows/security/identity-protection/smart-cards/smart-card-how-smart-card-sign-in-works-in-windows)<br> [Windows Hello for Business Overview (Windows 10)](/windows/security/identity-protection/hello-for-business/hello-overview)<p>Additional Resources:<br> [Policy CSP - Windows Client Management](/windows/client-management/mdm/policy-configuration-service-provider)<br> [Use PowerShell scripts on Windows 10 devices in Intune](/mem/intune/apps/intune-management-extension)<br> [Plan a passwordless authentication deployment with Azure AD](../authentication/howto-authentication-passwordless-deployment.md) |
+| IA-02(5)| **When multiple users have access to a shared or group account password, require each user to first authenticate using an individual authenticator**<p>Use an individual account per user. If a shared account is required, Azure AD permits binding of multiple authenticators to an account such that each user has an individual authenticator. <p> [How it works: Azure multi-factor authentication](../authentication/concept-mfa-howitworks.md)<br> [Manage authentication methods for Azure AD multi-factor authentication](../authentication/howto-mfa-userdevicesettings.md) |
+| IA-02(8)| **Implement replay-resistant authentication mechanisms for network access to privileged accounts**<p>Configure Conditional Access policies to require MFA for all users. All Azure AD authentication methods at Authentication Assurance Level 2 & 3 use either nonces or challenges and are resistant to replay attacks.<p>References:<br> [Conditional Access - Require MFA for all users](../conditional-access/howto-conditional-access-policy-all-users-mfa.md)<br> [Achieving National Institute of Standards and Technology Authenticator Assurance Levels with the Microsoft Identity Platform.](nist-overview.md) |
+| IA-02(11)| **Implement Azure multi-factor authentication to access customer-deployed resources remotely such that one of the factors is provided by a device separate from the system gaining access where the device meets FIPS-140-2, NIAP Certification, or NSA approval**<p>See guidance for IA-02(1-4). Azure AD authentication methods that meet the separate device requirement at AAL3 are:<p> FIDO2 Security Keys<br> Windows Hello for Business with Hardware TPM (TPM is recognized as a valid "something you have" factor by NIST 800-63B Section 5.1.7.1)<br> Smart Card<p>References:<br>[Achieving National Institute of Standards and Technology Authenticator Assurance Levels with the Microsoft Identity Platform.](nist-overview.md)<br> [NIST 800-63B Section 5.1.7.1](https://pages.nist.gov/800-63-3/sp800-63b.html) |
+| IA-02(12)| **Accept and verify Personal Identity Verification (PIV) credentials. This control is not applicable if the customer does not deploy PIV credentials.**<p>Configure federated authentication using Active Directory Federation Services (ADFS) to accept PIV (certificate authentication) as both primary and multi-factor authentication methods and issue the MFA (MultipleAuthN) claim when PIV is used. Configure the federated domain in Azure AD with SupportsMFA to direct MFA requests originating at Azure AD to the ADFS; a minimal sketch of this setting appears at the end of this section. Alternatively, PIV can be used for sign-in on Windows devices and subsequently leverage Integrated Windows Authentication (IWA) along with Seamless Single Sign-On (SSSO). Windows Server & Client verify certificates by default when used for authentication. <p> [What is federation with Azure AD?](../hybrid/whatis-fed.md)<br> [Configure AD FS support for user certificate authentication](/windows-server/identity/ad-fs/operations/configure-user-certificate-authentication)<br> [Configure Authentication Policies](/windows-server/identity/ad-fs/operations/configure-authentication-policies)<br> [Secure resources with Azure AD MFA and ADFS](../authentication/howto-mfa-adfs.md)<br>[Set-MsolDomainFederationSettings](/powershell/module/msonline/set-msoldomainfederationsettings)<br> [Azure AD Connect: Seamless Single Sign-On](../hybrid/how-to-connect-sso.md) |
+| IA-03| **Implement device identification and authentication prior to establishing a connection.**<p>Configure Azure AD to identify and authenticate Azure AD Registered, Azure AD Joined, and Azure AD Hybrid joined devices.<p> [What is a device identity?](../devices/overview.md)<br> [Plan an Azure AD devices deployment](../devices/plan-device-deployment.md)<br>[How To: Require managed devices for cloud app access with Conditional Access](../conditional-access/require-managed-devices.md) |
+| IA-04<br>IA-04(4)| **Disable account identifiers after 35 days of inactivity and prevent their reuse for 2 years. Manage individual identifiers by uniquely identifying each individual (e.g., contractors, foreign nationals, etc.).**<p>Assign and manage individual account identifiers and status in Azure AD in accordance with existing organizational policies defined in AC-02. Follow AC-02(3) to automatically disable user and device accounts after 35 days of inactivity. Ensure that organizational policy maintains all accounts that remain in the disabled state for at least 2 years, after which they can be removed. <p>Determine Inactivity<br> [How to manage inactive user accounts in Azure AD](../reports-monitoring/howto-manage-inactive-user-accounts.md)<br> [How to manage stale devices in Azure AD](../devices/manage-stale-devices.md)<br> [See AC-02 guidance](fedramp-access-controls.md) |
+| IA-05| **Configure and manage information system authenticators.**<p>Azure AD supports a wide variety of authentication methods, which can be managed using your existing organizational policies. See guidance for authenticator selection in IA-02(1-4). Enable users in combined registration for SSPR and Azure AD MFA and require users to register a minimum of two acceptable multi-factor authentication methods to facilitate self-remediation. Administrators can revoke user-configured authenticators at any time with the authentication methods API. <p>Authenticator Strength/Protect Authenticator Content<br> [Achieving National Institute of Standards and Technology Authenticator Assurance Levels with the Microsoft Identity Platform.](nist-overview.md)<p>Authentication Methods & Combined Registration<br> [What authentication and verification methods are available in Azure Active Directory?](../authentication/concept-authentication-methods.md)<br> [Combined registration for SSPR and Azure AD multi-factor authentication](../authentication/concept-registration-mfa-sspr-combined.md)<p>Authenticator Revoke<br> [Azure AD authentication methods API overview](/graph/api/resources/authenticationmethods-overview?view=graph-rest-beta&preserve-view=true) |
+| IA-05(1)| **Implement password-based authentication requirements.**<p>Per NIST SP 800-63B Section 5.1.1: Maintain a list of commonly used, expected, or compromised passwords.<p>With Azure AD Password Protection, default global banned password lists are automatically applied to all users in an Azure AD tenant. To support your own business and security needs, you can define entries in a custom banned password list. When users change or reset their passwords, these banned password lists are checked to enforce the use of strong passwords.<p>Microsoft strongly encourages passwordless strategies. This control is only applicable to password authenticators. Therefore, removing passwords as an available authenticator renders this control not applicable.<p>NIST Reference Documents:<br>[NIST Special Publication 800-63B](https://pages.nist.gov/800-63-3/sp800-63b.html)<br>[NIST Special Publication 800-53 Revision 5](https://nvlpubs.nist.gov/nistpubs/SpecialPublications/NIST.SP.800-53r5.pdf) - IA-5 - Control Enhancement (1)<p>Additional Resources:<br>[Eliminate bad passwords using Azure Active Directory Password Protection](../authentication/concept-password-ban-bad.md) |
+| IA-05(2)| **Implement PKI-Based authentication requirements.**<p>Federate Azure AD via ADFS to implement PKI-based authentication. By default, ADFS validates certificates, locally caches revocation data and maps users to the authenticated identity in Active Directory. <p> Additional Resources:<br> [What is federation with Azure AD?](../hybrid/whatis-fed.md)<br> [Configure AD FS support for user certificate authentication](/windows-server/identity/ad-fs/operations/configure-user-certificate-authentication) |
+| IA-05(4)| **Employ automated tools to validate password strength requirements.** <p>Azure AD implements automated mechanisms that enforce password authenticator strength at creation. This automated mechanism can also be extended to enforce password authenticator strength for on-premises Active Directory. Revision 5 of NIST 800-53 has withdrawn IA-05(4) and incorporated the requirement into IA-5(1).<p>Additional Resources:<br> [Eliminate bad passwords using Azure Active Directory Password Protection](../authentication/concept-password-ban-bad.md)<br> [Azure AD Password Protection for Active Directory Domain Services](../authentication/concept-password-ban-bad-on-premises.md)<br>[NIST Special Publication 800-53 Revision 5](https://nvlpubs.nist.gov/nistpubs/SpecialPublications/NIST.SP.800-53r5.pdf) - IA-5 - Control Enhancement (4) |
| IA-05(6)| **Protect authenticators as defined in FedRAMP High**.<p>For further details on how Azure AD protects authenticators, see [Azure Active Directory Data Security Considerations](https://aka.ms/aaddatawhitepaper). |
-| IA-05(7)| **Ensure unencrypted static authenticators (e.g., a password) are not embedded in applications or access scripts or stored on function keys.**<p>Implement managed identities or service principal objects (configured only with a certificate).<p>[What are managed identities for Azure resources?](https://docs.microsoft.com/azure/active-directory/managed-identities-azure-resources/overview)<br>[Create an Azure AD app & service principal in the portal](https://docs.microsoft.com/azure/active-directory/develop/howto-create-service-principal-portal) |
-| IA-05(8)| **Implement security safeguards when individuals have accounts on multiple information systems.**<p>Implement single sign-on (SSO) by connecting all applications to Azure AD, as opposed to having individual accounts on multiple information systems.<p>[What is Azure single sign-on (SSO)?](https://docs.microsoft.com/azure/active-directory/manage-apps/what-is-single-sign-on) |
+| IA-05(7)| **Ensure unencrypted static authenticators (e.g., a password) are not embedded in applications or access scripts or stored on function keys.**<p>Implement managed identities or service principal objects (configured only with a certificate).<p>[What are managed identities for Azure resources?](../managed-identities-azure-resources/overview.md)<br>[Create an Azure AD app & service principal in the portal](../develop/howto-create-service-principal-portal.md) |
+| IA-05(8)| **Implement security safeguards when individuals have accounts on multiple information systems.**<p>Implement single sign-on (SSO) by connecting all applications to Azure AD, as opposed to having individual accounts on multiple information systems.<p>[What is Azure single sign-on (SSO)?](../manage-apps/what-is-single-sign-on.md) |
| IA-05(11)| **Require hardware token quality requirements as required by FedRAMP High.**<p>Require the use of hardware tokens that meet AAL3.<p>Resources:<br> [Achieving National Institute of Standards and Technology Authenticator Assurance Levels with the Microsoft Identity Platform](https://azure.microsoft.com/resources/microsoft-nist/) |
-| IA-05(13)| **Enforce the expiration of cached authenticators.**<p>Cached authenticators are used to authenticate to the local machine when the network is not available. To limit the use of cached authenticators, configure Windows devices to disable their use. Where this is not possible or practical, use the following compensating controls:<p>Configure Conditional Access session controls using application-enforced restrictions for Office applications.<br> Configure Conditional Access using application controls for other applications.<p>Resources:<br> [Interactive logon Number of previous logons to cache](https://docs.microsoft.com/windows/security/threat-protection/security-policy-settings/interactive-logon-number-of-previous-logons-to-cache-in-case-domain-controller-is-not-available)<br> [Session controls in Conditional Access policy - Application enforced restrictions](https://docs.microsoft.com/azure/active-directory/conditional-access/concept-conditional-access-session)<br>[Session controls in Conditional Access policy - Conditional Access application control](https://docs.microsoft.com/azure/active-directory/conditional-access/concept-conditional-access-session) |
+| IA-05(13)| **Enforce the expiration of cached authenticators.**<p>Cached authenticators are used to authenticate to the local machine when the network is not available. To limit the use of cached authenticators, configure Windows devices to disable their use. Where this is not possible or practical, use the following compensating controls:<p>Configure conditional access session controls using application enforced restrictions for Office applications.<br> Configure conditional access using application controls for other applications.<p>Resources:<br> [Interactive logon Number of previous logons to cache](/windows/security/threat-protection/security-policy-settings/interactive-logon-number-of-previous-logons-to-cache-in-case-domain-controller-is-not-available)<br> [Session controls in Conditional Access policy - Application enforced restrictions](../conditional-access/concept-conditional-access-session.md)<br>[Session controls in Conditional Access policy - Conditional Access application control](../conditional-access/concept-conditional-access-session.md) |
| IA-06| **Obscure authentication feedback information during the authentication process.**<p>By default, Azure AD obscures all authenticator feedback. <p>
-| IA-07| **Implement mechanisms for authentication to a cryptographic module that meets applicable federal laws.**<p>FedRAMP High requires AAL3 authenticator. All authenticators supported by Azure AD at AAL3 provide mechanisms to authenticate operator access to the module as required. For example, in a Windows Hello for Business deployment with hardware TPM, configure the level of TPM owner authorization.<p> Resources:<br>See IA-02(2 & 4) for additional detail. Resources<br> [Achieving National Institute of Standards and Technology Authenticator Assurance Levels with the Microsoft Identity Platform.](nist-overview.md) <br> [TPM Group Policy settings](https://docs.microsoft.com/windows/security/information-protection/tpm/trusted-platform-module-services-group-policy-settings) |
-| IA-08| **The information system uniquely identifies and authenticates non-organizational users (or processes acting on behalf of non-organizational users).**<p>Azure AD uniquely identifies and authenticates non-organizational users homed in the organizations tenant or in external directories using FICAM approved protocols.<p> [What is B2B collaboration in Azure Active Directory](https://docs.microsoft.com/azure/active-directory/external-identities/what-is-b2b)<br> [Direct federation with an identity provider for B2B](https://docs.microsoft.com/azure/active-directory/external-identities/direct-federation)<br> [Properties of a B2B guest user](https://docs.microsoft.com/azure/active-directory/external-identities/user-properties) |
-| IA-08(1)<br>IA-08(4)| **Accept and verify Personal Identity Verification (PIV) credentials issued by other federal agencies. Conform to the profiles issued by the Federal Identity, Credential, and Access Management (FICAM).**<p>Configure Azure AD to accept PIV credentials via federation (OIDC, SAML) or locally via Windows Integrated Authentication (WIA)<p>Resources:<br> [What is federation with Azure AD?](https://docs.microsoft.com/azure/active-directory/hybrid/whatis-fed)<br> [Configure AD FS support for user certificate authentication](https://docs.microsoft.com/windows-server/identity/ad-fs/operations/configure-user-certificate-authentication)<br>[What is B2B collaboration in Azure Active Directory](https://docs.microsoft.com/azure/active-directory/external-identities/what-is-b2b)<br> [Direct federation with an identity provider for B2B](https://docs.microsoft.com/azure/active-directory/external-identities/direct-federation) |
-| IA-08(2)| **Accept only Federal Identity, Credential, and Access Management (FICAM) approved credentials.**<p>Azure AD supports authenticators at NIST Authentication Assurance Levels (AALs) 1, 2 & 3. Restrict the use of authenticators commensurate with the security category of the system being accessed. <p>Azure Active Directory supports a wide variety of authentication methods.<p>Resources<br> [What authentication and verification methods are available in Azure Active Directory?](https://docs.microsoft.com/azure/active-directory/authentication/concept-authentication-methods)<br> [Azure AD authentication methods policy API overview](https://docs.microsoft.com/graph/api/resources/authenticationmethodspolicies-overview?view=graph-rest-beta)<br> [Achieving National Institute of Standards and Technology Authenticator <br>Assurance Levels with the Microsoft Identity Platform](https://azure.microsoft.com/resources/microsoft-nist/) |
-
+| IA-07| **Implement mechanisms for authentication to a cryptographic module that meets applicable federal laws.**<p>FedRAMP High requires an AAL3 authenticator. All authenticators supported by Azure AD at AAL3 provide mechanisms to authenticate operator access to the module as required. For example, in a Windows Hello for Business deployment with hardware TPM, configure the level of TPM owner authorization.<p> Resources:<br>See IA-02(2 & 4) for additional detail.<br> [Achieving National Institute of Standards and Technology Authenticator Assurance Levels with the Microsoft Identity Platform](nist-overview.md) <br> [TPM Group Policy settings](/windows/security/information-protection/tpm/trusted-platform-module-services-group-policy-settings) |
+| IA-08| **The information system uniquely identifies and authenticates non-organizational users (or processes acting on behalf of non-organizational users).**<p>Azure AD uniquely identifies and authenticates non-organizational users homed in the organization's tenant or in external directories using FICAM-approved protocols.<p> [What is B2B collaboration in Azure Active Directory](../external-identities/what-is-b2b.md)<br> [Direct federation with an identity provider for B2B](../external-identities/direct-federation.md)<br> [Properties of a B2B guest user](../external-identities/user-properties.md) |
+| IA-08(1)<br>IA-08(4)| **Accept and verify Personal Identity Verification (PIV) credentials issued by other federal agencies. Conform to the profiles issued by the Federal Identity, Credential, and Access Management (FICAM).**<p>Configure Azure AD to accept PIV credentials via federation (OIDC, SAML) or locally via Windows Integrated Authentication (WIA).<p>Resources:<br> [What is federation with Azure AD?](../hybrid/whatis-fed.md)<br> [Configure AD FS support for user certificate authentication](/windows-server/identity/ad-fs/operations/configure-user-certificate-authentication)<br>[What is B2B collaboration in Azure Active Directory](../external-identities/what-is-b2b.md)<br> [Direct federation with an identity provider for B2B](../external-identities/direct-federation.md) |
+| IA-08(2)| **Accept only Federal Identity, Credential, and Access Management (FICAM) approved credentials.**<p>Azure AD supports authenticators at NIST Authenticator Assurance Levels (AALs) 1, 2 & 3. Restrict the use of authenticators commensurate with the security category of the system being accessed.<p>Azure Active Directory supports a wide variety of authentication methods.<p>Resources:<br> [What authentication and verification methods are available in Azure Active Directory?](../authentication/concept-authentication-methods.md)<br> [Azure AD authentication methods policy API overview](/graph/api/resources/authenticationmethodspolicies-overview?view=graph-rest-beta&preserve-view=true)<br> [Achieving National Institute of Standards and Technology Authenticator Assurance Levels with the Microsoft Identity Platform](https://azure.microsoft.com/resources/microsoft-nist/) |
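As an illustration of the IA-05(13) guidance above, the "Interactive logon: Number of previous logons to cache" policy is backed by a registry value on Windows. The following is a minimal sketch assuming you set the value directly; in production, prefer Group Policy or MDM policy over a direct registry write.

```powershell
# Minimal sketch: disable cached logons, equivalent to setting the policy
# "Interactive logon: Number of previous logons to cache" to 0.
# CachedLogonsCount is a REG_SZ value; the Windows default is 10.
Set-ItemProperty `
    -Path 'HKLM:\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon' `
    -Name 'CachedLogonsCount' `
    -Value '0'
```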
## Next Steps

[Configure access controls](fedramp-access-controls.md)
[Configure identification & authentication controls](fedramp-identification-and-authentication-controls.md)
[Configure other controls](fedramp-other-controls.md)
active-directory Fedramp Other Controls https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/standards/fedramp-other-controls.md
The following list of controls (and control enhancements) in the families below may require configuration in your Azure AD tenant.
-Each row in the following tables provides prescriptive guidance to aid you in developing your organizationΓÇÖs response to any shared responsibilities regarding the control and/or control enhancement.
+Each row in the following tables provides prescriptive guidance to aid you in developing your organization's response to any shared responsibilities regarding the control and/or control enhancement.
## Audit & Accountability
Each row in the following tables provides prescriptive guidance to aid you in developing your organization's response to any shared responsibilities regarding the control and/or control enhancement.
| Control ID and subpart| Customer responsibilities and guidance |
| - | - |
-| AU-02 <br>AU-03 <br>AU-03(1)<br>AU-03(2)| **Ensure the system is capable of auditing events defined in AU-02 Part a and coordinate with other entities within the organizationΓÇÖs subset of auditable events to support after-the-fact investigations. Implement centralized management of audit records**.<p>All account lifecycle operations (account creation, modification, enabling, disabling, and removal actions) are audited within the Azure AD audit logs. All authentication and authorization events are audited within Azure AD sign-in logs, and any detected risks are audited in the Identity Protection logs. Each of these logs can be streamed directly into a Security Information and Event Management (SIEM) solution such as Azure Sentinel. Alternatively, use Azure Event Hub to integrate logs with third-party SIEM solutions.<p>Audit Events<li> [Audit activity reports in the Azure Active Directory portal](https://docs.microsoft.com///azure/active-directory/reports-monitoring/concept-audit-logs)<li> [Sign-in activity reports in the Azure Active Directory portal](https://docs.microsoft.com///azure/active-directory/reports-monitoring/concept-sign-ins)<li>[How To: Investigate risk](https://docs.microsoft.com///azure/active-directory/identity-protection/howto-identity-protection-investigate-risk)<p>SIEM Integrations<li> [Azure Sentinel : Connect data from Azure Active Directory (Azure AD)](https://docs.microsoft.com///azure/sentinel/connect-azure-active-directory)<li>[Stream to Azure event hub and other SIEMs](https://docs.microsoft.com///azure/active-directory/reports-monitoring/tutorial-azure-monitor-stream-logs-to-event-hub) |
-| AU-06<br>AU-06(1)<br>AU-06(3)<br>AU-06(4)<br>AU-06(5)<br>AU-06(6)<br>AU-06(7)<br>AU-06(10)<br>| **Review and analyze audit records at least once each week to identify inappropriate or unusual activity and report findings to appropriate personnel**. <p>Guidance provided above for AU-02 & AU-03 allows for weekly review of audit records and reporting to appropriate personnel. You cannot meet these requirements using only Azure AD. You must also use a SIEM solution such as Azure Sentinel.<p>[What is Azure Sentinel?](https://docs.microsoft.com///azure/sentinel/overview) |
+| AU-02 <br>AU-03 <br>AU-03(1)<br>AU-03(2)| **Ensure the system is capable of auditing events defined in AU-02 Part a and coordinate with other entities within the organization's subset of auditable events to support after-the-fact investigations. Implement centralized management of audit records**.<p>All account lifecycle operations (account creation, modification, enabling, disabling, and removal actions) are audited within the Azure AD audit logs. All authentication and authorization events are audited within Azure AD sign-in logs, and any detected risks are audited in the Identity Protection logs. Each of these logs can be streamed directly into a Security Information and Event Management (SIEM) solution such as Azure Sentinel. Alternatively, use Azure Event Hubs to integrate logs with third-party SIEM solutions.<p>Audit Events<li> [Audit activity reports in the Azure Active Directory portal](../reports-monitoring/concept-audit-logs.md)<li> [Sign-in activity reports in the Azure Active Directory portal](../reports-monitoring/concept-sign-ins.md)<li>[How To: Investigate risk](../identity-protection/howto-identity-protection-investigate-risk.md)<p>SIEM Integrations<li> [Azure Sentinel : Connect data from Azure Active Directory (Azure AD)](../../sentinel/connect-azure-active-directory.md)<li>[Stream to Azure event hub and other SIEMs](../reports-monitoring/tutorial-azure-monitor-stream-logs-to-event-hub.md) |
+| AU-06<br>AU-06(1)<br>AU-06(3)<br>AU-06(4)<br>AU-06(5)<br>AU-06(6)<br>AU-06(7)<br>AU-06(10)<br>| **Review and analyze audit records at least once each week to identify inappropriate or unusual activity and report findings to appropriate personnel**. <p>Guidance provided above for AU-02 & AU-03 allows for weekly review of audit records and reporting to appropriate personnel. You cannot meet these requirements using only Azure AD. You must also use a SIEM solution such as Azure Sentinel.<p>[What is Azure Sentinel?](../../sentinel/overview.md) |
## Incident Response
Each row in the following tables provides prescriptive guidance to aid you in developing your organization's response to any shared responsibilities regarding the control and/or control enhancement.
| Control ID and subpart| Customer responsibilities and guidance |
| - | - |
-| IR-04<br>IR-04(1)<br>IR-04(2)<br>IR-04(3)<br>IR-04(4)<br>IR-04(6)<br>IR-04(8)<br>IR-05<br>IR-05(1)| **Implement incident handling and monitoring capabilities including Automated Incident Handling, Dynamic Reconfiguration, Continuity of Operations, Information Correlation, Insider Threats, Correlation with External Organizations, Incident Monitoring & Automated Tracking**. <p>All configuration changes are logged in the audit logs. Authentication and authorization events are audited within the sign-in logs, and any detected risks are audited in the Identity Protection logs. Each of these logs can be streamed directly into a Security Information and Event Management (SIEM) solution such as Azure Sentinel. Alternatively, use Azure Event Hub to integrate logs with third-party SIEM solutions. Automate dynamic reconfiguration based on events within the SIEM using MSGraph and/or Azure AD PowerShell.<p>Audit Events<br><li>[Audit activity reports in the Azure Active Directory portal](https://docs.microsoft.com///azure/active-directory/reports-monitoring/concept-audit-logs)<li>[Sign-in activity reports in the Azure Active Directory portal](https://docs.microsoft.com///azure/active-directory/reports-monitoring/concept-sign-ins)<li>[How To: Investigate risk](https://docs.microsoft.com///azure/active-directory/identity-protection/howto-identity-protection-investigate-risk)<p>SIEM Integrations<li>[Azure Sentinel : Connect data from Azure Active Directory (Azure AD)](https://docs.microsoft.com///azure/sentinel/connect-azure-active-directory)<li>[Stream to Azure event hub and other SIEMs](https://docs.microsoft.com///azure/active-directory/reports-monitoring/tutorial-azure-monitor-stream-logs-to-event-hub)<p>Dynamic Reconfiguration<li>[AzureAD Module](https://docs.microsoft.com/powershell/module/azuread/?view=azureadps-2.0)<li>[Overview of Microsoft Graph](https://docs.microsoft.com/graph/overview?view=graph-rest-1.0) |
+| IR-04<br>IR-04(1)<br>IR-04(2)<br>IR-04(3)<br>IR-04(4)<br>IR-04(6)<br>IR-04(8)<br>IR-05<br>IR-05(1)| **Implement incident handling and monitoring capabilities including Automated Incident Handling, Dynamic Reconfiguration, Continuity of Operations, Information Correlation, Insider Threats, Correlation with External Organizations, Incident Monitoring & Automated Tracking**. <p>All configuration changes are logged in the audit logs. Authentication and authorization events are audited within the sign-in logs, and any detected risks are audited in the Identity Protection logs. Each of these logs can be streamed directly into a Security Information and Event Management (SIEM) solution such as Azure Sentinel. Alternatively, use Azure Event Hubs to integrate logs with third-party SIEM solutions. Automate dynamic reconfiguration based on events within the SIEM using Microsoft Graph and/or Azure AD PowerShell.<p>Audit Events<br><li>[Audit activity reports in the Azure Active Directory portal](../reports-monitoring/concept-audit-logs.md)<li>[Sign-in activity reports in the Azure Active Directory portal](../reports-monitoring/concept-sign-ins.md)<li>[How To: Investigate risk](../identity-protection/howto-identity-protection-investigate-risk.md)<p>SIEM Integrations<li>[Azure Sentinel : Connect data from Azure Active Directory (Azure AD)](../../sentinel/connect-azure-active-directory.md)<li>[Stream to Azure event hub and other SIEMs](../reports-monitoring/tutorial-azure-monitor-stream-logs-to-event-hub.md)<p>Dynamic Reconfiguration<li>[AzureAD Module](/powershell/module/azuread/)<li>[Overview of Microsoft Graph](/graph/overview?view=graph-rest-1.0&preserve-view=true) |
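As a hedged illustration of the dynamic reconfiguration guidance above, a SIEM playbook could call the AzureAD PowerShell module to contain a compromised account; the user principal name below is a placeholder for whatever your SIEM passes in.

```powershell
# Minimal sketch: disable a user and revoke refresh tokens in response to a
# SIEM incident. Requires the AzureAD module and an existing Connect-AzureAD session.
$userUpn = 'alice@contoso.com'   # placeholder supplied by the SIEM playbook

$user = Get-AzureADUser -ObjectId $userUpn
Set-AzureADUser -ObjectId $user.ObjectId -AccountEnabled $false   # block sign-in
Revoke-AzureADUserAllRefreshToken -ObjectId $user.ObjectId        # invalidate existing sessions
```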
-
-
## Personnel Security

* PS-04 Personnel termination

| Control ID and subpart| Customer responsibilities and guidance |
| - | - |
-| PS-04<br>PS-04(2)| **Automatically notify personnel responsible for disabling access to the system.** <p>Disable accounts and revoke all associated authenticators and credentials within 8 hours. <p>Configure provisioning (including disablement upon termination) of accounts in Azure AD from external HR systems, on-premises Active Directory, or directly in the cloud. Terminate all system access by revoking existing sessions. <p>Account Provisioning<li> See detailed guidance in AC-02. <p>Revoke all Associated Authenticators. <li> [Revoke user access in an emergency in Azure Active Directory](https://docs.microsoft.com/azure/active-directory/enterprise-users/users-revoke-access) |
+| PS-04<br>PS-04(2)| **Automatically notify personnel responsible for disabling access to the system.** <p>Disable accounts and revoke all associated authenticators and credentials within 8 hours. <p>Configure provisioning (including disablement upon termination) of accounts in Azure AD from external HR systems, on-premises Active Directory, or directly in the cloud. Terminate all system access by revoking existing sessions. <p>Account Provisioning<li> See detailed guidance in AC-02. <p>Revoke all Associated Authenticators. <li> [Revoke user access in an emergency in Azure Active Directory](../enterprise-users/users-revoke-access.md) |
## System & Information Integrity
Each row in the following tables provides prescriptive guidance to aid you in developing your organization's response to any shared responsibilities regarding the control and/or control enhancement.
[Configure identification & authentication controls](fedramp-identification-and-authentication-controls.md)
[Configure other controls](fedramp-other-controls.md)
-
+
active-directory Nist About Authenticator Assurance Levels https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/standards/nist-about-authenticator-assurance-levels.md
The standard includes AAL requirements for 11 requirement categories:
> [!TIP]
> We recommend that you meet at least AAL2, unless business reasons, industry standards, or compliance requirements dictate that you meet AAL3.
-In general, AAL1 isn't recommended because it accepts password-only solutions, and passwords are the most easily compromised form of authentication. See [Your Pa$$word doesnΓÇÖt matter](https://techcommunity.microsoft.com/t5/azure-active-directory-identity/your-pa-word-doesn-t-matter/ba-p/731984).
+In general, AAL1 isn't recommended because it accepts password-only solutions, and passwords are the most easily compromised form of authentication. See [Your Pa$$word doesn't matter](https://techcommunity.microsoft.com/t5/azure-active-directory-identity/your-pa-word-doesn-t-matter/ba-p/731984).
-While NIST doesn't require verifier impersonation (also known as credential phishing) resistance until AAL3, we highly advise that you address this threat at all levels. You can select authenticators that provide verifier impersonation resistance, such as requiring Azure AD joined or hybrid Azure AD joined devices. If you're using Office 365 you can address use Office 365 Advanced Threat Protection, and specifically [Anti-phishing policies](https://docs.microsoft.com/microsoft-365/security/office-365-security/set-up-anti-phishing-policies?view=o365-worldwide).
+While NIST doesn't require verifier impersonation (also known as credential phishing) resistance until AAL3, we highly advise that you address this threat at all levels. You can select authenticators that provide verifier impersonation resistance, such as requiring Azure AD joined or hybrid Azure AD joined devices. If you're using Office 365, you can address it with Office 365 Advanced Threat Protection, and specifically [Anti-phishing policies](/microsoft-365/security/office-365-security/set-up-anti-phishing-policies).
As you evaluate the appropriate NIST AAL for your organization, consider whether your entire organization must meet NIST standards, or whether specific groups of users and resources can be segregated and the NIST AAL configurations applied only to those groups.
active-directory How To Dnsbind https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/verifiable-credentials/how-to-dnsbind.md
Congratulations, you now have bootstrapped the web of trust with your DID!
## Next steps
-If during onboarding you enter the wrong domain information of you decide to change it, you will need to [opt out](how-to-opt-out.md). At this time, we don't support updating your DID document. Opting out and opting back in will create a brand new DID.
+If during onboarding you enter the wrong domain information or if you decide to change it, you will need to [opt out](how-to-opt-out.md). At this time, we don't support updating your DID document. Opting out and opting back in will create a brand new DID.
aks Kubernetes Action https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/kubernetes-action.md
To deploy a container image to AKS, you will need to use the `Azure/k8s-deploy@v
| **imagepullsecrets** | (Optional) Name of a docker-registry secret that has already been set up within the cluster. Each of these secret names is added under imagePullSecrets field for the workloads found in the input manifest files |
| **kubectl-version** | (Optional) Installs a specific version of kubectl binary |
+> [!NOTE]
+> The manifest files should be created manually. Currently there are no tools that generate such files in an automated way. For more information, see [this sample repository with example manifest files](https://github.com/MicrosoftDocs/mslearn-aks-deploy-container-app/tree/master/kubernetes).
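As a hedged sketch of how the inputs in the preceding table fit together, a deploy step might look like the following. It assumes the cluster context was set in an earlier step (for example, with `azure/aks-set-context`); the namespace, manifest paths, image, and secret name are placeholders.

```yaml
# Minimal sketch of a deploy step; names, paths, and the image tag are placeholders.
- uses: Azure/k8s-deploy@v1
  with:
    namespace: my-aks-namespace
    manifests: |
      kubernetes/deployment.yaml
      kubernetes/service.yaml
    images: myregistry.azurecr.io/myapp:${{ github.sha }}
    imagepullsecrets: |
      my-registry-secret
```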
Before you can deploy to AKS, you'll need to set target Kubernetes namespace and create an image pull secret. See [Pull images from an Azure container registry to a Kubernetes cluster](../container-registry/container-registry-auth-kubernetes.md), to learn more about how pulling images works.
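A minimal sketch of those two setup steps, with placeholder names for the namespace, registry, and service principal credentials:

```bash
# Create the target namespace (placeholder name).
kubectl create namespace my-aks-namespace

# Create an image pull secret for an Azure container registry; the credentials
# shown are placeholders for your own service principal.
kubectl create secret docker-registry my-registry-secret \
  --namespace my-aks-namespace \
  --docker-server=myregistry.azurecr.io \
  --docker-username=<service-principal-id> \
  --docker-password=<service-principal-password>
```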
When your Kubernetes cluster, container registry, and repository are no longer n
> [!div class="nextstepaction"]
> [Learn about Azure Kubernetes Service](/azure/architecture/reference-architectures/containers/aks-start-here)
+> [!div class="nextstepaction"]
+> [Learn how to create multiple pipelines on GitHub Actions with AKS](https://docs.microsoft.com/learn/modules/aks-deployment-pipeline-github-actions)
+ ### More Kubernetes GitHub Actions

* [Kubectl tool installer](https://github.com/Azure/setup-kubectl) (`azure/setup-kubectl`): Installs a specific version of kubectl on the runner.
api-management Diagnose Solve Problems https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/api-management/diagnose-solve-problems.md
When you build and manage an API in Azure API Management, you want to be prepared for any issues that may arise.
Although this experience is most helpful when you're having issues with your API within the last 24 hours, all the diagnostic graphs are always available for you to analyze.
+> [!NOTE]
+> Diagnose and solve problems is currently not supported for the Consumption tier.
+ ## Open API Management Diagnostics

To access API Management Diagnostics, navigate to your API Management service instance in the [Azure portal](https://portal.azure.com). In the left navigation, select **Diagnose and solve problems**.
api-management Zone Redundancy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/api-management/zone-redundancy.md
Previously updated : 04/28/2021 Last updated : 05/07/2021
Configuring API Management for zone redundancy is currently supported in the following regions:
* South Central US * Southeast Asia * UK South
+* West Europe
* West US 2 * West US 3
app-service App Service Hybrid Connections https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/app-service-hybrid-connections.md
ms.assetid: 66774bde-13f5-45d0-9a70-4e9536a4f619 Previously updated : 02/05/2020 Last updated : 05/05/2021
Hybrid Connections is both a service in Azure and a feature in Azure App Service
Within App Service, Hybrid Connections can be used to access application resources in any network that can make outbound calls to Azure over port 443. Hybrid Connections provides access from your app to a TCP endpoint and does not enable a new way to access your app. As used in App Service, each Hybrid Connection correlates to a single TCP host and port combination. This enables your apps to access resources on any OS, provided it is a TCP endpoint. The Hybrid Connections feature does not know or care what the application protocol is, or what you are accessing. It simply provides network access.

## How it works ##
-Hybrid Connections requires a relay agent to be deployed where it can reach both the desired endpoint as well as to Azure. The relay agent, Hybrid Connection Manager (HCM), calls out to Azure Relay over port 443. From the web app site, the App Service infrastructure also connects to Azure Relay on your application's behalf. Through the joined connections, your app is able to access the desired endpoint. The connection uses TLS 1.2 for security and shared access signature (SAS) keys for authentication and authorization.
+Hybrid Connections requires a relay agent to be deployed where it can reach both the desired endpoint as well as to Azure. The relay agent, Hybrid Connection Manager (HCM), calls out to Azure Relay over port 443. From the web app site, the App Service infrastructure also connects to Azure Relay on your application's behalf. Through the joined connections, your app is able to access the desired endpoint. The connection uses TLS 1.2 for security and shared access signature (SAS) keys for authentication and authorization.
-![Diagram of Hybrid Connection high-level flow][1]
When your app makes a DNS request that matches a configured Hybrid Connection endpoint, the outbound TCP traffic will be redirected through the Hybrid Connection.
Things you cannot do with Hybrid Connections include:
To create a Hybrid Connection, go to the [Azure portal][portal] and select your app. Select **Networking** > **Configure your Hybrid Connection endpoints**. Here you can see the Hybrid Connections that are configured for your app.
-![Screenshot of Hybrid Connection list][2]
-To add a new Hybrid Connection, select **[+] Add hybrid connection**. You'll see a list of the Hybrid Connections that you already created. To add one or more of them to your app, select the ones you want, and then select **Add selected Hybrid Connection**.
+To add a new Hybrid Connection, select **[+] Add hybrid connection**. You'll see a list of the Hybrid Connections that you already created. To add one or more of them to your app, select the ones you want, and then select **Add selected Hybrid Connection**.
-![Screenshot of Hybrid Connection portal][3]
If you want to create a new Hybrid Connection, select **Create new hybrid connection**. Specify the:
If you want to create a new Hybrid Connection, select **Create new hybrid connec
- Endpoint port. - Service Bus namespace you want to use.
-![Screenshot of Create new hybrid connection dialog box][4]
Every Hybrid Connection is tied to a Service Bus namespace, and each Service Bus namespace is in an Azure region. It's important to try to use a Service Bus namespace in the same region as your app, to avoid network induced latency.
If you want to remove your Hybrid Connection from your app, right-click it and s
When a Hybrid Connection is added to your app, you can see details on it simply by selecting it.
-![Screenshot of Hybrid connections details][5]
### Create a Hybrid Connection in the Azure Relay portal ###
App Service Hybrid Connections are only available in Basic, Standard, Premium, and Isolated pricing SKUs.
|-|-|
| Basic | 5 per plan |
| Standard | 25 per plan |
-| PremiumV2 | 200 per app |
-| Isolated | 200 per app |
+| Premium (v1-v3) | 220 per app |
+| Isolated (v1-v2) | 220 per app |
-The App Service plan UI shows you how many Hybrid Connections are being used and by what apps.
+The App Service plan UI shows you how many Hybrid Connections are being used and by what apps.
-![Screenshot of App Service plan properties][6]
Select the Hybrid Connection to see details. You can see all the information that you saw at the app view. You can also see how many other apps in the same plan are using that Hybrid Connection.
The Hybrid Connections feature requires a relay agent in the network that hosts
This tool runs on Windows Server 2012 and later. The HCM runs as a service and connects outbound to Azure Relay on port 443.
-After installing HCM, you can run HybridConnectionManagerUi.exe to use the UI for the tool. This file is in the Hybrid Connection Manager installation directory. In Windows 10, you can also just search for *Hybrid Connection Manager UI* in your search box.
+After installing HCM, you can run HybridConnectionManagerUi.exe to use the UI for the tool. This file is in the Hybrid Connection Manager installation directory. In Windows 10, you can also just search for *Hybrid Connection Manager UI* in your search box.
-![Screenshot of Hybrid Connection Manager][7]
When you start the HCM UI, the first thing you see is a table that lists all the Hybrid Connections that are configured with this instance of the HCM. If you want to make any changes, first authenticate with Azure. To add one or more Hybrid Connections to your HCM:

1. Start the HCM UI.
-2. Select **Configure another Hybrid Connection**.
-![Screenshot of Configure New Hybrid Connections][8]
+2. Select **Add a new Hybrid Connection**.
1. Sign in with your Azure account to get your Hybrid Connections available with your subscriptions. The HCM does not continue to use your Azure account beyond that.
1. Choose a subscription.
1. Select the Hybrid Connections that you want the HCM to relay.
-![Screenshot of Hybrid Connections][9]
1. Select **Save**.

You can now see the Hybrid Connections you added. You can also select the configured Hybrid Connection to see details.
-![Screenshot of Hybrid Connection Details][10]
To support the Hybrid Connections it is configured with, HCM requires:
Each HCM can support multiple Hybrid Connections. Also, any given Hybrid Connection can be supported by multiple HCMs.
To enable someone outside your subscription to host an HCM instance for a given Hybrid Connection, share the gateway connection string for the Hybrid Connection with them. You can see the gateway connection string in the Hybrid Connection properties in the [Azure portal][portal]. To use that string, select **Enter Manually** in the HCM, and paste in the gateway connection string.
-![Manually add a Hybrid Connection][11]
### Upgrade ###
In App Service, the **tcpping** command-line tool can be invoked from the Advanced Tools (Kudu) console.
If you have a command-line client for your endpoint, you can test connectivity from the app console. For example, you can test access to web server endpoints by using curl.
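For example, from the Kudu console you might run something like the following; the host names and ports are placeholders for your own Hybrid Connection endpoints:

```bash
# Test raw TCP reachability of the configured endpoint (placeholder host:port).
tcpping internal-sql.contoso.local:1433

# Test an HTTP endpoint behind the Hybrid Connection with curl.
curl -v http://internal-web.contoso.local:8080/
```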
-<!--Image references-->
-[1]: ./media/app-service-hybrid-connections/hybridconn-connectiondiagram.png
-[2]: ./media/app-service-hybrid-connections/hybridconn-portal.png
-[3]: ./media/app-service-hybrid-connections/hybridconn-addhc.png
-[4]: ./media/app-service-hybrid-connections/hybridconn-createhc.png
-[5]: ./media/app-service-hybrid-connections/hybridconn-properties.png
-[6]: ./media/app-service-hybrid-connections/hybridconn-aspproperties.png
-[7]: ./media/app-service-hybrid-connections/hybridconn-hcm.png
-[8]: ./media/app-service-hybrid-connections/hybridconn-hcmadd.png
-[9]: ./media/app-service-hybrid-connections/hybridconn-hcmadded.png
-[10]: ./medietails.png
-[11]: ./media/app-service-hybrid-connections/hybridconn-manual.png
-[12]: ./media/app-service-hybrid-connections/hybridconn-bt.png
- <!--Links--> [HCService]: /azure/service-bus-relay/relay-hybrid-connections-protocol/ [portal]: https://portal.azure.com/
app-service Configure Ssl Certificate https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/configure-ssl-certificate.md
To secure a custom domain in a TLS binding, the certificate has additional requi
The free App Service Managed Certificate is a turn-key solution for securing your custom DNS name in App Service. It's a fully functional TLS/SSL certificate that's managed by App Service and renewed automatically. The free certificate comes with the following limitations: -- Does not support wildcard certificates.
+- Does not support wildcard certificates and should not be used as a client certificate.
- Is not exportable. - Is not supported on App Service Environment (ASE). - Is not supported with root domains that are integrated with Traffic Manager.
azure-arc Scenario Migrate To Azure https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/servers/scenario-migrate-to-azure.md
+
+ Title: Migrate Azure Arc enabled server to Azure
+description: Learn how to migrate your Azure Arc enabled servers running on-premises or other cloud environment to Azure.
Last updated : 05/06/2021+++
+# Migrate your on-premises or other cloud Arc enabled server to Azure
+
+This article is intended to help you plan and successfully migrate your on-premises server or virtual machine managed by Azure Arc enabled servers to Azure. By following these steps, you can transition management from Arc enabled servers to Azure, both for the supported VM extensions installed and for the Azure services that authenticate with the Arc server's resource identity.
+
+Before performing these steps, review the Azure Migrate [Prepare on-premises machines for migration to Azure](../../migrate/prepare-for-migration.md) article to understand the requirements and how to prepare for using Azure Migrate.
+
+In this article, you:
+
+* Inventory the supported VM extensions installed on your Arc enabled server.
+* Uninstall all VM extensions from the Arc enabled server.
+* Identify Azure services configured to authenticate with your Arc enabled server's managed identity and prepare to update those services to use the Azure VM identity after migration.
+* Review Azure role-based access control (Azure RBAC) access rights granted to the Arc enabled server resource to preserve who has access to the resource after it's migrated to an Azure VM.
+* Delete the Arc enabled server resource identity from Azure and remove the Arc enabled server agent.
+* Install the Azure guest agent.
+* Migrate the server or VM to Azure.
+
+## Step 1: Inventory and remove VM extensions
+
+To inventory the VM extensions installed on your Arc enabled server, you can list them using the Azure CLI or with Azure PowerShell.
+
+With Azure PowerShell, use the [Get-AzConnectedMachineExtension](/powershell/module/az.connectedmachine/get-azconnectedmachineextension) command with the `-MachineName` and `-ResourceGroupName` parameters.
+
+With the Azure CLI, use the [az connectedmachine extension list](/cli/azure/ext/connectedmachine/connectedmachine/extension#ext_connectedmachine_az_connectedmachine_extension_list) command with the `--machine-name` and `--resource-group` parameters. By default, the output of Azure CLI commands is in JSON (JavaScript Object Notation). To change the default output to a list or table, for example, use [az configure --output](/cli/azure/reference-index). You can also add `--output` to any command for a one time change in output format.
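+A minimal sketch of both commands, using placeholder machine and resource group names:
+
+```azurepowershell
+# List VM extensions on an Arc enabled server (names are placeholders).
+Get-AzConnectedMachineExtension -MachineName "myArcMachine" -ResourceGroupName "myResourceGroup"
+```
+
+```azurecli
+# The same inventory with the Azure CLI, rendered as a table instead of the default JSON.
+az connectedmachine extension list --machine-name "myArcMachine" --resource-group "myResourceGroup" --output table
+```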
+
+After identifying which VM extensions are deployed, you can remove them using the [Azure portal](manage-vm-extensions-portal.md), using the [Azure PowerShell](manage-vm-extensions-powershell.md), or using the [Azure CLI](manage-vm-extensions-cli.md). If the Log Analytics VM extension or Dependency agent VM extension was deployed using Azure Policy and the [VM insights initiative](../../azure-monitor/vm/vminsights-enable-policy.md), it is necessary to [create an exclusion](../../governance/policy/tutorials/create-and-manage.md#remove-a-non-compliant-or-denied-resource-from-the-scope-with-an-exclusion) to prevent re-evaluation and deployment of the extensions on the Arc enabled server before the migration is complete.
+
+## Step 2: Review access rights
+
+List role assignments for the Arc enabled server resource using [Azure PowerShell](../../role-based-access-control/role-assignments-list-powershell.md#list-role-assignments-for-a-resource). With additional PowerShell code, you can export the results to CSV or another format.
+
+If you're using a managed identity for an application or process running on an Arc enabled server, you need to make sure the Azure VM has a managed identity assigned. To view the role assignment for a managed identity, you can use the Azure PowerShell `Get-AzADServicePrincipal` cmdlet. For more information, see [List role assignments for a managed identity](../../role-based-access-control/role-assignments-list-powershell.md#list-role-assignments-for-a-managed-identity).
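+A minimal sketch of both checks, assuming placeholder resource names and that the display name of the system-assigned managed identity matches the machine name:
+
+```azurepowershell
+# List role assignments scoped to the Arc enabled server resource and export them to CSV.
+$machine = Get-AzResource -ResourceGroupName "myResourceGroup" -Name "myArcMachine" -ResourceType "Microsoft.HybridCompute/machines"
+Get-AzRoleAssignment -Scope $machine.ResourceId | Export-Csv -Path .\arc-role-assignments.csv -NoTypeInformation
+
+# List role assignments granted to the machine's system-assigned managed identity.
+$sp = Get-AzADServicePrincipal -DisplayName "myArcMachine"
+Get-AzRoleAssignment -ObjectId $sp.Id
+```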
+
+A system-managed identity is also used when Azure Policy is used to audit settings inside a machine or server. With Arc enabled servers, the Guest Configuration agent is included, and performs validation of audit settings. After you migrate, see [Deploy requirements for Azure virtual machines](../../governance/policy/concepts/guest-configuration.md#deploy-requirements-for-azure-virtual-machines) for information on how to configure your Azure VM manually or with policy with the Guest Configuration extension.
+
+Update role assignment with any resources accessed by the managed identity to allow the new Azure VM identity to authenticate to those services. See the following to learn [how managed identities for Azure resources work for an Azure Virtual Machine (VM)](../../active-directory/managed-identities-azure-resources/how-managed-identities-work-vm.md).
+
+## Step 3: Disconnect from Azure Arc and uninstall agent
+
+Delete the resource ID of the Arc enabled server in Azure using one of the following methods:
+
+ * Running the `azcmagent disconnect` command on the machine or server.
+
+ * From the selected registered Arc enabled server in the Azure portal by selecting **Delete** from the top bar.
+
+ * Using the [Azure CLI](../../azure-resource-manager/management/delete-resource-group.md?tabs=azure-cli#delete-resource) or [Azure PowerShell](../../azure-resource-manager/management/delete-resource-group.md?tabs=azure-powershell#delete-resource). For the `ResourceType` parameter, use `Microsoft.HybridCompute/machines`. A PowerShell sketch follows this list.
+
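+A minimal PowerShell sketch of the third option, with placeholder names:
+
+```azurepowershell
+# Delete the Arc enabled server resource from Azure (names are placeholders).
+Remove-AzResource -ResourceGroupName "myResourceGroup" `
+    -ResourceName "myArcMachine" `
+    -ResourceType "Microsoft.HybridCompute/machines"
+```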
+Then, remove the Azure Arc enabled servers Windows or Linux agent following the [Remove agent](manage-agent.md#remove-the-agent) steps.
+
+## Step 4: Install the Azure Guest Agent
+
+The VM that is migrated to Azure from on-premises doesn't have the Linux or Windows Azure Guest Agent installed. In these scenarios, you have to manually install the VM agent. For more information about how to install the VM Agent, see [Azure Virtual Machine Windows Agent Overview](../../virtual-machines/extensions/agent-windows.md) or [Azure Virtual Machine Linux Agent Overview](../../virtual-machines/extensions/agent-linux.md).
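+For example, on an Ubuntu VM the Linux agent is typically available from the distribution's package repository; the package name below follows the linked Linux agent article, but verify it for your distribution:
+
+```bash
+# Minimal sketch: install the Azure Linux Guest Agent on Ubuntu.
+sudo apt-get update
+sudo apt-get install -y walinuxagent
+```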
+
+## Step 5: Migrate server or machine to Azure
+
+Before proceeding with the migration using Azure Migrate, review the [Prepare on-premises machines for migration to Azure](../../migrate/prepare-for-migration.md) article to learn about requirements necessary to use Azure Migrate. To complete the migration to Azure, review the Azure Migrate [migration options](../../migrate/prepare-for-migration.md#next-steps) based on your environment.
+
+## Step 6: Deploy Azure VM extensions
+
+After migration and completion of all post-migration configuration steps, you can now deploy the Azure VM extensions based on the VM extensions originally installed on your Arc enabled server. Review [Azure virtual machine extensions and features](../../virtual-machines/extensions/overview.md) to help plan your extension deployment.
+
+To resume using audit settings inside a machine with Azure Policy Guest Configuration policy definitions, see [Enable Guest Configuration](../../governance/policy/concepts/guest-configuration.md#enable-guest-configuration).
+
+If the Log Analytics VM extension or Dependency agent VM extension was deployed using Azure Policy and the [VM insights initiative](../../azure-monitor/vm/vminsights-enable-policy.md), remove the [exclusion](../../governance/policy/tutorials/create-and-manage.md#remove-a-non-compliant-or-denied-resource-from-the-scope-with-an-exclusion) you created earlier. To use Azure Policy to enable Azure virtual machines, see [Deploy Azure Monitor at scale using Azure Policy](../../azure-monitor/deploy-scale.md#vm-insights).
+
+## Next steps
+
+Troubleshooting information can be found in the [Troubleshoot Connected Machine agent](troubleshoot-agent-onboard.md) guide.
azure-functions Functions Add Output Binding Storage Queue Vs Code https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/functions-add-output-binding-storage-queue-vs-code.md
Because you are using the storage connection string, your function connects to t
Skip this section if you have already installed Azure Storage Explorer and connected it to your Azure account.
-1. Run the [Azure Storage Explorer] tool, select the connect icon on the left, and select **Add an account**.
+1. Run the [Azure Storage Explorer](https://storageexplorer.com/) tool, select the connect icon on the left, and select **Add an account**.
![Add an Azure account to Microsoft Azure Storage Explorer](./media/functions-add-output-binding-storage-queue-vs-code/storage-explorer-add-account.png)
You've updated your HTTP triggered function to write data to a Storage queue. No
+ [Examples of complete Function projects in PowerShell](/samples/browse/?products=azure-functions&languages=azurepowershell). + [Azure Functions PowerShell developer guide](functions-reference-powershell.md)
azure-functions Storage Considerations https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/storage-considerations.md
Azure Functions requires an Azure Storage account when you create a function app
|Storage service | Functions usage |
|||
-| [Azure Blob storage](../storage/blobs/storage-blobs-introduction.md) | Maintain bindings state and function keys. <br/>Also used by [task hubs in Durable Functions](durable/durable-functions-task-hubs.md). |
+| [Azure Blob Storage](../storage/blobs/storage-blobs-introduction.md) | Maintain bindings state and function keys. <br/>Also used by [task hubs in Durable Functions](durable/durable-functions-task-hubs.md). |
| [Azure Files](../storage/files/storage-files-introduction.md) | File share used to store and run your function app code in a [Consumption Plan](consumption-plan.md) and [Premium Plan](functions-premium-plan.md). |
-| [Azure Queue storage](../storage/queues/storage-queues-introduction.md) | Used by [task hubs in Durable Functions](durable/durable-functions-task-hubs.md). |
-| [Azure Table storage](../storage/tables/table-storage-overview.md) | Used by [task hubs in Durable Functions](durable/durable-functions-task-hubs.md). |
+| [Azure Queue Storage](../storage/queues/storage-queues-introduction.md) | Used by [task hubs in Durable Functions](durable/durable-functions-task-hubs.md). |
+| [Azure Table Storage](../storage/tables/table-storage-overview.md) | Used by [task hubs in Durable Functions](durable/durable-functions-task-hubs.md). |
> [!IMPORTANT]
> When using the Consumption/Premium hosting plan, your function code and binding configuration files are stored in Azure File storage in the main storage account. When you delete the main storage account, this content is deleted and cannot be recovered.
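As a hedged sketch of how the storage account and function app are tied together at creation time (all names are placeholders):

```azurepowershell
# Create a storage account, then a Consumption-plan function app that uses it.
New-AzStorageAccount -ResourceGroupName "myResourceGroup" -Name "myfuncstorage123" `
    -Location "westus2" -SkuName "Standard_LRS"
New-AzFunctionApp -ResourceGroupName "myResourceGroup" -Name "myfunctionapp123" `
    -StorageAccountName "myfuncstorage123" -Location "westus2" `
    -Runtime "PowerShell" -FunctionsVersion "3"
```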
azure-monitor Availability Azure Functions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/app/availability-azure-functions.md
Title: Create and run custom availability tests using Azure Functions description: This doc will cover how to create an Azure Function with TrackAvailability() that will run periodically according to the configuration given in TimerTrigger function. The results of this test will be sent to your Application Insights resource, where you will be able to query for and alert on the availability results data. Customized tests will allow you to write more complex availability tests than is possible using the portal UI, monitor an app inside of your Azure VNET, change the endpoint address, or create an availability test if it's not available in your region. Previously updated : 05/04/2020 Last updated : 05/06/2021
Last updated 05/04/2020
This article will cover how to create an Azure Function with TrackAvailability() that will run periodically according to the configuration given in TimerTrigger function with your own business logic. The results of this test will be sent to your Application Insights resource, where you will be able to query for and alert on the availability results data. This allows you to create customized tests similar to what you can do via [Availability Monitoring](./monitor-web-app-availability.md) in the portal. Customized tests will allow you to write more complex availability tests than is possible using the portal UI, monitor an app inside of your Azure VNET, change the endpoint address, or create an availability test even if this feature is not available in your region. > [!NOTE]
-> This example is designed solely to show you the mechanics of how the TrackAvailability() API call works within an Azure Function. Not how to write the underlying HTTP Test code/business logic that would be required to turn this into a fully functional availability test. By default if you walk through this example you will be creating an availability test that will always generate a failure.
-
-## Create timer triggered function
--- If you have an Application Insights Resource:
- - By default Azure Functions creates an Application Insights resource but if you would like to use one of your already created resources you will need to specify that during creation.
- - Follow the instructions on how to [create an Azure Functions resource and Timer triggered function](../../azure-functions/functions-create-scheduled-function.md) (stop before clean up) with the following choices.
- - Select the **Monitoring** tab near the top.
-
- ![ Create an Azure Functions app with your own App Insights resource](media/availability-azure-functions/create-function-app.png)
-
- - Select the Application Insights dropdown box and type or select the name of your resource.
-
- ![Selecting existing Application Insights resource](media/availability-azure-functions/app-insights-resource.png)
-
- - Select **Review + create**
-- If you do not have an Application Insights Resource created yet for your timer triggered function:
- - By default when you are creating your Azure Functions application it will create an Application Insights resource for you.
- - Follow the instructions on how to [create an Azure Functions resource and Timer triggered function](../../azure-functions/functions-create-scheduled-function.md) (stop before clean-up).
-
-## Sample code
-
-Copy the code below into the run.csx file (this will replace the pre-existing code). To do this, go into your Azure Functions application and select your timer trigger function on the left.
-
->[!div class="mx-imgBorder"]
->![Azure function's run.csx in Azure portal](media/availability-azure-functions/runcsx.png)
-
-> [!NOTE]
-> For the Endpoint Address you would use: `EndpointAddress= https://dc.services.visualstudio.com/v2/track`. Unless your resource is located in a region like Azure Government or Azure China in which case consult this article on [overriding the default endpoints](./custom-endpoints.md#regions-that-require-endpoint-modification) and select the appropriate Telemetry Channel endpoint for your region.
-
-```C#
-#load "runAvailabilityTest.csx"
-
-using System;
-using System.Diagnostics;
-using Microsoft.ApplicationInsights;
-using Microsoft.ApplicationInsights.Channel;
-using Microsoft.ApplicationInsights.DataContracts;
-using Microsoft.ApplicationInsights.Extensibility;
-
-// The Application Insights Instrumentation Key can be changed by going to the overview page of your Function App, selecting configuration, and changing the value of the APPINSIGHTS_INSTRUMENTATIONKEY Application setting.
-// DO NOT replace the code below with your instrumentation key, the key's value is pulled from the environment variable/application setting key/value pair.
-private static readonly string instrumentationKey = Environment.GetEnvironmentVariable("APPINSIGHTS_INSTRUMENTATIONKEY");
-
-//[CONFIGURATION_REQUIRED]
-// If your resource is in a region like Azure Government or Azure China, change the endpoint address accordingly.
-// Visit https://docs.microsoft.com/azure/azure-monitor/app/custom-endpoints#regions-that-require-endpoint-modification for more details.
-private const string EndpointAddress = "https://dc.services.visualstudio.com/v2/track";
-
-private static readonly TelemetryConfiguration telemetryConfiguration = new TelemetryConfiguration(instrumentationKey, new InMemoryChannel { EndpointAddress = EndpointAddress });
-private static readonly TelemetryClient telemetryClient = new TelemetryClient(telemetryConfiguration);
-
-public async static Task Run(TimerInfo myTimer, ILogger log)
-{
- log.LogInformation($"Entering Run at: {DateTime.Now}");
-
- if (myTimer.IsPastDue)
- {
- log.LogWarning($"[Warning]: Timer is running late! Last ran at: {myTimer.ScheduleStatus.Last}");
- }
-
- // [CONFIGURATION_REQUIRED] provide {testName} accordingly for your test function
- string testName = "AvailabilityTestFunction";
-
- // REGION_NAME is a default environment variable that comes with App Service
- string location = Environment.GetEnvironmentVariable("REGION_NAME");
-
- log.LogInformation($"Executing availability test run for {testName} at: {DateTime.Now}");
- string operationId = Guid.NewGuid().ToString("N");
-
- var availability = new AvailabilityTelemetry
- {
- Id = operationId,
- Name = testName,
- RunLocation = location,
- Success = false
- };
-
- var stopwatch = new Stopwatch();
- stopwatch.Start();
-
- try
- {
- await RunAvailbiltyTestAsync(log);
- availability.Success = true;
- }
- catch (Exception ex)
- {
- availability.Message = ex.Message;
-
- var exceptionTelemetry = new ExceptionTelemetry(ex);
- exceptionTelemetry.Context.Operation.Id = operationId;
- exceptionTelemetry.Properties.Add("TestName", testName);
- exceptionTelemetry.Properties.Add("TestLocation", location);
- telemetryClient.TrackException(exceptionTelemetry);
- }
- finally
- {
- stopwatch.Stop();
- availability.Duration = stopwatch.Elapsed;
- availability.Timestamp = DateTimeOffset.UtcNow;
-
- telemetryClient.TrackAvailability(availability);
- // call flush to ensure telemetry is sent
- telemetryClient.Flush();
- }
-}
-
-```
-
-On the right under view files, select **Add**. Call the new file **function.proj** with the following configuration.
-
-```C#
-<Project Sdk="Microsoft.NET.Sdk">
- <PropertyGroup>
- <TargetFramework>netstandard2.0</TargetFramework>
- </PropertyGroup>
- <ItemGroup>
- <PackageReference Include="Microsoft.ApplicationInsights" Version="2.15.0" /> <!-- Ensure youΓÇÖre using the latest version -->
- </ItemGroup>
-</Project>
-
-```
-
->[!div class="mx-imgBorder"]
->![On the right select, add. Name the file function.proj](media/availability-azure-functions/addfile.png)
-
-On the right under view files, select **Add**. Call the new file **runAvailabilityTest.csx** with the following configuration.
-
-```C#
-public async static Task RunAvailbiltyTestAsync(ILogger log)
-{
- // Add your business logic here.
- throw new NotImplementedException();
-}
-
-```
+> This example is designed solely to show you the mechanics of how the TrackAvailability() API call works within an Azure Function, not how to write the underlying HTTP test code or business logic required to turn this into a fully functional availability test. By default, if you walk through this example, you will create a basic availability HTTP GET test.
+
+## Create a timer trigger function
+
+1. Create an Azure Functions resource.
+ - If you already have an Application Insights resource:
+ - By default, Azure Functions creates an Application Insights resource, but if you would like to use one of your already created resources, you will need to specify that during creation.
+ - Follow the instructions on how to [create an Azure Functions resource](../../azure-functions/functions-create-scheduled-function.md#create-a-function-app) with the following modification:
+ - On the **Monitoring** tab, select the Application Insights dropdown box then type or select the name of your resource.
+ :::image type="content" source="media/availability-azure-functions/app-insights-resource.png" alt-text="On the monitoring tab select your existing Application Insights resource.":::
+ - If you do not have an Application Insights Resource created yet for your timer triggered function:
+ - By default, when you are creating your Azure Functions application, it will create an Application Insights resource for you. Follow the instructions on how to [create an Azure Functions resource](../../azure-functions/functions-create-scheduled-function.md#create-a-function-app).
+ > [!NOTE]
+ > You can host your functions on a Consumption, Premium, or App Service plan. If you are testing behind a VNet or testing non-public endpoints, you will need to use the Premium plan in place of the Consumption plan. Select your plan on the **Hosting** tab.
+2. Create a timer trigger function.
+ 1. In your function app, select the **Functions** tab.
+ 1. Select **Add**, and in the Add function tab, select the following configurations:
+ 1. Development environment: *Develop in portal*
+ 1. Select a template: *Timer trigger*
+ 1. Select **Add** to create the Timer trigger function.
+
+ :::image type="content" source="media/availability-azure-functions/add-function.png" alt-text="Screenshot of how to add a timer trigger function to your function app." lightbox="media/availability-azure-functions/add-function.png":::
+
+## Add and edit code in the App Service Editor
+
+Navigate to your deployed function app and under *Development Tools* select the **App Service Editor** tab.
+
+To create a new file, right-click under your timer trigger function (for example, "TimerTrigger1") and select **New File**. Then type the name of the file and press Enter.
+
+1. Create a new file called "function.proj" and paste the following code:
+
+ ```xml
+ <Project Sdk="Microsoft.NET.Sdk">
+     <PropertyGroup>
+         <TargetFramework>netstandard2.0</TargetFramework>
+     </PropertyGroup>
+     <ItemGroup>
+         <PackageReference Include="Microsoft.ApplicationInsights" Version="2.15.0" /> <!-- Ensure you’re using the latest version -->
+     </ItemGroup>
+ </Project>
+
+ ```
+
+ :::image type="content" source="media/availability-azure-functions/function-proj.png" alt-text=" Screenshot of function.proj in App Service Editor." lightbox="media/availability-azure-functions/function-proj.png":::
+
+2. Create a new file called "runAvailabilityTest.csx" and paste the following code:
+
+ ```csharp
+ using System.Net.Http;
+
+ public async static Task RunAvailabilityTestAsync(ILogger log)
+ {
+     using (var httpClient = new HttpClient())
+     {
+         // TODO: Replace with your business logic
+         await httpClient.GetStringAsync("https://www.bing.com/");
+     }
+ }
+ ```
+
+3. Copy the code below into the run.csx file (this will replace the pre-existing code):
+
+ ```csharp
+ #load "runAvailabilityTest.csx"
+
+ using System;
+ using System.Diagnostics;
+ using Microsoft.ApplicationInsights;
+ using Microsoft.ApplicationInsights.Channel;
+ using Microsoft.ApplicationInsights.DataContracts;
+ using Microsoft.ApplicationInsights.Extensibility;
+
+ private static TelemetryClient telemetryClient;
+
+ // =============================================================
+ // ****************** DO NOT MODIFY THIS FILE ******************
+ // Business logic must be implemented in RunAvailabilityTestAsync function in runAvailabilityTest.csx
+ // If this file does not exist, please add it first
+ // =============================================================
+
+ public async static Task Run(TimerInfo myTimer, ILogger log, ExecutionContext executionContext)
+ {
+     if (telemetryClient == null)
+     {
+         // Initializing a telemetry configuration for Application Insights based on connection string
+         var telemetryConfiguration = new TelemetryConfiguration();
+         telemetryConfiguration.ConnectionString = Environment.GetEnvironmentVariable("APPLICATIONINSIGHTS_CONNECTION_STRING");
+         telemetryConfiguration.TelemetryChannel = new InMemoryChannel();
+         telemetryClient = new TelemetryClient(telemetryConfiguration);
+     }
+
+     string testName = executionContext.FunctionName;
+     string location = Environment.GetEnvironmentVariable("REGION_NAME");
+     var availability = new AvailabilityTelemetry
+     {
+         Name = testName,
+         RunLocation = location,
+         Success = false,
+     };
+
+     availability.Context.Operation.ParentId = Activity.Current.SpanId.ToString();
+     availability.Context.Operation.Id = Activity.Current.RootId;
+     var stopwatch = new Stopwatch();
+     stopwatch.Start();
+
+     try
+     {
+         using (var activity = new Activity("AvailabilityContext"))
+         {
+             activity.Start();
+             availability.Id = Activity.Current.SpanId.ToString();
+             // Run business logic
+             await RunAvailabilityTestAsync(log);
+         }
+         availability.Success = true;
+     }
+     catch (Exception ex)
+     {
+         availability.Message = ex.Message;
+         throw;
+     }
+     finally
+     {
+         stopwatch.Stop();
+         availability.Duration = stopwatch.Elapsed;
+         availability.Timestamp = DateTimeOffset.UtcNow;
+         telemetryClient.TrackAvailability(availability);
+         telemetryClient.Flush();
+     }
+ }
+ ```
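+
+ The `// TODO` in runAvailabilityTest.csx is where your own checks belong. As a minimal, hypothetical sketch (the `https://contoso.example/health` endpoint and its `OK` payload are illustrative, not part of the original sample), the business logic can assert on the response body rather than only on a successful request:
+
+ ```csharp
+ using System;
+ using System.Net.Http;
+
+ public async static Task RunAvailabilityTestAsync(ILogger log)
+ {
+     using (var httpClient = new HttpClient())
+     {
+         // Hypothetical endpoint; replace with the URL your test should probe.
+         string content = await httpClient.GetStringAsync("https://contoso.example/health");
+
+         // Throwing here makes run.csx record the availability result as failed.
+         if (!content.Contains("OK"))
+         {
+             throw new Exception("Health endpoint did not return the expected payload.");
+         }
+
+         log.LogInformation("Availability check passed.");
+     }
+ }
+ ```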
## Check availability

To make sure everything is working, you can look at the graph in the Availability tab of your Application Insights resource.

> [!NOTE]
-> If you implemented your own business logic in runAvailabilityTest.csx then you will see successful results like in the screenshots below, if you did not then you will see failed results. Tests created with `TrackAvailability()` will appear with **CUSTOM** next to the test name.
+> Tests created with `TrackAvailability()` will appear with **CUSTOM** next to the test name.
->[!div class="mx-imgBorder"]
->![Availability tab with successful results](media/availability-azure-functions/availability-custom.png)
+ :::image type="content" source="media/availability-azure-functions/availability-custom.png" alt-text="Availability tab with successful results." lightbox="media/availability-azure-functions/availability-custom.png":::
To see the end-to-end transaction details, select **Successful** or **Failed** under **Drill into**, then select a sample. You can also get to the end-to-end transaction details by selecting a data point on the graph.
->[!div class="mx-imgBorder"]
->![Select a sample availability test](media/availability-azure-functions/sample.png)
-
->[!div class="mx-imgBorder"]
->![End-to-end transaction details](media/availability-azure-functions/end-to-end.png)
-If you ran everything as is (without adding business logic), then you will see that the test failed.
## Query in Logs (Analytics)

You can use Logs (Analytics) to view your availability results, dependencies, and more. To learn more about Logs, visit [Log query overview](../logs/log-query-overview.md).
->[!div class="mx-imgBorder"]
->![Availability results](media/availability-azure-functions/availabilityresults.png)
->[!div class="mx-imgBorder"]
->![Screenshot shows New Query tab with dependencies limited to 50.](media/availability-azure-functions/dependencies.png)
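
If you prefer to query these results programmatically instead of in the portal, the `Azure.Monitor.Query` client library can run the same KQL. The following is a minimal sketch, not part of the original article, assuming a workspace-based Application Insights resource (availability data lands in the `AppAvailabilityResults` table), the `Azure.Monitor.Query` and `Azure.Identity` packages, and `<WORKSPACE_ID>` as a placeholder:

```csharp
using System;
using Azure;
using Azure.Identity;
using Azure.Monitor.Query;
using Azure.Monitor.Query.Models;

// Query the last day of availability results from the Log Analytics workspace.
var client = new LogsQueryClient(new DefaultAzureCredential());

Response<LogsQueryResult> response = await client.QueryWorkspaceAsync(
    "<WORKSPACE_ID>",
    "AppAvailabilityResults | project TimeGenerated, Name, Success, DurationMs, Location",
    new QueryTimeRange(TimeSpan.FromDays(1)));

foreach (LogsTableRow row in response.Value.Table.Rows)
{
    Console.WriteLine(string.Join(" | ", row));
}
```

For a classic Application Insights resource, the equivalent table in the Logs experience is `availabilityResults`.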
## Next steps

- [Application Map](./app-map.md)
- [Transaction diagnostics](./transaction-diagnostics.md)
azure-monitor Azure Web Apps https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/app/azure-web-apps.md
In order to enable telemetry collection with Application Insights, only the Appl
|App setting name | Definition | Value |
|--|--|--|
|ApplicationInsightsAgent_EXTENSION_VERSION | Main extension, which controls runtime monitoring. | `~2` |
-|XDT_MicrosoftApplicationInsights_Mode | In default mode only, essential features are enabled in order to insure optimal performance. | `default` or `recommended`. |
+|XDT_MicrosoftApplicationInsights_Mode | In default mode, only essential features are enabled in order to ensure optimal performance. | `default` or `recommended`. |
|InstrumentationEngine_EXTENSION_VERSION | Controls if the binary-rewrite engine `InstrumentationEngine` will be turned on. This setting has performance implications and impacts cold start/startup time. | `~1` |
|XDT_MicrosoftApplicationInsights_BaseExtensions | Controls if SQL & Azure table text will be captured along with the dependency calls. Performance warning: application cold start up time will be affected. This setting requires the `InstrumentationEngine`. | `~1` |
|XDT_MicrosoftApplicationInsights_PreemptSdk | For ASP.NET Core apps only. Enables Interop (interoperation) with Application Insights SDK. Loads the extension side-by-side with the SDK and uses it to send telemetry (disables the Application Insights SDK). |`1`|
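
Because App Service surfaces app settings to the application as environment variables, you can confirm which of these toggles are active from inside the app. A minimal sketch (the setting names come from the table above; the snippet itself is illustrative, not from the original article):

```csharp
using System;

// App settings appear as environment variables inside the App Service process.
string[] settings =
{
    "ApplicationInsightsAgent_EXTENSION_VERSION",
    "XDT_MicrosoftApplicationInsights_Mode",
    "InstrumentationEngine_EXTENSION_VERSION",
};

foreach (string name in settings)
{
    Console.WriteLine($"{name} = {Environment.GetEnvironmentVariable(name) ?? "(not set)"}");
}
```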
azure-monitor Logs Data Export https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/logs/logs-data-export.md
Previously updated : 02/07/2021 Last updated : 05/07/2021
Log Analytics workspace data export continuously exports data from a Log Analyti
- Destination must be unique across all export rules in your workspace.
- The destination storage account or event hub must be in the same region as the Log Analytics workspace.
- Names of tables to be exported can be no longer than 60 characters for a storage account and no more than 47 characters to an event hub. Tables with longer names will not be exported.
-- Append blob support for Azure Data Lake Storage is now in [limited public preview](https://azure.microsoft.com/updates/append-blob-support-for-azure-data-lake-storage-preview/)

## Data completeness

Data export will continue to retry sending data for up to 30 minutes in the event that the destination is unavailable. If it's still unavailable after 30 minutes, data will be discarded until the destination becomes available.
The storage account data format is [JSON lines](../essentials/resource-logs-blob
Log Analytics data export can write append blobs to immutable storage accounts when time-based retention policies have the *allowProtectedAppendWrites* setting enabled. This allows writing new blocks to an append blob, while maintaining immutability protection and compliance. See [Allow protected append blobs writes](../../storage/blobs/storage-blob-immutable-storage.md#allow-protected-append-blobs-writes).
-> [!NOTE]
-> Append blob support for Azure Data Lake storage is now available in preview in all Azure regions. [Enroll to the limited public preview](https://forms.office.com/Pages/ResponsePage.aspx?id=v4j5cvGGr0GRqy180BHbR4mEEwKhLjlBjU3ziDwLH-pURDk2NjMzUTVEVzU5UU1XUlRXSTlHSlkxQS4u) before you create an export rule to Azure Data Lake storage. Export will not operate without this enrollment.
### Event hub

Data is sent to your event hub in near-real-time as it reaches Azure Monitor. An event hub is created for each data type that you export with the name *am-* followed by the name of the table. For example, the table *SecurityEvent* would be sent to an event hub named *am-SecurityEvent*. If you want the exported data to reach a specific event hub, or if you have a table with a name that exceeds the 47 character limit, you can provide your own event hub name and export all data for defined tables to it.
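
To consume what an export rule writes, read from the auto-created hub like any other event hub. A minimal sketch, assuming the `Azure.Messaging.EventHubs` package, an export rule that includes the *SecurityEvent* table, and a placeholder connection string:

```csharp
using System;
using Azure.Messaging.EventHubs.Consumer;

// Read exported Log Analytics records from the auto-created "am-SecurityEvent" hub.
await using var consumer = new EventHubConsumerClient(
    EventHubConsumerClient.DefaultConsumerGroupName,
    "<EVENT_HUBS_NAMESPACE_CONNECTION_STRING>",
    "am-SecurityEvent");

await foreach (PartitionEvent partitionEvent in consumer.ReadEventsAsync())
{
    // Each event body contains the exported records as JSON.
    Console.WriteLine(partitionEvent.Data.EventBody.ToString());
}
```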
azure-monitor Manage Cost Storage https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/logs/manage-cost-storage.md
na Previously updated : 03/28/2021 Last updated : 04/30/2021
The default pricing for Log Analytics is a **Pay-As-You-Go** model based on data
- Number of VMs monitored
- Type of data collected from each monitored VM
-In addition to the Pay-As-You-Go model, Log Analytics has **Capacity Reservation** tiers which enable you to save as much as 25% compared to the Pay-As-You-Go price. The capacity reservation pricing enables you to buy a reservation starting at 100 GB/day. Any usage above the reservation level will be billed at the Pay-As-You-Go rate. The Capacity Reservation tiers have a 31-day commitment period. During the commitment period, you can change to a higher level Capacity Reservation tier (which will restart the 31-day commitment period), but you cannot move back to Pay-As-You-Go or to a lower Capacity Reservation tier until after the commitment period is finished. Billing for the Capacity Reservation tiers is done on a daily basis. [Learn more](https://azure.microsoft.com/pricing/details/monitor/) about Log Analytics Pay-As-You-Go and Capacity Reservation pricing.
+In addition to the Pay-As-You-Go model, Log Analytics has **Capacity Reservation** tiers which enable you to save as much as 25% compared to the Pay-As-You-Go price. The capacity reservation pricing enables you to buy a reservation starting at 100 GB/day. Any usage above the reservation level (overage) will be billed at that same price per GB as provided by the current capacity reservation level. The Capacity Reservation tiers have a 31-day commitment period. During the commitment period, you can change to a higher level Capacity Reservation tier (which will restart the 31-day commitment period), but you cannot move back to Pay-As-You-Go or to a lower Capacity Reservation tier until after the commitment period is finished. Billing for the Capacity Reservation tiers is done on a daily basis. [Learn more](https://azure.microsoft.com/pricing/details/monitor/) about Log Analytics Pay-As-You-Go and Capacity Reservation pricing.
+
+> [!NOTE]
+> Until early May 2021, capacity reservation overage was billed at the Pay-As-You-Go price. The change to bill overage at the same price-per-GB as the current capacity reservation level reduces the need for users with large reservation levels to fine-tune their reservation level.
In all pricing tiers, an event's data size is calculated from a string representation of the properties which are stored in Log Analytics for this event, whether the data is sent from an agent or added during the ingestion process. This includes any [custom fields](custom-fields.md) that are added as data is collected and then stored in Log Analytics. Several properties common to all data types, including some [Log Analytics Standard Properties](./log-standard-columns.md), are excluded in the calculation of the event size. This includes `_ResourceId`, `_SubscriptionId`, `_ItemId`, `_IsBillable`, `_BilledSize` and `Type`. All other properties stored in Log Analytics are included in the calculation of the event size. Some data types are free from data ingestion charges altogether, for example the AzureActivity, Heartbeat and Usage types. To determine whether an event was excluded from billing for data ingestion, you can use the `_IsBillable` property as shown [below](#data-volume-for-specific-events). Usage is reported in GB (1.0E9 bytes).
There are some additional Log Analytics limits, some of which depend on the Log
- Change [performance counter configuration](../agents/data-sources-performance-counters.md). - To modify your event collection settings, review [event log configuration](../agents/data-sources-windows-events.md). - To modify your syslog collection settings, review [syslog configuration](../agents/data-sources-syslog.md).-- To modify your syslog collection settings, review [syslog configuration](../agents/data-sources-syslog.md).
+- To modify your syslog collection settings, review [syslog configuration](../agents/data-sources-syslog.md).
azure-netapp-files Azure Netapp Files Configure Export Policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-netapp-files/azure-netapp-files-configure-export-policy.md
Previously updated : 07/27/2020 Last updated : 05/07/2021 # Configure export policy for NFS or dual-protocol volumes
You can create up to five export policy rules.
* **Allowed Clients**: Specify the value in one of the following formats:
    * IPv4 address. Example: `10.1.12.24`
    * IPv4 address with a subnet mask expressed as a number of bits. Example: `10.1.12.10/4`
- * Comma-separated IP addresses. You can enter multiple host IPs in a single rule by separating them with commas. Example: `10.1.12.25,10.1.12.28,10.1.12.29`
+    * Comma-separated IP addresses. You can enter multiple host IPs in a single rule by separating them with commas. The length limit is 4096 characters. Example: `10.1.12.25,10.1.12.28,10.1.12.29` (see the validation sketch after this list)
* **Access**: Select one of the following access types:
    * No Access
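
As noted in the **Allowed Clients** description above, the value is a comma-separated list capped at 4096 characters. A minimal, illustrative sketch of checking such a value client-side before submitting it (this helper is hypothetical and not part of any Azure SDK):

```csharp
using System;
using System.Linq;
using System.Net;
using System.Net.Sockets;

class AllowedClientsCheck
{
    // Checks a comma-separated "Allowed Clients" value against the documented
    // 4096-character limit and IPv4 format (with an optional /bits suffix).
    static bool IsValid(string value)
    {
        if (string.IsNullOrEmpty(value) || value.Length > 4096)
            return false;

        return value.Split(',').All(entry =>
        {
            string host = entry.Split('/')[0]; // strip an optional subnet-mask suffix
            return IPAddress.TryParse(host, out IPAddress ip)
                && ip.AddressFamily == AddressFamily.InterNetwork;
        });
    }

    static void Main()
    {
        Console.WriteLine(IsValid("10.1.12.25,10.1.12.28,10.1.12.29")); // True
        Console.WriteLine(IsValid("10.1.12.10/4"));                     // True
    }
}
```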
azure-portal Quick Create Template https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-portal/quick-create-template.md
A dashboard in the Azure portal is a focused and organized view of your cloud re
If your environment meets the prerequisites and you're familiar with using ARM templates, select the **Deploy to Azure** button. The template will open in the Azure portal.
-[![Deploy to Azure](../media/template-deployments/deploy-to-azure.svg)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2F101-azure-portal-dashboard%2Fazuredeploy.json)
+[![Deploy to Azure](../media/template-deployments/deploy-to-azure.svg)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2Fquickstarts%2Fmicrosoft.portal%2Fazure-portal-dashboard%2Fazuredeploy.json)
## Prerequisites
The dashboard you create in the next part of this quickstart requires an existin
## Review the template
-The template used in this quickstart is from [Azure Quickstart Templates](https://azure.microsoft.com/resources/templates/101-azure-portal-dashboard/). The template for this article is too long to show here. To view the template, see [azuredeploy.json](https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/101-azure-portal-dashboard/azuredeploy.json). One Azure resource is defined in the template, [Microsoft.Portal/dashboards](/azure/templates/microsoft.portal/dashboards) - Create a dashboard in the Azure portal.
+The template used in this quickstart is from [Azure Quickstart Templates](https://azure.microsoft.com/resources/templates/101-azure-portal-dashboard/). The template for this article is too long to show here. To view the template, see [azuredeploy.json](https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/quickstarts/microsoft.portal/azure-portal-dashboard/azuredeploy.json). One Azure resource is defined in the template, [Microsoft.Portal/dashboards](/azure/templates/microsoft.portal/dashboards) - Create a dashboard in the Azure portal.
## Deploy the template 1. Select the following image to sign in to Azure and open a template.
- [![Deploy to Azure](../media/template-deployments/deploy-to-azure.svg)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2F101-azure-portal-dashboard%2Fazuredeploy.json)
+ [![Deploy to Azure](../media/template-deployments/deploy-to-azure.svg)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2Fquickstarts%2Fmicrosoft.portal%2Fazure-portal-dashboard%2Fazuredeploy.json)
1. Select or enter the following values, then select **Review + create**.
If you want to remove the VM and associated dashboard, delete the resource group
For more information about dashboards in the Azure portal, see: > [!div class="nextstepaction"]
-> [Create and share dashboards in the Azure portal](azure-portal-dashboards.md)
+> [Create and share dashboards in the Azure portal](azure-portal-dashboards.md)
azure-resource-manager Lock Resources https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/management/lock-resources.md
Title: Lock resources to prevent changes description: Prevent users from updating or deleting Azure resources by applying a lock for all users and roles. Previously updated : 05/03/2021 Last updated : 05/07/2021
You can set the lock level to **CanNotDelete** or **ReadOnly**. In the portal, t
- **CanNotDelete** means authorized users can still read and modify a resource, but they can't delete the resource. - **ReadOnly** means authorized users can read a resource, but they can't delete or update the resource. Applying this lock is similar to restricting all authorized users to the permissions granted by the **Reader** role.
-## How locks are applied
+Unlike role-based access control, you use management locks to apply a restriction across all users and roles. To learn about setting permissions for users and roles, see [Azure role-based access control (Azure RBAC)](../../role-based-access-control/role-assignments-portal.md).
+
+## Lock inheritance
When you apply a lock at a parent scope, all resources within that scope inherit the same lock. Even resources you add later inherit the lock from the parent. The most restrictive lock in the inheritance takes precedence.
-Unlike role-based access control, you use management locks to apply a restriction across all users and roles. To learn about setting permissions for users and roles, see [Azure role-based access control (Azure RBAC)](../../role-based-access-control/role-assignments-portal.md).
+## Understand scope of locks
+
+> [!NOTE]
+> It's important to understand that locks don't apply to all types of operations. Azure operations can be divided into two categories - control plane and data plane. **Locks only apply to control plane operations**.
+
+Control plane operations are operations sent to `https://management.azure.com`. Data plane operations are operations sent to your instance of a service, such as `https://myaccount.blob.core.windows.net/`. For more information, see [Azure control plane and data plane](control-plane-and-data-plane.md). To discover which operations use the control plane URL, see the [Azure REST API](/rest/api/azure/).
+
+This distinction means locks prevent changes to a resource, but they don't restrict how resources perform their own functions. For example, a ReadOnly lock on a SQL Database logical server prevents you from deleting or modifying the server. It doesn't prevent you from creating, updating, or deleting data in the databases on that server. Data transactions are permitted because those operations aren't sent to `https://management.azure.com`.
-Resource Manager locks apply only to operations that happen in the [management plane](control-plane-and-data-plane.md), which consists of operations sent to `https://management.azure.com`. The locks don't restrict how resources perform their own functions. Resource changes are restricted, but resource operations aren't restricted. For example, a ReadOnly lock on a SQL Database logical server prevents you from deleting or modifying the server. It doesn't prevent you from creating, updating, or deleting data in the databases on that server. Data transactions are permitted because those operations aren't sent to `https://management.azure.com`.
+More examples of the differences between control and data plane operations are described in the next section.
## Considerations before applying locks
azure-sql-edge Deploy Onnx https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql-edge/deploy-onnx.md
Title: Deploy and make predictions with ONNX
-description: Learn how to train a model, convert it to ONNX, deploy it to Azure SQL Edge or Azure SQL Managed Instance (preview), and then run native PREDICT on data using the uploaded ONNX model.
+description: Learn how to train a model, convert it to ONNX, deploy it to Azure SQL Edge or Azure SQL Managed Instance, and then run native PREDICT on data using the uploaded ONNX model.
keywords: deploy SQL Edge ms.prod: sql ms.technology: machine-learning
azure-sql Server Trust Group Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/managed-instance/server-trust-group-overview.md
Server Trust Group is a concept used for managing trust between Azure SQL Manage
## Server Trust Group setup
-Server Trust Group can be setup via [Azure PowerShell](https://docs.microsoft.com/powershell/module/az.sql/new-azsqlservertrustgroup) or [Azure CLI](https://docs.microsoft.com/cli/azure/sql/stg?view=azure-cli-latest).
+Server Trust Group can be set up via [Azure PowerShell](https://docs.microsoft.com/powershell/module/az.sql/new-azsqlservertrustgroup) or [Azure CLI](https://docs.microsoft.com/cli/azure/sql/stg).
The following section describes the setup of a Server Trust Group using the Azure portal.

1. Go to the [Azure portal](https://portal.azure.com/).
During public preview the following limitations apply to Server Trust Groups.
* For more information about distributed transactions in Azure SQL Managed Instance, see [Distributed transactions](../database/elastic-transactions-overview.md). * For release updates and known issues state, see [Managed Instance release notes](../database/doc-changes-updates-release-notes.md).
-* If you have feature requests, add them to the [Managed Instance forum](https://feedback.azure.com/forums/915676-sql-managed-instance).
+* If you have feature requests, add them to the [Managed Instance forum](https://feedback.azure.com/forums/915676-sql-managed-instance).
azure-sql Business Continuity High Availability Disaster Recovery Hadr Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/virtual-machines/windows/business-continuity-high-availability-disaster-recovery-hadr-overview.md
Azure supports these SQL Server technologies for business continuity:
* [Log shipping](/sql/database-engine/log-shipping/about-log-shipping-sql-server) * [SQL Server backup and restore with Azure Blob storage](/sql/relational-databases/backup-restore/sql-server-backup-and-restore-with-microsoft-azure-blob-storage-service) * [Database mirroring](/sql/database-engine/database-mirroring/database-mirroring-sql-server) - Deprecated in SQL Server 2016
+* [Azure Site Recovery](../../../site-recovery/site-recovery-sql.md)
You can combine the technologies to implement a SQL Server solution that has both high-availability and disaster recovery capabilities. Depending on the technology that you use, a hybrid deployment might require a VPN tunnel with the Azure virtual network. The following sections show you some example deployment architectures.
azure-sql Performance Guidelines Best Practices Checklist https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/virtual-machines/windows/performance-guidelines-best-practices-checklist.md
Title: "Checklist: Performance best practices & guidelines"
description: Provides a quick checklist to review your best practices and guidelines to optimize the performance of your SQL Server on Azure Virtual Machine (VM). documentationcenter: na-+ editor: '' tags: azure-service-management
ms.devlang: na
vm-windows-sql-server Previously updated : 03/25/2021- Last updated : 05/06/2021+
The following is a quick checklist of storage configuration best practices for r
To learn more, see the comprehensive [Storage best practices](performance-guidelines-best-practices-storage.md).
+## SQL Server features
-## Azure & SQL feature specific
+The following is a quick checklist of best practices for SQL Server configuration settings when running your SQL Server instances in an Azure virtual machine in production:
-The following is a quick checklist of best practices for SQL Server and Azure-specific configurations when running your SQL Server on Azure VM:
--- Register with the [SQL IaaS Agent Extension](sql-agent-extension-manually-register-single-vm.md) to unlock a number of [feature benefits](sql-server-iaas-agent-extension-automate-management.md#feature-benefits). -- Enable database page compression.-- Enable instant file initialization for data files.-- Limit autogrowth of the database.-- Disable autoshrink of the database.-- Move all databases to data disks, including system databases.
+- Enable [database page compression](/sql/relational-databases/data-compression/data-compression) where appropriate.
+- Enable [backup compression](/sql/relational-databases/backup-restore/backup-compression-sql-server).
+- Enable [instant file initialization](/sql/relational-databases/databases/database-instant-file-initialization) for data files.
+- Limit [autogrowth](/troubleshoot/sql/admin/considerations-autogrow-autoshrink#considerations-for-autogrow) of the database.
+- Disable [autoshrink](/troubleshoot/sql/admin/considerations-autogrow-autoshrink#considerations-for-auto_shrink) of the database.
+- Disable autoclose of the database.
+- Move all databases to data disks, including [system databases](/sql/relational-databases/databases/move-system-databases).
- Move SQL Server error log and trace file directories to data disks. - Configure default backup and database file locations.-- [Enable locked pages in memory](/sql/database-engine/configure-windows/enable-the-lock-pages-in-memory-option-windows).-- Evaluate and apply the [latest cumulative updates](/sql/database-engine/install-windows/latest-updates-for-microsoft-sql-server) for the installed version of SQL Server.-- Back up directly to Azure Blob storage.-- Use multiple [tempdb](/sql/relational-databases/databases/tempdb-database#optimizing-tempdb-performance-in-sql-server) files, 1 file per core, up to 8 files.--
+- Set max [SQL Server memory limit](/sql/database-engine/configure-windows/server-memory-server-configuration-options#use-) to leave enough memory for the Operating System. ([Leverage Memory\Available Bytes](/sql/relational-databases/performance-monitor/monitor-memory-usage) to monitor the operating system memory health).
+- Enable [lock pages in memory](/sql/database-engine/configure-windows/enable-the-lock-pages-in-memory-option-windows).
+- Enable [optimize for ad hoc workloads](/sql/database-engine/configure-windows/optimize-for-ad-hoc-workloads-server-configuration-option) for OLTP heavy environments.
+- Evaluate and apply the [latest cumulative updates](/sql/database-engine/install-windows/latest-updates-for-microsoft-sql-server) for the installed versions of SQL Server.
+- Enable [Query Store](/sql/relational-databases/performance/monitoring-performance-by-using-the-query-store) on all production SQL Server databases [following best practices](/sql/relational-databases/performance/best-practice-with-the-query-store).
+- Enable [automatic tuning](/sql/relational-databases/automatic-tuning/automatic-tuning) on mission critical application databases.
+- Ensure that all [tempdb best practices](/sql/relational-databases/databases/tempdb-database#optimizing-tempdb-performance-in-sql-server) are followed.
+- Place tempdb on the ephemeral D:\ drive.
+- [Use the recommended number of files](/troubleshoot/sql/performance/recommendations-reduce-allocation-contention#resolution), using multiple tempdb data files starting with 1 file per core, up to 8 files.
+- Schedule SQL Server Agent jobs to run [DBCC CHECKDB](/sql/t-sql/database-console-commands/dbcc-checkdb-transact-sql#a-checking-both-the-current-and-another-database), [index reorganize](/sql/relational-databases/indexes/reorganize-and-rebuild-indexes#reorganize-an-index), [index rebuild](/sql/relational-databases/indexes/reorganize-and-rebuild-indexes#rebuild-an-index), and [update statistics](/sql/t-sql/statements/update-statistics-transact-sql#examples) jobs.
+- Monitor and manage the health and size of the SQL Server [transaction log file](/sql/relational-databases/logs/manage-the-size-of-the-transaction-log-file#Recommendations).
+- Take advantage of any new [SQL Server features](/sql/sql-server/what-s-new-in-sql-server-ver15) available for the version being used.
+- Be aware of the differences in [supported features](/sql/sql-server/editions-and-components-of-sql-server-version-15) between the editions you are considering deploying.
+
+## Azure features
+
+The following is a quick checklist of best practices for Azure-specific guidance when running your SQL Server on Azure VM:
+
+- Register with [the SQL IaaS Agent Extension](sql-agent-extension-manually-register-single-vm.md) to unlock a number of [feature benefits](sql-server-iaas-agent-extension-automate-management.md#feature-benefits).
+- Leverage the best [backup and restore strategy](backup-restore.md#decision-matrix) for your SQL Server workload.
+- Ensure [Accelerated Networking is enabled](../../../virtual-network/create-vm-accelerated-networking-cli.md#portal-creation) on the virtual machine.
+- Leverage [Azure Security Center](../../../security-center/index.yml) to improve the overall security posture of your virtual machine deployment.
+- Leverage [Azure Defender](../../../security-center/azure-defender.md), integrated with [Azure Security Center](https://azure.microsoft.com/services/security-center/), for specific [SQL Server VM coverage](../../../security-center/defender-for-sql-introduction.md) including vulnerability assessments, and just-in-time access, which reduces the attack surface while allowing legitimate users to access virtual machines when necessary. To learn more, see [vulnerability assessments](../../../security-center/defender-for-sql-on-machines-vulnerability-assessment.md), [enable vulnerability assessments for SQL Server VMs](sql-vulnerability-assessment-enable.md) and [just-in-time access](../../../security-center/just-in-time-explained.md).
+- Leverage [Azure Advisor](../../../advisor/advisor-overview.md) to address [performance](../../../advisor/advisor-performance-recommendations.md), [cost](../../../advisor/advisor-cost-recommendations.md), [reliability](../../../advisor/advisor-high-availability-recommendations.md), [operational excellence](../../../advisor/advisor-operational-excellence-recommendations.md), and [security recommendations](../../../advisor/advisor-security-recommendations.md).
+- Leverage [Azure Monitor](../../../azure-monitor/vm/quick-monitor-azure-vm.md) to collect, analyze, and act on telemetry data from your SQL Server environment. This includes identifying infrastructure issues with [VM insights](../../../azure-monitor/vm/vminsights-overview.md) and monitoring data with [Log Analytics](../../../azure-monitor/logs/log-query-overview.md) for deeper diagnostics.
+- Enable [Auto-shutdown](../../../automation/automation-solution-vm-management.md) for development and test environments.
+- Implement a high availability and disaster recovery (HADR) solution that meets your business continuity SLAs; see the [HADR options](business-continuity-high-availability-disaster-recovery-hadr-overview.md#deployment-architectures) available for SQL Server on Azure VMs.
+- Use the Azure portal (support + troubleshooting) to evaluate [resource health](../../../service-health/resource-health-overview.md) and history; submit new support requests when needed.
## Next steps
azure-vmware Configure Windows Server Failover Cluster https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-vmware/configure-windows-server-failover-cluster.md
Now that you've covered setting up a WSFC in Azure VMware Solution, you may want
- Setting up your new WSFC by adding more applications that require the WSFC capability. For instance, SQL Server and SAP ASCS. - Setting up a backup solution.
- - [Setting up Azure Backup Server for Azure VMware Solution](./set-up-backup-server-for-azure-vmware-solution.md)
- - [Backup solutions for Azure VMware Solution virtual machines](./ecosystem-back-up-vms.md)
+ - [Setting up Azure Backup Server for Azure VMware Solution](/azure/backup/backup-azure-microsoft-azure-backup?context=/azure/azure-vmware/context/context)
+ - [Backup solutions for Azure VMware Solution virtual machines](/azure/backup/backup-azure-backup-server-vmware?context=/azure/azure-vmware/context/context)
azure-vmware Ecosystem Migration Vms https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-vmware/ecosystem-migration-vms.md
Our migration partners have industry-leading migration solutions in VMware-based
You aren't required to use VMware HCX as a migration tool, which means you can also migrate physical workloads into Azure VMware Solution. Additionally, migrations to your Azure VMware Solution environment don't need an ExpressRoute connection if it's not available within your source environment. Migrations can be done to multiple locations if you decide to host those workloads in multiple Azure regions.
-You can find more information on these backup solutions here:
+You can find more information on these migration solutions here:
- [RiverMeadow](https://www.rivermeadow.com/migrating-to-vmware-on-azure).
backup About Restore Microsoft Azure Recovery Services https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/about-restore-microsoft-azure-recovery-services.md
+
+ Title: Restore options with Microsoft Azure Recovery Services (MARS) agent
+description: Learn about the restore options available with the Microsoft Azure Recovery Services (MARS) agent.
++ Last updated : 05/07/2021+
+# About restore using the Microsoft Azure Recovery Services (MARS) agent
+
+This article describes the restore options available with the Microsoft Azure Recovery Services (MARS) agent.
+
+## Before you begin
+
+- Ensure that the latest version of the [MARS agent](https://aka.ms/azurebackup_agent) is installed.
+- Ensure that [network throttling](backup-windows-with-mars-agent.md#enable-network-throttling) is disabled.
+- Ensure that high-speed storage with sufficient space for the [agent cache folder](/azure/backup/backup-azure-file-folder-backup-faq#manage-the-backup-cache-folder) is available.
+- Monitor memory and CPU resources, and ensure that sufficient resources are available for decompressing and decrypting data.
+- While using the **Instant Restore** feature to mount a recovery point as a disk, use **robocopy** with multi-threaded copy option (/MT switch) to copy files efficiently from the mounted recovery point.
+
+## Restore options
+
+The MARS agent offers multiple restore options. Each option provides unique benefits that make it suitable for certain scenarios.
+
+Using the MARS agent you can:
+
+- **[Restore Windows Server System State Backup](backup-azure-restore-system-state.md):** This option helps restore the System State as files from a recovery point in Azure Backup, and apply those to the Windows Server using the Windows Server Backup utility.
+- **[Restore all backed up files in a volume](restore-all-files-volume-mars.md):** This option recovers all backed up data in a specified volume from the recovery point in Azure Backup. It allows a faster transfer speed (up to 40 MBps).<br>We recommend using this option for recovering large amounts of data, or entire volumes.
+- **[Restore a specific set of backed up files and folders in a volume using PowerShell](backup-client-automation.md#restore-data-from-azure-backup):** If the paths to the files and folders relative to the volume root are known, this option allows you to restore the specified set of files and folders from a recovery point, using the faster transfer speed of the full volume restore. However, this option doesn't provide the convenience of browsing files and folders in the recovery point using the Instant Restore option.
+- **[Restore individual files and folders using Instant Restore](backup-azure-restore-windows-server.md):** This option allows quick access to the backup data by mounting the volume in the recovery point as a drive. You can then browse and copy files and folders. This option offers a copy speed of up to 6 MBps, which is suitable for recovering individual files and folders of total size less than 80 GB. Once the required files are copied, you can unmount the recovery point.
+
+## Next steps
+
+- For more frequently asked questions, see [MARS agent FAQs](backup-azure-file-folder-backup-faq.yml).
+- For information about supported scenarios and limitations, see [Support Matrix for the backup with the MARS agent](backup-support-matrix-mars-agent.md).
backup Backup Afs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/backup-afs.md
Title: Back up Azure file shares in the Azure portal description: Learn how to use the Azure portal to back up Azure file shares in the Recovery Services vault Previously updated : 01/20/2020 Last updated : 05/07/2021 # Back up Azure file shares
In this article, you'll learn how to:
* [Learn](azure-file-share-backup-overview.md) about the Azure file share snapshot-based backup solution. * Ensure that the file share is present in one of the [supported storage account types](azure-file-share-support-matrix.md).
-* Identify or create a [Recovery Services vault](#create-a-recovery-services-vault) in the same region as the storage account that hosts the file share.
+* Identify or create a [Recovery Services vault](#create-a-recovery-services-vault) in the same region and subscription as the storage account that hosts the file share.
[!INCLUDE [How to create a Recovery Services vault](../../includes/backup-create-rs-vault.md)]
backup Backup Azure Manage Mars https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/backup-azure-manage-mars.md
Title: Manage and monitor MARS Agent backups
description: Learn how to manage and monitor Microsoft Azure Recovery Services (MARS) Agent backups by using the Azure Backup service. Previously updated : 10/07/2019 Last updated : 04/29/2021 # Manage Microsoft Azure Recovery Services (MARS) Agent backups by using the Azure Backup service
There are two ways to stop protecting Files and Folders backup:
- You'll be able to restore the backed-up data for unexpired recovery points. - If you decide to resume protection, then you can use the *Re-enable backup schedule* option. After that, data will be retained based on the new retention policy. - **Stop protection and delete backup data**.
- - This option will stop all future backup jobs from protecting your data and delete all the recovery points.
- - You'll receive a delete Backup data alert email with a message *Your data for this Backup item has been deleted. This data will be temporarily available for 14 days, after which it will be permanently deleted* and recommended action *Reprotect the Backup item within 14 days to recover your data.*
- - To resume protection, reprotect within 14 days from delete operation.
+ - This option will stop all future backup jobs from protecting your data. If the vault security features are not enabled, all recovery points are immediately deleted.<br>If the security features are enabled, the deletion is delayed by 14 days, and you'll receive an alert email with a message *Your data for this Backup item has been deleted. This data will be temporarily available for 14 days, after which it will be permanently deleted* and a recommended action *Reprotect the Backup item within 14 days to recover your data.*<br>In this state, the retention policy continues to apply, and the backup data remains billable. [Learn more](backup-azure-security-feature.md#enable-security-features) on how to enable vault security features.
+ - To resume protection, reprotect the server within 14 days from the delete operation. In this duration, you can also restore the data to an alternate server.
### Stop protection and retain backup data
backup Blob Backup Support Matrix https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/blob-backup-support-matrix.md
Title: Support matrix for Azure Blobs backup description: Provides a summary of support settings and limitations when backing up Azure Blobs. Previously updated : 02/16/2021 Last updated : 05/05/2021
baremetal-infrastructure Configure Snapcenter Oracle Baremetal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/baremetal-infrastructure/workloads/oracle/configure-snapcenter-oracle-baremetal.md
+
+ Title: Configure SnapCenter for Oracle on BareMetal Infrastructure
+description: Learn how to configure SnapCenter for Oracle on BareMetal Infrastructure.
++ Last updated : 05/05/2021++
+# Configure SnapCenter for Oracle on BareMetal Infrastructure
+
+In this article, we'll walk through the steps to configure NetApp SnapCenter to run Oracle on BareMetal Infrastructure.
+
+## Add storage hosts to SnapCenter
+
+First, let's add storage hosts to SnapCenter.
+
+When you start the SnapCenter session and save the security exemption, the application will start up. Sign in to SnapCenter on your virtual machine (VM) using Active Directory credentials.
+
+https://\<hostname\>:8146/
+
+Now you're ready to add both a production storage location and a secondary storage location.
+
+### Add the production storage location
+
+To add your production storage virtual machine (SVM):
+
+1. Select **Storage Systems** in the left menu and select **+ New**.
+
+2. Enter the **Add Storage System** information:
+
+ - Storage System: Enter the SVM IP address provided by Microsoft Operations.
+   - Enter the username created; the default is **snap center**.
+ - Enter the password created when Microsoft Operations [modified customer password using REST](set-up-snapcenter-to-route-traffic.md#steps-to-integrate-snapcenter).
+ - Leave **Send Autosupport notification for failed operations to storage system** unchecked.
+ - Select **Log SnapCenter events to syslog**.
+
+3. Select **Submit**.
+
+ Once the storage is verified, the IP address of the storage system added is shown with the username entered.
+
+ :::image type="content" source="media/netapp-snapcenter-integration-oracle-baremetal/oracle-baremetal-snapcenter-add-production-storage-complete.png" alt-text="Screenshot showing the IP address of the storage system added.":::
+
+ The default is one SVM per tenant. If a customer has multiple tenants, the recommendation is to configure all SVMs here in SnapCenter.
+
+### Add secondary storage location
+
+To add the disaster recovery (DR) storage location, the customer subscriptions in both the primary and DR locations must be peered. Contact Microsoft Operations for assistance.
+
+Adding a secondary storage location is similar to adding the primary storage location. However, once the primary and DR locations are peered, access to the secondary storage location should be verified by pinging storage on the secondary location.
+
+>[!NOTE]
+>If the ping is unsuccessful, it's usually because a default route doesn't exist on the host for the client virtual LAN (VLAN).
+
+Once the ping is verified, repeat the steps you used for adding the primary storage, only now for the DR location on a DR host. We recommend using the same certificate in both locations, but it isn't required.
+
+## Set up Oracle hosts
+
+>[!NOTE]
+>This process is for all Oracle hosts regardless of their location: production or disaster recovery.
+
+1. Before adding the hosts to SnapCenter and installing the SnapCenter plugins, Java 1.8 must be installed. Verify that Java 1.8 is installed on each host before proceeding.
+
+2. Create the same non-root user on each of the Oracle Real Application Clusters (RAC) hosts and add it to /etc/sudoers. You will need to provide a new password.
+
+3. Once the user has been created, it must be added to /etc/sudoers with an explicit set of permissions. These permissions are found by browsing to the following location on the SnapCenter VM: C:\ProgramData\NetApp\SnapCenter\Package Repository and opening the file oracle\_checksum.
+
+4. Copy the appropriate set of commands, depending on the sudo package, where LINUXUSER is replaced with the newly created username.
+
+ :::image type="content" source="media/netapp-snapcenter-integration-oracle-baremetal/netapp-snapcenter-oracle-checksum-details.png" alt-text="Screenshot of the oracle_checksum file details.":::
+
+ The set of commands for sudo package 1.8.7 or greater is copied into /etc/sudoers.
+
+5. Once the user has been added to sudoers, in SnapCenter select **Settings** > **Credential** > **New**.
+
+6. Fill out the Credential box as follows:
+
+ - **Credential Name**: Provide a name that identifies username and sudoers.
+ - **Authentication**: Linux
+ - **Username**: Provide newly created username.
+ - **Password**: <Enter Password>
+ - Check **Use sudo privileges** box
+
+7. Select **Ok**.
+
+8. Select **Hosts** on the left navigation and then select **+ Add**.
+
+9. Enter the following values for **Add Host**:
+
+ - **Host Type**: Linux
+   - **Host Name**: Enter either the FQDN or the IP address of the primary RAC host.
+   - **Credentials**: Select the drop-down and select the newly created credentials for sudoers.
+ - Check the **Oracle Database** box.
+
+ >[!NOTE]
+ >You must enter one of the actual Oracle Host IP addresses. Entering either a Node VIP or Scan IP is not supported.
+
+ :::image type="content" source="media/netapp-snapcenter-integration-oracle-baremetal/snapcenter-add-hosts-details.png" alt-text="Screenshot showing the details for the new host.":::
+
+10. Select **More Options**. Ensure **Add all hosts in the Oracle RAC** is selected. Select **Save** and then select **Submit**.
+
+11. The plugin installer will now attempt to communicate with the IP address provided. If communication is successful, the identity of the provided Oracle RAC host is confirmed by selecting **Confirm and Submit**.
+
+ :::image type="content" source="media/netapp-snapcenter-integration-oracle-baremetal/snapcenter-add-hosts-confirm-fingerprint.png" alt-text="Screenshot showing the new host fingerprint.":::
+
+12. After the initial Oracle RAC is confirmed, SnapCenter will then attempt to communicate and confirm all other Oracle RAC servers as part of the cluster. Select **Confirm Others and Submit**.
+
+ :::image type="content" source="media/netapp-snapcenter-integration-oracle-baremetal/snapcenter-add-hosts-fingerprint-authentication.png" alt-text="Screenshot showing the authentication of fingerprint.":::
+
+ A status screen appears for managed hosts letting you know that SnapCenter is installing the selected Oracle plugin. Installation progress can be checked by looking at the logs located at /opt/NetApp/snapcenter/scc/logs/DISCOVER-PLUGINS\_0.log on any of the Oracle RAC hosts.
+
+ Once the plugins are installed successfully, the following screen will appear.
+
+ :::image type="content" source="media/netapp-snapcenter-integration-oracle-baremetal/snapcenter-installed-plugins.png" alt-text="Screenshot showing all the installed SnapCenter plugins.":::
+
+## Add credentials for Oracle Recovery Manager
+
+The Oracle Recovery Manager (RMAN) catalog authentication method authenticates against the RMAN catalog database. If you have configured an external catalog mechanism and registered your database to catalog a database, you need to add RMAN catalog authentication. RMAN credentials can be added at any time.
+
+1. In SnapCenter, select **Settings** on the left menu and then select **Credential**. Select **New** in the upper right corner.
+
+2. Enter the **Credential Name** to call RMAN credentials stored in SnapCenter. In the **Authentication** drop-down, select **Oracle RMAN Catalog for Authentication**. Enter your username and password. Select **OK**.
+
+3. Once the credentials are added, the database settings must be modified within SnapCenter. Select the database in resources. Select **Database Settings** in the upper right corner of the main window.
+
+4. On the Database Settings screen, select **Configure Database**.
+
+5. Expand **Configure RMAN Catalog Settings**. Select the dropdown for **Use Existing Credential** previously set for this database and select the appropriate option. Add the TNS Name for this individual database. Select **OK**.
+
+ >[!NOTE]
+ >Different credentials should be created in the preceding step for each unique combination of username and passwords. If you prefer, SnapCenter will accept a single set of credentials for all databases.
+ >
+ >Even though RMAN credentials have been added to the database, RMAN won't be cataloged unless the Catalog Backup with Oracle Recovery Manager (RMAN) is also checked for each protection policy, as discussed in the following section on creating protection policies (per your backup policies).
+
+## Create protection policies
+
+Once your hosts have been successfully added to SnapCenter, you're ready to create your protection policies.
+
+Select **Resources** on the left menu. SnapCenter will present all of the identified databases that existed on the RAC hosts. The Single Instance types for bn6p1X and grid are ignored as part of the protection scheme. The status of all should be "Not Protected" at this point.
+
+As discussed in the [Overview](netapp-snapcenter-integration-oracle-baremetal.md#oracle-disaster-recovery-architecture), different file types have different snapshot frequencies that enable a local RPO of approximately 15 minutes by making archive log backups every 15 minutes. The datafiles are then backed up at longer intervals, like every hour or more. Therefore, two different policies are created.
+
+With the RAC database(s) identified, several protection policies are created, including a policy for datafiles and control files and another for archive logs.
+
+### Datafiles protection policy
+
+Follow these steps to create a datafiles protection policy.
+
+1. In SnapCenter, select **Settings** in the left menu and then select **Policies** on the top menu. Select **New**.
+
+2. Select the radio button for **Datafiles and control files** and scroll down.
+
+ :::image type="content" source="media/netapp-snapcenter-integration-oracle-baremetal/snapcenter-new-database-backup-policy.png" alt-text="Screenshot showing the details for the new database backup policy.":::
+
+3. Select the radio button for **hourly**. If integration with RMAN is desired for catalog purposes and the RMAN credentials have already been added, select the checkbox for **Catalog backup with Oracle Recovery Manager (RMAN)**. Select **Next**. If the catalog backup is checked, [RMAN credentials must be added](#add-credentials-for-oracle-recovery-manager) for cataloging to occur.
+
+ :::image type="content" source="media/netapp-snapcenter-integration-oracle-baremetal/snapcenter-database-backup-policy-frequency.png" alt-text="Screenshot showing options to choose schedule frequency in your backup policy.":::
+
+ >[!IMPORTANT]
+ >A selection must be made for schedule policy even if a backup frequency other than hourly or daily, etc. is desired. The actual backup frequency is set later in the process. Do not select "none" unless all backups under this policy will be on-demand only.
+
+4. Select **Retention** on the left menu. There are two types of retention settings that are set for a backup policy. The first retention setting is the maximum number of on-demand snapshots to be kept. Based on your backup policies, a set number of snapshots are kept, such as the following example of 48 on-demand snapshots kept for 14 days. Set the appropriate on-demand retention policy according to your backup policies.
+
+ :::image type="content" source="media/netapp-snapcenter-integration-oracle-baremetal/snapcenter-new-database-backup-policy-retention.png" alt-text="Screenshot showing the database backup policy retention setting.":::
+
+5. The next retention setting is the scheduled number of snapshots to keep, based on the previous entry of either hourly, daily, weekly, etc. Select **Next**.
+
+6. On the **Replication** screen, select the checkbox for **Update SnapMirror after creating a local snapshot copy** and select **Next**. The other entries are left at default.
+
+ >[!NOTE]
+ >SnapVault is not currently supported in the Oracle BareMetal Infrastructure environment.
+
+ Replication can be added later by returning to the Protection Policy screen, selecting **modify protection policy** and **Replication** in the left menu.
+
+7. The **Script** page is where you enter any scripts needed to run either before or after the Oracle backup takes place. Currently, scripts as part of the SnapCenter process aren't supported. Select **Next**.
+
+8. Select **Verification** on the left menu. If you want the ability to verify snapshots for recoverability integrity, then select the checkbox next to hourly. No verification script commands are currently supported. Select **Next**.
+
+ >[!NOTE]
+ >The actual location and schedule of the verification process is added in a later section, Assign Protection Policies to Resources.
+
+9. On the **Summary** page, verify all settings are entered as expected and select **Finish**.
+
+### Archive logs protection policy
+
+Follow the preceding steps again to create your archive logs protection policy. In step 2, select the radio button for **Archive logs** instead of "Datafiles and control files."
+
+## Assign protection policies to resources
+
+Once a policy has been created, it then needs to be associated with a resource in order for SnapCenter to start executing within that policy.
+
+### Datafiles resource group
+
+1. Select **Resources** in the left menu, and then select **New Resource Group** in the upper right corner of the main window.
+
+2. Add the **Name** for this resource group and any tags for ease of searchability.
+
+ >[!NOTE]
+ >We recommend you check **Use custom name format for Snapshot copy** and add the following entries: **$ResourceGroup**, **$Policy**, and **$ScheduleType**. This ensures that the snapshot prefix meets the SnapCenter standard and that the snapshot name gives the necessary details at a glance. If you leave **Use custom name format for Snapshot copy** unchecked, whatever entry is placed in **Name** becomes the prefix for the snapshots created. Ensure that the name entered identifies the database and what is being protected, for instance, datafiles and control files.
+
+3. Under Backup settings, add **/u95** to exclude archive logs.
+
+4. On the Resource page, move all databases that are protected by the same backup protection policy from "Available Resources" to "Selected Resources." Don't add the Oracle database instances for each host, just the databases. Select **Next**.
+
+5. Select the protection policy for datafiles and control files you previously created. After selecting the policy, select the pencil under Configure schedules.
+
+ :::image type="content" source="media/netapp-snapcenter-integration-oracle-baremetal/new-datafiles-resource-group-available-policies.png" alt-text="Screenshot showing selecting the protection policy in order to configure schedules.":::
+
+6. Add the schedule for the selected policy to execute. Ensure the start date is after the current date and time.
+
+ >[!NOTE]
+ >The scheduled frequency does not need to match the frequency selected when you created the policy. For hourly, we recommend you just leave "Repeat every 1 hours." The start time will serve as the start time each hour for backup to occur. Ensure that the schedule for protecting the datafiles does not overlap with the schedule for protecting the archive logs, as only one backup can be active at a given time.
+
+7. Verification ensures the created snapshot is usable. The verification process is extensive, including creating clones of all Oracle database volumes, mounting the database to the primary host, and verifying its recoverability. This process is time-consuming and resource-consuming. If you want to configure the verification schedule, select the **+** sign.
+
+ >[!NOTE]
+   >The verification process occupies resources on the host for a significant amount of time. We recommend that if you add verification, you do so on a host in the secondary location, if available.
+
+   The following screenshot shows verification for a new resource group that did not have replication enabled or snapshots replicated when it was created.
+
+ :::image type="content" source="media/netapp-snapcenter-integration-oracle-baremetal/new-resource-group-verification-schedules.png" alt-text="Screenshot showing how to configure the verification schedule for the resource group.":::
+
+ If replication is enabled and a host in the disaster recovery location will be used for verification, then skip to step 2 in the following subsection. That step allows you to load secondary locators to verify backups on the secondary location.
+
+### Add verification schedule for new resource group
+
+1. Select either **Run verification after backup** or **Run scheduled verification**, and select a pre-scheduled frequency in the drop-down. If DR is enabled, you can select **Verify on secondary storage location** to run verification in the DR location, reducing the resource strain on production. Select **OK**.
+
+ :::image type="content" source="media/netapp-snapcenter-integration-oracle-baremetal/new-resource-group-add-verification-schedules.png" alt-text="Screenshot showing how to add a verification schedule.":::
+
+   If verification isn't required or is already set up locally, skip to SMTP setup (step 5) below.
+
+2. If update SnapMirror was enabled when you created the datafiles protection policy, and a secondary storage location was added, SnapCenter detects replication between the two locations. In that case, a screen appears allowing you to load secondary locators to verify backups on the secondary location. Select **Load locators**.
+
+3. After you select **Load locators**, SnapCenter presents the SnapMirror relationships it found that contain the datafiles and control files. These should match the Disaster Recovery framework provided by Microsoft Operations. Select the pencil under Configure Schedules.
+
+4. Select the checkbox for **Verify on secondary storage location**.
+
+5. If an SMTP server is available, SnapCenter can send an email to SnapCenter administrators. Enter the following if you want an email sent.
+
+ - **Email Preference**: Enter your preference for frequency of receiving email.
+ - **From**: Enter the email address that SnapCenter will use to send email from.
+   - **To**: Enter the email address that SnapCenter will send email to.
+   - **Subject**: Enter the subject that SnapCenter will use when sending the email.
+   - Select the **Attach job report** checkbox.
+ - Select **Next**.
+
+6. Verify settings on the Summary page and select **Finish**.
+
+After a resource group has been created, to locate it:
+
+1. Select **Resources** on the left menu.
+2. In the main window, select the dropdown menu next to **View**, and then select **Resource Group**.
+
+ If this is your first Resource group, only the newly created resource group for datafiles and control files will be present. If you also created an archive log resource group, you'll see that as well.
+
+### Archive log resource group
+
+To assign protection policies to your archive log resource group, follow the same steps you followed when assigning protection policies to your Datafiles resource group. The only differences are:
+
+- In step 3, don't add **/u95** to exclude archive logs; leave this field blank.
+- In step 6, we recommend backing up archive logs every 15 minutes.
+- The verification tab is blank for archive logs.
+
+## Next steps
+
+Learn how to create an on-demand backup of your Oracle Database in SnapCenter:
+
+> [!div class="nextstepaction"]
+> [Create on-demand backup in SnapCenter](create-on-demand-backup-oracle-baremetal.md)
baremetal-infrastructure Create On Demand Backup Oracle Baremetal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/baremetal-infrastructure/workloads/oracle/create-on-demand-backup-oracle-baremetal.md
+
+ Title: Create on-demand backup of your Oracle Database in SnapCenter
+description: Learn how to create an on-demand backup of your Oracle Database in SnapCenter on Oracle BareMetal Infrastructure.
++ Last updated : 05/07/2021++
+# Create on-demand backup of your Oracle Database in SnapCenter
+
+In this article, we'll walk through creating an on-demand backup of your Oracle Database in SnapCenter.
+
+Once you've [configured SnapCenter](configure-snapcenter-oracle-baremetal.md), backups of your datafiles, control files, and archive logs will continue based on the schedule you entered when creating the resource group(s). However, as part of normal database protection, you might also want on-demand backups.
+
+## Steps to create an on-demand backup
+
+1. Select **Resources** on the left menu. Then in the dropdown menu next to **View**, select **Resource Group**. Select the resource group name corresponding to the on-demand backup you want to create.
+
+2. Select **Back up now** in the upper right.
+
+3. Verify the resource group and protection policy are correct for the on-demand backup. Select the **Verify after backup** checkbox if you want to verify this backup. Select **Backup**.
+
+After the backup completes, it will be available in the list of backups under **Resources**. Select the database or databases associated with the resource group you backed up. This backup will be retained according to the on-demand retention policy you set when creating the protection policy.
+
+## Next steps
+
+Learn how to restore an Oracle Database in SnapCenter:
+
+> [!div class="nextstepaction"]
+> [Restore Oracle database](restore-oracle-database-baremetal.md)
baremetal-infrastructure High Availability Features https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/baremetal-infrastructure/workloads/oracle/high-availability-features.md
Redo can be delayed for a pre-determined period, so user errors aren't immediate
The NetApp Files storage solution used in BareMetal allows you to create snapshots of volumes. Snapshots allow you to revert a file system to a specific point in time quickly. Snapshot technologies allow recovery time objective (RTO) times that are a fraction of the time needed to restore a database backup.
-Snapshot functionality for Oracle databases is available through Azure NetApp SnapCenter. SnapCenter enables snapshots for backup, SnapVault gives you offline vaulting, and Snap Clone enables self-service restore and other operations.
+Snapshot functionality for Oracle databases is available through Azure NetApp SnapCenter. SnapCenter enables snapshots for backup, SnapVault gives you offline vaulting, and Snap Clone enables self-service restore and other operations. For more information, see [SnapCenter integration for Oracle on BareMetal Infrastructure](netapp-snapcenter-integration-oracle-baremetal.md).
## Recovery Manager
baremetal-infrastructure Netapp Snapcenter Integration Oracle Baremetal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/baremetal-infrastructure/workloads/oracle/netapp-snapcenter-integration-oracle-baremetal.md
+
+ Title: SnapCenter integration for Oracle on BareMetal Infrastructure
+description: Learn how to use snapshot technologies from Oracle and NetApp to create operational recovery backups for Oracle databases on BareMetal Infrastructure.
++ Last updated : 05/05/2021++
+# SnapCenter integration for Oracle on BareMetal Infrastructure
+
+This how-to guide helps you use snapshot technologies from Oracle and NetApp to create operational recovery backups for Oracle databases. The use of snapshots is just one of several data protection approaches for Oracle. Snapshots can mitigate downtime and data loss when running an Oracle database on BareMetal Infrastructure.
+
+>[!IMPORTANT]
+>SnapCenter supports Oracle for SAP applications, but it does not provide SAP BR\*Tools integration.
+
+Although Oracle offers two different backup methods for snapshots, SnapCenter from NetApp only supports one method: hot backup mode. The hot backup mode is the traditional version of backing up and creating Oracle snapshots. It requires interaction with the Oracle host to temporarily place the database into a hot backup mode to catalog the archive logs properly. Hot backup mode also enables greater interaction with the RMAN database to keep closer track of available snapshots for recovery.
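+
+SnapCenter automates hot backup mode for you. As an illustrative sketch only, the underlying Oracle commands involved look roughly like the following; the exact sequence SnapCenter issues may differ:
+
+```bash
+# Illustration of hot backup mode; SnapCenter performs these steps automatically.
+sqlplus -s / as sysdba <<'EOF'
+ALTER DATABASE BEGIN BACKUP;      -- place datafiles in hot backup mode
+-- (the storage snapshot of the datafile volumes is taken here)
+ALTER DATABASE END BACKUP;        -- take the datafiles out of hot backup mode
+ALTER SYSTEM ARCHIVE LOG CURRENT; -- switch and archive the current redo log
+EOF
+```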
+
+The articles in this guide walk you through creating an operational recovery and disaster recovery framework for Oracle in hot backup mode. Disaster recovery is recovery of a database, or part of a database, following a disaster. An example might be a bad drive or a corrupt database. Operational recovery is recovery from something other than a disaster. An example might be the loss of a few rows of data that doesn't impede your business.
+
+## SnapCenter database organization
+SnapCenter organizes databases into resource groups. A resource group can be one or many databases with the same protection policy. So you don't have to select individual volumes that are part of the backup.
++
+## Oracle disaster recovery architecture
+
+Oracle hot backups are divided into two separate backups: the datafiles and control files, and the archive logs used to roll the database forward after the datafiles are restored. Protection of the datafiles and control files is on an "hourly" basis, but a longer backup frequency is acceptable. The longer the interval between backups, the longer the recovery time.
+
+SnapCenter creates local snapshots of the database in the default datafile locations (nfs01). Snapshots are created on each file system that hosts either datafiles or control files. These backups enable fast recovery of the database, but they don't protect the data against a multi-disk failure or a site failure.
+
+The number of "hourly" snapshots retained depends on the backup policies you set. An Oracle database typically has 2-5 days of operational recovery capability from snapshots.
+
+Enough archive logs must exist to roll the Oracle database forward from the most recent viable datafiles recovery point. Use snapshots of the archive logs to lower the recovery point objective (RPO) for the database: the shorter the snapshot interval on the archive logs, the lower the RPO. The recommended snapshot interval for the archive logs is either 5 or 15 minutes. The 5-minute interval yields the shortest RPO, but it also increases complexity, because more snapshots must be managed as part of the recovery process.
+
+## Next steps
+
+Learn to set up NetApp SnapCenter to route traffic from Azure to Oracle BareMetal Infrastructure servers.
+
+> [!div class="nextstepaction"]
+> [Set up SnapCenter to route traffic](set-up-snapcenter-to-route-traffic.md)
baremetal-infrastructure Oracle Baremetal Ethernet https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/baremetal-infrastructure/workloads/oracle/oracle-baremetal-ethernet.md
For BareMetal instances, the default will have nine assigned IP addresses on the
- Ethernet "C" should have an assigned IP address that is used for communication to NFS storage. This type of address shouldn't be maintained in the etc/hosts directory. - Ethernet "D" should be used exclusively for global reach setup towards accessing BareMetal instances in your DR region.
-## Next step
+## Next steps
Learn more about BareMetal Infrastructure for Oracle architecture.
baremetal-infrastructure Restore Oracle Database Baremetal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/baremetal-infrastructure/workloads/oracle/restore-oracle-database-baremetal.md
+
+ Title: Restore Oracle Database
+description: Learn how to restore your Oracle Database on BareMetal Infrastructure using SnapCenter.
++ Last updated : 05/07/2021++
+# Restore Oracle Database
+
+In this article, we'll walk through the steps to restore your Oracle Database on BareMetal Infrastructure using SnapCenter.
+
+You have several options to restore your Oracle Database on BareMetal Infrastructure. We recommend consulting the [Oracle Restore Decision Matrix](oracle-high-availability-recovery.md#choose-your-method-of-recovery) before undertaking restoration. This matrix can help you choose the most appropriate restore method given the required recovery time and the potential for data loss.
+
+Typically, you'll restore the most current snapshots for data and archive log volumes, as the goal is to restore the database to the last known recovery point. Restoring snapshots is a permanent process.
+
+>[!IMPORTANT]
+>Great care is required to ensure that the appropriate snapshot is restored. Restoring from a snapshot deletes all other snapshots and their associated data, including snapshots more recent than the one you select for restoration. So we recommend you approach the restore process conservatively. If there's any question as to which snapshot should be recovered, err on the side of using a more recent snapshot first.
+
+## Restore database locally with restored archive logs
+
+Before attempting recovery, the database must be taken offline if it isn't already. Once you've verified the database isn't running on any nodes in the Oracle Real Application Clusters (RAC), you're ready to start. First, you'll restore the archive logs.
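+
+A minimal sketch of taking the database offline, assuming the database unique name is ORCL and the database is managed by Oracle Clusterware:
+
+```bash
+# Confirm no RAC instance is still running, then stop the database.
+srvctl status database -d ORCL
+srvctl stop database -d ORCL
+```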
+
+1. Identify the backups available. In SnapCenter, select **Resources** in the left menu.
+
+ :::image type="content" source="media/netapp-snapcenter-integration-oracle-baremetal/restore-database-available-backups.png" alt-text="Screenshot showing the Database view.":::
+
+2. Select the database you want to restore from the list. The database will contain the list of resource groups created and their associated policies.
+
+ :::image type="content" source="media/netapp-snapcenter-integration-oracle-baremetal/available-backups-list.png" alt-text="Screenshot showing the full list of available backups to restore.":::
+
+3. A list of primary backups is then displayed. The backups are identified by their backup name and their type: Log or Data. Handle the log restoration first, as SnapCenter isn't designed to directly restore archive log volumes. Identify the archive log volume requiring restoration. Typically the volume to restore is the latest backup, as most recoveries will roll forward all archive logs from the last known good datafiles backup.
+
+ :::image type="content" source="media/netapp-snapcenter-integration-oracle-baremetal/primary-backups-list.png" alt-text="Screenshot showing a list of the primary backups.":::
+
+4. After selecting the Archive Log volume, the mount option in the upper right-hand corner of the backup list becomes enabled. Select **Mount**. On the Mount backups page, from the drop-down, select the host that will mount the backup. Any RAC node may be selected, but both the archive log and datafiles recovery host must be the same. Copy the mount path, as it will be used in the upcoming step to recover the datafiles. Select **Mount**.
+
+ :::image type="content" source="media/netapp-snapcenter-integration-oracle-baremetal/restore-database-mount-backups.png" alt-text="Screenshot showing the backups to mount.":::
+
+5. Unlike backup jobs, the job viewer on this page doesn't show the status of the mount process. To see the status of the mount, select **Monitor** on the left menu. It shows the status of all jobs that have been run; the mount operation should be the first entry. You can also navigate to the copied mount path to verify the existence of the archive logs.
+
+   The host selected for mounting the file system also receives an entry in /etc/fstab with the above mount path. You can remove the entry once recovery is complete; a hypothetical example follows the screenshot.
+
+ :::image type="content" source="media/netapp-snapcenter-integration-oracle-baremetal/restore-database-mount-output.png" alt-text="Screenshot showing the mount output.":::
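+
+   A hypothetical sketch of the added entry and how to inspect it; the volume name and mount path are placeholders, and your actual values come from the mount operation:
+
+   ```bash
+   # Inspect the entry the mount operation added to /etc/fstab.
+   grep -i snapcenter /etc/fstab
+   # Example (placeholder values):
+   # <svm-ip>:/ORCL_log_backup  /var/opt/snapcenter/mnt/ORCL_log  nfs  rw,bg,hard  0 0
+   ```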
+
+6. Next, restore the datafiles and control files. As before, select **Resources** from the left menu and select the database to be restored. Select the data backup to be restored. Again, typically the backup restored is the most recent one. This time when you select the data backup, the restore option is enabled. Select **Restore**.
+
+ :::image type="content" source="media/netapp-snapcenter-integration-oracle-baremetal/restore-database-selected-data-backup.png" alt-text="Screenshot of the data backup to be restored.":::
+
+7. On the **Restore Scope** tab: From the drop-down, select the same host you chose earlier to mount the log files. Ensure **All Datafiles** is selected and select the checkbox for **Control files**. Select **Next**.
+
+ :::image type="content" source="media/netapp-snapcenter-integration-oracle-baremetal/restore-database-recovery-scope.png" alt-text="Screenshot showing the restore scope details for datafiles.":::
+
+8. On the **Recovery Scope** tab: The system change number (SCN) from the log backup that was mounted is entered. **Until SCN** is also now selected.
+
+   The SCN is listed for each backup in the list of all backups. The previously copied mount point location is also entered under **Specify external archive log files locations**. It's currently beyond the scope of this document to address more than one archive log restore location. Select **Next**.
+
+ :::image type="content" source="media/netapp-snapcenter-integration-oracle-baremetal/restore-database-choose-recovery-scope.png" alt-text="Screenshot showing the recovery scope options.":::
+
+9. On the **PreOps** tab: No scripts are currently recommended as part of the pre-restore process. Select **Next**.
+
+10. On the PostOps tab: Again, no scripts are supported as part of the post-restore process. Select the checkbox for **Open the database or container database in READ-WRITE mode after recovery**. Select **Next**.
+
+11. On the **Notification** tab: If an SMTP server is available, and you want an email notification of the restore process, fill in the email settings. Select **Next**.
+
+12. On the **Summary** tab: Verify all details are correct. Select **Finish**.
+
+13. You can see the restore status by selecting the restore job in the **Activity** pane at the bottom of the screen. You can follow the progress by selecting the green arrows to open each restore subsection and view its progress.
+
+ :::image type="content" source="media/netapp-snapcenter-integration-oracle-baremetal/restore-job-details.png" alt-text="Screenshot showing the restore job details.":::
+
+ Once restore is complete, all of the steps will change to check marks. Select **Close**.
+
+   You can now verify that the rows previously removed from the sample database have been restored.
+
+ :::image type="content" source="media/netapp-snapcenter-integration-oracle-baremetal/output-of-restored-files.png" alt-text="Screenshot showing the output verification of the restored files.":::
+
+## Clone database locally or remotely
+
+Cloning the database is similar to the restore process. The clone database operation can be used either locally or remotely depending on the outcome you want. Cloning the database in the disaster recovery (DR) location is useful for several purposes, including disaster recovery failover testing and QA staging of production data. This assumes the disaster recovery location is used for test, development, QA, and so on.
+
+### Create a clone
+
+Clone creation is a SnapCenter feature that uses a snapshot as a point-in-time reference and pointers to present the parent volume's data in a cloned volume. The cloned volume is read-writable and grows only through new writes, while reads of unchanged data still come from the parent volume. This feature lets you present a "duplicate" set of data to a host without interfering with the data on the parent volume.
+
+This feature is especially useful for disaster recovery testing, because a temporary file system can be stood up from the same snapshots that would be used in an actual recovery. You can verify the data and confirm that applications work as expected, and then shut down the disaster recovery test without impacting the disaster recovery volumes or replication.
+
+Here are the steps to clone a database.
+
+1. In SnapCenter, select **Resources** and then select the database to be cloned. If you'll create the clone locally, continue to the next step. If you'll restore the clone in the secondary location, select backups above the Mirror Copies box.
+
+ :::image type="content" source="media/netapp-snapcenter-integration-oracle-baremetal/clone-database-manage-copies.png" alt-text="Clone database managed copies diagram.":::
+
+2. Select the appropriate data backup from the provided backup list. Select **Clone**.
+
+ >[!NOTE]
+ >The data backup must have been created earlier than the timestamp or system change number (SCN) if a timestamp clone is required.
+
+3. On the **Name** tab: Enter the name of the **SID** for the clone. Verify that **Source Volume** and **Destination Volume** for Data and Archive Logs match what Microsoft Operations provided for the disaster recovery mapping. Select **Next**.
+
+ :::image type="content" source="media/netapp-snapcenter-integration-oracle-baremetal/provide-clone-database-sid.png" alt-text="Screenshot showing where to enter the database SID.":::
+
+4. On the **Locations** tab: The datafile locations, control files, and redo logs are where the clone operation will create the necessary file systems for recovery. We recommend leaving these as is. It's beyond the scope of this document to explore alternatives. Ensure the appropriate host is selected for the location of the restore. Select **Next**.
+
+ :::image type="content" source="media/netapp-snapcenter-integration-oracle-baremetal/clone-database-select-host.png" alt-text="Screenshot showing how to select a host to clone.":::
+
+5. On the **Credentials** tab: The Credentials and Oracle Home Settings values are pulled from the existing production location. We recommend leaving these as is unless you know the secondary location has different values than the primary location. Select **Next**.
+
+ :::image type="content" source="media/netapp-snapcenter-integration-oracle-baremetal/clone-database-credentials.png" alt-text="Screenshot for entering the database credentials for clone.":::
+
+6. On the **PreOps** tab: No pre-recovery scripts are currently supported. Select **Next**.
+
+7. On the **PostOps** tab: If the database will only be recovered until a certain time or SCN, then select the appropriate radio button and add either the timestamp or SCN. SnapCenter will recover the database until it reaches that point. Otherwise, leave **Until Cancel** selected. Select **Next**.
+
+8. On the **Notification** tab: Enter necessary SMTP information if you want to send a notification email when the cloning is complete. Select **Next**.
+
+9. On the **Summary** tab: Verify the appropriate **Clone SID** is entered and the correct host has been selected. Scroll down to ensure the appropriate recovery scope has been entered. Select **Finish**.
+
+10. The clone job will show in the active pop-up at the bottom of the screen. Select the clone activity to display the job details. Once the activity has finished, the Job Details page will show all green check marks and provide the completion time. Cloning usually takes around 7-10 minutes.
+
+11. After the job is complete, switch to the host used as the target for the clone and verify the mount points by using `cat /etc/fstab`. This verification confirms that the mount points listed during the clone wizard exist for the database, and that the paths include the database SID entered in the wizard. In the example below, the SID is dbsc4, as shown by the mount points on the host.
+
+ :::image type="content" source="media/netapp-snapcenter-integration-oracle-baremetal/clone-database-switch-to-target-host.png" alt-text="Screenshot of command to switch to target host.":::
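+
+   A minimal sketch of the check; dbsc4 is the example SID from the wizard:
+
+   ```bash
+   # List the clone's mount points on the target host; replace dbsc4 with your clone SID.
+   grep dbsc4 /etc/fstab
+   ```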
+
+12. On the host, type **oraenv** and press **Enter**. Enter the newly created SID.
+
+   It's up to you to verify the database is restored appropriately. However, the following steps assume a database created by a user other than Oracle.
+
+13. Enter **sqlplus / as sysdba**. Because the table was created under a different user, the original, invalid username and password are entered automatically. Enter the correct username and password. The SQL> prompt displays once the sign-in is successful.
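+
+   A sketch of steps 12 and 13 together; dbsc4 is the example SID:
+
+   ```bash
+   # Set the Oracle environment for the clone; oraenv prompts for the SID.
+   . oraenv            # enter dbsc4 at the ORACLE_SID prompt
+
+   # Connect as sysdba; supply the correct username and password when prompted.
+   sqlplus / as sysdba
+   ```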
+
+Run a query against the sample database to verify the appropriate data is returned. In the following example, the archive logs were used to roll the database forward. The clonetest entry was created after the data backup, so its presence shows the archive logs were applied; if the archive logs hadn't rolled forward, that entry wouldn't be listed.
+
+```sql
+SQL> select * from acolvin.t;
+
+COL1
+--
+COL2
+
+first insert
+17-DEC-20
+
+log restore
+17-DEC-20
+
+clonetest
+18-DEC-20
+
+COL1
+--
+COL2
+
+after first insert
+17-DEC-20
+
+next insert
+17-DEC-20
+
+final insert
+18-DEC-20
+
+COL1
+--
+COL2
+
+BILLY
+17-DEC-20
+
+7 rows selected.
+
+```
+
+### Delete a clone
+
+It's important to delete a clone once you're finished with it. If you intend to keep using it, you should split the clone. A snapshot that's the parent of a clone can't be deleted and is skipped as part of the retention count. If you don't delete clones as you finish with them, the snapshots you're maintaining can consume excessive storage.
+
+A split clone copies the data from the existing volume and snapshot into a new volume. This process severs the relationship between the clone and the parent snapshot, allowing the snapshot to be deleted when its retention number is reached. If its retention number has already been reached, the snapshot will be deleted at the next snapshot. Still, a split clone also incurs storage cost.
+
+When a clone is created, the resource tab for that database lists the clone, whether it's local or remote. To delete the clone:
+
+1. On the **Resource** tab: Select the box containing the clone you wish to delete.
+
+2. The Secondary Mirror Clone(s) page shows the clone. In this example, the clone is in the secondary location. Select **Delete** in the upper right corner of the clone list.
+
+ :::image type="content" source="media/netapp-snapcenter-integration-oracle-baremetal/delete-clone-secondary-mirror-clones.png" alt-text="Screenshot of the secondary mirror clones.":::
+
+3. Ensure you've exited SQL*Plus before executing the deletion. Select **OK**.
+
+4. To see the job progress, select **Jobs** in the left menu. A green check mark shows when the clone has been deleted.
+
+5. Once the clone is deleted, it might also be useful to unmount the archive logs that were used as part of the clone process, if applicable. Back on the appropriate backups list (from when you created the clone), you can sort the backups by whether they've been mounted. The sort algorithm isn't perfect and won't always group all items together, but it generally sorts in the right direction. The following volume was previously mounted. Select **unmount**.
+
+ :::image type="content" source="media/netapp-snapcenter-integration-oracle-baremetal/delete-clone-unmount-backup.png" alt-text="Screenshot showing the backups list.":::
+
+6. Select **OK**. You can view the status of the unmount job by selecting **Monitor** on the left menu. A green check mark will show when the backup is unmounted.
+
+### Split a clone
+
+Splitting a clone creates a copy of the parent volume by replicating all the data in the parent volume up to the point of the snapshot used to create the clone. This process separates the parent volume from the clone volume and removes the hold on the snapshot used to create the clone volume. The snapshot can then be deleted as part of the retention policy.
+
+Splitting a clone is useful to populate data in either the production environment or the disaster recovery environment. Splitting allows the new volumes to function independently of the parent volume.
+
+>[!NOTE]
+>The split clone process cannot be reversed nor canceled.
+
+To split a clone:
+
+1. Select the database that already contains a clone.
+
+2. Once the location of the clone is selected, it shows in the list on the Secondary Mirror Clone(s) page.
+
+3. Just above the list of clones, select **Clone Split**. SnapCenter doesn't allow any change as part of the split process, so the next page simply shows what changes will occur. Select **Start**.
+
+ >[!NOTE]
+ >The split process can take a significant amount of time depending on how much data must be copied, the layout of the database on storage, and the activity level of storage.
+
+After the process is finished, the clone that was split is removed from the list of backups, and the snapshot associated with the clone is now free to be removed as part of the normal retention plan on SnapCenter.
+
+## Restore database remotely after disaster recovery event
+
+SnapCenter isn't currently designed to automate the failover process. If a disaster recovery event occurs, either Microsoft Operations or REST recovery scripts are required to restore the database in the secondary location. It's currently beyond the scope of this document to detail the process of executing the REST recovery scripts.
+
+## Restart all RAC nodes after restore
+
+After you've restored the database in SnapCenter, it's only active on the RAC node you selected when restoring the database, as shown below. In this example, we restored the database to bn6sc2 to demonstrate that you can select any RAC node to perform the restore.
++
+Start the remaining RAC nodes by using the _srvctl start database_ command. After the command finishes, verify that all RAC nodes are participating, as in the sketch below.
++
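+A minimal sketch, assuming the database unique name is ORCL:
+
+```bash
+# Start the remaining RAC instances, then confirm all nodes are participating.
+srvctl start database -d ORCL
+srvctl status database -d ORCL
+```
+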
+## Unmount log archive volume
+
+After cloning or restoring a database, unmount the log archive volume; it's only needed while restoring the database and serves no purpose afterward.
+
+1. On the **Resources** tab, select the appropriate backup list on the database by selecting either the local or the remote list.
+
+2. Once you've selected the appropriate backup list, sort the list of backups by whether a backup has been mounted. The sort algorithm isn't perfect and won't always sort all items together, but will give usable results.
+
+3. Once you find the volume you want that was previously mounted, select **unmount**. Select **OK**.
+
+4. You can see the status of the unmount job by selecting **Monitor** on the left menu. A green check mark will show when the volume is unmounted.
+
+>[!NOTE]
+>Selecting unmount removes the entry from /etc/fstab on the host where it's mounted. Unmounting also frees the backup for deletion, as necessary, as part of the retention policy set in the protection policy.
+
+## Next steps
+
+Learn more about high availability and disaster recovery for Oracle on BareMetal:
+
+> [!div class="nextstepaction"]
+> [High availability and disaster recovery for Oracle on BareMetal](concepts-oracle-high-availability.md)
baremetal-infrastructure Set Up Snapcenter To Route Traffic https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/baremetal-infrastructure/workloads/oracle/set-up-snapcenter-to-route-traffic.md
+
+ Title: Set up SnapCenter to route traffic from Azure to Oracle BareMetal servers
+description: Learn how to set up NetApp SnapCenter to route traffic from Azure to Oracle BareMetal Infrastructure servers.
++ Last updated : 05/05/2021+++
+# Set up NetApp SnapCenter to route traffic from Azure to Oracle BareMetal servers
+
+In this article, we'll walk through setting up NetApp SnapCenter to route traffic from Azure to Oracle BareMetal Infrastructure servers.
+
+## Prerequisites
+
+> [!div class="checklist"]
+> - **SnapCenter Server system requirements:** An Azure VM running Windows Server 2019 or newer with 4 vCPUs, 16 GB of RAM, and a minimum of 500 GB of managed premium SSD storage.
+> - **ExpressRoute networking requirements:** A SnapCenter user for Oracle BareMetal must work with Microsoft Operations to create the networking accounts to enable communication between your personal storage virtual machine (VM) and the SnapCenter VM in Azure.
+> - **Java 1.8 on BareMetal instances:** The SnapCenter plugins require Java 1.8 installed on the BareMetal instances.
+
+## Steps to integrate SnapCenter
+
+Here are the steps you'll need to complete to set up NetApp SnapCenter to route traffic from Azure to Oracle BareMetal Infrastructure servers:
+
+1. Raise a support ticket request to communicate the user-generated public key to the Microsoft Ops team. Support is required to set up the SnapCenter user for access to the storage system.
+
+2. Create a VM in your Azure Virtual Network (VNet) that has access to your BareMetal instances; this VM is used for SnapCenter.
+
+3. Download and install SnapCenter.
+
+4. Perform backup and recovery operations.
+
+>[!NOTE]
+> These steps assume that you've already created an ExpressRoute connection between your subscription in Azure and your tenant in Oracle HaaS.
+
+## Create a support ticket for user-role storage setup
+
+1. Open the Azure portal and navigate to the Subscriptions page. Select your BareMetal subscription.
+2. On your BareMetal subscription page, select **Resource Groups**.
+3. Select an appropriate resource group in a region.
+
+ :::image type="content" source="media/netapp-snapcenter-integration-oracle-baremetal/select-resource-group.png" alt-text="Screenshot showing resource groups on the subscription page.":::
+
+4. Select a SKU entry corresponding to SAP HANA on Azure storage.
+
+ :::image type="content" source="media/netapp-snapcenter-integration-oracle-baremetal/select-sku.png" alt-text="Screenshot of a resource group showing a SKU highlighted of type SAP HANA on Azure.":::
+
+5. Select **New support request**.
+
+6. On the **Basics** tab, provide the following information for the ticket:
+ - **Issue type**: Technical
+ - **Subscription**: Your subscription
+ - **Service**: BareMetal Infrastructure
+ - **Resource**: Your resource group
+ - **Summary**: SnapCenter access setup
+ - **Problem type**: Configuration and Setup
+ - **Problem subtype**: Set up SnapCenter
+
+7. In the **Description** of the support ticket, on the **Details** tab, paste the contents of a *.pem file in the text box. You can also zip a *.pem file and upload it. snapcenter.pem will be the public key for your SnapCenter user. Use the following example to create a *.pem file from one of your BareMetal instances; a hypothetical OpenSSL sketch also follows the note below.
+
+ :::image type="content" source="media/netapp-snapcenter-integration-oracle-baremetal/pem.png" alt-text="Screenshot showing sample contents of a .pem file.":::
+
+ >[!NOTE]
+   >The name of the file ("snapcenter") is the username used to make REST API calls, so we recommend you make the file name descriptive.
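+
+   As a hypothetical sketch, you could generate the key and self-signed public certificate with OpenSSL; the subject and validity period here are assumptions, and the key file stays on the instance:
+
+   ```bash
+   # Generate a private key and a self-signed public certificate (.pem).
+   # The base file name "snapcenter" becomes the REST API username.
+   openssl req -x509 -nodes -days 1095 -newkey rsa:2048 \
+     -keyout snapcenter.key -out snapcenter.pem \
+     -subj "/CN=snapcenter"
+   ```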
+
+8. Select **Review + create** to review your support ticket.
+
+9. Once the public key certificate is submitted, Microsoft sets up the SnapCenter username for your tenant and gives you the storage virtual machine (SVM) IP address.
+
+10. After you receive the SVM IP, set a password to access the SVM, which you control.
+
+   The following is an example of the REST call, from a HANA Large Instance or a VM in the virtual network with access to the HANA Large Instance environment, that's used to set the password.
+
+ :::image type="content" source="media/netapp-snapcenter-integration-oracle-baremetal/sample-rest-call.png" alt-text="Screenshot showing sample REST call.":::
+
+11. Make sure no proxy variable is active on the BareMetal instance used to create the *.pem file. A minimal check follows the screenshot.
+
+ :::image type="content" source="media/netapp-snapcenter-integration-oracle-baremetal/ensure-no-proxy.png" alt-text="Screenshot showing unset http proxy to ensure there is no proxy variable active on BareMetal instance in creating *.pem file.":::
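+
+   A minimal sketch of the check shown in the screenshot:
+
+   ```bash
+   # Clear any proxy variables in the current shell, then confirm none remain.
+   unset http_proxy https_proxy
+   env | grep -i proxy   # should print nothing
+   ```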
+
+12. From the client machine, it's now possible to execute enabled REST commands without a username/password. Test the connection (an example follows the note below):
+
+ No proxy:
+
+ :::image type="content" source="media/netapp-snapcenter-integration-oracle-baremetal/connection-test.png" alt-text="Screenshot showing test of connection and enabled REST commands without username/password.":::
++
+ >[!NOTE]
+ > Either curl command can have "--verbose" added to provide further details on why a command was not successful.
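+
+   A hypothetical connectivity test using certificate authentication; the endpoint path is an assumption, so use one of the REST commands enabled for your tenant, and replace `<svm-ip>` with the IP Microsoft provided:
+
+   ```bash
+   # Certificate-based call; no username/password is needed once the user is set up.
+   curl -k --verbose \
+     --key snapcenter.key --cert snapcenter.pem \
+     "https://<svm-ip>/api/svm/svms"
+   ```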
+
+## Install SnapCenter
+
+1. On the provisioned Windows VM, browse to the [NetApp website](https://mysupport.netapp.com/site/products/all/details/snapcenter/downloads-tab).
+
+2. Sign in and download SnapCenter: **Download** > **SnapCenter** > **Version 4.4**.
+
+3. Install SnapCenter 4.4. Select **Next**.
+
+   The installer checks the prerequisites of the VM. Note the size of the VM, especially in larger environments. It's okay to continue installing even if a restart is pending.
+
+4. Configure the user credentials for SnapCenter. By default, the credentials populate with the Windows user who launched the installer. Unless there's a port conflict, we recommend using the default ports.
+
+ The installation wizard will take some time to complete and show progress.
+
+5. Once installation is complete, select **Finish**. Take note of the web address for the SnapCenter web portal. It can also be reached by double-clicking the SnapCenter icon that will appear on the desktop after installation is complete.
+
+## Disable enhanced messaging service (EMS) messages to NetApp auto support
+
+EMS data collection is enabled by default and runs every seven days after your installation date. You can disable data collection at any time.
+
+1. From a PowerShell command line, establish a session with SnapCenter:
+
+ ```powershell-interactive
+ Open-SmConnection
+ ```
+
+2. Sign in with your credentials.
+
+3. Disable EMS data collection:
+
+ ```powershell-interactive
+ Disable-SmDataCollectionEms
+ ```
+
+## Next steps
+
+Learn how to configure SnapCenter:
+
+> [!div class="nextstepaction"]
+> [Configure SnapCenter](configure-snapcenter-oracle-baremetal.md)
cdn Cdn App Dev Node https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cdn/cdn-app-dev-node.md
Finally, let's delete our profile.
To see the reference for the Azure CDN SDK for JavaScript, view the [reference](/javascript/api/@azure/arm-cdn).
-To find additional documentation on the Azure SDK for JavaScript, view the [full reference](/javascript/api/?view=azure-node-latest).
+To find additional documentation on the Azure SDK for JavaScript, view the [full reference](/javascript/api/).
-Manage your CDN resources with [PowerShell](cdn-manage-powershell.md).
+Manage your CDN resources with [PowerShell](cdn-manage-powershell.md).
cognitive-services Build Enrollment App https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Face/build-enrollment-app.md
# Build a React app to add users to a Face service
-This guide will show you how to get started with the sample Face enrollment application. The app demonstrates best practices for obtaining meaningful consent to add users into a face recognition service and acquire high-accuracy face data. An integrated system could use an app like this to provide touchless access control, identity verification, attendance tracking, personalization kiosk, or identity verification, based on their face data.
+This guide will show you how to get started with the sample Face enrollment application. The app demonstrates best practices for obtaining meaningful consent to add users into a face recognition service and acquire high-accuracy face data. An integrated system could use an app like this to provide touchless access control, identity verification, attendance tracking, or personalization kiosk, based on their face data.
When launched, the application shows users a detailed consent screen. If the user gives consent, the app prompts for a username and password and then captures a high-quality face image using the device's camera.
When you're ready to release your app for production, you'll build an archive of
## Next steps
-In this guide, you learned how to set up your development environment and get started with the sample app. If you're new to React Native, you can read their [getting started docs](https://reactnative.dev/docs/getting-started) to learn more background information. It also may be helpful to familiarize yourself with [Face API](Overview.md). Read the other sections on adding users before you begin development.
+In this guide, you learned how to set up your development environment and get started with the sample app. If you're new to React Native, you can read their [getting started docs](https://reactnative.dev/docs/getting-started) to learn more background information. It also may be helpful to familiarize yourself with [Face API](Overview.md). Read the other sections on adding users before you begin development.
cognitive-services Luis How To Review Endpoint Utterances https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/LUIS/luis-how-to-review-endpoint-utterances.md
Previously updated : 04/16/2021 Last updated : 05/05/2021
Use the LUIS portal to construct the correct endpoint query.
This action changes the example URL by adding the `log=true` querystring parameter. Copy and use the changed example query URL when making prediction queries to the runtime endpoint.
-## Correct intent predictions to align utterances
+## Correct predictions to align utterances
-Each utterance has a suggested intent displayed in the **Aligned intent** column.
+Each utterance has a suggested intent displayed in the **Predicted Intent** column, and the suggested entities in dotted bounding boxes.
> [!div class="mx-imgBorder"] > [![Review endpoint utterances that LUIS is unsure of](./media/label-suggested-utterances/review-endpoint-utterances.png)](./media/label-suggested-utterances/review-endpoint-utterances.png#lightbox)
-If you agree with that intent, select the check mark. If you disagree with the suggestion, select the correct intent from the aligned intent drop-down list, then select on the check mark to the right of the aligned intent. After you select on the check mark, the utterance is moved to the intent and removed from the **Review Endpoint Utterances** list.
+If you agree with the predicted intent and entities, select the check mark next to the utterance. If the check mark is disabled, this means that there is nothing to confirm.
+If you disagree with the suggested intent, select the correct intent from the Predicted intent drop-down list.
+If you disagree with the suggested entities, start labeling them.
+After you're done, select the check mark next to the utterance to confirm what you labeled. Select **save utterance** to move it from the review list and add it to its respective intent.
> [!TIP] > It is important to go to the Intent details page to review and correct the entity predictions from all example utterances from the **Review Endpoint Utterances** list.
If you are unsure if you should delete the utterance, either move it to the None
## Disable active learning
-To disable active learning, don't log user queries. This is accomplished by setting the [endpoint query](luis-get-started-create-app.md#query-the-v3-api-prediction-endpoint) with the `log=false` querystring parameter and value or not using the querystring value because the default value is false.
+To disable active learning, don't log user queries. You can do this by changing the query parameters as shown above, by setting the `log=false` parameter in the endpoint query, or by omitting the `log` parameter, because its default value is `false`.
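+
+A minimal sketch of a prediction request with logging disabled; the region, app ID, and key are placeholders:
+
+```bash
+# V3 prediction request; log=false (or omitting log) keeps the query out of active learning.
+curl "https://<your-region>.api.cognitive.microsoft.com/luis/prediction/v3.0/apps/<app-id>/slots/production/predict?query=turn%20on%20the%20lights&log=false" \
+  -H "Ocp-Apim-Subscription-Key: <your-key>"
+```
+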
+ ## Next steps
cognitive-services Speech Synthesis Markup https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/speech-synthesis-markup.md
Phonetic alphabets are composed of phones, which are made up of letters, numbers
```xml <speak version="1.0" xmlns="http://www.w3.org/2001/10/synthesis" xml:lang="en-US"> <voice name="en-US-JennyNeural">
- <phoneme alphabet="ipa" ph="t&#x259;mei&#x325;&#x27E;ou&#x325;"> tomato </phoneme>
+ <phoneme alphabet="ipa" ph="təˈmeɪtoʊ"> tomato </phoneme>
</voice> </speak> ```
Phonetic alphabets are composed of phones, which are made up of letters, numbers
## Use custom lexicon to improve pronunciation
-Sometimes the text-to-speech service cannot accurately pronounce a word. For example, the name of a company, or a medical term. Developers can define how single entities are read in SSML using the `phoneme` and `sub` tags. However, if you need to define how multiple entities are read, you can create a custom lexicon using the `lexicon` tag.
+Sometimes the text-to-speech service can't accurately pronounce a word, for example, the name of a company, a medical term, or an emoji. Developers can define how single entities are read in SSML by using the `phoneme` and `sub` tags. However, if you need to define how multiple entities are read, you can create a custom lexicon by using the `lexicon` tag.
> [!NOTE] > Custom lexicon currently supports UTF-8 encoding.
To define how multiple entities are read, you can create a custom lexicon, which
<grapheme> Benigni </grapheme> <phoneme> bɛˈniːnji</phoneme> </lexeme>
+ <lexeme>
+ <grapheme>😀</grapheme>
+ <alias>test emoji</alias>
+ </lexeme>
</lexicon> ```
-The `lexicon` element contains at least one `lexeme` element. Each `lexeme` element contains at least one `grapheme` element and one or more `grapheme`, `alias`, and `phoneme` elements. The `grapheme` element contains text describing the <a href="https://www.w3.org/TR/pronunciation-lexicon/#term-Orthography" target="_blank">orthography </a>. The `alias` elements are used to indicate the pronunciation of an acronym or an abbreviated term. The `phoneme` element provides text describing how the `lexeme` is pronounced.
+The `lexicon` element contains at least one `lexeme` element. Each `lexeme` element contains at least one `grapheme` element and one or more `grapheme`, `alias`, and `phoneme` elements. The `grapheme` element contains text describing the <a href="https://www.w3.org/TR/pronunciation-lexicon/#term-Orthography" target="_blank">orthography </a>. The `alias` elements are used to indicate the pronunciation of an acronym or an abbreviated term. The `phoneme` element provides text describing how the `lexeme` is pronounced. When `alias` and `phoneme` elements are provided for the same `grapheme` element, `alias` has the higher priority.
+
+The lexicon contains the required `xml:lang` attribute to indicate the locale it applies to. One custom lexicon is limited to one locale by design, so applying it to a different locale won't work.
It's important to note, that you cannot directly set the pronunciation of a phrase using the custom lexicon. If you need to set the pronunciation for an acronym or an abbreviated term, first provide an `alias`, then associate the `phoneme` with that `alias`. For example:
In the sample above, we're using the International Phonetic Alphabet, also known
Considering that the IPA is not easy to remember, the Speech service defines a phonetic set for seven languages (`en-US`, `fr-FR`, `de-DE`, `es-ES`, `ja-JP`, `zh-CN`, and `zh-TW`).
-You can use the `sapi` as the value for the `alphabet` attribute with custom lexicons as demonstrated below:
+You can use the `x-microsoft-sapi` as the value for the `alphabet` attribute with custom lexicons as demonstrated below:
```xml <?xml version="1.0" encoding="UTF-8"?>
You can use the `sapi` as the value for the `alphabet` attribute with custom lex
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://www.w3.org/2005/01/pronunciation-lexicon http://www.w3.org/TR/2007/CR-pronunciation-lexicon-20071212/pls.xsd"
- alphabet="sapi" xml:lang="en-US">
+ alphabet="x-microsoft-sapi" xml:lang="en-US">
<lexeme> <grapheme>BTW</grapheme> <alias> By the way </alias>
confidential-computing How To Fortanix Confidential Computing Manager Node Agent https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/confidential-computing/how-to-fortanix-confidential-computing-manager-node-agent.md
Title: How To - Run an application with Fortanix Confidential Computing Manager
-description: Learn how to use Fortanix Confidential Computing Manager to convert your containerized images
+ Title: Run an app with Fortanix Confidential Computing Manager
+description: Learn how to use Fortanix Confidential Computing Manager to convert your containerized images.
Last updated 03/24/2021
-# How To: Run an application with Fortanix Confidential Computing Manager
+# Run an application by using Fortanix Confidential Computing Manager
-Start running your application in Azure confidential computing using [Fortanix Confidential Computing Manager](https://azuremarketplace.microsoft.com/marketplace/apps/fortanix.enclave_manager?tab=Overview) and [Fortanix Node Agent](https://azuremarketplace.microsoft.com/marketplace/apps/fortanix.rte_node_agent) from [Fortanix](https://www.fortanix.com/).
+Learn how to run your application in Azure confidential computing by using [Fortanix Confidential Computing Manager](https://azuremarketplace.microsoft.com/marketplace/apps/fortanix.enclave_manager?tab=Overview) and [Node Agent](https://azuremarketplace.microsoft.com/marketplace/apps/fortanix.rte_node_agent) from [Fortanix](https://www.fortanix.com/).
-Fortanix is a third-party software vendor with products and services built on top of Azure infrastructure. There are other third-party providers offering similar confidential computing services on Azure.
+Fortanix is a third-party software vendor that provides products and services that work with the Azure infrastructure. There are other third-party providers that offer similar confidential computing services for Azure.
> [!Note]
-> The products referenced in this document are not under the control of Microsoft. Microsoft is providing this information to you only as a convenience, and the reference to these non-Microsoft products do not imply endorsement by Microsoft.
+> Some of the products referenced in this document aren't provided by Microsoft. Microsoft is providing this information only as a convenience. References to these non-Microsoft products don't imply endorsement by Microsoft.
-This tutorial shows you how to convert your application image to a confidential compute-protected image. This environment uses [Fortanix](https://www.fortanix.com/) software, powered by Azure's DCsv2-Series Intel SGX-enabled virtual machines. This solution orchestrates critical security policies such as identity verification and data access control.
+This tutorial shows you how to convert your application image into a confidential compute-protected image. The environment uses [Fortanix](https://www.fortanix.com/) software, powered by Azure DCsv2-series Intel SGX-enabled virtual machines. The solution orchestrates critical security policies like identity verification and data access control.
-For Fortanix-specific support, join the [Fortanix Slack community](https://fortanix.com/community/) and use the channel `#enclavemanager`.
+For Fortanix support, join the [Fortanix Slack community](https://fortanix.com/community/). Use the `#enclavemanager` channel.
## Prerequisites
-1. If you don't have a Fortanix Confidential Computing Manager account, [sign-up](https://ccm.fortanix.com/auth/sign-up) before you begin.
-1. A private [Docker](https://docs.docker.com/) registry to push converted application images.
-1. If you don't have an Azure subscription, [create an account](https://azure.microsoft.com/pricing/purchase-options/pay-as-you-go/) before you begin.
+- If you don't have a Fortanix Confidential Computing Manager account, [sign up](https://ccm.fortanix.com/auth/sign-up) before you start.
+- You need a private [Docker](https://docs.docker.com/) registry to push converted application images.
+- If you don't have an Azure subscription, [create an account](https://azure.microsoft.com/pricing/purchase-options/pay-as-you-go/) before you start.
> [!NOTE]
-> Free trial accounts do not have access to the virtual machines used in this tutorial. Please upgrade to a Pay-As-You-Go subscription.
+> Free trial accounts don't have access to the virtual machines used in this tutorial. To complete the tutorial, you need a pay-as-you-go subscription.
## Add an application to Fortanix Confidential Computing Manager 1. Sign in to [Fortanix Confidential Computing Manager (Fortanix CCM)](https://ccm.fortanix.com).
-1. Go to the **Accounts** page and select **ADD ACCOUNT** to create a new account.
+1. Go to the **Accounts** page and select **ADD ACCOUNT** to create a new account:
:::image type="content" source="media/how-to-fortanix-confidential-computing-manager-node-agent/create-account-new.png" alt-text="Screenshot that shows how to create an account.":::
-1. After your account is created, hit **SELECT ACCOUNT** to select the newly created account. Now we can start enrolling compute nodes and creating applications.
-1. Go to the **Applications** tab and click **+ APPLICATION** to add an application. In this example, we will add an Enclave OS application running a Python Flask server.
+1. After your account is created, click **SELECT ACCOUNT** to select the newly created account. You can now start enrolling compute nodes and creating applications.
+1. On the **Applications** tab, select **+ APPLICATION** to add an application. In this example, we'll add an Enclave OS application that runs a Python Flask server.
-1. Select the **ADD** button for the Enclave OS Application.
+1. Select the **ADD** button for the **Enclave OS Application**:
:::image type="content" source="media/how-to-fortanix-confidential-computing-manager-node-agent/add-enclave-application.png" alt-text="Screenshot that shows how to add an application."::: > [!NOTE]
- > This tutorial covers adding Enclave OS Applications only. [Learn more](https://support.fortanix.com/hc/en-us/articles/360044746932-Bringing-EDP-Rust-Apps-to-Confidential-Computing-Manager) about bringing EDP Rust Applications to Fortanix Confidential Computing Manager.
+ > This tutorial covers adding only Enclave OS applications. For information about adding EDP Rust Applications, see [Bringing EDP Rust Apps to Confidential Computing Manager](https://support.fortanix.com/hc/en-us/articles/360044746932-Bringing-EDP-Rust-Apps-to-Confidential-Computing-Manager).
-1. In this tutorial, we'll use Fortanix's docker registry for the sample application. Fill in the details from the following information. Use your private docker registry to keep the output image.
+1. In this tutorial, we'll use the Fortanix Docker registry for the sample application. Enter the specified values for the following settings. Use your private Docker registry to store the output image.
- **Application name**: Python Application Server - **Description**: Python Flask Server - **Input image name**: fortanix/python-flask
- - **Output image name**: fortanix-private/python-flask-sgx (replace with your own registry)
+ - **Output image name**: fortanix-private/python-flask-sgx (Replace with your own registry.)
- **ISVPRODID**: 1 - **ISVSVM**: 1 - **Memory size**: 1 GB - **Thread count**: 128
- *Optional*: Run the non-converted application.
+ *Optional*: Run the non-converted application.
- **Docker Hub**: [https://hub.docker.com/u/fortanix](https://hub.docker.com/u/fortanix) - **App**: fortanix/python-flask
For Fortanix-specific support, join the [Fortanix Slack community](https://forta
sudo docker run fortanix/python-flask ``` > [!NOTE]
- > We recommend that you don't use your private Docker registry to store the output image.
+ > We don't recommend that you use your private Docker registry to store the output image.
-1. Add a certificate. Fill in the information using the details below and then select **NEXT**:
+1. Add a certificate. Enter the following values, and then select **NEXT**:
- **Domain**: myapp.domain.com - **Type**: Certificate Issued by Confidential Computing Manager - **Key path**: /run/key.pem
For Fortanix-specific support, join the [Fortanix Slack community](https://forta
A Fortanix CCM Image is a software release or version of an application. Each image is associated with one enclave hash (MRENCLAVE).
-1. On the **Add Image** page, enter the **REGISTRY CREDENTIALS** for **Output image name**. These credentials are used to access the private docker registry where the image will be pushed. Since the input image is stored in a public registry, there is no need to provide credentials for the input image.
-1. Provide the image tag and select **CREATE**.
+1. On the **Add image** page, enter the registry credentials for **Output image name**. These credentials are used to access the private Docker registry where the image will be pushed. Because the input image is stored in a public registry, you don't need to provide credentials for the input image.
+1. Enter the image tag and select **CREATE**:
:::image type="content" source="media/how-to-fortanix-confidential-computing-manager-node-agent/create-image.png" alt-text="Screenshot that shows how to create an image.":::
A Fortanix CCM Image is a software release or version of an application. Each im
An application whose domain is added to the allowlist will get a TLS certificate from the Fortanix Confidential Computing Manager. When an Enclave OS application starts, it will contact the Fortanix Confidential Computing Manager to receive that TLS certificate.
-Switch to the **Tasks** tab on the left and approve the pending requests to allow the domain and image.
+On the **Tasks** tab on the left side of the screen, approve the pending requests to allow the domain and image.
-## Enroll compute node agent in Azure
+## Enroll the compute node agent in Azure
-### Generate and copy join token
+### Create and copy a join token
-In Fortanix Confidential Computing Manager, you'll create a token. This token allows a compute node in Azure to authenticate itself. You'll need to give this token to your Azure virtual machine.
+You'll now create a token in Fortanix Confidential Computing Manager. This token allows a compute node in Azure to authenticate itself. Your Azure virtual machine will need this token.
-1. Go to the **Compute Nodes** tab and click the **+ ENROLL NODE** button.
-1. Click the **COPY** button to copy the Join Token. This Join Token is used by the compute node to authenticate itself.
+1. On the **Compute Nodes** tab, select **ENROLL NODE**.
+1. Select the **COPY** button to copy the join token. The compute node uses this join token to authenticate itself.
-### Enroll nodes into Fortanix Node Agent in Azure Marketplace
+### Enroll nodes into Fortanix Node Agent
-Creating a Fortanix Node Agent will deploy a virtual machine, network interface, virtual network, network security group, and a public IP address into your Azure resource group. Your Azure subscription will be billed hourly for the virtual machine. Before you create a Fortanix Node Agent, review the Azure [virtual machine pricing page](https://azure.microsoft.com/pricing/details/virtual-machines/linux/) for DCsv2-Series. Delete Azure resources when not in use.
+Creating a Fortanix node agent will deploy a virtual machine, network interface, virtual network, network security group, and public IP address in your Azure resource group. Your Azure subscription will be billed hourly for the virtual machine. Before you create a Fortanix node agent, review the Azure [virtual machine pricing page](https://azure.microsoft.com/pricing/details/virtual-machines/linux/) for DCsv2-series. Delete any Azure resources that you're not using.
1. Go to the [Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/) and sign in with your Azure credentials.
-1. In the search bar, type **Fortanix Confidential Computing Node Agent**. Select the App that shows up in the search box called **Fortanix Confidential Computing Node Agent** to go to the offering's home page. Optionally, click the URL https://azuremarketplace.microsoft.com/marketplace/apps/fortanix.rte_node_agent?tab=OverviewFortanix to access the Node Agent.
+1. In the search box, enter **Fortanix Confidential Computing Node Agent**. In the search results, select **Fortanix Confidential Computing Node Agent** to go to the [app's home page](https://azuremarketplace.microsoft.com/marketplace/apps/fortanix.rte_node_agent?tab=OverviewFortanix):
- ![search marketplace](media/how-to-fortanix-confidential-computing-manager-node-agent/search-fortanix-marketplace.png)
-1. Select **Get It Now**, fill in your information if necessary, and select **Continue**. You'll be redirected to the Azure portal.
-1. Select **Create** to enter the Fortanix Confidential Computing Node Agent deployment page.
-1. On this page, you'll be entering information to deploy a virtual machine. Specifically, this VM is a DCsv2-Series Intel SGX-enabled virtual machine from Azure with Fortanix Node Agent software installed. The Node Agent will allow your converted image to run securely on Intel SGX nodes in Azure. Select the **subscription** and **resource group** where you want to deploy the virtual machine and associated resources.
+ :::image type="content" source="media/how-to-fortanix-confidential-computing-manager-node-agent/search-fortanix-marketplace.png" alt-text="Screenshot that shows how to get to the app's home page.":::
+1. Select **Get It Now**, provide your information if necessary, and then select **Continue**. You'll be redirected to the Azure portal.
+1. Select **Create** to go to the Fortanix Confidential Computing Node Agent deployment page.
+1. On this page, you'll enter information to deploy a virtual machine. The VM is a DCsv2-series Intel SGX-enabled virtual machine from Azure that has Fortanix Node Agent software installed on it. The node agent will allow your converted image to run with increased security on Intel SGX nodes in Azure. Select the subscription and resource group where you want to deploy the virtual machine and associated resources.
> [!NOTE]
- > There are constraints when deploying DCsv2-Series virtual machines in Azure. You may need to request quota for additional cores. Read about [confidential computing solutions on Azure VMs](./virtual-machine-solutions.md) for more information.
+ > Constraints apply when you deploy DCsv2-series virtual machines in Azure. You might need to request quota for additional cores. Read about [confidential computing solutions on Azure VMs](./virtual-machine-solutions.md) for more information.
1. Select an available region.
-1. Enter a name for your virtual machine in **Node Name**.
-1. Enter a username and password (or SSH Key) for authenticating into the virtual machine.
-1. Leave the default OS Disk Size as 200 and select a VM size (Standard_DC4s_v2 will suffice for this tutorial).
-1. Paste the token generated earlier in **Join Token**.
+1. In the **Node Name** box, enter a name for your virtual machine.
+1. Enter a user name and password (or SSH key) for authenticating into the virtual machine.
+1. Leave the default **OS Disk Size** of **200**. Select a **VM Size**. (**Standard_DC4s_v2** will work for this tutorial.)
+1. In the **Join Token** box, paste in the token that you created earlier in this tutorial:
:::image type="content" source="media/how-to-fortanix-confidential-computing-manager-node-agent/deploy-fortanix-node-agent-protocol.png" alt-text="Screenshot that shows how to deploy a resource.":::
-1. Select **Review + Create**. Ensure the validation passes and then select **Create**. When all the resources deploy, the compute node is now enrolled in Fortanix Confidential Computing Manager.
+1. Select **Review + create**. Make sure the validation passes, and then select **Create**. When all the resources deploy, the compute node is enrolled in Fortanix Confidential Computing Manager.
## Run the application image on the compute node
-Run the application by executing the following command. Ensure you change the Node IP, Port, and Converted Image Name as inputs for your specific application.
+Run the application by using the following command. Be sure to change the node IP, port, and converted image name to the values for your application.
-In this tutorial, the command to execute is:
+For this tutorial, here's the command to run:
```bash
sudo docker run \
  -e NODE_AGENT_BASE_URL=http://52.152.206.164:9092/v1/ fortanix-private/python-flask-sgx
```
-Where:
+In this command:
-- *52.152.206.164* is the Node Agent Host IP.
-- *9092* is the default port on which Node Agent listens to.
-- *fortanix-private/python-flask-sgx* is the converted app that can be found in the Images tab under the **Image Name** column in the **Images** table in the Fortanix Confidential Computing Manager Web Portal.
+- `52.152.206.164` is the node agent host IP.
+- `9092` is the default port that Node Agent listens on.
+- `fortanix-private/python-flask-sgx` is the converted app. You can find it in the Fortanix Confidential Computing Manager Web Portal. It's on the **Images** tab, in the **Image Name** column of the **Images** table.
## Verify and monitor the running application

1. Return to [Fortanix Confidential Computing Manager](https://ccm.fortanix.com/console).
-1. Ensure you're working inside the **Account** where you enrolled the node.
-1. Select the **Applications** tab.
-1. Verify that there's a running application with an associated compute node.
+1. Be sure you're working in the **Account** where you enrolled the node.
+1. On the **Applications** tab, verify that there's a running application with an associated compute node.
## Clean up resources
-When they are no longer needed, you can delete the resource group, virtual machine, and associated resources. Deleting the resource group will unenroll the nodes associated with your converted image.
+If you no longer need them, you can delete the resource group, virtual machine, and associated resources. Deleting the resource group will unenroll the nodes associated with your converted image.
-Select the resource group for the virtual machine, then select **Delete**. Confirm the name of the resource group to finish deleting the resources.
+Select the resource group for the virtual machine, and then select **Delete**. Confirm the name of the resource group to finish deleting the resources.
-To delete the Fortanix Confidential Computing Manager account you created, go the [Accounts Page](https://ccm.fortanix.com/accounts) in the Fortanix Confidential Computing Manager. Hover over the account you want to delete. Select the vertical black dots in the upper right-hand corner and select **DELETE ACCOUNT**.
+To delete the Fortanix Confidential Computing Manager account you created, go to the [Accounts page](https://ccm.fortanix.com/accounts) in the Fortanix Confidential Computing Manager. Hover over the account you want to delete. Select the vertical black dots in the upper-right corner and then select **DELETE ACCOUNT**.
## Next steps
-In this quickstart, you used Fortanix tooling to convert your application image to run on top of a confidential computing virtual machine. For more information about confidential computing virtual machines on Azure, see [Solutions on Virtual Machines](virtual-machine-solutions.md).
+In this tutorial, you used Fortanix tools to convert your application image to run on top of a confidential computing virtual machine. For more information about confidential computing virtual machines on Azure, see [Solutions on virtual machines](virtual-machine-solutions.md).
-To learn more about Azure's confidential computing offerings, see [Azure confidential computing overview](overview.md).
+To learn more about Azure confidential computing offerings, see [Azure confidential computing overview](overview.md).
-Learn how to complete similar tasks using other third-party offerings on Azure, like [Anjuna](https://azuremarketplace.microsoft.com/marketplace/apps/anjuna-5229812.aee-az-v1) and [Scone](https://sconedocs.github.io).
+You can also learn how to complete similar tasks by using other third-party offerings on Azure, like [Anjuna](https://azuremarketplace.microsoft.com/marketplace/apps/anjuna-5229812.aee-az-v1) and [Scone](https://sconedocs.github.io).
cosmos-db Introduction https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/introduction.md
Previously updated : 10/23/2020 Last updated : 05/07/2021
Get started with Azure Cosmos DB with one of our quickstarts:
- [Get started with Azure Cosmos DB Cassandra API](create-cassandra-dotnet.md)
- [Get started with Azure Cosmos DB Gremlin API](create-graph-dotnet.md)
- [Get started with Azure Cosmos DB Table API](create-table-dotnet.md)
+- [A whitepaper on next-gen app development with Azure Cosmos DB](https://azure.microsoft.com/resources/microsoft-azure-cosmos-db-flexible-reliable-cloud-nosql-at-any-scale/)
> [!div class="nextstepaction"]
> [Try Azure Cosmos DB for free](https://azure.microsoft.com/try/cosmosdb/)
cosmos-db Sql Api Sdk Dotnet Changefeed https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/sql-api-sdk-dotnet-changefeed.md
### v2 builds
+### <a id="2.4.0"></a>2.4.0
+* Added support for lease collections that can be partitioned with a partition key defined as /partitionKey. Before this change, the lease collection's partition key had to be defined as /id.
+* This release allows using lease collections with the Gremlin API, because Gremlin collections can't have a partition key defined as /id.
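As an illustration of the new capability, a lease container partitioned on `/partitionKey` can be created up front with the Azure CLI. This is a minimal sketch; the account, database, container, and throughput values are placeholders, not part of this release note:

```bash
# Create a lease container whose partition key is /partitionKey,
# as supported by change feed processor library 2.4.0 and later.
# All resource names below are placeholders.
az cosmosdb sql container create \
  --resource-group myResourceGroup \
  --account-name mycosmosaccount \
  --database-name mydatabase \
  --name leases \
  --partition-key-path "/partitionKey" \
  --throughput 400
```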
+
### <a id="2.3.2"></a>2.3.2
* Added lease store compatibility with the V3 SDK, which enables hot migration paths. An application can migrate to the V3 SDK and migrate back to the Change Feed processor library without losing any state.
Microsoft will provide notification at least **12 months** in advance of retirin
| Version | Release Date | Retirement Date |
| --- | --- | --- |
+| [2.4.0](#2.4.0) |May 6, 2021 | |
| [2.3.2](#2.3.2) |August 11, 2020 | |
| [2.3.1](#2.3.1) |July 30, 2020 | |
| [2.3.0](#2.3.0) |April 2, 2020 | |
cosmos-db Whitepapers https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/whitepapers.md
Previously updated : 12/02/2019 Last updated : 05/07/2021
Whitepapers allow you to explore Azure Cosmos DB concepts at a deeper level. Thi
| | |
|[Schema-Agnostic Indexing with Azure Cosmos DB](https://www.vldb.org/pvldb/vol8/p1668-shukla.pdf) | This paper describes Azure Cosmos DB's indexing subsystem. This paper includes Azure Cosmos DB capabilities such as document representation, query language, document indexing approach, core index support, and early production experiences.|
| [Azure Cosmos DB and personal data](https://servicetrust.microsoft.com/ViewPage/TrustDocuments?command=Download&downloadType=Document&downloadId=87cc6456-4b23-473c-94d3-6c713b8b8956&docTab=6d000410-c9e9-11e7-9a91-892aae8839ad_FAQ_and_White_Papers)| This paper provides guidance for Azure Cosmos DB customers managing a cloud-based database, an on-premises database, or both, and who need to ensure that the personal data in their database systems is handled and protected in accordance with current rules. |
+|[Next-gen app development with Azure Cosmos DB](https://azure.microsoft.com/resources/microsoft-azure-cosmos-db-flexible-reliable-cloud-nosql-at-any-scale/) | This paper explores how Azure Cosmos DB is uniquely positioned to address the data requirements of modern apps. It includes three customer spotlights highlighting the API for MongoDB, simplified data management, cost-effective scalability, market-leading performance, and reliability. |
cost-management-billing Tutorial Acm Create Budgets https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cost-management-billing/costs/tutorial-acm-create-budgets.md
You can create a budget using an Azure Resource Manager template. To use the tem
## Clean up resources
-If you created a budget and you no longer it, view its details and delete it.
+If you created a budget and you no longer need it, view its details and delete it.
## Next steps
In this tutorial, you learned how to:
Advance to the next tutorial to create a recurring export for your cost management data. > [!div class="nextstepaction"]
-> [Create and manage exported data](tutorial-export-acm-data.md)
+> [Create and manage exported data](tutorial-export-acm-data.md)
cost-management-billing Tutorial Export Acm Data https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cost-management-billing/costs/tutorial-export-acm-data.md
Title: Tutorial - Create and manage exported data from Azure Cost Management
description: This article shows you how you can create and manage exported Azure Cost Management data so that you can use it in external systems. Previously updated : 04/26/2021 Last updated : 05/06/2021
If you have a Microsoft Customer Agreement or a Microsoft Partner Agreement, you
:::image type="content" source="./media/tutorial-export-acm-data/file-partition.png" alt-text="Screenshot showing File Partitioning option." lightbox="./media/tutorial-export-acm-data/file-partition.png" :::
+If you don't have a Microsoft Customer Agreement or a Microsoft Partner Agreement, then you won't see the **File Partitioning** option.
+
#### Update existing exports to use file partitioning

If you have existing exports and you want to set up file partitioning, create a new export. File partitioning is only available with the latest Exports version. There may be minor changes to some of the fields in the usage files that get created.
cost-management-billing Programmatically Create Subscription Enterprise Agreement https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cost-management-billing/manage/programmatically-create-subscription-enterprise-agreement.md
You must have an Owner role on an Enrollment Account to create a subscription. T
* The Enterprise Administrator of your enrollment can [make you an Account Owner](https://ea.azure.com/helpdocs/addNewAccount) (sign in required) which makes you an Owner of the Enrollment Account. * An existing Owner of the Enrollment Account can [grant you access](/rest/api/billing/2019-10-01-preview/enrollmentaccountroleassignments/put). Similarly, to use a service principal to create an EA subscription, you must [grant that service principal the ability to create subscriptions](/rest/api/billing/2019-10-01-preview/enrollmentaccountroleassignments/put).
- If you're using an SPN to create subscriptions, use the ObjectId of the Azure AD Application Registration as the Service Principal ObjectId using [Azure Active Directory PowerShell](/powershell/module/azuread/get-azureadserviceprincipal?view=azureadps-2.0&preserve-view=true ) or [Azure CLI](/cli/azure/ad/sp?view=azure-cli-latest&preserve-view=true#az_ad_sp_list).
+ If you're using an SPN to create subscriptions, use the ObjectId of the Azure AD Application Registration as the Service Principal ObjectId using [Azure Active Directory PowerShell](/powershell/module/azuread/get-azureadserviceprincipal?view=azureadps-2.0&preserve-view=true ) or [Azure CLI](/cli/azure/ad/sp?view=azure-cli-latest&preserve-view=true#az_ad_sp_list). For more information about the EA role assignment API request, see [Assign roles to Azure Enterprise Agreement service principal names](assign-roles-azure-service-principals.md). This page includes a list of roles (and role definition IDs) that can be assigned to an SPN.
> [!NOTE] > Ensure that you use the correct API version to give the enrollment account owner permissions. For this article and for the APIs documented in it, use the [2019-10-01-preview](/rest/api/billing/2019-10-01-preview/enrollmentaccountroleassignments/put) API. If you're migrating to use the newer APIs, you must grant owner permission again using [2019-10-01-preview](/rest/api/billing/2019-10-01-preview/enrollmentaccountroleassignments/put). Your previous configuration made with the [2015-07-01 version](grant-access-to-create-subscription.md) doesn't automatically convert for use with the newer APIs.
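As an illustration, the ObjectId lookup with the Azure CLI might look like the following sketch; the display name is a placeholder for your own app registration:

```bash
# Retrieve the service principal's ObjectId by the app registration's
# display name ("my-subscription-creator" is a placeholder).
az ad sp list --display-name "my-subscription-creator" --query "[].objectId" --output tsv
```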
data-factory Data Flow Sink https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/data-flow-sink.md
Mapping data flow follows an extract, load, and transform (ELT) approach and wor
| [Azure Database for MySQL](connector-azure-database-for-mysql.md) | | ✓/✓ |
| [Azure Database for PostgreSQL](connector-azure-database-for-postgresql.md) | | ✓/✓ |
| [Azure SQL Database](connector-azure-sql-database.md#mapping-data-flow-properties) | | ✓/- |
-| [Azure SQL Managed Instance (preview)](connector-azure-sql-managed-instance.md#mapping-data-flow-properties) | | Γ£ô/- |
+| [Azure SQL Managed Instance](connector-azure-sql-managed-instance.md#mapping-data-flow-properties) | | Γ£ô/- |
| [Azure Synapse Analytics](connector-azure-sql-data-warehouse.md#mapping-data-flow-properties) | | ✓/- |
| [Snowflake](connector-snowflake.md) | | ✓/✓ |
data-factory How To Create Custom Event Trigger https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/how-to-create-custom-event-trigger.md
Title: Create custom event triggers in Azure Data Factory
-description: Learn how to create a custom trigger in Azure Data Factory that runs a pipeline in response to a custom event published to Event Grid.
+description: Learn how to create a trigger in Azure Data Factory that runs a pipeline in response to a custom event published to Event Grid.
Previously updated : 03/11/2021 Last updated : 05/07/2021
-# Create a trigger that runs a pipeline in response to a custom event (Preview)
+# Create a custom event trigger to run a pipeline in Azure Data Factory (preview)
[!INCLUDE[appliesto-adf-asa-md](includes/appliesto-adf-asa-md.md)]
-This article describes the Custom Event Triggers that you can create in your Data Factory pipelines.
-
-Event-driven architecture (EDA) is a common data integration pattern that involves production, detection, consumption, and reaction to events. Data integration scenarios often require Data Factory customers to trigger pipelines based on certain events happening. Data Factory native integration with [Azure Event Grid](https://azure.microsoft.com/services/event-grid/) now covers [Custom Events](../event-grid/custom-topics.md): customers send arbitrary events to an event grid topic, and Data Factory subscribes and listens to the topic and triggers pipelines accordingly.
+Event-driven architecture (EDA) is a common data integration pattern that involves production, detection, consumption, and reaction to events. Data integration scenarios often require Azure Data Factory customers to trigger pipelines when certain events occur. Data Factory native integration with [Azure Event Grid](https://azure.microsoft.com/services/event-grid/) now covers [custom topics](../event-grid/custom-topics.md). You send events to an event grid topic. Data Factory subscribes to the topic, listens, and then triggers pipelines accordingly.
> [!NOTE]
-> The integration described in this article depends on [Azure Event Grid](https://azure.microsoft.com/services/event-grid/). Make sure that your subscription is registered with the Event Grid resource provider. For more info, see [Resource providers and types](../azure-resource-manager/management/resource-providers-and-types.md#azure-portal). You must be able to do the *Microsoft.EventGrid/eventSubscriptions/** action. This action is part of the EventGrid EventSubscription Contributor built-in role.
+> The integration described in this article depends on [Azure Event Grid](https://azure.microsoft.com/services/event-grid/). Make sure that your subscription is registered with the Event Grid resource provider. For more information, see [Resource providers and types](../azure-resource-manager/management/resource-providers-and-types.md#azure-portal). You must be able to do the `Microsoft.EventGrid/eventSubscriptions/*` action. This action is part of the [EventGrid EventSubscription Contributor](../role-based-access-control/built-in-roles.md#eventgrid-eventsubscription-contributor) built-in role.
-Furthermore, combining pipeline parameters and Custom Event Trigger, customers can parse and reference custom _data_ payload in pipeline runs. _data_ field in Custom Event payload is a free-form json key-value structure, giving customers maximum control over the event driven pipeline runs.
+If you combine pipeline parameters and a custom event trigger, you can parse and reference custom `data` payloads in pipeline runs. Because the `data` field in a custom event payload is a free-form JSON key-value structure, you can control event-driven pipeline runs.
> [!IMPORTANT]
-> Every so often, a key referenced in parameterization may be missing in custom event payload. The _trigger run_ will fail with an error, stating that expression cannot be evaluated because property _keyName_ doesn't exist. __No__ _pipeline run_ will be triggered by the event.
+> If a key referenced in parameterization is missing in the custom event payload, `trigger run` will fail. You'll get an error that states the expression cannot be evaluated because property `keyName` doesn't exist. In this case, **no** `pipeline run` will be triggered by the event.
+
+## Set up a custom topic in Event Grid
-## Setup Event Grid Custom Topic
+To use the custom event trigger in Data Factory, you need to *first* set up a [custom topic in Event Grid](../event-grid/custom-topics.md).
-To use the Custom Event Trigger in Data Factory, you need to _first_ set up a [Custom Topic in Event Grid](../event-grid/custom-topics.md). The workflow is different from Storage Event Trigger, where Data Factory will set up the topic for you. Here you need to navigate the Azure Event Grid and create the topic yourself. For more information on how to create the custom topic, see Azure Event Grid [Portal Tutorials](../event-grid/custom-topics.md#azure-portal-tutorials) and [CLI Tutorials](../event-grid/custom-topics.md#azure-cli-tutorials)
+Go to Azure Event Grid and create the topic yourself. For more information on how to create the custom topic, see Azure Event Grid [portal tutorials](../event-grid/custom-topics.md#azure-portal-tutorials) and [CLI tutorials](../event-grid/custom-topics.md#azure-cli-tutorials).
-Data Factories expect the events to follow [Event Grid event schema](../event-grid/event-schema.md). Make sure event payloads have following fields.
+> [!NOTE]
+> The workflow is different from Storage Event Trigger. Here, Data Factory doesn't set up the topic for you.
+
+Data Factory expects events to follow the [Event Grid event schema](../event-grid/event-schema.md). Make sure event payloads have the following fields:
```json
[
  {
    "topic": string,
    "subject": string,
    "id": string,
    "eventType": string,
    "eventTime": string,
    "data": {
      object-unique-to-each-publisher
    },
    "dataVersion": string,
    "metadataVersion": string
  }
]
```
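For illustration, here's one way to publish a matching custom event to the topic with the Azure CLI and a plain HTTP POST. This sketch isn't part of the original article; the topic name, resource group, and payload values are assumptions:

```bash
# Get the topic endpoint and an access key (topic and resource group are placeholders).
endpoint=$(az eventgrid topic show --name mytopic --resource-group myrg --query "endpoint" --output tsv)
key=$(az eventgrid topic key list --name mytopic --resource-group myrg --query "key1" --output tsv)

# Post one event that follows the Event Grid event schema shown above.
curl -X POST "$endpoint" \
  -H "aeg-sas-key: $key" \
  -H "Content-Type: application/json" \
  -d '[{
        "id": "1234",
        "subject": "factories/myfactory",
        "eventType": "copycompleted",
        "eventTime": "2021-05-07T10:00:00Z",
        "data": { "fileName": "sales.csv" },
        "dataVersion": "1.0"
      }]'
```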
-## Data Factory UI
+## Use Data Factory to create a custom event trigger
-This section shows you how to create a storage event trigger within the Azure Data Factory User Interface.
+1. Go to Azure Data Factory and sign in.
-1. Switch to the **Edit** tab, shown with a pencil symbol.
+1. Switch to the **Edit** tab. Look for the pencil icon.
-1. Select **Trigger** on the menu, then select **New/Edit**.
+1. Select **Trigger** on the menu and then select **New/Edit**.
-1. On the **Add Triggers** page, select **Choose trigger...**, then select **+New**.
+1. On the **Add Triggers** page, select **Choose trigger**, and then select **+New**.
-1. Select trigger type **Custom Events**
+1. Select **Custom events** for **Type**.
- :::image type="content" source="media/how-to-create-custom-event-trigger/custom-event-1-creation.png" alt-text="Screenshot of Author page to create a new custom event trigger in Data Factory UI." lightbox="media/how-to-create-custom-event-trigger/custom-event-1-creation-expanded.png":::
+ :::image type="content" source="media/how-to-create-custom-event-trigger/custom-event-1-creation.png" alt-text="Screenshot of Author page to create a new custom event trigger in Data Factory UI." lightbox="media/how-to-create-custom-event-trigger/custom-event-1-creation-expanded.png":::
1. Select your custom topic from the Azure subscription dropdown or manually enter the event topic scope.

   > [!NOTE]
- > To create a new or modify an existing Custom Event Trigger, the Azure account used to log into Data Factory and publish the storage event trigger must have appropriate role based access control (Azure RBAC) permission on topic. No additional permission is required: Service Principal for the Azure Data Factory does _not_ need special permission to Event Grid. For more information about access control, see [Role based access control](#role-based-access-control) section.
+ > To create or modify a custom event trigger in Data Factory, you need to use an Azure account with appropriate role-based access control (Azure RBAC) permissions. No additional permission is required. The Data Factory service principal does *not* require special permission to your Event Grid. For more information about access control, see the [Role-based access control](#role-based-access-control) section.
-1. The **Subject begins with** and **Subject ends with** properties allow you to filter events for which you want to trigger pipeline. Both properties are optional.
+1. The **Subject begins with** and **Subject ends with** properties allow you to filter for trigger events. Both properties are optional.
-1. Use **+ New** to add **Event Types** you want to filter on. Custom Event trigger employee an OR relationship for the list: if a custom event has an _eventType_ property that matches any listed here, it will trigger a pipeline run. The event type is case insensitive. For instance, in the screenshot below, the trigger matches all _copycompleted_ or _copysucceeded_ events with subject starts with _factories_
+1. Use **+ New** to add **Event Types** to filter on. The list of custom event triggers uses an OR relationship. When a custom event has an `eventType` property that matches one on the list, a pipeline run is triggered. The event type is case insensitive. For example, in the following screenshot, the trigger matches all `copycompleted` or `copysucceeded` events that have a subject that begins with *factories*.
- :::image type="content" source="media/how-to-create-custom-event-trigger/custom-event-2-properties.png" alt-text="Screenshot of Edit Trigger page to explain Event Types and Subject filtering in Data Factory UI.":::
+ :::image type="content" source="media/how-to-create-custom-event-trigger/custom-event-2-properties.png" alt-text="Screenshot of Edit Trigger page to explain Event Types and Subject filtering in Data Factory UI.":::
-1. Custom event trigger can parse and send custom _data_ payload to your pipeline. First create the pipeline parameters, and fill in the values on the **Parameters** page. Use format **@triggerBody().event.data._keyName_** to parse the data payload, and pass values to pipeline parameters. For detailed explanation, see [Reference Trigger Metadata in Pipelines](how-to-use-trigger-parameterization.md) and [System Variables in Custom Event Trigger](control-flow-system-variables.md#custom-event-trigger-scope)
+1. A custom event trigger can parse and send a custom `data` payload to your pipeline. You create the pipeline parameters, and then fill in the values on the **Parameters** page. Use the format `@triggerBody().event.data.keyName` to parse the data payload and pass values to the pipeline parameters.
+
+ For a detailed explanation, see the following articles:
+ - [Reference trigger metadata in pipelines](how-to-use-trigger-parameterization.md)
+ - [System variables in custom event trigger](control-flow-system-variables.md#custom-event-trigger-scope)
- :::image type="content" source="media/how-to-create-custom-event-trigger/custom-event-4-trigger-values.png" alt-text="Screenshot of pipeline Parameters setting.":::
+ :::image type="content" source="media/how-to-create-custom-event-trigger/custom-event-4-trigger-values.png" alt-text="Screenshot of pipeline parameters settings.":::
- :::image type="content" source="media/how-to-create-custom-event-trigger/custom-event-3-parameters.png" alt-text="Screenshot of Parameters page to reference data payload in custom event.":::
+ :::image type="content" source="media/how-to-create-custom-event-trigger/custom-event-3-parameters.png" alt-text="Screenshot of the parameters page to reference data payload in custom event.":::
-1. Click **OK** once you are done.
+1. After you've entered the parameters, select **OK**.
## JSON schema

The following table provides an overview of the schema elements that are related to custom event triggers:
-| **JSON Element** | **Description** | **Type** | **Allowed Values** | **Required** |
-| - | | -- | | |
-| **scope** | The Azure Resource Manager resource ID of the event grid topic. | String | Azure Resource Manager ID | Yes |
-| **events** | The type of events that cause this trigger to fire. | Array of strings | | Yes, at least one value is expected |
-| **subjectBeginsWith** | Subject field must begin with the pattern provided for the trigger to fire. For example, `factories` only fires the trigger for event subject starting with `factories`. | String | | No |
-| **subjectEndsWith** | Subject field must end with the pattern provided for the trigger to fire. | String | | No |
+| JSON element | Description | Type | Allowed values | Required |
+| --- | --- | --- | --- | --- |
+| `scope` | The Azure Resource Manager resource ID of the event grid topic. | String | Azure Resource Manager ID | Yes |
+| `events` | The type of events that cause this trigger to fire. | Array of strings | | Yes, at least one value is expected. |
+| `subjectBeginsWith` | The `subject` field must begin with the provided pattern for the trigger to fire. For example, *factories* only fires the trigger for event subjects that start with *factories*. | String | | No |
+| `subjectEndsWith` | The `subject` field must end with the provided pattern for the trigger to fire. | String | | No |
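Put together, a custom event trigger definition might look like the following sketch. The trigger name, topic scope, event types, and pipeline reference are placeholders rather than values from this article; check the JSON of a trigger you author in the UI for the exact shape:

```json
{
    "name": "MyCustomEventTrigger",
    "properties": {
        "type": "CustomEventsTrigger",
        "typeProperties": {
            "scope": "/subscriptions/####/resourceGroups/####/providers/Microsoft.EventGrid/topics/mytopic",
            "events": ["copycompleted", "copysucceeded"],
            "subjectBeginsWith": "factories"
        },
        "pipelines": [
            {
                "pipelineReference": {
                    "referenceName": "MyPipeline",
                    "type": "PipelineReference"
                }
            }
        ]
    }
}
```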
## Role-based access control
-Azure Data Factory uses Azure role-based access control (Azure RBAC) to ensure that unauthorized access to listen to, subscribe to updates from, and trigger pipelines linked to custom events, are strictly prohibited.
+Azure Data Factory uses Azure RBAC to prohibit unauthorized access. To function properly, Data Factory requires access to:
+- Listen to events.
+- Subscribe to updates from events.
+- Trigger pipelines linked to custom events.
+
+To successfully create or update a custom event trigger, you need to sign in to Data Factory with an Azure account that has appropriate access. Otherwise, the operation will fail with an _Access Denied_ error.
-* To successfully create a new or update an existing Custom Event Trigger, the Azure account signed into the Data Factory needs to have appropriate access to the relevant storage account. Otherwise, the operation with fail with _Access Denied_.
-* Data Factory needs no special permission to your Event Grid, and you do _not_ need to assign special Azure RBAC permission to Data Factory service principal for the operation.
+Data Factory doesn't require special permission to your Event Grid. You also do *not* need to assign special Azure RBAC permission to the Data Factory service principal for the operation.
-Specifically, customer needs _Microsoft.EventGrid/EventSubscriptions/Write_ permission on _/subscriptions/####/resourceGroups//####/providers/Microsoft.EventGrid/topics/someTopics_
+Specifically, you need `Microsoft.EventGrid/EventSubscriptions/Write` permission on `/subscriptions/####/resourceGroups/####/providers/Microsoft.EventGrid/topics/someTopics`.
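That permission is carried by the EventGrid EventSubscription Contributor built-in role mentioned earlier. A hedged Azure CLI sketch for granting it at topic scope; the assignee and resource IDs are placeholders:

```bash
# Assign the built-in role that includes Microsoft.EventGrid/EventSubscriptions/Write
# at the scope of the custom topic (all names below are placeholders).
az role assignment create \
  --assignee "user@contoso.com" \
  --role "EventGrid EventSubscription Contributor" \
  --scope "/subscriptions/####/resourceGroups/####/providers/Microsoft.EventGrid/topics/mytopic"
```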
## Next steps
-* For detailed information about triggers, see [Pipeline execution and triggers](concepts-pipeline-execution-triggers.md#trigger-execution).
-* Learn how to reference trigger metadata in pipeline, see [Reference Trigger Metadata in Pipeline Runs](how-to-use-trigger-parameterization.md)
+* Get detailed information about [trigger execution](concepts-pipeline-execution-triggers.md#trigger-execution).
+* Learn how to [reference trigger metadata in pipeline runs](how-to-use-trigger-parameterization.md).
databox-online Azure Stack Edge Gpu Deploy Arc Data Controller https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox-online/azure-stack-edge-gpu-deploy-arc-data-controller.md
This article describes the process of creating an Azure Arc Data Controller and then deploying Azure Data Services on your Azure Stack Edge Pro GPU device.
-Azure Arc Data Controller is the local control plane that enables Azure Data Services in customer-managed environments. Once you have created the Azure Arc Data Controller on the Kubernetes cluster that runs on your Azure Stack Edge Pro GPU device, you can deploy Azure Data Services such as SQL Managed Instance (Preview) on that data controller.
+Azure Arc Data Controller is the local control plane that enables Azure Data Services in customer-managed environments. Once you have created the Azure Arc Data Controller on the Kubernetes cluster that runs on your Azure Stack Edge Pro GPU device, you can deploy Azure Data Services such as SQL Managed Instance on that data controller.
The procedure to create Data Controller and then deploy an SQL Managed Instance involves the use of PowerShell and `kubectl` - a native tool that provides command-line access to the Kubernetes cluster on the device.
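As a small illustration of the `kubectl` side (the namespace below is an assumption; the deployment steps in this article are authoritative):

```bash
# Confirm you can reach the Kubernetes cluster running on the device.
kubectl get nodes

# After the data controller is created, check that its pods are up.
# Replace the namespace with the one you chose during creation.
kubectl get pods --namespace arc-data-controller
```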
dns Dns Custom Domain https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/dns/dns-custom-domain.md
Previously updated : 05/05/2021 Last updated : 05/06/2021 # Use Azure DNS to provide custom domain settings for an Azure service
-Azure DNS provides DNS for a custom domain for any of your Azure resources that support custom domains or that have a fully qualified domain name (FQDN). An example is you have an Azure web app and you want your users to access it by either using contoso.com, or www\.contoso.com as an FQDN. This article walks you through configuring your Azure service with Azure DNS for using custom domains.
+Azure DNS provides name resolution for any of your Azure resources that support custom domains or that have a fully qualified domain name (FQDN). For example, you have an Azure web app you want your users to access using `contoso.com` or `www.contoso.com` as the FQDN. This article walks you through configuring your Azure service with Azure DNS for using custom domains.
## Prerequisites
-In order to use Azure DNS for your custom domain, you must first delegate your domain to Azure DNS. Visit [Delegate a domain to Azure DNS](./dns-delegate-domain-azure-dns.md) for instructions on how to configure your name servers for delegation. Once your domain is delegated to your Azure DNS zone, you are able to configure the DNS records needed.
+To use Azure DNS for your custom domain, you must first delegate your domain to Azure DNS. See [Delegate a domain to Azure DNS](./dns-delegate-domain-azure-dns.md) for instructions on how to configure your name servers for delegation. Once your domain is delegated to your Azure DNS zone, you can configure the DNS records you need.
-You can configure a vanity or custom domain for [Azure Function Apps](#azure-function-app), [Public IP addresses](#public-ip-address), [App Service (Web Apps)](#app-service-web-apps), [Blob storage](#blob-storage), and [Azure CDN](#azure-cdn).
+You can configure a vanity or custom domain for Azure Function Apps, Public IP addresses, App Service (Web Apps), Blob storage, and Azure CDN.
## Azure Function App
-To configure a custom domain for Azure function apps, a CNAME record is created as well as configuration on the function app itself.
+To configure a custom domain for Azure function apps, you create a CNAME record and configure the function app itself.
-Navigate to **Function App** and select your function app. Click **Platform features** and under **Networking** click **Custom domains**.
+1. Navigate to **Function App** and select your function app. Select **Custom domains** under *Settings*. Note the **current url** under *assigned custom domains*; this address is used as the alias for the DNS record you create.
-![function app blade](./media/dns-custom-domain/functionapp.png)
+ :::image type="content" source="./media/dns-custom-domain/function-app.png" alt-text="Screenshot of custom domains for function app.":::
-Note the current url on the **Custom domains** blade, this address is used as the alias for the DNS record created.
+1. Navigate to your DNS Zone and select **+ Record set**. Enter the following information on the **Add record set** page and select **OK** to create it.
-![custom domain blade](./media/dns-custom-domain/functionshostname.png)
+ :::image type="content" source="./media/dns-custom-domain/function-app-record.png" alt-text="Screenshot of function app add record set page.":::
-Navigate to your DNS Zone and click **+ Record set**. Fill out the following information on the **Add record set** blade and click **OK** to create it.
+ |Property |Value |Description |
+ ||||
+ | Name | myfunctionapp | This value along with the domain name label is the FQDN for the custom domain name. |
+ | Type | CNAME | Use a CNAME record when using an alias. |
+ | TTL | 1 | 1 is used for 1 hour |
+ | TTL unit | Hours | Hours are used as the time measurement |
+ | Alias | contosofunction.azurewebsites.net | The DNS name you're creating the alias for, in this example it's the contosofunction.azurewebsites.net DNS name provided by default to the function app. |
+
+1. Navigate back to your function app, select **Custom domains** under *Settings*. Then select **+ Add custom domain**.
-|Property |Value |Description |
-||||
-|Name | myfunctionapp | This value along with the domain name label is the FQDN for the custom domain name. |
-|Type | CNAME | Use a CNAME record is using an alias. |
-|TTL | 1 | 1 is used for 1 hour |
-|TTL unit | Hours | Hours are used as the time measurement |
-|Alias | adatumfunction.azurewebsites.net | The DNS name you are creating the alias for, in this example it is the adatumfunction.azurewebsites.net DNS name provided by default to the function app. |
+ :::image type="content" source="./media/dns-custom-domain/function-app-add-domain.png" alt-text="Screenshot of add custom domain button for function app.":::
-Navigate back to your function app, click **Platform features**, and under **Networking** click **Custom domains**, then under **Custom Hostnames** click **+ Add hostname**.
+1. On the **Add custom domain** page, enter the CNAME record in the **Custom domain** text field and select **Validate**. If the record is found, the **Add custom domain** button appears. Select **Add custom domain** to add the alias.
-On the **Add hostname** blade, enter the CNAME record in the **hostname** text field and click **Validate**. If the record is found, the **Add hostname** button appears. Click **Add hostname** to add the alias.
-
-![function apps add host name blade](./media/dns-custom-domain/functionaddhostname.png)
+ :::image type="content" source="./media/dns-custom-domain/function-app-cname.png" alt-text="Screenshot of add custom domain page for function app.":::
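If you prefer scripting the DNS step, the CNAME record from the table above can also be created with the Azure CLI. This is a sketch under assumptions: the zone `contoso.com` is hosted in resource group `myresourcegroup`, matching the placeholder values used in this section.

```bash
# Create the CNAME record set and point it at the function app's default hostname.
# Zone, resource group, and hostnames are placeholders.
az network dns record-set cname set-record \
  --resource-group myresourcegroup \
  --zone-name contoso.com \
  --record-set-name myfunctionapp \
  --cname contosofunction.azurewebsites.net \
  --ttl 3600
```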
## Public IP address

To configure a custom domain for services that use a public IP address resource such as Application Gateway, Load Balancer, Cloud Service, Resource Manager VMs, and Classic VMs, an A record is used.
-Navigate to **Networking** > **Public IP address**, select the Public IP resource and click **Configuration**. Notate the IP address shown.
-
-![public ip blade](./media/dns-custom-domain/publicip.png)
+1. Navigate to the Public IP resource and select **Configuration**. Note the IP address shown.
-Navigate to your DNS Zone and click **+ Record set**. Fill out the following information on the **Add record set** blade and click **OK** to create it.
+ :::image type="content" source="./media/dns-custom-domain/public-ip.png" alt-text="Screenshot of public ip configuration page.":::
+1. Navigate to your DNS Zone and select **+ Record set**. Enter the following information on the **Add record set** page and select **OK** to create it.
-|Property |Value |Description |
-||||
-|Name | mywebserver | This value along with the domain name label is the FQDN for the custom domain name. |
-|Type | A | Use an A record as the resource is an IP address. |
-|TTL | 1 | 1 is used for 1 hour |
-|TTL unit | Hours | Hours are used as the time measurement |
-|IP Address | `<your ip address>` | The public IP address.|
+ :::image type="content" source="./media/dns-custom-domain/public-ip-record.png" alt-text="Screenshot of public ip record set page.":::
-![create an A record](./media/dns-custom-domain/arecord.png)
+ | Property | Value | Description |
+ | -- | -- | |
+ | Name | webserver1 | This value along with the domain name label is the FQDN for the custom domain name. |
+ | Type | A | Use an A record as the resource is an IP address. |
+ | TTL | 1 | 1 is used for 1 hour |
+ | TTL unit | Hours | Hours are used as the time measurement |
+ | IP Address | `<your ip address>` | The public IP address. |
-Once the A record is created, run `nslookup` to validate the record resolves.
+1. Once the A record is created, run `nslookup` to validate the record resolves.
-![public ip dns lookup](./media/dns-custom-domain/publicipnslookup.png)
+ :::image type="content" source="./medi for public ip.":::
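The A record can likewise be created from the command line. Another hedged sketch with the same placeholder zone; the IP address below comes from the documentation range and stands in for your own:

```bash
# Add an A record that resolves webserver1.contoso.com to the public IP address.
# Replace 203.0.113.10 with the address noted from the portal.
az network dns record-set a add-record \
  --resource-group myresourcegroup \
  --zone-name contoso.com \
  --record-set-name webserver1 \
  --ipv4-address 203.0.113.10
```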
## App Service (Web Apps)

The following steps take you through configuring a custom domain for an app service web app.
-Navigate to **App Service** and select the resource you are configuring a custom domain name, and click **Custom domains**.
+1. Navigate to **App Service** and select the resource you're configuring with a custom domain name, and select **Custom domains** under *Settings*. Note the **current url** under *assigned custom domains*; this address is used as the alias for the DNS record you create.
-Note the current url on the **Custom domains** blade, this address is used as the alias for the DNS record created.
+ :::image type="content" source="./media/dns-custom-domain/web-app.png" alt-text="Screenshot of custom domains for web app.":::
-![custom domains blade](./media/dns-custom-domain/url.png)
+1. Navigate to your DNS Zone and select **+ Record set**. Enter the following information on the **Add record set** page and select **OK** to create it.
-Navigate to your DNS Zone and click **+ Record set**. Fill out the following information on the **Add record set** blade and click **OK** to create it.
+ :::image type="content" source="./media/dns-custom-domain/web-app.png" alt-text="Screenshot of web app record set page.":::
+ | Property | Value | Description |
+ |- | -- | -- |
+ | Name | mywebserver | This value along with the domain name label is the FQDN for the custom domain name. |
+ | Type | CNAME | Use a CNAME record when using an alias. If the resource used an IP address, an A record would be used. |
+ | TTL | 1 | 1 is used for 1 hour |
+ | TTL unit | Hours | Hours are used as the time measurement |
+ | Alias | contoso.azurewebsites.net | The DNS name you're creating the alias for, in this example it's the contoso.azurewebsites.net DNS name provided by default to the web app. |
-|Property |Value |Description |
-||||
-|Name | mywebserver | This value along with the domain name label is the FQDN for the custom domain name. |
-|Type | CNAME | Use a CNAME record is using an alias. If the resource used an IP address, an A record would be used. |
-|TTL | 1 | 1 is used for 1 hour |
-|TTL unit | Hours | Hours are used as the time measurement |
-|Alias | webserver.azurewebsites.net | The DNS name you are creating the alias for, in this example it is the webserver.azurewebsites.net DNS name provided by default to the web app. |
+1. Navigate back to your web app, select **Custom domains** under *Settings*. Then select **+ Add custom domain**.
+ :::image type="content" source="./media/dns-custom-domain/web-app-add-domain.png" alt-text="Screenshot of add custom domain button for web app.":::
-![create a CNAME record](./media/dns-custom-domain/createcnamerecord.png)
+1. On the **Add custom domain** page, enter the CNAME record in the **Custom domain** text field and select **Validate**. If the record is found, the **Add custom domain** button appears. Select **Add custom domain** to add the alias.
-Navigate back to the app service that is configured for the custom domain name. Click **Custom domains**, then click **Hostnames**. To add the CNAME record you created, click **+ Add hostname**.
+ :::image type="content" source="./media/dns-custom-domain/web-app-cname.png" alt-text="Screenshot of add custom domain page for web app.":::
-![Screenshot that highlights the + Add hostname button.](./media/dns-custom-domain/figure1.png)
+1. Once the process is complete, run **nslookup** to validate name resolution is working.
-Once the process is complete, run **nslookup** to validate name resolution is working.
+ :::image type="content" source="./media/dns-custom-domain/app-service-nslookup.png" alt-text="Screenshot of nslookup for web app.":::
-![figure 1](./media/dns-custom-domain/finalnslookup.png)
+To learn more about mapping a custom domain to App Service, visit [map an existing custom DNS name to Azure Web Apps](../app-service/app-service-web-tutorial-custom-domain.md?toc=%dns%2ftoc.json).
-To learn more about mapping a custom domain to App Service, visit [Map an existing custom DNS name to Azure Web Apps](../app-service/app-service-web-tutorial-custom-domain.md?toc=%dns%2ftoc.json).
+To learn how to migrate an active DNS name, see [migrate an active DNS name to Azure App Service](../app-service/manage-custom-dns-migrate-domain.md).
-To learn how to migrate an active DNS name, see [Migrate an active DNS name to Azure App Service](../app-service/manage-custom-dns-migrate-domain.md).
-
-If you need to purchase a custom domain, visit [Buy a custom domain name for Azure Web Apps](../app-service/manage-custom-dns-buy-domain.md) to learn more about App Service domains.
+If you need to purchase a custom domain for your App Service, see [buy a custom domain name for Azure Web Apps](../app-service/manage-custom-dns-buy-domain.md).
## Blob storage
-The following steps take you through configuring a CNAME record for a blob storage account using the asverify method. This method ensures there is no downtime.
+The following steps take you through configuring a CNAME record for a blob storage account using the asverify method. This method ensures there's no downtime.
+
+1. Navigate to **Storage Accounts**, select your storage account, and select **Networking** under *Settings*. Then select the **Custom domain** tab. Note the FQDN in step 2; this name is used to create the first CNAME record.
-Navigate to **Storage** > **Storage Accounts**, select your storage account, and click **Custom domain**. Notate the FQDN under step 2, this value is used to create the first CNAME record
+ :::image type="content" source="./media/dns-custom-domain/blob-storage.png" alt-text="Screenshot of custom domains for storage account.":::
-![blob storage custom domain](./mediomain.png)
+1. Navigate to your DNS Zone and select **+ Record set**. Enter the following information on the **Add record set** page and select **OK** to create it.
-Navigate to your DNS Zone and click **+ Record set**. Fill out the following information on the **Add record set** blade and click **OK** to create it.
+ :::image type="content" source="./media/dns-custom-domain/storage-account-record.png" alt-text="Screenshot of storage account record set page.":::
+ | Property | Value | Description |
+ | -- | -- | -- |
+ | Name | asverify.mystorageaccount | This value along with the domain name label is the FQDN for the custom domain name. |
+ | Type | CNAME | Use a CNAME record when using an alias. |
+ | TTL | 1 | 1 is used for 1 hour |
+ | TTL unit | Hours | Hours are used as the time measurement |
+ | Alias | asverify.contoso.blob.core.windows.net | The DNS name you're creating the alias for, in this example it's the asverify.contoso.blob.core.windows.net DNS name provided by default to the storage account. |
-|Property |Value |Description |
-||||
-|Name | asverify.mystorageaccount | This value along with the domain name label is the FQDN for the custom domain name. |
-|Type | CNAME | Use a CNAME record is using an alias. |
-|TTL | 1 | 1 is used for 1 hour |
-|TTL unit | Hours | Hours are used as the time measurement |
-|Alias | asverify.adatumfunctiona9ed.blob.core.windows.net | The DNS name you are creating the alias for, in this example it is the asverify.adatumfunctiona9ed.blob.core.windows.net DNS name provided by default to the storage account. |
+1. Navigate back to your storage account and select **Networking** and then the **Custom domain** tab. Type in the alias you created without the asverify prefix in the text box, check **Use indirect CNAME validation**, and select **Save**.
-Navigate back to your storage account by clicking **Storage** > **Storage Accounts**, select your storage account and click **Custom domain**. Type in the alias you created without the asverify prefix in the text box, check **Use indirect CNAME validation**, and click **Save**. Once this step is complete, return to your DNS zone and create a CNAME record without the asverify prefix. After that point, you are safe to delete the CNAME record with the cdnverify prefix.
+ :::image type="content" source="./media/dns-custom-domain/blob-storage-add-domain.png" alt-text="Screenshot of storage account add custom domain page.":::
-![Screenshot that shows the Custom Domain page.](./media/dns-custom-domain/indirectvalidate.png)
+1. Return to your DNS zone and create a CNAME record without the asverify prefix. After that point, you're safe to delete the CNAME record with the asverify prefix.
-Validate DNS resolution by running `nslookup`
+ :::image type="content" source="./media/dns-custom-domain/storage-account-record-set.png" alt-text="Screenshot of storage account record without asverify prefix.":::
+
+1. Validate DNS resolution by running `nslookup`.
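The asverify flow also maps to the CLI. A sketch under the same placeholder assumptions; the last step removes the temporary validation record once the final CNAME is in place:

```bash
# 1. Create the temporary asverify CNAME used for indirect validation.
az network dns record-set cname set-record \
  --resource-group myresourcegroup \
  --zone-name contoso.com \
  --record-set-name asverify.mystorageaccount \
  --cname asverify.contoso.blob.core.windows.net

# 2. After the storage account validates the domain, create the real CNAME.
az network dns record-set cname set-record \
  --resource-group myresourcegroup \
  --zone-name contoso.com \
  --record-set-name mystorageaccount \
  --cname contoso.blob.core.windows.net

# 3. Delete the temporary asverify record.
az network dns record-set cname delete \
  --resource-group myresourcegroup \
  --zone-name contoso.com \
  --name asverify.mystorageaccount \
  --yes
```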
To learn more about mapping a custom domain to a blob storage endpoint, visit [Configure a custom domain name for your Blob storage endpoint](../storage/blobs/storage-custom-domain-name.md?toc=%dns%2ftoc.json).

## Azure CDN
-The following steps take you through configuring a CNAME record for a CDN endpoint using the cdnverify method. This method ensures there is no downtime.
+The following steps take you through configuring a CNAME record for a CDN endpoint using the cdnverify method. This method ensures there's no downtime.
+
+1. Navigate to your CDN profile and select the endpoint you're working with. Select **+ Custom domain**. Note the **Endpoint hostname** as this value is the record that the CNAME record points to.
+
+ :::image type="content" source="./media/dns-custom-domain/cdn.png" alt-text="Screenshot of CDN custom domain page.":::
+
+1. Navigate to your DNS Zone and select **+ Record set**. Enter the following information on the **Add record set** page and select **OK** to create it.
-Navigate to **Networking** > **CDN Profiles**, select your CDN profile.
+ :::image type="content" source="./media/dns-custom-domain/cdn-record.png" alt-text="Screenshot of CDN record set page.":::
-Select the endpoint you are working with and click **+ Custom domain**. Note the **Endpoint hostname** as this value is the record that the CNAME record points to.
+ | Property | Value | Description |
+ | -- | -- | -- |
+ | Name | cdnverify.mycdnendpoint | This value along with the domain name label is the FQDN for the custom domain name. |
+ | Type | CNAME | Use a CNAME record when using an alias. |
+ | TTL | 1 | 1 is used for 1 hour |
+ | TTL unit | Hours | Hours are used as the time measurement |
+ | Alias | cdnverify.contoso.azureedge.net | The DNS name you're creating the alias for, in this example it's the cdnverify.contoso.azureedge.net DNS name provided by default to the CDN endpoint. |
-![CDN custom domain](./mediomain.png)
+1. Navigate back to your CDN endpoint and select **+ Custom domain**. Enter your CNAME record alias without the cdnverify prefix and select **Add**.
-Navigate to your DNS Zone and click **+ Record set**. Fill out the following information on the **Add record set** blade and click **OK** to create it.
+ :::image type="content" source="./media/dns-custom-domain/cdn-add.png" alt-text="Screenshot of add a custom domain page for a CDN endpoint.":::
-|Property |Value |Description |
-||||
-|Name | cdnverify.mycdnendpoint | This value along with the domain name label is the FQDN for the custom domain name. |
-|Type | CNAME | Use a CNAME record is using an alias. |
-|TTL | 1 | 1 is used for 1 hour |
-|TTL unit | Hours | Hours are used as the time measurement |
-|Alias | cdnverify.adatumcdnendpoint.azureedge.net | The DNS name you are creating the alias for, in this example it is the cdnverify.adatumcdnendpoint.azureedge.net DNS name provided by default to the storage account. |
+1. Return to your DNS zone and create a CNAME record without the cdnverify prefix. After that point, you're safe to delete the CNAME record with the cdnverify prefix.
-Navigate back to your CDN endpoint by clicking **Networking** > **CDN Profiles**, and select your CDN profile. Click **+ Custom domain** and enter your CNAME record alias without the cdnverify prefix and click **Add**.
+ :::image type="content" source="./media/dns-custom-domain/cdn-record-set.png" alt-text="Screenshot of CDN record without cdnverify prefix.":::
-Once this step is complete, return to your DNS zone and create a CNAME record without the cdnverify prefix. After that point, you are safe to delete the CNAME record with the cdnverify prefix. For more information on CDN and how to configure a custom domain without the intermediate registration step visit [Map Azure CDN content to a custom domain](../cdn/cdn-map-content-to-custom-domain.md?toc=%dns%2ftoc.json).
+For more information on CDN and how to configure a custom domain without the intermediate registration step, visit [Map Azure CDN content to a custom domain](../cdn/cdn-map-content-to-custom-domain.md?toc=%dns%2ftoc.json).
## Next steps
expressroute Using Expressroute For Microsoft365 https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/expressroute/using-expressroute-for-microsoft365.md
While you use ExpressRoute, you can apply the route filter associated with Micro
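As an illustration of applying such a filter, here's a hedged Azure CLI sketch that creates a route filter and a rule allowing a Microsoft 365 BGP community. The resource names are placeholders, and 12076:5010 (Exchange Online) is one documented community value; confirm current values in the route filter documentation:

```azurecli
# Create an empty route filter (placeholder names).
az network route-filter create \
  --resource-group myResourceGroup \
  --name myRouteFilter

# Allow the Exchange Online BGP community through the filter.
az network route-filter rule create \
  --resource-group myResourceGroup \
  --filter-name myRouteFilter \
  --name allow-exchange \
  --access Allow \
  --communities 12076:5010
```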
[ExRRF]: https://docs.microsoft.com/azure/expressroute/how-to-routefilter-portal [Teams]: https://docs.microsoft.com/microsoftteams/microsoft-teams-online-call-flows [Microsoft 365-Test]: https://connectivity.office.com/
-[Microsoft 365perf]: https://docs.microsoft.com/microsoft-365/enterprise/performance-tuning-using-baselines-and-history?view=o365-worldwide
--
+[Microsoft 365perf]: /microsoft-365/enterprise/performance-tuning-using-baselines-and-history
frontdoor Front Door Geo Filtering https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/frontdoor/front-door-geo-filtering.md
ms.devlang: na
Last updated 09/28/2020 -+ # Geo-filtering on a domain for Azure Front Door
frontdoor Concept Private Link https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/frontdoor/standard-premium/concept-private-link.md
Last updated 02/18/2021-+
frontdoor How To Configure Https Custom Domain https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/frontdoor/standard-premium/how-to-configure-https-custom-domain.md
Azure Front Door Standard/Premium supports both Azure managed certificate and cu
1. Validate and associate the custom domain to an endpoint by following the steps in enabling [custom domain](how-to-add-custom-domain.md).
-1. Once the custom domain gets associated to endpoint successfully, an Azure managed certificate gets deployed to Front Door. This process may take a few minutes to complete.
+1. Once the custom domain is successfully associated with the endpoint, an Azure managed certificate is deployed to Front Door. This process may take several minutes to complete.
## Using your own certificate
frontdoor How To Enable Private Link Internal Load Balancer https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/frontdoor/standard-premium/how-to-enable-private-link-internal-load-balancer.md
Last updated 03/16/2021-+ # Connect Azure Front Door Premium to an internal load balancer origin with Private Link
frontdoor How To Enable Private Link Storage Account https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/frontdoor/standard-premium/how-to-enable-private-link-storage-account.md
Last updated 03/04/2021-+ # Connect Azure Front Door Premium to a storage account origin with Private Link
frontdoor How To Enable Private Link Web App https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/frontdoor/standard-premium/how-to-enable-private-link-web-app.md
Last updated 02/18/2021-+ # Connect Azure Front Door Premium to a Web App origin with Private Link
hdinsight Hdinsight Autoscale Clusters https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/hdinsight/hdinsight-autoscale-clusters.md
During the cluster scaling down process, Autoscale decommissions the nodes to me
The running jobs will continue. The pending jobs will wait for scheduling with fewer available worker nodes.
+### Configure schedule-based Autoscale based on usage pattern
+
+You need to understand your cluster's usage pattern when you configure schedule-based Autoscale. The [Grafana dashboard](https://docs.microsoft.com/azure/hdinsight/interactive-query/hdinsight-grafana) can help you understand your query load and execution slots. You can get the available executor slots and total executor slots from the dashboard.
+
+Here is a way to estimate how many worker nodes you'll need. We recommend adding a 10% buffer to handle variation in the workload.
+
+Number of executor slots actually used = Total executor slots - Total available executor slots
+
+Number of worker nodes required = Number of executor slots actually used / (hive.llap.daemon.num.executors + hive.llap.daemon.task.scheduler.wait.queue.size)
+
+*hive.llap.daemon.num.executors is configurable; the default is 4.
+
+*hive.llap.daemon.task.scheduler.wait.queue.size is configurable; the default is 10.
+
+For example, if your cluster has 100 total executor slots and 30 are available, then 70 slots are in use. With the default values (4 + 10 = 14 slots per node), you'd need 70 / 14 = 5 worker nodes, or 6 after adding the 10% buffer.
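Once you have an estimate, you can apply a schedule-based Autoscale condition with the Azure CLI. The following is a sketch with placeholder cluster and resource group names and a hypothetical weekday scale-up to six worker nodes; verify the parameters against your CLI version:

```azurecli
# Scale the cluster to 6 worker nodes at 09:00 on weekdays (placeholder names).
az hdinsight autoscale create \
  --resource-group myResourceGroup \
  --cluster-name myLlapCluster \
  --type Schedule \
  --timezone "Pacific Standard Time" \
  --days Monday Tuesday Wednesday Thursday Friday \
  --time 09:00 \
  --workernode-count 6 \
  --yes
```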
++ ### Be aware of the minimum cluster size Don't scale your cluster down to fewer than three nodes. Scaling your cluster to fewer than three nodes can result in it getting stuck in safe mode because of insufficient file replication. For more information, see [getting stuck in safe mode](hdinsight-scaling-best-practices.md#getting-stuck-in-safe-mode).
hdinsight Hdinsight Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/hdinsight/hdinsight-release-notes.md
description: Latest release notes for Azure HDInsight. Get development tips and
Previously updated : 03/23/2021 Last updated : 05/07/2021 # Azure HDInsight release notes
Azure HDInsight is one of the most popular services among enterprise customers f
If you would like to subscribe on release notes, watch releases on [this GitHub repository](https://github.com/hdinsight/release-notes/releases).
+## Price Correction for HDInsight Dv2 Virtual Machines
+
+A pricing error was corrected on April 25th, 2021, for the Dv2 VM series on HDInsight. The pricing error resulted in a reduced charge on some customers' bills prior to April 25th, and with the correction, prices now match what had been advertised on the HDInsight pricing page and the HDInsight pricing calculator. The pricing error impacted customers in the following regions who used Dv2 VMs:
+
+- Canada Central
+- Canada East
+- East Asia
+- South Africa North
+- Southeast Asia
+- UAE Central
+
+Starting on April 25, 2021, the corrected amount for the Dv2 VMs will appear on your account. Customer notifications were sent to subscription owners prior to the change. You can use the Pricing calculator, HDInsight pricing page, or the Create HDInsight cluster blade in the Azure portal to see the corrected costs for Dv2 VMs in your region.
+
+No other action is needed from you. The price correction applies only to usage on or after April 25, 2021 in the specified regions, and not to any usage prior to this date. To ensure you have the most performant and cost-effective solution, we recommend that you review the pricing, vCPU, and RAM for your Dv2 clusters and compare the Dv2 specifications to the Ev3 VMs to see if your solution would benefit from utilizing one of the newer VM series.
+ ## Release date: 03/24/2021 This release applies for both HDInsight 3.6 and HDInsight 4.0. HDInsight release is made available to all regions over several days. The release date here indicates the first region release date. If you don't see below changes, wait for the release being live in your region in several days.
HDInsight is gradually migrating to Azure virtual machine scale sets. Network in
## Upcoming changes The following changes will happen in upcoming releases.
+### HDInsight Interactive Query only supports schedule-based Autoscale
+
+As customer scenarios grow more mature and diverse, we have identified some limitations with Interactive Query (LLAP) load-based Autoscale. These limitations are caused by the nature of LLAP query dynamics, future load prediction accuracy issues, and issues in the LLAP scheduler's task redistribution. Due to these limitations, users may see their queries run slower on LLAP clusters when Autoscale is enabled. The impact on performance can outweigh the cost benefits of Autoscale.
+
+Starting from May 15, 2021, the Interactive Query workload in HDInsight only supports schedule-based Autoscale. You can no longer enable Autoscale on new Interactive Query clusters. Existing running clusters can continue to run with the known limitations described above.
+
+Microsoft recommends that you move to a schedule-based Autoscale for LLAP. You can analyze your cluster's current usage pattern through the Grafana Hive dashboard. For more information, see [Automatically scale Azure HDInsight clusters](hdinsight-autoscale-clusters.md).
### OS version upgrade HDInsight clusters are currently running on Ubuntu 16.04 LTS. As referenced in [Ubuntu's release cycle](https://ubuntu.com/about/release-cycle), the Ubuntu 16.04 kernel will reach End of Life (EOL) in April 2021. We'll start rolling out the new HDInsight 4.0 cluster image running on Ubuntu 18.04 in May 2021. Newly created HDInsight 4.0 clusters will run on Ubuntu 18.04 by default once available. Existing clusters on Ubuntu 16.04 will run as is with full support.
You can find the current component versions for HDInsight 4.0 and HDInsight 3.6
## Recommanded features ### Service tags
-Service tags simplify restricting network access to the Azure services for Azure virtual machines and Azure virtual networks. Service tags in your network security group (NSG) rules allow or deny traffic to a specific Azure service. The rule can be set globally or per Azure region. Azure provides the maintenance of IP addresses underlying each tag. HDInsight service tags for network security groups (NSGs) are groups of IP addresses for health and management services. These groups help minimize complexity for security rule creation. HDInsight customers can enable service tag through Azure portal, PowerShell, and REST API. For more information, see [Network security group (NSG) service tags for Azure HDInsight](./hdinsight-service-tags.md).
+Service tags simplify restricting network access to the Azure services for Azure virtual machines and Azure virtual networks. Service tags in your network security group (NSG) rules allow or deny traffic to a specific Azure service. The rule can be set globally or per Azure region. Azure provides the maintenance of IP addresses underlying each tag. HDInsight service tags for network security groups (NSGs) are groups of IP addresses for health and management services. These groups help minimize complexity for security rule creation. HDInsight customers can enable service tag through Azure portal, PowerShell, and REST API. For more information, see [Network security group (NSG) service tags for Azure HDInsight](./hdinsight-service-tags.md).
iot-develop Quickstart Send Telemetry Iot Hub https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-develop/quickstart-send-telemetry-iot-hub.md
Last updated 05/04/2021
-zone_pivot_groups: iot-develop-set2
+zone_pivot_groups: iot-develop-set1
#Customer intent: As a device application developer, I want to learn the basic workflow of using an Azure IoT device SDK to build a client app on a device, connect the device securely to Azure IoT Hub, and send telemetry.
zone_pivot_groups: iot-develop-set2
In this quickstart, you learn a basic Azure IoT application development workflow. You use the Azure CLI to create an Azure IoT hub and a device. Then you use an Azure IoT device SDK sample to run a simulated temperature controller, connect it securely to the hub, and send telemetry. ++++++ :::zone pivot="programming-language-nodejs" [!INCLUDE [iot-develop-send-telemetry-iot-hub-node](../../includes/iot-develop-send-telemetry-iot-hub-node.md)]
iot-edge How To Monitor Module Twins https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-edge/how-to-monitor-module-twins.md
The [az iot hub module-twin](/cli/azure/iot/hub/module-twin) structure provides
* **az iot hub module-twin update** - Update a module twin definition. * **az iot hub module-twin replace** - Replace a module twin definition with a target JSON.
+>[!TIP]
+>To target the runtime modules with CLI commands, you may need to escape the `$` character in the module ID. For example:
+>
+>```azurecli
+>az iot hub module-twin show -m '$edgeAgent' -n <hub name> -d <device name>
+>```
+>
+>Or:
+>
+>```azurecli
+>az iot hub module-twin show -m \$edgeAgent -n <hub name> -d <device name>
+>```
+ ## Next steps Learn how to [communicate with EdgeAgent using built-in direct methods](how-to-edgeagent-direct-method.md).
iot-edge How To Retrieve Iot Edge Logs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-edge/how-to-retrieve-iot-edge-logs.md
For example:
The following invocation uploads the last 100 log lines from all modules, in compressed JSON format: ```azurecli
-az iot hub invoke-module-method --method-name UploadModuleLogs -n <hub name> -d <device id> -m \$edgeAgent --method-payload \
+az iot hub invoke-module-method --method-name UploadModuleLogs -n <hub name> -d <device id> -m '$edgeAgent' --method-payload \
' { "schemaVersion": "1.0",
az iot hub invoke-module-method --method-name UploadModuleLogs -n <hub name> -d
The following invocation uploads the last 100 log lines from edgeAgent and edgeHub with the last 1000 log lines from tempSensor module in uncompressed text format: ```azurecli
-az iot hub invoke-module-method --method-name UploadModuleLogs -n <hub name> -d <device id> -m \$edgeAgent --method-payload \
+az iot hub invoke-module-method --method-name UploadModuleLogs -n <hub name> -d <device id> -m '$edgeAgent' --method-payload \
' { "schemaVersion": "1.0",
iot-edge Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-edge/troubleshoot.md
You can verify the installation of IoT Edge on your devices by [monitoring the e
To get the latest edgeAgent module twin, run the following command from [Azure Cloud Shell](https://shell.azure.com/): ```azurecli-interactive
- az iot hub module-twin show --device-id <edge_device_id> --module-id $edgeAgent --hub-name <iot_hub_name>
+ az iot hub module-twin show --device-id <edge_device_id> --module-id '$edgeAgent' --hub-name <iot_hub_name>
``` This command will output all the edgeAgent [reported properties](./module-edgeagent-edgehub.md). Here are some helpful ones to monitor the status of the device:
iot-hub Iot Hub Devguide Identity Registry https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub/iot-hub-devguide-identity-registry.md
Use the identity registry when you need to:
* Provision devices or modules that connect to your IoT hub. * Control per-device/per-module access to your hub's device or module-facing endpoints.
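For example, here's a minimal sketch of provisioning a device and a module identity with the Azure CLI. It assumes the Azure IoT extension is installed, and the hub, device, and module names are placeholders:

```azurecli
# Create a device identity in the IoT hub's identity registry (placeholder names).
az iot hub device-identity create --hub-name myHub --device-id myDevice

# Create a module identity under that device.
az iot hub module-identity create --hub-name myHub --device-id myDevice --module-id myModule
```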
-> [!NOTE]
-> * The identity registry does not contain any application-specific metadata.
-> * Module identity and module twin is in public preview. Below feature will be supported on module identity when it's general available.
->
- ## Identity registry operations The IoT Hub identity registry exposes the following operations:
iot-hub Iot Hub Devguide Messages Construct https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub/iot-hub-devguide-messages-construct.md
Previously updated : 07/22/2019 Last updated : 05/07/2021
For more information about how to encode and decode messages sent using differen
| iothub-connection-module-id |An ID set by IoT Hub on device-to-cloud messages. It contains the **moduleId** of the device that sent the message. | No | connectionModuleId | | iothub-connection-auth-generation-id |An ID set by IoT Hub on device-to-cloud messages. It contains the **connectionDeviceGenerationId** (as per [Device identity properties](iot-hub-devguide-identity-registry.md#device-identity-properties)) of the device that sent the message. | No |connectionDeviceGenerationId | | iothub-connection-auth-method |An authentication method set by IoT Hub on device-to-cloud messages. This property contains information about the authentication method used to authenticate the device sending the message.| No | connectionAuthMethod |
-| dt-dataschema | This value is set by IoT hub on device-to-cloud messages. It contains the device model ID set in the device connection. | No | N/A |
-| dt-subject | The name of the component that is sending the device-to-cloud messages. | Yes | N/A |
+| dt-dataschema | This value is set by IoT hub on device-to-cloud messages. It contains the device model ID set in the device connection. | No | $dt-dataschema |
+| dt-subject | The name of the component that is sending the device-to-cloud messages. | Yes | $dt-subject |
## System Properties of **C2D** IoT Hub messages
For more information about how to encode and decode messages sent using differen
| message-id |A user-settable identifier for the message used for request-reply patterns. Format: A case-sensitive string (up to 128 characters long) of ASCII 7-bit alphanumeric characters + `{'-', ':', '.', '+', '%', '_', '#', '*', '?', '!', '(', ')', ',', '=', '@', ';', '$', '''}`. |Yes| | sequence-number |A number (unique per device-queue) assigned by IoT Hub to each cloud-to-device message. |No| | to |A destination specified in [Cloud-to-Device](iot-hub-devguide-c2d-guidance.md) messages. |No|
-| absolute-expiry-time |Date and time of message expiration. |No|
+| absolute-expiry-time |Date and time of message expiration. |Yes|
| correlation-id |A string property in a response message that typically contains the MessageId of the request, in request-reply patterns. |Yes| | user-id |An ID used to specify the origin of messages. When messages are generated by IoT Hub, it is set to `{iot hub name}`. |Yes| | iothub-ack |A feedback message generator. This property is used in cloud-to-device messages to request IoT Hub to generate feedback messages as a result of the consumption of the message by the device. Possible values: **none** (default): no feedback message is generated, **positive**: receive a feedback message if the message was completed, **negative**: receive a feedback message if the message expired (or maximum delivery count was reached) without being completed by the device, or **full**: both positive and negative. |Yes|
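To experiment with these properties, here's a hedged sketch that uses the Azure IoT CLI extension to send a test cloud-to-device message requesting full feedback, which maps to the **iothub-ack** behavior described above. The hub and device names and the application property are placeholder assumptions:

```azurecli
# Send a C2D message and request both positive and negative feedback (placeholder names).
az iot device c2d-message send \
  --hub-name myHub \
  --device-id myDevice \
  --data 'Reboot requested' \
  --ack full \
  --props 'priority=high'
```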
iot-hub Iot Hub Devguide Routing Query Syntax https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub/iot-hub-devguide-routing-query-syntax.md
System properties help identify contents and source of the messages.
| contentType | string | The user specifies the content type of the message. To allow query on the message body, this value should be set application/JSON. | | contentEncoding | string | The user specifies the encoding type of the message. Allowed values are UTF-8, UTF-16, UTF-32 if the contentType is set to application/JSON. | | iothub-connection-device-id | string | This value is set by IoT Hub and identifies the ID of the device. To query, use `$connectionDeviceId`. |
+| iothub-connection-module-id | string | This value is set by IoT Hub and identifies the ID of the edge module. To query, use `$connectionModuleId`. |
| iothub-enqueuedtime | string | This value is set by IoT Hub and represents the actual time of enqueuing the message in UTC. To query, use `enqueuedTime`. | | dt-dataschema | string | This value is set by IoT hub on device-to-cloud messages. It contains the device model ID set in the device connection. To query, use `$dt-dataschema`. | | dt-subject | string | The name of the component that is sending the device-to-cloud messages. To query, use `$dt-subject`. |
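As an illustration, here's a hedged Azure CLI sketch that creates a route whose condition uses the `$connectionModuleId` system property. The hub, module, and route names are placeholder assumptions, and the `$` is escaped for the shell:

```azurecli
# Route messages from a specific module to the built-in events endpoint (placeholder names).
az iot hub route create \
  --hub-name myHub \
  --resource-group myResourceGroup \
  --route-name edgeModuleRoute \
  --source-type devicemessages \
  --endpoint-name events \
  --enabled true \
  --condition "\$connectionModuleId = 'myEdgeModule'"
```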
key-vault Rbac Guide https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/key-vault/general/rbac-guide.md
only for specific scenarios:
More about Azure Key Vault management guidelines, see: -- [Azure Key Vault security features](security-features.md)
+- [Azure Key Vault best practices](best-practices.md)
- [Azure Key Vault service limits](service-limits.md) ## Azure built-in roles for Key Vault data plane operations
lighthouse Managed Services Offers https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/lighthouse/concepts/managed-services-offers.md
Public plans let you promote your services to new customers. These are usually m
If appropriate, you can include both public and private plans in the same offer. > [!IMPORTANT]
-> Once a plan has been published as public, you can't change it to private. To control which customers can accept your offer and delegate resources, use a private plan. With a public plan, you can't restrict availability to certain customers or even to a certain number of customers (although you can stop selling the plan completely if you choose to do so). You can [remove access to a delegation](../how-to/remove-delegation.md) after a customer accepts an offer only if you included an **Authorization** with the **Role Definition** set to [Managed Services Registration Assignment Delete Role](../../role-based-access-control/built-in-roles.md#managed-services-registration-assignment-delete-role) when you published the offer. You can also reach out to the customer and ask them to [remove your access](../how-to/view-manage-service-providers.md#add-or-remove-service-provider-offers).
+> Once a plan has been published as public, you can't change it to private. To control which customers can accept your offer and delegate resources, use a private plan. With a public plan, you can't restrict availability to certain customers or even to a certain number of customers (although you can deprecate the plan, formerly known as *stop sell*, completely if you choose to do so). You can [remove access to a delegation](../how-to/remove-delegation.md) after a customer accepts an offer only if you included an **Authorization** with the **Role Definition** set to [Managed Services Registration Assignment Delete Role](../../role-based-access-control/built-in-roles.md#managed-services-registration-assignment-delete-role) when you published the offer. You can also contact the customer and ask them to [remove your access](../how-to/view-manage-service-providers.md#add-or-remove-service-provider-offers).
## Publish Managed Service offers
logic-apps Logic Apps Limits And Config https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/logic-apps/logic-apps-limits-and-config.md
Title: Limits and configuration
-description: Service limits, such as duration, throughput, and capacity, plus configuration values, such as IP addresses to allow, for Azure Logic Apps
+description: Reference guide to limits and configuration information for Azure Logic Apps
ms.suite: integration-- Previously updated : 04/16/2021++ Last updated : 05/05/2021 # Limits and configuration information for Azure Logic Apps
-This article describes the limits and configuration details for creating and running automated workflows with Azure Logic Apps. For Power Automate, see [Limits and configuration in Power Automate](/flow/limits-and-config).
+> For Power Automate, see [Limits and configuration in Power Automate](/flow/limits-and-config).
+
+This article describes the limits and configuration information for Azure Logic Apps and related resources. Many limits are the same for both the multi-tenant and single-tenant (preview) Logic Apps service, with differences noted where they exist.
+
+The following table provides more information about the terms *multi-tenant*, *single-tenant*, and *integration service environment* that appear in this article:
+
+| Environment | Resource sharing and usage | [Pricing model](logic-apps-pricing.md) | Notes |
+|-|-|-|-|
+| Azure Logic Apps <br>(Multi-tenant) | Workflows in logic apps *across multiple tenants* share the same processing (compute), storage, network, and so on. | Consumption | Azure Logic Apps manages the default values for these limits, but you can change some of these values, if that option exists for a specific limit. |
+| Azure Logic Apps <br>(Single-tenant (preview)) | Workflows *in the same logic app and single tenant* share the same processing (compute), storage, network, and so on. | Preview, which is either the [Premium hosting plan](../azure-functions/functions-scale.md), or the [App Service hosting plan](../azure-functions/functions-scale.md) with a specific [pricing tier](../app-service/overview-hosting-plans.md) <p><p>If you have *stateful* workflows, which use [external storage](../azure-functions/storage-considerations.md#storage-account-requirements), the Azure Logic Apps runtime makes storage transactions that follow [Azure Storage pricing](https://azure.microsoft.com/pricing/details/storage/). | You can change the default values for many limits, based on your scenario's needs. <p><p>**Important**: Some limits have hard upper maximums. In Visual Studio Code, the changes you make to the default limit values in your logic app project configuration files won't appear in the designer experience. <p><p>For more information, see [Create workflows for single-tenant Azure Logic Apps using Visual Studio Code](create-stateful-stateless-workflows-visual-studio-code.md). |
+| Integration service environment | Workflows in the *same environment* share the same processing (compute), storage, network, and so on. | Fixed | Azure Logic Apps manages the default values for these limits, but you can change some of these values, if that option exists for a specific limit. |
+|||||
+
+> [!TIP]
+> For scenarios that require different limits, [contact the Logic Apps team](mailto://logicappspm@microsoft.com) to discuss your requirements.
<a name="definition-limits"></a>
-## Logic app definition limits
+## Workflow definition limits
-Here are the limits for a single logic app definition:
+The following tables list the values for a single workflow definition:
| Name | Limit | Notes | | - | -- | -- |
-| Actions per workflow | 500 | To extend this limit, you can add nested workflows as needed. |
-| Allowed nesting depth for actions | 8 | To extend this limit, you can add nested workflows as needed. |
-| Workflows per region per subscription | 1,000 | |
-| Triggers per workflow | 10 | When working in code view, not the designer |
-| Switch scope cases limit | 25 | |
-| Variables per workflow | 250 | |
-| Name for `action` or `trigger` | 80 characters | |
-| Characters per expression | 8,192 | |
-| Length of `description` | 256 characters | |
-| Maximum number of `parameters` | 50 | |
-| Maximum number of `outputs` | 10 | |
-| Maximum size for `trackedProperties` | 16,000 characters |
-| Inline Code action - Maximum number of code characters | 1,024 characters | To extend this limit to 100,000 characters, create your logic apps with the **Logic App (Preview)** resource type, either [by using the Azure portal](create-stateful-stateless-workflows-azure-portal.md) or [by using Visual Studio Code and the **Azure Logic Apps (Preview)** extension](create-stateful-stateless-workflows-visual-studio-code.md). |
-| Inline Code action - Maximum duration for running code | 5 seconds | To extend this limit to a 15 seconds, create your logic apps with the **Logic App (Preview)** resource type, either [by using the Azure portal](create-stateful-stateless-workflows-azure-portal.md) or [by using Visual Studio Code and the **Azure Logic Apps (Preview)** extension](create-stateful-stateless-workflows-visual-studio-code.md). |
+| Workflows per region per subscription | 1,000 workflows | |
+| Triggers per workflow | 10 triggers | This limit applies only when you work on the JSON workflow definition, whether in code view or an Azure Resource Manager (ARM) template, not the designer. |
+| Actions per workflow | 500 actions | To extend this limit, you can use nested workflows as necessary. |
+| Actions nesting depth | 8 actions | To extend this limit, you can use nested workflows as necessary. |
+| Trigger or action - Maximum name length | 80 characters | |
+| Trigger or action - Maximum input or output size | 104,857,600 bytes <br>(105 MB) | |
+| Action - Maximum combined inputs and outputs size | 209,715,200 bytes <br>(210 MB) | |
+| Expression character limit | 8,192 characters | |
+| `description` - Maximum length | 256 characters | |
+| `parameters` - Maximum number of items | 50 parameters | |
+| `outputs` - Maximum number of items | 10 outputs | |
+| `trackedProperties` - Maximum size | 16,000 characters | |
|||| <a name="run-duration-retention-limits"></a> ## Run duration and retention history limits
-Here are the limits for a single logic app run:
+The following table lists the values for a single workflow run:
-| Name | Multi-tenant limit | Integration service environment limit | Notes |
-||--||-|
-| Run duration | 90 days | 366 days | Run duration is calculated by using a run's start time and the limit that's specified in the workflow setting, [**Run history retention in days**](#change-duration) at that start time. <p><p>To change the default limit, see [Change run duration and history retention in storage](#change-duration). |
-| Run history retention in storage | 90 days | 366 days | If a run's duration exceeds the current run history retention limit, the run is removed from the runs history in storage. Whether the run completes or times out, run history retention is always calculated by using the run's start time and the current limit specified in the workflow setting, [**Run history retention in days**](#change-retention). No matter the previous limit, the current limit is always used for calculating retention. <p><p>To change the default limit and for more information, see [Change duration and run history retention in storage](#change-retention). To increase the maximum limit, [contact the Logic Apps team](mailto://logicappspm@microsoft.com) for help with your requirements. |
-| Minimum recurrence interval | 1 second | 1 second ||
-| Maximum recurrence interval | 500 days | 500 days ||
-|||||
+| Name | Multi-tenant | Single-tenant (preview) | Integration service environment | Notes |
+||--|-||-|
+| Run history retention in storage | 90 days | 90 days | 366 days | The amount of time to keep workflow run history in storage after a run starts. If the run's duration exceeds the current run history retention limit, the run is removed from the runs history in storage. <p>Whether the run completes or times out, run history retention is always calculated by using the run's start time and the current limit specified in the workflow setting, [**Run history retention in days**](#change-retention). No matter the previous limit, the current limit is always used for calculating retention. <p><p>For more information, review [Change duration and run history retention in storage](#change-retention). <p><p>**Tip**: For scenarios that require different limits, [contact the Logic Apps team](mailto://logicappspm@microsoft.com) to discuss your requirements. |
+| Run duration | 90 days | - Stateful workflow: 90 days <p><p>- Stateless workflow: 5 min | 366 days | The amount of time that a workflow can continue running before forcing a timeout. <p>The run duration is calculated by using a run's start time and the limit that's specified in the workflow setting, [**Run history retention in days**](#change-duration) at that start time. <p>**Important**: Make sure the run duration value is always less than or equal to the run history retention in storage value. Otherwise, run histories might be deleted before the associated jobs are complete. <p><p>For more information, review [Change run duration and history retention in storage](#change-duration). <p><p>**Tip**: For scenarios that require different limits, [contact the Logic Apps team](mailto://logicappspm@microsoft.com) to discuss your requirements. |
+| Recurrence interval | - Min: 1 sec <p><p>- Max: 500 days | - Min: 1 sec <p><p>- Max: 500 days | - Min: 1 sec <p><p>- Max: 500 days ||
+||||||
<a name="change-duration"></a> <a name="change-retention"></a> ### Change run duration and history retention in storage
-The same setting controls the maximum number of days that a workflow can run and for keeping run history in storage. To change the default or current limit for these properties, follow these steps.
+In the designer, the same setting controls both the maximum number of days that a workflow can run and the number of days that run history is kept in storage.
-* For logic apps in multi-tenant Azure, the 90-day default limit is the same as the maximum limit. You can only decrease this value.
+* For the multi-tenant service, the 90-day default limit is the same as the maximum limit. You can only decrease this value.
-* For logic apps in an integration service environment, you can decrease or increase the 90-day default limit.
+* For the single-tenant service (preview), you can decrease or increase the 90-day default limit. For more information, see [Create workflows for single-tenant Azure Logic Apps using Visual Studio Code](create-stateful-stateless-workflows-visual-studio-code.md).
+
+* For an integration service environment, you can decrease or increase the 90-day default limit.
+
+> [!TIP]
+> For scenarios that require different limits, [contact the Logic Apps team](mailto://logicappspm@microsoft.com) to discuss your requirements.
For example, suppose that you reduce the retention limit from 90 days to 30 days. A 60-day-old run is removed from the runs history. If you increase the retention period from 30 days to 60 days, a 20-day-old run stays in the runs history for another 40 days. > [!IMPORTANT]
-> If a run's duration exceeds the current run history retention limit, the run is removed from the runs history in storage.
+> If the run's duration exceeds the current run history retention limit, the run is removed from the runs history in storage.
> To avoid losing run history, make sure that the retention limit is *always* more than the run's longest possible duration.
+To change the default value or current limit for these properties, follow these steps:
+
+#### [Portal (multi-tenant service)](#tab/azure-portal)
+ 1. In the [Azure portal](https://portal.azure.com) search box, find and select **Logic apps**.
-1. Find and select your logic app. Open your logic app in the Logic App Designer.
+1. Find and open your logic app in the Logic App Designer.
1. On the logic app's menu, select **Workflow settings**.
For example, suppose that you reduce the retention limit from 90 days to 30 days
1. When you're done, on the **Workflow settings** toolbar, select **Save**.
-If you generate an Azure Resource Manager template for your logic app, this setting appears as a property in your workflow's resource definition, which is described in the [Microsoft.Logic workflows template reference](/azure/templates/microsoft.logic/workflows):
+#### [Resource Manager template](#tab/azure-resource-manager)
+
+If you use an Azure Resource Manager template, this setting appears as a property in your workflow's resource definition, which is described in the [Microsoft.Logic workflows template reference](/azure/templates/microsoft.logic/workflows):
```json {
If you generate an Azure Resource Manager template for your logic app, this sett
} } ```+
+<a name="concurrency-looping-and-debatching-limits"></a>
<a name="looping-debatching-limits"></a>
-## Concurrency, looping, and debatching limits
+## Looping, concurrency, and debatching limits
-Here are the limits for a single logic app run:
+The following table lists the values for a single workflow run:
-### Loops
+### Loop actions
-| Name | Limit | Notes |
-| - | -- | -- |
-| Foreach array items | 100,000 | This limit describes the maximum number of array items that a "for each" loop can process. <p><p>To filter larger arrays, you can use the [query action](logic-apps-perform-data-operations.md#filter-array-action). |
-| Foreach concurrency | With concurrency off: 20 <p><p>With concurrency on: <p><p>- Default: 20 <br>- Min: 1 <br>- Max: 50 | This limit is maximum number of "for each" loop iterations that can run at the same time, or in parallel. <p><p>To change this limit, see [Change "for each" concurrency limit](../logic-apps/logic-apps-workflow-actions-triggers.md#change-for-each-concurrency) or [Run "for each" loops sequentially](../logic-apps/logic-apps-workflow-actions-triggers.md#sequential-for-each). |
-| Until iterations | - Default: 60 <br>- Min: 1 <br>- Max: 5,000 | The maximum number of cycles that an "Until" loop can have during a logic app run. <p><p>To change this limit, in the "Until" loop shape, select **Change limits**, and specify the value for the **Count** property. |
-| Until timeout | - Default: PT1H (1 hour) | The most amount of time that the "Until" loop can run before exiting and is specified in [ISO 8601 format](https://en.wikipedia.org/wiki/ISO_8601). The timeout value is evaluated for each loop cycle. If any action in the loop takes longer than the timeout limit, the current cycle doesn't stop. However, the next cycle doesn't start because the limit condition isn't met. <p><p>To change this limit, in the "Until" loop shape, select **Change limits**, and specify the value for the **Timeout** property. |
-||||
+#### For each loop
+
+The following table lists the values for a **For each** loop:
+
+| Name | Multi-tenant | Single-tenant (preview) | Integration service environment | Notes |
+||--|-||-|
+| Array items | 100,000 items | - Stateful workflow: 100,000 items <p><p>- Stateless workflow: 100 items | 100,000 items | The number of array items that a **For each** loop can process. <p><p>To filter larger arrays, you can use the [query action](logic-apps-perform-data-operations.md#filter-array-action). |
+| Concurrent iterations | Concurrency off: 20 <p><p>Concurrency on: <p>- Default: 20 <br>- Min: 1 <br>- Max: 50 | Concurrency off: 20 <p><p>Concurrency on: <p><p>- Default: 20 <br>- Min: 1 <br>- Max: 50 | Concurrency off: 20 <p><p>Concurrency on: <p>- Default: 20 <br>- Min: 1 <br>- Max: 50 | The number of **For each** loop iterations that can run at the same time, or in parallel. <p><p>To change this value in the multi-tenant service, see [Change **For each** concurrency limit](../logic-apps/logic-apps-workflow-actions-triggers.md#change-for-each-concurrency) or [Run **For each** loops sequentially](../logic-apps/logic-apps-workflow-actions-triggers.md#sequential-for-each). |
+||||||
+
+#### Until loop
+
+The following table lists the values for an **Until** loop:
+
+| Name | Multi-tenant | Single-tenant (preview) | Integration service environment | Notes |
+||--|-||-|
+| Iterations | - Default: 60 <br>- Min: 1 <br>- Max: 5,000 | Stateful workflow: <p><p>- Default: 60 <br>- Min: 1 <br>- Max: 5,000 <p><p>Stateless workflow: <p><p>- Default: 60 <br>- Min: 1 <br>- Max: 100 | - Default: 60 <br>- Min: 1 <br>- Max: 5,000 | The number of cycles that an **Until** loop can have during a workflow run. <p><p>To change this value, in the **Until** loop shape, select **Change limits**, and specify the value for the **Count** property. |
+| Timeout | Default: PT1H (1 hour) | Stateful workflow: PT1H (1 hour) <p><p>Stateless workflow: PT5M (5 min) | Default: PT1H (1 hour) | The amount of time that the **Until** loop can run before exiting, specified in [ISO 8601 format](https://en.wikipedia.org/wiki/ISO_8601). The timeout value is evaluated for each loop cycle. If any action in the loop takes longer than the timeout limit, the current cycle doesn't stop. However, the next cycle doesn't start because the limit condition isn't met. <p><p>To change this value, in the **Until** loop shape, select **Change limits**, and specify the value for the **Timeout** property. |
+||||||
<a name="concurrency-debatching"></a> ### Concurrency and debatching
-| Name | Limit | Notes |
-| - | -- | -- |
-| Trigger concurrency | With concurrency off: Unlimited <p><p>With concurrency on, which you can't undo after enabling: <p><p>- Default: 25 <br>- Min: 1 <br>- Max: 100 | This limit is the maximum number of logic app instances that can run at the same time, or in parallel. <p><p>**Note**: When concurrency is turned on, the SplitOn limit is reduced to 100 items for [debatching arrays](../logic-apps/logic-apps-workflow-actions-triggers.md#split-on-debatch). <p><p>To change this limit, see [Change trigger concurrency limit](../logic-apps/logic-apps-workflow-actions-triggers.md#change-trigger-concurrency) or [Trigger instances sequentially](../logic-apps/logic-apps-workflow-actions-triggers.md#sequential-trigger). |
-| Maximum waiting runs | With concurrency off: <p><p>- Min: 1 <br>- Max: 50 <p><p>With concurrency on: <p><p>- Min: 10 plus the number of concurrent runs (trigger concurrency) <br>- Max: 100 | This limit is the maximum number of logic app instances that can wait to run when your logic app is already running the maximum concurrent instances. <p><p>To change this limit, see [Change waiting runs limit](../logic-apps/logic-apps-workflow-actions-triggers.md#change-waiting-runs). |
-| SplitOn items | With concurrency off: 100,000 <p><p>With concurrency on: 100 | For triggers that return an array, you can specify an expression that uses a 'SplitOn' property that [splits or debatches array items into multiple workflow instances](../logic-apps/logic-apps-workflow-actions-triggers.md#split-on-debatch) for processing, rather than use a "Foreach" loop. This expression references the array to use for creating and running a workflow instance for each array item. <p><p>**Note**: When concurrency is turned on, the SplitOn limit is reduced to 100 items. |
-||||
+| Name | Multi-tenant | Single-tenant (preview) | Integration service environment | Notes |
+||--|-||-|
+| Trigger - concurrent runs | Concurrency off: Unlimited <p><p>Concurrency on (irreversible): <p><p>- Default: 25 <br>- Min: 1 <br>- Max: 100 | Concurrency off: Unlimited <p><p>Concurrency on (irreversible): <p><p>- Default: 25 <br>- Min: 1 <br>- Max: 100 | Concurrency off: Unlimited <p><p>Concurrency on (irreversible): <p><p>- Default: 25 <br>- Min: 1 <br>- Max: 100 | The number of concurrent runs that a trigger can start at the same time, or in parallel. <p><p>**Note**: When concurrency is turned on, the **SplitOn** limit is reduced to 100 items for [debatching arrays](../logic-apps/logic-apps-workflow-actions-triggers.md#split-on-debatch). <p><p>To change this value in the multi-tenant service, see [Change trigger concurrency limit](../logic-apps/logic-apps-workflow-actions-triggers.md#change-trigger-concurrency) or [Trigger instances sequentially](../logic-apps/logic-apps-workflow-actions-triggers.md#sequential-trigger). |
+| Maximum waiting runs | Concurrency off: <p><p>- Min: 1 run <p>- Max: 50 runs <p><p>Concurrency on: <p><p>- Min: 10 runs plus the number of concurrent runs <p>- Max: 100 runs | Concurrency off: <p><p>- Min: 1 run <p>- Max: 50 runs <p><p>Concurrency on: <p><p>- Min: 10 runs plus the number of concurrent runs <p>- Max: 100 runs | Concurrency off: <p><p>- Min: 1 run <p>- Max: 50 runs <p><p>Concurrency on: <p><p>- Min: 10 runs plus the number of concurrent runs <p>- Max: 100 runs | The number of workflow instances that can wait to run when your current workflow instance is already running the maximum concurrent instances. <p><p>To change this value in the multi-tenant service, see [Change waiting runs limit](../logic-apps/logic-apps-workflow-actions-triggers.md#change-waiting-runs). |
+| **SplitOn** items | Concurrency off: 100,000 items <p><p>Concurrency on: 100 items | Concurrency off: 100,000 items <p><p>Concurrency on: 100 items | Concurrency off: 100,000 items <p><p>Concurrency on: 100 items | For triggers that return an array, you can specify an expression that uses a **SplitOn** property that [splits or debatches array items into multiple workflow instances](../logic-apps/logic-apps-workflow-actions-triggers.md#split-on-debatch) for processing, rather than use a **For each** loop. This expression references the array to use for creating and running a workflow instance for each array item. <p><p>**Note**: When concurrency is turned on, the **SplitOn** limit is reduced to 100 items. |
+||||||
<a name="throughput-limits"></a> ## Throughput limits
-Here are the limits for a single logic app definition:
+The following table lists the values for a single workflow definition:
-### Multi-tenant Logic Apps service
+### Multi-tenant & single-tenant (preview)
| Name | Limit | Notes | | - | -- | -- |
-| Action: Executions per 5-minute rolling interval | - 100,000 executions (default) <p><p>- 300,000 executions (maximum in high throughput mode) | To raise the default limit to the maximum limit for your logic app, see [Run in high throughput mode](#run-high-throughput-mode), which is in preview. Or, you can [distribute the workload across more than one logic app](../logic-apps/handle-throttling-problems-429-errors.md#logic-app-throttling) as necessary. |
-| Action: Concurrent outbound calls | ~2,500 | You can reduce the number of concurrent requests or reduce the duration as necessary. |
-| Runtime endpoint: Concurrent inbound calls | ~1,000 | You can reduce the number of concurrent requests or reduce the duration as necessary. |
-| Runtime endpoint: Read calls per 5 minutes | 60,000 | This limit applies to calls that get the raw inputs and outputs from a logic app's run history. You can distribute the workload across more than one app as necessary. |
-| Runtime endpoint: Invoke calls per 5 minutes | 45,000 | You can distribute workload across more than one app as necessary. |
-| Content throughput per 5 minutes | 600 MB | You can distribute workload across more than one app as necessary. |
+| Action - Executions per 5-minute rolling interval | - Default: 100,000 executions <p><p>- High throughput mode: 300,000 executions | To raise the default value to the maximum value for your workflow, see [Run in high throughput mode](#run-high-throughput-mode), which is in preview. Or, you can [distribute the workload across more than one workflow](handle-throttling-problems-429-errors.md#logic-app-throttling) as necessary. |
+| Action - Concurrent outbound calls | ~2,500 calls | You can reduce the number of concurrent requests or reduce the duration as necessary. |
+| Managed connector throttling | - Multi-tenant: Throttling limit varies based on connector <p><p>- Single-tenant: 50 requests per minute per connection | For multi-tenant, review [each managed connector's technical reference page](/connectors/connector-reference/connector-reference-logicapps-connectors). <p><p>For more information about handling connector throttling, review [Handle throttling problems ("429 - Too many requests" errors)](handle-throttling-problems-429-errors.md#connector-throttling). |
+| Runtime endpoint - Concurrent inbound calls | ~1,000 calls | You can reduce the number of concurrent requests or reduce the duration as necessary. |
+| Runtime endpoint - Read calls per 5 min | 60,000 read calls | This limit applies to calls that get the raw inputs and outputs from a workflow's run history. You can distribute the workload across more than one workflow as necessary. |
+| Runtime endpoint - Invoke calls per 5 min | 45,000 invoke calls | You can distribute workload across more than one workflow as necessary. |
+| Content throughput per 5 min | 600 MB | You can distribute workload across more than one workflow as necessary. |
|||| <a name="run-high-throughput-mode"></a>
-#### Run in high throughput mode
+### Run in high throughput mode
+
+For a single workflow definition, the number of actions that run every 5 minutes has a [default limit](../logic-apps/logic-apps-limits-and-config.md#throughput-limits). To raise the default value to the [maximum value](../logic-apps/logic-apps-limits-and-config.md#throughput-limits) for your workflow, which is three times the default value, you can enable high throughput mode, which is in preview. Or, you can [distribute the workload across more than one workflow](../logic-apps/handle-throttling-problems-429-errors.md#logic-app-throttling) as necessary.
-For a single logic app definition, the number of actions that execute every 5 minutes has a [default limit](../logic-apps/logic-apps-limits-and-config.md#throughput-limits). To raise the default limit to the [maximum limit](../logic-apps/logic-apps-limits-and-config.md#throughput-limits) for your logic app, which is three times the default limit, you can enable high throughput mode, which is in preview. Or, you can [distribute the workload across more than one logic app](../logic-apps/handle-throttling-problems-429-errors.md#logic-app-throttling) as necessary.
+#### [Portal (multi-tenant service)](#tab/azure-portal)
-1. In the Azure portal, on your logic app menu, under **Settings**, select **Workflow settings**.
+1. In the Azure portal, on your logic app's menu, under **Settings**, select **Workflow settings**.
1. Under **Runtime options** > **High throughput**, change the setting to **On**. ![Screenshot that shows logic app menu in Azure portal with "Workflow settings" and "High throughput" set to "On".](./media/logic-apps-limits-and-config/run-high-throughput-mode.png)
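As a scripted alternative to these portal steps, one possible approach is a generic resource update that sets the same property shown in the Resource Manager template tab that follows. This is an untested sketch with placeholder resource names:

```azurecli
# Set operationOptions to enable high throughput mode on the workflow (placeholder names).
az resource update \
  --resource-group myResourceGroup \
  --resource-type Microsoft.Logic/workflows \
  --name myLogicApp \
  --set properties.runtimeConfiguration.operationOptions=OptimizedForHighThroughput
```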
+#### [Resource Manager Template](#tab/azure-resource-manager)
+ To enable this setting in an ARM template for deploying your logic app, in the `properties` object for your logic app's resource definition, add the `runtimeConfiguration` object with the `operationOptions` property set to `OptimizedForHighThroughput`: ```json
To enable this setting in an ARM template for deploying your logic app, in the `
For more information about your logic app resource definition, see [Overview: Automate deployment for Azure Logic Apps by using Azure Resource Manager templates](../logic-apps/logic-apps-azure-resource-manager-templates-overview.md#logic-app-resource-definition). ++ ### Integration service environment (ISE) * [Developer ISE SKU](../logic-apps/connect-virtual-network-vnet-isolated-environment-overview.md#ise-level): Provides up to 500 executions per minute, but note these considerations:
For more information about your logic app resource definition, see [Overview: Au
| Name | Limit | Notes | ||-|-| | Base unit execution limit | System-throttled when infrastructure capacity reaches 80% | Provides ~4,000 action executions per minute, which is ~160 million action executions per month |
- | Scale unit execution limit | System-throttled when infrastructure capacity reaches 80% | Each scale unit can provide ~2,000 additional action executions per minute, which is ~80 million more action executions per month |
- | Maximum scale units that you can add | 10 | |
+ | Scale unit execution limit | System-throttled when infrastructure capacity reaches 80% | Each scale unit can provide ~2,000 more action executions per minute, which is ~80 million more action executions per month |
+ | Maximum scale units that you can add | 10 scale units | |
|||| <a name="gateway-limits"></a>
-## Gateway limits
+## Data gateway limits
+
+Azure Logic Apps supports write operations, including inserts and updates, through the on-premises data gateway. However, these operations have [limits on their payload size](/data-integration/gateway/service-gateway-onprem#considerations).
+
+<a name="variables-action-limits"></a>
-Azure Logic Apps supports write operations, including inserts and updates, through the gateway. However, these operations have [limits on their payload size](/data-integration/gateway/service-gateway-onprem#considerations).
+## Variables action limits
+
+The following table lists the values for a single workflow definition:
+
+| Name | Multi-tenant | Single-tenant (preview) | Integration service environment | Notes |
+||--|-||-|
+| Variables per workflow | 250 variables | 250 variables | 250 variables ||
+| Variable - Maximum content size | 104,857,600 characters | Stateful workflow: 104,857,600 characters <p><p>Stateless workflow: 1,024 characters | 104,857,600 characters ||
+| Variable (Array type) - Maximum number of array items | 100,000 items | 100,000 items | Premium SKU: 100,000 items <p><p>Developer SKU: 5,000 items ||
+||||||
<a name="http-limits"></a>
-## HTTP limits
+## HTTP request limits
-Here are the limits for a single inbound or outbound call:
+The following tables list the values for a single inbound or outbound call:
<a name="http-timeout-limits"></a>
-#### Timeout duration
+### Timeout duration
-Some connector operations make asynchronous calls or listen for webhook requests, so the timeout for these operations might be longer than these limits. For more information, see the technical details for the specific connector and also [Workflow triggers and actions](../logic-apps/logic-apps-workflow-actions-triggers.md#http-action).
+By default, the HTTP action and APIConnection actions follow the [standard asynchronous operation pattern](https://docs.microsoft.com/azure/architecture/patterns/async-request-reply), while the Response action follows the *synchronous operation pattern*. Some managed connector operations make asynchronous calls or listen for webhook requests, so the timeout for these operations might be longer than the following limits. For more information, review [each connector's technical reference page](/connectors/connector-reference/connector-reference-logicapps-connectors) and also the [Workflow triggers and actions](../logic-apps/logic-apps-workflow-actions-triggers.md#http-action) documentation.
-| Name | Logic Apps (multi-tenant) | Logic Apps (preview) | Integration service environment | Notes |
-|||-||-|
-| Outbound request | 120 seconds <br>(2 minutes) | 230 seconds <br>(3.9 minutes) | 240 seconds <br>(4 minutes) | Examples of outbound requests include calls made by the HTTP trigger or action. For more information about the preview version, see [Azure Logic Apps Preview](logic-apps-overview-preview.md). <p><p>**Tip**: For longer running operations, use an [asynchronous polling pattern](../logic-apps/logic-apps-create-api-app.md#async-pattern) or an [until loop](../logic-apps/logic-apps-workflow-actions-triggers.md#until-action). To work around timeout limits when you call another logic app that has a [callable endpoint](logic-apps-http-endpoint.md), you can use the built-in Azure Logic Apps action instead, which you can find in the connector picker under **Built-in**. |
-| Inbound request | 120 seconds <br>(2 minutes) | 230 seconds <br>(3.9 minutes) | 240 seconds <br>(4 minutes) | Examples of inbound requests include calls received by the Request trigger, HTTP Webhook trigger, and HTTP Webhook action. For more information about the preview version, see [Azure Logic Apps Preview](logic-apps-overview-preview.md). <p><p>**Note**: For the original caller to get the response, all steps in the response must finish within the limit unless you call another logic app as a nested workflow. For more information, see [Call, trigger, or nest logic apps](../logic-apps/logic-apps-http-endpoint.md). |
+| Name | Multi-tenant | Single-tenant (preview) | Integration service environment | Notes |
+||--|-||-|
+| Outbound request | 120 sec <br>(2 min) | 230 sec <br>(3.9 min) | 240 sec <br>(4 min) | Examples of outbound requests include calls made by the HTTP trigger or action. <p><p>**Tip**: For longer running operations, use an [asynchronous polling pattern](../logic-apps/logic-apps-create-api-app.md#async-pattern) or an ["Until" loop](../logic-apps/logic-apps-workflow-actions-triggers.md#until-action). To work around timeout limits when you call another workflow that has a [callable endpoint](logic-apps-http-endpoint.md), you can use the built-in Azure Logic Apps action instead, which you can find in the designer's operation picker under **Built-in**. |
+| Inbound request | 120 sec <br>(2 min) | 230 sec <br>(3.9 min) | 240 sec <br>(4 min) | Examples of inbound requests include calls received by the Request trigger, HTTP Webhook trigger, and HTTP Webhook action. <p><p>**Note**: For the original caller to get the response, all steps in the response must finish within the limit unless you call another nested workflow. For more information, see [Call, trigger, or nest logic apps](../logic-apps/logic-apps-http-endpoint.md). |
|||||| <a name="message-size-limits"></a>
-#### Message size
+### Messages
-| Name | Multi-tenant limit | Integration service environment limit | Notes |
-||--||-|
-| Message size | 100 MB | 200 MB | To work around this limit, see [Handle large messages with chunking](../logic-apps/logic-apps-handle-large-messages.md). However, some connectors and APIs might not support chunking or even the default limit. <p><p>- Connectors such as AS2, X12, and EDIFACT have their own [B2B message limits](#b2b-protocol-limits). <br>- ISE connectors use the ISE limit, not their non-ISE connector limits. |
-| Message size with chunking | 1 GB | 5 GB | This limit applies to actions that either natively support chunking or let you enable chunking in their runtime configuration. <p><p>If you're using an ISE, the Logic Apps engine supports this limit, but connectors have their own chunking limits up to the engine limit, for example, see the [Azure Blob Storage connector's API reference](/connectors/azureblob/). For more information about chunking, see [Handle large messages with chunking](../logic-apps/logic-apps-handle-large-messages.md). |
-|||||
+| Name | Chunking enabled | Multi-tenant | Single-tenant (preview) | Integration service environment | Notes |
+|||--|-||-|
+| Content download - Maximum number of requests | Yes | 1,000 requests | 1,000 requests | 1,000 requests ||
+| Message size | No | 100 MB | 100 MB | 200 MB | To work around this limit, see [Handle large messages with chunking](../logic-apps/logic-apps-handle-large-messages.md). However, some connectors and APIs don't support chunking or even the default limit. <p><p>- Connectors such as AS2, X12, and EDIFACT have their own [B2B message limits](#b2b-protocol-limits). <p>- ISE connectors use the ISE limit, not the non-ISE connector limits. |
+| Message size | Yes | 1 GB | 1,073,741,824 bytes <br>(1 GB) | 5 GB | This limit applies to actions that either natively support chunking or let you enable chunking in their runtime configuration. <p><p>If you're using an ISE, the Logic Apps engine supports this limit, but connectors have their own chunking limits up to the engine limit, for example, see the [Azure Blob Storage connector's API reference](/connectors/azureblob/). For more information about chunking, see [Handle large messages with chunking](../logic-apps/logic-apps-handle-large-messages.md). |
+| Content chunk size | Yes | Varies per connector | 52,428,800 bytes (52 MB) | Varies per connector | This limit applies to actions that either natively support chunking or let you enable chunking in their runtime configuration. |
+|||||||
-#### Character limits
+### Character limits
| Name | Limit | Notes |
||-|-|
Some connector operations make asynchronous calls or listen for webhook requests
<a name="retry-policy-limits"></a>
-#### Retry policy
+### Retry policy
| Name | Limit | Notes |
| - | -- | -- |
-| Retry attempts | 90 | The default is 4. To change the default, use the [retry policy parameter](../logic-apps/logic-apps-workflow-actions-triggers.md). |
-| Retry max delay | 1 day | To change the default, use the [retry policy parameter](../logic-apps/logic-apps-workflow-actions-triggers.md). |
-| Retry min delay | 5 seconds | To change the default, use the [retry policy parameter](../logic-apps/logic-apps-workflow-actions-triggers.md). |
+| Retry attempts | - Default: 4 attempts <br> - Max: 90 attempts | To change the default, use the [retry policy parameter](../logic-apps/logic-apps-workflow-actions-triggers.md). |
+| Retry max delay | - Default: 1 day | To change the default, use the [retry policy parameter](../logic-apps/logic-apps-workflow-actions-triggers.md). |
+| Retry min delay | - Default: 5 sec | To change the default, use the [retry policy parameter](../logic-apps/logic-apps-workflow-actions-triggers.md). |
||||

<a name="authentication-limits"></a>

### Authentication limits
-Here are the limits for a logic app that starts with a Request trigger and enables [Azure Active Directory Open Authentication](../active-directory/develop/index.yml) (Azure AD OAuth) for authorizing inbound calls to the Request trigger:
+The following table lists the values for a workflow that starts with a Request trigger and enables [Azure Active Directory Open Authentication](../active-directory/develop/index.yml) (Azure AD OAuth) for authorizing inbound calls to the Request trigger:
+
+| Name | Limit | Notes |
+| - | -- | -- |
+| Azure AD authorization policies | 5 policies | |
+| Claims per authorization policy | 10 claims | |
+| Claim value - Maximum number of characters | 150 characters | |
+||||
+
+<a name="switch-action-limits"></a>
+
+## Switch action limits
+
+The following table lists the values for a single workflow definition:
| Name | Limit | Notes |
| - | -- | -- |
-| Azure AD authorization policies | 5 | |
-| Claims per authorization policy | 10 | |
-| Claim value - Maximum number of characters | 150 |
+| Maximum number of cases per action | 25 ||
||||
+<a name="inline-code-action-limits"></a>
+
+## Inline Code action limits
+
+The following table lists the values for a single workflow definition:
+
+| Name | Multi-tenant | Single-tenant (preview) | Integration service environment | Notes |
+||--|-||-|
+| Maximum number of code characters | 1,024 characters | 100,000 characters | 1,024 characters | To use the higher limit, create a **Logic App (Preview)** resource, which runs in single-tenant (preview) Logic Apps, either [by using the Azure portal](create-stateful-stateless-workflows-azure-portal.md) or [by using Visual Studio Code and the **Azure Logic Apps (Preview)** extension](create-stateful-stateless-workflows-visual-studio-code.md). |
+| Maximum duration for running code | 5 sec | 15 sec | 5 sec | To use the higher limit, create a **Logic App (Preview)** resource, which runs in single-tenant (preview) Logic Apps, either [by using the Azure portal](create-stateful-stateless-workflows-azure-portal.md) or [by using Visual Studio Code and the **Azure Logic Apps (Preview)** extension](create-stateful-stateless-workflows-visual-studio-code.md). |
+||||||
<a name="custom-connector-limits"></a>

## Custom connector limits
-Here are the limits for custom connectors that you can create from web APIs.
+For multi-tenant and integration service environment only, you can create and use [custom managed connectors](/connectors/custom-connectors), which are wrappers around an existing REST API or SOAP API. For single-tenant (preview) only, you can create and use [custom built-in connectors](https://techcommunity.microsoft.com/t5/integrations-on-azure/azure-logic-apps-running-anywhere-built-in-connector/ba-p/1921272).
-| Name | Multi-tenant limit | Integration service environment limit | Notes |
-||--||-|
-| Number of custom connectors | 1,000 per Azure subscription | 1,000 per Azure subscription ||
-| Number of requests per minute for a custom connector | 500 requests per minute per connection | 2,000 requests per minute per *custom connector* ||
-|||
+The following table lists the values for custom connectors:
+
+| Name | Multi-tenant | Single-tenant (preview) | Integration service environment | Notes |
+||--|-||-|
+| Custom connectors | 1,000 per Azure subscription | Unlimited | 1,000 per Azure subscription ||
+| Requests per minute for a custom connector | 500 requests per minute per connection | Based on your implementation | 2,000 requests per minute per *custom connector* ||
+| Connection timeout | 2 min | Idle connection: <br>4 min <p><p>Active connection: <br>10 min | 2 min ||
+||||||
+
+For more information, review the following documentation:
+
+* [Custom managed connectors overview](/connectors/custom-connectors)
+* [Enable built-in connector authoring - Visual Studio Code with Azure Logic Apps (Preview)](create-stateful-stateless-workflows-visual-studio-code.md#enable-built-in-connector-authoring)
<a name="managed-identity"></a>
-## Managed identities
+## Managed identity limits
| Name | Limit |
||-|
Here are the limits for custom connectors that you can create from web APIs.
| Number of logic apps that have a managed identity in an Azure subscription per region | 1,000 |
|||
+> [!NOTE]
+> By default, a Logic App (Preview) resource has its system-assigned managed identity automatically enabled to
+> authenticate connections at runtime. This identity differs from the authentication credentials or connection
+> string that you use when you create a connection. If you disable this identity, connections won't work at
+> runtime. To view this setting, on your logic app's menu, under **Settings**, select **Identity**.
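
If you prefer the CLI, one way to inspect this setting is to query the resource's identity block. This is a sketch rather than an official procedure; it assumes you have the full resource ID of your Logic App (Preview) resource:

```azurecli-interactive
# Show the system-assigned managed identity on the logic app resource
az resource show --ids <logic-app-resource-id> --query identity
```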
+
<a name="integration-account-limits"></a>

## Integration account limits
Each Azure subscription has these integration account limits:
* 1,000 total integration accounts, including integration accounts in any [integration service environments (ISE)](../logic-apps/connect-virtual-network-vnet-isolated-environment-overview.md) across both [Developer and Premium SKUs](../logic-apps/connect-virtual-network-vnet-isolated-environment-overview.md#ise-level).
-* Each ISE, whether [Developer or Premium](../logic-apps/connect-virtual-network-vnet-isolated-environment-overview.md#ise-level), can use a single integration account at no additional cost, although the included account type varies by ISE SKU. You can create more integration accounts for your ISE up to the total limit for an [additional cost](logic-apps-pricing.md#fixed-pricing):
+* Each ISE, whether [Developer or Premium](../logic-apps/connect-virtual-network-vnet-isolated-environment-overview.md#ise-level), can use a single integration account at no extra cost, although the included account type varies by ISE SKU. You can create more integration accounts for your ISE up to the total limit for an [extra cost](logic-apps-pricing.md#fixed-pricing).
| ISE SKU | Integration account limits |
||-|
- | **Premium** | 20 total accounts, including one Standard account at no additional cost. With this SKU, you can have only [Standard](../logic-apps/logic-apps-pricing.md#integration-accounts) accounts. No Free or Basic accounts are permitted. |
- | **Developer** | 20 total accounts, including one [Free](../logic-apps/logic-apps-pricing.md#integration-accounts) account (limited to 1). With this SKU, you can have either combination: <p>- A Free account and up to 19 [Standard](../logic-apps/logic-apps-pricing.md#integration-accounts) accounts. <br>- No Free account and up to 20 Standard accounts. <p>No Basic or additional Free accounts are permitted. <p><p>**Important**: Use the [Developer SKU](../logic-apps/connect-virtual-network-vnet-isolated-environment-overview.md#ise-level) for experimenting, development, and testing, but not for production or performance testing. |
+ | **Premium** | 20 total accounts, including one Standard account at no extra cost. With this SKU, you can have only [Standard](../logic-apps/logic-apps-pricing.md#integration-accounts) accounts. No Free or Basic accounts are permitted. |
+ | **Developer** | 20 total accounts, including one [Free](../logic-apps/logic-apps-pricing.md#integration-accounts) account (limited to 1). With this SKU, you can have either combination: <p>- A Free account and up to 19 [Standard](../logic-apps/logic-apps-pricing.md#integration-accounts) accounts. <br>- No Free account and up to 20 Standard accounts. <p>No Basic or more Free accounts are permitted. <p><p>**Important**: Use the [Developer SKU](../logic-apps/connect-virtual-network-vnet-isolated-environment-overview.md#ise-level) for experimenting, development, and testing, but not for production or performance testing. |
|||

To learn how pricing and billing work for ISEs, see the [Logic Apps pricing model](../logic-apps/logic-apps-pricing.md#fixed-pricing). For pricing rates, see [Logic Apps pricing](https://azure.microsoft.com/pricing/details/logic-apps/).
To learn how pricing and billing work for ISEs, see the [Logic Apps pricing mode
### Artifact limits per integration account
-Here are the limits on the number of artifacts for each integration account tier. For pricing rates, see [Logic Apps pricing](https://azure.microsoft.com/pricing/details/logic-apps/). To learn how pricing and billing work for integration accounts, see the [Logic Apps pricing model](../logic-apps/logic-apps-pricing.md#integration-accounts).
+The following tables list the values for the number of artifacts limited to each integration account tier. For pricing rates, see [Logic Apps pricing](https://azure.microsoft.com/pricing/details/logic-apps/). To learn how pricing and billing work for integration accounts, see the [Logic Apps pricing model](../logic-apps/logic-apps-pricing.md#integration-accounts).
> [!NOTE]
-> Use the Free tier only for exploratory scenarios,
-> not production scenarios. This tier restricts
-> throughput and usage, and has no service-level agreement (SLA).
+> Use the Free tier only for exploratory scenarios, not production scenarios.
+> This tier restricts throughput and usage, and has no service-level agreement (SLA).
| Artifact | Free | Basic | Standard |
|-||-|-|
Here are the limits on the number of artifacts for each integration account tier
| Runtime endpoint | Free | Basic | Standard | Notes |
|||-|-|-|
-| Read calls per 5 minutes | 3,000 | 30,000 | 60,000 | This limit applies to calls that get the raw inputs and outputs from a logic app's run history. You can distribute the workload across more than one account as necessary. |
-| Invoke calls per 5 minutes | 3,000 | 30,000 | 45,000 | You can distribute the workload across more than one account as necessary. |
-| Tracking calls per 5 minutes | 3,000 | 30,000 | 45,000 | You can distribute the workload across more than one account as necessary. |
+| Read calls per 5 min | 3,000 | 30,000 | 60,000 | This limit applies to calls that get the raw inputs and outputs from a logic app's run history. You can distribute the workload across more than one account as necessary. |
+| Invoke calls per 5 min | 3,000 | 30,000 | 45,000 | You can distribute the workload across more than one account as necessary. |
+| Tracking calls per 5 min | 3,000 | 30,000 | 45,000 | You can distribute the workload across more than one account as necessary. |
| Blocking concurrent calls | ~1,000 | ~1,000 | ~1,000 | Same for all SKUs. You can reduce the number of concurrent requests or reduce the duration as necessary. |
||||
Here are the limits on the number of artifacts for each integration account tier
### B2B protocol (AS2, X12, EDIFACT) message size
-Here are the message size limits that apply to B2B protocols:
+The following table lists the message size limits that apply to B2B protocols:
-| Name | Multi-tenant limit | Integration service environment limit | Notes |
-||--||-|
-| AS2 | v2 - 100 MB<br>v1 - 25 MB | v2 - 200 MB <br>v1 - 25 MB | Applies to decode and encode |
-| X12 | 50 MB | 50 MB | Applies to decode and encode |
-| EDIFACT | 50 MB | 50 MB | Applies to decode and encode |
+| Name | Multi-tenant | Single-tenant | Integration service environment | Notes |
+||--|||-|
+| AS2 | v2 - 100 MB<br>v1 - 25 MB | Unavailable | v2 - 200 MB <br>v1 - 25 MB | Applies to decode and encode |
+| X12 | 50 MB | Unavailable | 50 MB | Applies to decode and encode |
+| EDIFACT | 50 MB | Unavailable | 50 MB | Applies to decode and encode |
||||
-<a name="disable-delete"></a>
-
-## Disabling or deleting logic apps
-
-When you disable a logic app, no new runs are instantiated. All in-progress and pending runs continue until they finish, which might take time to complete.
-
-When you delete a logic app, no new runs are instantiated. All in-progress and pending runs are canceled. If you have thousands of runs, cancellation might take significant time to complete.
<a name="configuration"></a>
<a name="firewall-ip-configuration"></a>

## Firewall configuration: IP addresses and service tags
-When your logic app needs to communicate through a firewall that limits traffic to specific IP addresses, that firewall needs to allow access for *both* the [inbound](#inbound) and [outbound](#outbound) IP addresses used by the Logic Apps service or runtime in the Azure region where your logic app exists. *All* logic apps in the same region use the same IP address ranges.
+When your workflow needs to communicate through a firewall that limits traffic to specific IP addresses, that firewall needs to allow access for *both* the [inbound](#inbound) and [outbound](#outbound) IP addresses used by the Logic Apps service or runtime in the Azure region where your logic app resource exists. *All* logic apps in the same region use the same IP address ranges.
For example, to support calls that logic apps in the West US region send or receive through built-in triggers and actions, such as the [HTTP trigger or action](../connectors/connectors-native-http.md), your firewall needs to allow access for *all* the Logic Apps service inbound IP addresses *and* outbound IP addresses that exist in the West US region.
-If your logic app also uses [managed connectors](../connectors/managed.md), such as the Office 365 Outlook connector or SQL connector, or uses [custom connectors](/connectors/custom-connectors/), the firewall also needs to allow access for *all* the [managed connector outbound IP addresses](#outbound) in your logic app's Azure region. Plus, if you use custom connectors that access on-premises resources through the [on-premises data gateway resource in Azure](logic-apps-gateway-connection.md), you need to set up the gateway installation to allow access for the corresponding *managed connectors [outbound IP addresses](#outbound)*.
+If your workflow also uses [managed connectors](../connectors/managed.md), such as the Office 365 Outlook connector or SQL connector, or uses [custom connectors](/connectors/custom-connectors/), the firewall also needs to allow access for *all* the [managed connector outbound IP addresses](#outbound) in your logic app's Azure region. Plus, if you use custom connectors that access on-premises resources through the [on-premises data gateway resource in Azure](logic-apps-gateway-connection.md), you need to set up the gateway installation to allow access for the corresponding *managed connectors [outbound IP addresses](#outbound)*.
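
Rather than maintaining these IP address lists by hand, you can retrieve the current prefixes behind the **LogicApps** service tag from the service tag discovery API. The following Azure CLI sketch assumes a recent CLI version and uses the West US region from the earlier example:

```azurecli-interactive
# List the address prefixes for service tags whose name contains 'LogicApps'
az network list-service-tags --location westus \
    --query "values[?contains(name, 'LogicApps')].{tag:name, prefixes:properties.addressPrefixes}"
```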
For more information about setting up communication settings on the gateway, see these topics:
machine-learning Concept Compute Instance https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/concept-compute-instance.md
A compute instance is a fully managed cloud-based workstation optimized for your
|Preconfigured&nbsp;for&nbsp;ML|Save time on setup tasks with pre-configured and up-to-date ML packages, deep learning frameworks, GPU drivers.|
|Fully customizable|Broad support for Azure VM types including GPUs and persisted low-level customization such as installing packages and drivers makes advanced scenarios a breeze. |
-You can [create a compute instance](how-to-create-manage-compute-instance.md?tabs=python#create) yourself, or an administrator can [create a compute instance for you](how-to-create-manage-compute-instance.md?tabs=python#on-behalf).
+You can [create a compute instance](how-to-create-manage-compute-instance.md?tabs=python#create) yourself, or an administrator can **[create a compute instance on your behalf](how-to-create-manage-compute-instance.md?tabs=python#on-behalf)**.
+
+You can also **[use a setup script (preview)](how-to-create-manage-compute-instance.md#setup-script)** for an automated way to customize and configure the compute instance as per your needs.
## <a name="contents"></a>Tools and environments
For more about managing the compute instance, see [Create and manage an Azure Ma
### <a name="create"></a>Create a compute instance
-As an administrator, you can [create a compute instance for others in the workspace (preview)](how-to-create-manage-compute-instance.md#on-behalf). You can also [use a setup script (preview)](how-to-create-manage-compute-instance.md#setup-script) for an automated way to customize and configure the compute instance.
+As an administrator, you can **[create a compute instance for others in the workspace (preview)](how-to-create-manage-compute-instance.md#on-behalf)**.
+
+You can also **[use a setup script (preview)](how-to-create-manage-compute-instance.md#setup-script)** for an automated way to customize and configure the compute instance.
To create a compute instance for yourself, use your workspace in Azure Machine Learning studio to [create a new compute instance](how-to-create-attach-compute-studio.md#compute-instance) from either the **Compute** section or the **Notebooks** section when you are ready to run one of your notebooks.
A compute instance:
You can use a compute instance as a local inferencing deployment target for test/debug scenarios.

> [!TIP]
-> The compute instance has 120GB OS disk. If you run out of disk space and get into an unusable state, please clear at least 5 GB disk space on OS disk (/dev/sdb1/ filesystem mounted on /) through the compute instance terminal by removing files/folders and then do sudo reboot. To access the terminal go to compute list page or compute instance details page and click on Terminal link. Clear at least 5 GB before you [stop or restart](how-to-create-manage-compute-instance.md#manage) the compute instance. You can check available disk space by running df -h on the terminal.
+> The compute instance has a 120 GB OS disk. If you run out of disk space and get into an unusable state, clear at least 5 GB of disk space on the OS disk (*/dev/sdb1/* filesystem mounted on /) through the compute instance terminal by removing files or folders, and then run `sudo reboot`. To access the terminal, go to the compute list page or the compute instance details page and select the **Terminal** link. Clear at least 5 GB before you [stop or restart](how-to-create-manage-compute-instance.md#manage) the compute instance. You can check available disk space by running `df -h` in the terminal.
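
As a sketch, the cleanup might look like the following in the compute instance terminal; the paths being removed are hypothetical and depend on what you can safely delete:

```shell
# Check free space on the OS disk (mounted on /)
df -h /

# Free at least 5 GB by removing files or folders you no longer need (hypothetical paths)
rm -rf ~/old-experiments ~/.cache/pip

# Reboot so the instance returns to a usable state
sudo reboot
```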
## Next steps
machine-learning How To Create Manage Compute Instance https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-create-manage-compute-instance.md
Use a setup script for an automated way to customize and configure the compute i
Some examples of what you can do in a setup script:
-* Install packages and tools
+* Install packages, tools, and software
* Mount data
* Create custom conda environment and Jupyter kernels
-* Clone git repositories
+* Clone git repositories and set git config
+* Set network proxies
+* Set environment variables
+* Install JupyterLab extensions
### Create the setup script
-The setup script is a shell script which runs as *azureuser*. Create or upload the script into your **Notebooks** files:
+The setup script is a shell script that runs as *rootuser*. Create or upload the script into your **Notebooks** files:
1. Sign into the [studio](https://ml.azure.com) and select your workspace.
-1. On the left, select **Notebooks**
-1. Use the **Add files** tool to create or upload your setup shell script. Make sure the script filename ends in ".sh". When you create a new file, also change the **File type** to *bash(.sh)*.
+2. On the left, select **Notebooks**
+3. Use the **Add files** tool to create or upload your setup shell script. Make sure the script filename ends in ".sh". When you create a new file, also change the **File type** to *bash(.sh)*.
:::image type="content" source="media/how-to-create-manage-compute-instance/create-or-upload-file.png" alt-text="Create or upload your setup script to Notebooks file in studio":::
-When the script runs, the current working directory is the directory where it was uploaded. If you upload the script to **Users>admin**, the location of the the file is */mnt/batch/tasks/shared/LS_root/mounts/clusters/**ciname**/code/Users/admin* when provisioning the compute instance named **ciname**.
+When the script runs, its current working directory is the directory where it was uploaded. For example, if you upload the script to **Users>admin**, the script's location on the compute instance, and its current working directory when it runs, is */home/azureuser/cloudfiles/code/Users/admin*. This enables you to use relative paths in the script.
-Script arguments can be referred to in the script as $1, $2, etc. For example, if you execute `scriptname ciname` then in the script you can `cd /mnt/batch/tasks/shared/LS_root/mounts/clusters/$1/code/admin` to navigate to the directory where the script is stored.
+Script arguments can be referred to in the script as $1, $2, etc.
-You can also retrieve the path inside the script:
+If your script does something specific to *azureuser*, such as installing a conda environment or a Jupyter kernel, you must put it within a *sudo -u azureuser* block like this:
```shell
-#!/bin/bash
-SCRIPT=$(readlink -f "$0")
-SCRIPT_PATH=$(dirname "$SCRIPT")
+sudo -u azureuser -i <<'EOF'
+# Commands placed here run as azureuser, not as root
+EOF
```
+Note that *sudo -u azureuser* changes the current working directory to */home/azureuser*. You also can't access the script arguments in this block.
+
+You can also use the following environment variables in your script:
+
+* CI_RESOURCE_GROUP
+* CI_WORKSPACE
+* CI_NAME
+* CI_LOCAL_UBUNTU_USER. This points to *azureuser*.
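
As an illustration, the following is a minimal setup script sketch that combines these pieces; the package, environment, and kernel names are hypothetical:

```shell
#!/bin/bash
# Runs as rootuser when the compute instance is provisioned;
# the CI_* environment variables are provided by the service.
echo "Configuring $CI_NAME in workspace $CI_WORKSPACE (resource group $CI_RESOURCE_GROUP)"

# System-wide installs run as root (package choice is hypothetical)
apt-get update && apt-get install -y jq

# Anything specific to azureuser, such as conda environments or Jupyter
# kernels, must run inside a sudo -u azureuser block
sudo -u azureuser -i <<'EOF'
conda create -y -n myenv python=3.8 ipykernel
conda run -n myenv python -m ipykernel install --user --name myenv
EOF
```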
### Use the script in the studio
machine-learning How To Integrate Azure Policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-integrate-azure-policy.md
description: Learn how to use Azure Policy to use built-in policies for Azure Machine Learning to make sure your workspaces are compliant with your requirements. Previously updated : 03/25/2021 Last updated : 05/03/2021
machine-learning Explore Data Blob https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/team-data-science-process/explore-data-blob.md
To explore and manipulate a dataset, it must first be downloaded from the blob s
dataframe_blobdata = pd.read_csv(LOCALFILENAME)
```
-If you need more general information on reading from an Azure Storage Blob, look at our documentation [Azure Storage Blobs client library for Python](https://docs.microsoft.com/python/api/overview/azure/storage-blob-readme?view=azure-python).
+If you need more general information on reading from an Azure Storage Blob, look at our documentation [Azure Storage Blobs client library for Python](/python/api/overview/azure/storage-blob-readme).
Now you are ready to explore the data and generate features on this dataset.
managed-instance-apache-cassandra Create Multi Region Cluster https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/managed-instance-apache-cassandra/create-multi-region-cluster.md
+
+ Title: Quickstart - Configure a multi-region cluster with Azure Managed Instance for Apache Cassandra
+description: This quickstart shows how to configure a multi-region cluster with Azure Managed Instance for Apache Cassandra.
++++ Last updated : 05/05/2021+
+# Quickstart: Create a multi-region cluster with Azure Managed Instance for Apache Cassandra (Preview)
+
+Azure Managed Instance for Apache Cassandra provides automated deployment and scaling operations for managed open-source Apache Cassandra datacenters. This service helps you accelerate hybrid scenarios and reduce ongoing maintenance.
+
+> [!IMPORTANT]
+> Azure Managed Instance for Apache Cassandra is currently in public preview.
+> This preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
+> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+
+This quickstart demonstrates how to use the Azure CLI commands to configure a multi-region cluster in Azure.
++
+* This article requires the Azure CLI version 2.12.1 or higher. If you are using Azure Cloud Shell, the latest version is already installed.
+
+* [Azure Virtual Network](../virtual-network/virtual-networks-overview.md) with connectivity to your self-hosted or on-premises environment. For more information on connecting on-premises environments to Azure, see the [Connect an on-premises network to Azure](/azure/architecture/reference-architectures/hybrid-networking/) article.
+
+## <a id="create-account"></a>Setting up the network environment
+
+As all datacenters provisioned with this service must be deployed into dedicated subnets using VNet injection, you must configure the appropriate network peering before deployment to ensure that your multi-region cluster functions properly. We are going to create a cluster with two datacenters in separate regions: East US and East US 2. First, we need to create the Virtual Networks for each region.
+
+1. Sign in to the [Azure portal](https://portal.azure.com/).
+
+1. Create a resource group named "cassandra-mi-multi-region":
+
+ ```azurecli-interactive
+ az group create -l eastus2 -n cassandra-mi-multi-region
+ ```
+
+1. Create the first VNet in East US 2 with a dedicated subnet:
+
+ ```azurecli-interactive
+ az network vnet create -n vnetEastUs2 -l eastus2 -g cassandra-mi-multi-region --address-prefix 10.0.0.0/16 --subnet-name dedicated-subnet
+ ```
+
+1. Now create the second VNet in East US, also with a dedicated subnet:
+
+ ```azurecli-interactive
+ az network vnet create -n vnetEastUs -l eastus -g cassandra-mi-multi-region --address-prefix 192.168.0.0/16 --subnet-name dedicated-subnet
+ ```
+
+ > [!NOTE]
+    > We explicitly assign non-overlapping IP address ranges to the two VNets so that peering doesn't fail.
+
+1. Now we need to peer the first VNet to the second VNet:
+
+ ```azurecli-interactive
+ az network vnet peering create -g cassandra-mi-multi-region -n MyVnet1ToMyVnet2 --vnet-name vnetEastUs2 \
+ --remote-vnet vnetEastUs --allow-vnet-access --allow-forwarded-traffic
+ ```
+
+1. In order to connect the two VNets, create another peering between the second VNet and the first:
+
+ ```azurecli-interactive
+ az network vnet peering create -g cassandra-mi-multi-region -n MyVnet2ToMyVnet1 --vnet-name vnetEastUs \
+ --remote-vnet vnetEastUs2 --allow-vnet-access --allow-forwarded-traffic
+ ```
+
+ > [!NOTE]
+    > If you add more regions, each VNet requires a peering from it to all other VNets, and from all other VNets to it.
+
+1. Check the output of the previous command, and make sure the value of "peeringState" is now "Connected". You can also check this by running the following command:
+
+ ```azurecli-interactive
+ az network vnet peering show \
+ --name MyVnet1ToMyVnet2 \
+ --resource-group cassandra-mi-multi-region \
+ --vnet-name vnetEastUs2 \
+ --query peeringState
+ ```
+
+1. Next, apply some special permissions to both Virtual Networks, which are required by Azure Managed Instance for Apache Cassandra. Run the following and make sure to replace `<Subscription ID>` with your subscription ID:
+
+ ```azurecli-interactive
+ az role assignment create --assignee a232010e-820c-4083-83bb-3ace5fc29d0b --role 4d97b98b-1d4f-4787-a291-c67834d212e7 --scope /subscriptions/<Subscription ID>/resourceGroups/cassandra-mi-multi-region/providers/Microsoft.Network/virtualNetworks/vnetEastUs2
+ az role assignment create --assignee a232010e-820c-4083-83bb-3ace5fc29d0b --role 4d97b98b-1d4f-4787-a291-c67834d212e7 --scope /subscriptions/<Subscription ID>/resourceGroups/cassandra-mi-multi-region/providers/Microsoft.Network/virtualNetworks/vnetEastUs
+ ```
+ > [!NOTE]
+    > The `assignee` and `role` values in the previous command are fixed values; enter them exactly as shown, or cluster creation will fail. If you encounter any errors when executing this command, you may not have permissions to run it; reach out to your admin for permissions.
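
    To double-check that the assignments took effect, you can list the role assignments on each virtual network. This is a sketch; replace `<Subscription ID>` as before:

    ```azurecli-interactive
    # List role assignments scoped to the virtual network
    az role assignment list \
        --scope /subscriptions/<Subscription ID>/resourceGroups/cassandra-mi-multi-region/providers/Microsoft.Network/virtualNetworks/vnetEastUs2 \
        --output table
    ```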
+
+## <a id="create-cluster"></a>Create a multi-region cluster
+
+1. Now that we have the appropriate networking in place, we are ready to deploy the cluster resource (replace `<Subscription ID>` with your subscription ID). This can take 5-10 minutes:
+
+ ```azurecli-interactive
+ resourceGroupName='cassandra-mi-multi-region'
+ clusterName='test-multi-region'
+ location='eastus2'
+ delegatedManagementSubnetId='/subscriptions/<Subscription ID>/resourceGroups/cassandra-mi-multi-region/providers/Microsoft.Network/virtualNetworks/vnetEastUs2/subnets/dedicated-subnet'
+ initialCassandraAdminPassword='myPassword'
+
+ az managed-cassandra cluster create \
+ --cluster-name $clusterName \
+ --resource-group $resourceGroupName \
+ --location $location \
+ --delegated-management-subnet-id $delegatedManagementSubnetId \
+ --initial-cassandra-admin-password $initialCassandraAdminPassword \
+ --debug
+ ```
+
+1. When the cluster resource is created, you are ready to create a data center. First, create a datacenter in East US 2 (replace `<Subscription ID>` with your subscription ID). This can take up to 10 minutes:
+
+ ```azurecli-interactive
+ resourceGroupName='cassandra-mi-multi-region'
+ clusterName='test-multi-region'
+ dataCenterName='dc-eastus2'
+ dataCenterLocation='eastus2'
+ delegatedManagementSubnetId='/subscriptions/<Subscription ID>/resourceGroups/cassandra-mi-multi-region/providers/Microsoft.Network/virtualNetworks/vnetEastUs2/subnets/dedicated-subnet'
+
+ az managed-cassandra datacenter create \
+ --resource-group $resourceGroupName \
+ --cluster-name $clusterName \
+ --data-center-name $dataCenterName \
+ --data-center-location $dataCenterLocation \
+ --delegated-subnet-id $delegatedManagementSubnetId \
+ --node-count 3
+ ```
+
+1. Next, create a datacenter in East US (replace `<Subscription ID>` with your subscription ID):
+
+ ```azurecli-interactive
+ resourceGroupName='cassandra-mi-multi-region'
+ clusterName='test-multi-region'
+ dataCenterName='dc-eastus'
+ dataCenterLocation='eastus'
+ delegatedManagementSubnetId='/subscriptions/<Subscription ID>/resourceGroups/cassandra-mi-multi-region/providers/Microsoft.Network/virtualNetworks/vnetEastUs/subnets/dedicated-subnet'
+
+ az managed-cassandra datacenter create \
+ --resource-group $resourceGroupName \
+ --cluster-name $clusterName \
+ --data-center-name $dataCenterName \
+ --data-center-location $dataCenterLocation \
+ --delegated-subnet-id $delegatedManagementSubnetId \
+ --node-count 3
+ ```
+
+1. Once the second datacenter is created, get the node status to verify that all the Cassandra nodes came up successfully:
+
+ ```azurecli-interactive
+ resourceGroupName='cassandra-mi-multi-region'
+ clusterName='test-multi-region'
+
+ az managed-cassandra cluster node-status \
+ --cluster-name $clusterName \
+ --resource-group $resourceGroupName
+ ```
++
+1. Finally, [connect to your cluster](create-cluster-cli.md#connect-to-your-cluster) using CQLSH, and use the following CQL query to update the replication strategy in each keyspace to include all datacenters across the cluster:
+
+    ```sql
+ ALTER KEYSPACE "ks" WITH REPLICATION = {'class': 'NetworkTopologyStrategy', 'dc-eastus2': 3, 'dc-eastus': 3};
+ ```
+    You also need to update the replication for the `system_auth` keyspace, which stores the authentication data (the "password tables"):
+
+    ```sql
+    ALTER KEYSPACE "system_auth" WITH REPLICATION = {'class': 'NetworkTopologyStrategy', 'dc-eastus2': 3, 'dc-eastus': 3};
+ ```
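
    To confirm the change, you can query the schema tables from CQLSH. The following is a sketch run from a host that can reach the cluster; the host name and password are placeholders:

    ```bash
    # Verify the replication settings for all keyspaces
    cqlsh <host> 9042 -u cassandra -p <password> --ssl \
        -e "SELECT keyspace_name, replication FROM system_schema.keyspaces;"
    ```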
+
+## Troubleshooting
+
+If you encounter an error when applying permissions to your virtual network, such as *Cannot find user or service principal in a graph database for 'e5007d2c-4b13-4a74-9b6a-605d99f03501'*, you can apply the same permission manually from the Azure portal.
+
+To apply permissions from the Azure portal, go to the **Access control (IAM)** pane of your existing virtual network and add a role assignment for "Azure Cosmos DB" to the "Network Administrator" role. If two entries appear when you search for "Azure Cosmos DB", add both the entries as shown in the following image:
+
+ :::image type="content" source="./media/create-cluster-cli/apply-permissions.png" alt-text="Apply permissions" lightbox="./media/create-cluster-cli/apply-permissions.png" border="true":::
+
+> [!NOTE]
+> The Azure Cosmos DB role assignment is used for deployment purposes only. Azure Managed Instance for Apache Cassandra has no backend dependencies on Azure Cosmos DB.
+
+## Clean up resources
+
+If you're not going to continue to use this managed instance cluster, delete it with the following steps:
+
+1. From the left-hand menu of Azure portal, select **Resource groups**.
+1. From the list, select the resource group you created for this quickstart.
+1. On the resource group **Overview** pane, select **Delete resource group**.
+1. In the next window, enter the name of the resource group to delete, and then select **Delete**.
+
+## Next steps
+
+In this quickstart, you learned how to create a multi-region cluster using Azure CLI and Azure Managed Instance for Apache Cassandra. You can now start working with the cluster.
+
+> [!div class="nextstepaction"]
+> [Manage Azure Managed Instance for Apache Cassandra resources using Azure CLI](manage-resources-cli.md)
managed-instance-apache-cassandra Manage Resources Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/managed-instance-apache-cassandra/manage-resources-cli.md
The following sections demonstrate how to manage Azure Managed Instance for Apac
* [Create a datacenter](#create-datacenter)
* [Delete a datacenter](#delete-datacenter)
* [Get datacenter details](#get-datacenter-details)
-* [Update or scale a datacenter](#update-datacenter)
* [Get datacenters in a cluster](#get-datacenters-cluster)
+* [Update or scale a datacenter](#update-datacenter)
+* [Update Cassandra configuration](#update-yaml)
+
### <a id="create-datacenter"></a>Create a datacenter
resourceGroupName='MyResourceGroup'
clusterName='cassandra-hybrid-cluster'
dataCenterName='dc1'
dataCenterLocation='eastus'
-delegatedSubnetId= '/subscriptions/<Subscription_ID>/resourceGroups/customer-vnet-rg/providers/Microsoft.Network/virtualNetworks/customer-vnet/subnets/dc1-subnet'
az managed-cassandra datacenter update \
  --resource-group $resourceGroupName \
az managed-cassandra datacenter update \
  --node-count 13
```
+### <a id="update-yaml"></a>Update Cassandra configuration
+
+Change Cassandra configuration on a datacenter by using the [az managed-cassandra datacenter update](/cli/azure/managed-cassandra/datacenter?view=azure-cli-latest&preserve-view=true#az_managed_cassandra_datacenter_update) command. You will need to base64 encode the YAML fragment by using an [online tool](https://www.base64encode.org/). The following YAML settings are supported:
+
+- column_index_size_in_kb
+- compaction_throughput_mb_per_sec
+- read_request_timeout_in_ms
+- range_request_timeout_in_ms
+- aggregated_request_timeout_in_ms
+- write_request_timeout_in_ms
+- internode_compression
+- batchlog_replay_throttle_in_kb
+
+For example, the following YAML fragment:
+
+```yaml
+column_index_size_in_kb: 16
+read_request_timeout_in_ms: 10000
+```
+
+When encoded, the YAML is converted to:
+`Y29sdW1uX2luZGV4X3NpemVfaW5fa2I6IDE2CnJlYWRfcmVxdWVzdF90aW1lb3V0X2luX21zOiAxMDAwMA==`.
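+
If you'd rather not paste configuration into an online tool, you can produce the same encoding from a shell. This sketch assumes GNU coreutils (as in Azure Cloud Shell), where `-w 0` disables line wrapping:

```azurecli-interactive
# Base64-encode the YAML fragment without a trailing newline
printf 'column_index_size_in_kb: 16\nread_request_timeout_in_ms: 10000' | base64 -w 0
```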
+
+The following command applies the encoded fragment to the datacenter:
+
+```azurecli-interactive
+resourceGroupName='MyResourceGroup'
+clusterName='cassandra-hybrid-cluster'
+dataCenterName='dc1'
+dataCenterLocation='eastus'
+yamlFragment='Y29sdW1uX2luZGV4X3NpemVfaW5fa2I6IDE2CnJlYWRfcmVxdWVzdF90aW1lb3V0X2luX21zOiAxMDAwMA=='
+
+az managed-cassandra datacenter update \
+ --resource-group $resourceGroupName \
+ --cluster-name $clusterName \
+ --data-center-name $dataCenterName \
+ --base64-encoded-cassandra-yaml-fragment $yamlFragment
+```
+
### <a id="get-datacenters-cluster"></a>Get the datacenters in a cluster

Get datacenters in a cluster by using the [az managed-cassandra datacenter list](/cli/azure/managed-cassandra/datacenter?view=azure-cli-latest&preserve-view=true#az_managed_cassandra_datacenter_list) command:
marketplace Azure Vm Create Certification Faq https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/marketplace/azure-vm-create-certification-faq.md
Previously updated : 01/18/2021 Last updated : 04/13/2021 # Troubleshoot virtual machine certification
Contact [Partner Center support](https://aka.ms/marketplacepublishersupport) to
This section describes how to provide a new VM image when a vulnerability or exploit is discovered with one of your VM images. It only applies to Azure VM offers published to Azure Marketplace.

> [!NOTE]
-> You can't remove the last VM image from a plan or stop-sell the last plan for an offer.
+> You can't remove the last VM image from a plan or deprecate (formerly stop sell) the last plan for an offer.
Do one of the following actions:

- If you have a new VM image to replace the vulnerable VM image, see [Provide a fixed VM image](#provide-a-fixed-vm-image).
-- If you don't have a new VM image to replace the only VM image in a plan, or if you're done with the plan, [stop selling the plan](partner-center-portal/update-existing-offer.md#stop-selling-an-offer-or-plan).
-- If you don't plan to replace the only VM image in the offer, we recommend you [stop selling the offer](partner-center-portal/update-existing-offer.md#stop-selling-an-offer-or-plan).
+- If you don't have a new VM image to replace the only VM image in a plan, or if you're done with the plan, [deprecate (formerly stop sell) the plan](partner-center-portal/update-existing-offer.md#deprecate-an-offer-or-plan).
+- If you don't plan to replace the only VM image in the offer, we recommend you [deprecate (formerly stop sell) the offer](partner-center-portal/update-existing-offer.md#deprecate-an-offer-or-plan).
### Provide a fixed VM image
marketplace Cloud Partner Portal Migration Faq https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/marketplace/cloud-partner-portal-migration-faq.md
Previously updated : 07/14/2020 Last updated : 04/14/2021 # Frequently asked questions about transitioning from the Cloud Partner Portal to Partner Center
Your offer ID is now shown on the left-navigation bar of the offer:
![Shows the Partner Center Offer ID location](media/cpp-pc-faq/offer-id.png)
-### Stop selling an offer
+### Deprecate an offer
-You can request to [stop selling an offer](partner-center-portal/update-existing-offer.md#stop-selling-an-offer-or-plan) on the marketplace directly from the Partner Center portal. The option is available on the **Offer overview** page for your offer.
+> [!IMPORTANT]
+> Stop sell has changed to deprecate.
-[![Screenshot shows the Partner Center page to stop selling an offer.](media/cpp-pc-faq/stop-sell.png "Shows the Partner Center page to stop selling an offer")](media/cpp-pc-faq/stop-sell.png#lightbox)
+You can request to [deprecate (formerly stop sell) an offer](partner-center-portal/update-existing-offer.md#deprecate-an-offer-or-plan) on the marketplace directly from the Partner Center portal. The option is available on the **Offer overview** page for your offer.
+
+[![Screenshot shows the Partner Center page to deprecate (formerly stop sell) an offer.](media/cpp-pc-faq/stop-sell.png "Shows the Partner Center page to deprecate an offer")](media/cpp-pc-faq/stop-sell.png#lightbox)
<br><br>

## Are the Cloud Partner Portal REST APIs still supported?
marketplace Create Managed Service Offer Plans https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/marketplace/create-managed-service-offer-plans.md
Previously updated : 12/23/2020 Last updated : 04/15/2021 # How to create plans for your Managed Service offer
You can configure each plan to be visible to everyone (public) or to only a spec
> Private plans are not supported with subscriptions established through a reseller of the Cloud Solution Provider (CSP) program.

> [!IMPORTANT]
-> Once a plan has been published as public, you can't change it to private. To control which customers can accept your offer and delegate resources, use a private plan. With a public plan, you can't restrict availability to certain customers or even to a certain number of customers (although you can stop selling the plan completely if you choose to do so). You can remove access to a delegation after a customer accepts an offer only if you included an Authorization with the Role Definition set to Managed Services Registration Assignment Delete Role when you published the offer. You can also reach out to the customer and ask them to remove your access.
+> Once a plan has been published as public, you can't change it to private. To control which customers can accept your offer and delegate resources, use a private plan. With a public plan, you can't restrict availability to certain customers or even to a certain number of customers (although you can deprecate (formerly stop sell) the plan completely if you choose to do so). You can remove access to a delegation after a customer accepts an offer only if you included an Authorization with the Role Definition set to Managed Services Registration Assignment Delete Role when you published the offer. You can also reach out to the customer and ask them to remove your access.
## Make your plan public
marketplace Marketplace Commercial Transaction Capabilities And Considerations https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/marketplace/marketplace-commercial-transaction-capabilities-and-considerations.md
description: This article describes pricing, billing, invoicing, and payout cons
Previously updated : 04/06/2021 Last updated : 04/14/2021
Depending on the transaction option used, subscription charges are as follows:
> [!NOTE]
> Offers that are billed according to consumption after a solution has been used are not eligible for refunds.
-Publishers who want to change the usage fees associated with an offer, should first remove the offer (or the specific plan within the offer) from the commercial marketplace. Removal should be done in accordance with the requirements of the [Microsoft Publisher Agreement](/legal/marketplace/msft-publisher-agreement). Then the publisher can publish a new offer (or plan within an offer) that includes the new usage fees. For information, about removing an offer or plan, see [Stop selling an offer or plan](./partner-center-portal/update-existing-offer.md#stop-selling-an-offer-or-plan).
+Publishers who want to change the usage fees associated with an offer should first remove the offer (or the specific plan within the offer) from the commercial marketplace. Removal should be done in accordance with the requirements of the [Microsoft Publisher Agreement](https://go.microsoft.com/fwlink/?LinkID=699560). Then the publisher can publish a new offer (or plan within an offer) that includes the new usage fees. For information about removing an offer or plan, see [Deprecate an offer or plan](./partner-center-portal/update-existing-offer.md#deprecate-an-offer-or-plan) (deprecate was formerly stop sell).
### Free, Contact me, and bring-your-own-license (BYOL) pricing
marketplace Azure Iot Edge Module Creation https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/marketplace/partner-center-portal/azure-iot-edge-module-creation.md
Previously updated : 08/07/2020 Last updated : 04/14/2021 # Create an IoT Edge module offer
The **Offer overview** page shows a visual representation of the steps required
This page includes links to perform operations on this offer based on the selection you make. For example:

- If the offer is a draft - Delete draft offer
-- If the offer is live - [Stop selling the offer](update-existing-offer.md#stop-selling-an-offer-or-plan)
+- If the offer is live - [Deprecate (formerly stop sell) the offer](update-existing-offer.md#deprecate-an-offer-or-plan)
- If the offer is in preview - [Go-live](../review-publish-offer.md#previewing-and-approving-your-offer)
- If you haven't completed publisher sign-out - [Cancel publishing.](../review-publish-offer.md#cancel-publishing)
After you create your plans, the **Plan overview** tab shows:
The actions available in the Plan overview vary depending on the current status of your plan. They include:

- **Delete draft**: If the plan status is a Draft.
-- **Stop sell plan**: If the plan status is published live.
+- **Deprecate plan** (formerly Stop sell plan): If the plan status is published live.
### Create new plan
marketplace Commercial Marketplace Lead Management Instructions Https https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/marketplace/partner-center-portal/commercial-marketplace-lead-management-instructions-https.md
Previously updated : 04/09/2021 Last updated : 04/14/2021 # Use an HTTPS endpoint to manage commercial marketplace leads
marketplace Create Power Bi App Offer https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/marketplace/partner-center-portal/create-power-bi-app-offer.md
Previously updated : 07/22/2020 Last updated : 04/14/2021 # Create a Power BI app offer
This page shows a visual representation of the steps required to publish this of
It includes links to perform operations on this offer based on the selection you make. For example:

- If the offer is a draft - Delete draft offer
-- If the offer is live - [Stop selling the offer](update-existing-offer.md#stop-selling-an-offer-or-plan)
+- If the offer is live - [deprecate (formerly stop sell) the offer](update-existing-offer.md#deprecate-an-offer-or-plan)
- If the offer is in preview - [Go-live](../review-publish-offer.md#previewing-and-approving-your-offer)
- If you haven't completed publisher sign-out - [Cancel publishing.](../review-publish-offer.md#cancel-publishing)
marketplace Update Existing Offer https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/marketplace/partner-center-portal/update-existing-offer.md
Previously updated : 10/27/2020 Last updated : 04/14/2021
This article explains how to make updates to existing offers and plans, and also
Use these steps to update an offer that's been successfully published to Preview or Live state.
-1. Select the name of the offer you would like to update. The status of the offer may be listed as **Preview**, **Live**, **Publish in progress**, **Draft**, **Attention needed**, or **Not available** (if you've previously chosen to stop selling the offer). Once selected, the **Offer overview** page for that offer will open.
+1. Select the name of the offer you would like to update. The status of the offer may be listed as **Preview**, **Live**, **Publish in progress**, **Draft**, **Attention needed**, or **Not available** (if you've previously chosen to deprecate (formerly stop sell) the offer). Once selected, the **Offer overview** page for that offer will open.
1. Select the offer page you want to update, such as **Properties**, **Offer listing**, or **Preview** (or select **Update** from the applicable card on the **Offer overview** page).
1. Make your changes and select **Save draft**. Repeat this process until all your changes are complete.
1. Review your changes on the **[Compare](#compare-changes-to-your-offer)** page.
Now that you have hidden the plan with the old price, create a copy of that plan
2. Select **Create new plan**. Enter a **Plan ID** and a **Plan name**, then select **Create**.
1. To reuse the technical configuration from the plan you've hidden, select the **Reuse technical configuration** checkbox. Read [Create plans for a VM offer](../azure-vm-create-plans.md) to learn more.

> [!IMPORTANT]
- > If you select **This plan reuses technical configuration from another plan**, you won't be able to stop selling the parent plan later. Don't use this option if you want to stop selling the parent plan.
+ > If you select **This plan reuses technical configuration from another plan**, you won't be able to deprecate (formerly stop sell) the parent plan later. Don't use this option if you want to deprecate (formerly stop sell) the parent plan.
3. Complete all the required sections for the new plan, including the new price.
1. Select **Save draft**.
1. After you've completed all the required sections for the new plan, select **Review and publish**. This will submit your offer for review and publication. Read [Review and publish an offer to the commercial marketplace](../review-publish-offer.md) for more details.
If you have changes in preview that aren't live, you can compare new changes wit
Remember to republish your offer after making updates for the changes to take effect.
-## Stop selling an offer or plan
+## Deprecate an offer or plan
+
+> [!IMPORTANT]
+> The name of the stop sell option has changed to deprecate.
You can remove offer listings and plans from the Microsoft commercial marketplace, which will prevent new customers from finding and purchasing them. Any customers who previously acquired the offer or plan can still use it, and they can download it again if needed. However, they won't get updates if you decide to republish the offer or plan at a later time.

-- To stop selling an offer after you've published it, select **Stop selling** from the **Offer overview** page. Within a few hours of your confirmation, the offer will no longer be visible in the commercial marketplace.
+- To deprecate (formerly stop sell) an offer after you've published it, select **Deprecate** from the **Offer overview** page. Within a few hours of your confirmation, the offer will no longer be visible in the commercial marketplace.
-- To stop selling a plan, select **Stop selling** from the **Plan overview** page. The option to stop selling a plan is only available if you have more than one plan in the offer. You can choose to stop selling one plan without impacting other plans within your offer.
+- To deprecate (formerly stop sell) a plan, select **Deprecate** from the **Plan overview** page. The option to deprecate (formerly stop sell) a plan is only available if you have more than one plan in the offer. You can choose to deprecate (formerly stop sell) one plan without impacting other plans within your offer.
>[!NOTE]
- > Once you confirm you want to stop selling the plan, you must republish the offer for the change to take effect.
+ > Once you confirm you want to deprecate (formerly stop sell) the plan, you must republish the offer for the change to take effect.
-After you stop selling an offer or plan, you'll still see it in Partner Center with a **Not available** status. If you decide to list or sell this offer or plan again, follow the instructions to [update a published offer](#update-a-published-offer). Don't forget that you will need to **publish** the offer or plan again after making any changes.
+After you deprecate (formerly stop sell) an offer or plan, you'll still see it in Partner Center with a **Not available** status. If you decide to list or sell this offer or plan again, follow the instructions to [update a published offer](#update-a-published-offer). Don't forget that you will need to **publish** the offer or plan again after making any changes.
## Remove offers from existing customers
marketplace Review Publish Offer https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/marketplace/review-publish-offer.md
Previously updated : 03/10/2021 Last updated : 04/13/2021 # How to review and publish an offer to the commercial marketplace
You can review your offer status on the **Overview** tab of the commercial marke
| Attention needed | We discovered a critical issue during certification or during another publishing phase. |
| Preview | We certified the offer, which now awaits a final verification by the publisher. Select **Go live** to publish the offer live. |
| Live | Offer is live in the marketplace and can be seen and acquired by customers. |
-| Pending stop sell | Publisher selected "stop sell" on an offer or plan, but the action has not yet been completed. |
+| Pending deprecation | Publisher selected "deprecate (formerly stop sell)" on an offer or plan, but the action has not yet been completed. |
| Not available in the marketplace | A previously published offer in the marketplace has been removed. |
|||
To view the history of your offer:
|||
|Created offer |The offer was created in Partner Center. A user selected the offer type, offer ID, and offer alias in **Commercial Marketplace** > **Overview**. |
|Created plan: *plan name* |A user created a new plan by entering the plan ID and plan name in the **Plan overview** tab.</br>*This event applies only to offer types that support plans*. |
-|Deleted plan |A user deleted a draft plan that had not been published by selecting **Delete draft** from the **Plan overview** page.</br>*This event applies only to offer types that support plans*. |
-|Initiated plan stop sell: *plan name* |A user initiated a plan stop-sell by selecting **Stop selling** from the **Plan overview** page.</br>*This event applies only to offer types that support plans*. |
-|Undo plan stop sell: *plan name* |A user canceled a plan stop-sell by selecting **Undo stop selling** from the **Plan overview** page.</br>*This event applies only to offer types that support plans*. |
+|Deleted plan |A user deleted a draft plan that had not been published by selecting **Delete draft** from the **Plan overview** page.</br>*This event applies only to offer types that support plans*. |
+|Initiated plan deprecate: *plan name* |A user initiated a plan deprecation (formerly stop sell) by selecting **Deprecate** from the **Plan overview** page.</br>*This event applies only to offer types that support plans*. |
+|Undo plan deprecate: *plan name* |A user canceled a plan deprecate (formerly stop sell) by selecting **Undo deprecate (formerly stop sell)** from the **Plan overview** page.</br>*This event applies only to offer types that support plans*. |
|Submitted offer to preview |A user submitted the offer to preview by selecting **Publish** from the **Review and publish** page. |
|Initiated submit to preview cancellation |A user requested to cancel the offer publication to preview by selecting **Cancel publish** from the **Offer overview** page after the submission to preview.</br>*This event is displayed as the cancellation request is being processed*. |
|Canceled submission to preview |A user canceled the offer publication to preview by selecting **Cancel publish** from the **Offer overview** page after the submission to preview.</br>*This event is displayed after the cancellation request is successfully processed*. |
To view the history of your offer:
|Initiated publish to marketplace cancellation |A user requested to cancel the offer publication by selecting **Cancel publish** from the **Offer overview** page after the sign-off to go live.</br>*This event is displayed as the cancellation request is being processed*. |
|Canceled publishing to the commercial marketplace |A user canceled the offer publication by selecting **Cancel publish** from the **Offer overview** page after the sign-off to go live.</br>*This event is displayed after the cancellation request is successfully processed*. |
|Sync private audience |A user updated and synced the private audience by selecting **Sync private audience** from the **Plan overview** page or the **Plan pricing & availability** page.</br>*This event applies only to offer types that support private plans*. |
-|Stop sell offer |A user stopped selling the offer by selecting **Stop selling** from the **Offer overview** page. |
+|Deprecate (formerly stop sell) offer |A user deprecated (stopped selling) the offer by selecting **Deprecate** from the **Offer overview** page. |
> [!NOTE]
> The History page doesn't say when an offer draft was saved.
media-services Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/media-services/video-indexer/release-notes.md
When you choose to see Insights of your video on the Video Indexer website, the
The feature is also available in the JSON file generated by Video Indexer. For more information, see [Trace observed people in a video](observed-people-tracing.md).
+### Acoustic event detection (AED) available in closed captions
+
+The Video Indexer closed captions file can now include detected acoustic events. You can download it from the Video Indexer portal, and it's also available as an artifact through the GetArtifact API.
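As an illustration, here's a hedged Python sketch of fetching an artifact download URL; the URL shape follows the Video Indexer API, but the placeholder values, including the exact artifact `type` string for acoustic-event captions, are assumptions not confirmed by this note:

```python
# Hedged sketch: request an artifact download URL from the Video Indexer API.
# The location, account ID, video ID, access token, and artifact type are
# hypothetical placeholders.
import requests

url = ("https://api.videoindexer.ai/<location>/Accounts/<account-id>"
       "/Videos/<video-id>/ArtifactUrl")
params = {"type": "<artifact-type>", "accessToken": "<access-token>"}

artifact_url = requests.get(url, params=params).json()  # a SAS URL string
print(artifact_url)
```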
+
+### Improved upload experience in the portal
+
+Video Indexer has a new upload experience in the portal:
+
+* New developer portal is available in Fairfax
+
+Video Indexer's new [Developer Portal](https://api-portal.videoindexer.ai) is now also available in the Gov cloud (Fairfax).
+
## March 2021

### Audio analysis
network-watcher Network Watcher Nsg Flow Logging Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/network-watcher/network-watcher-nsg-flow-logging-overview.md
Also, when a NSG is deleted, by default the associated flow log resource is dele
**Incompatible Services**: Due to current platform limitations, a small set of Azure services are not supported by NSG Flow Logs. The current list of incompatible services is:
- [Azure Kubernetes Services (AKS)](https://azure.microsoft.com/services/kubernetes-service/)
+- [Azure Container Instances (ACI)](https://azure.microsoft.com/services/container-instances/)
- [Logic Apps](https://azure.microsoft.com/services/logic-apps/)

## Best practices
Flow Logs version 2 introduces the concept of _Flow State_ & stores information
NSG Flow Logs are charged per GB of logs collected and come with a free tier of 5 GB/month per subscription. For the current pricing in your region, see the [Network Watcher pricing page](https://azure.microsoft.com/pricing/details/network-watcher/).
-Storage of logs is charged separately, see [Azure Storage Block blob pricing page](https://azure.microsoft.com/pricing/details/storage/blobs/) for relevant prices.
+Storage of logs is charged separately, see [Azure Storage Block blob pricing page](https://azure.microsoft.com/pricing/details/storage/blobs/) for relevant prices.
partner-solutions Create https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/partner-solutions/datadog/create.md
Title: Create Datadog - Azure partner solutions
description: This article describes how to use the Azure portal to create an instance of Datadog. Previously updated : 02/19/2021 Last updated : 05/05/2021
# QuickStart: Get started with Datadog
-In this quickstart, you'll create an instance of Datadog. You can either create a new Datadog organization or link to an existing Datadog organization.
+In this quickstart, you'll create an instance of Datadog. You can either create a new Datadog organization or link to an existing Datadog organization. Azure only links to existing Datadog organizations in **US3**.
-## Pre-requisites
+## Prerequisites
To set up the Azure Datadog integration, you must have **Owner** access on the Azure subscription. Ensure you have the appropriate access before starting the setup.
If you're linking to an existing Datadog organization, see the next section. Oth
## Link to existing Datadog organization
-You can link your new Datadog resource in Azure to an existing Datadog organization.
+You can link your new Datadog resource in Azure to an existing Datadog organization in **US3**.
Select **Existing** for Data organization, and then select **Link to Datadog org**.
There are two types of logs that can be emitted from Azure to Datadog.
To send subscription level logs to Datadog, select **Send subscription activity logs**. If this option is left unchecked, none of the subscription level logs are sent to Datadog.
-To send Azure resource logs to Datadog, select **Send Azure resource logs for all defined resources**. The types of Azure resource logs are listed in [Azure Monitor Resource Log categories](../../azure-monitor/essentials/resource-logs-categories.md). To filter the set of Azure resources sending logs to Datadog, use Azure resource tags.
+To send Azure resource logs to Datadog, select **Send Azure resource logs for all defined resources**. The types of Azure resource logs are listed in [Azure Monitor Resource Log categories](../../azure-monitor/essentials/resource-logs-categories.md). To filter the set of Azure resources sending logs to Datadog, use Azure resource tags.
The logs sent to Datadog will be charged by Azure. For more information, see the [pricing of platform logs](https://azure.microsoft.com/pricing/details/monitor/) sent to Azure Marketplace partners.
postgresql Tutorial Design Database Hyperscale Multi Tenant https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/postgresql/tutorial-design-database-hyperscale-multi-tenant.md
CREATE TABLE campaigns (
); ```
+>[!NOTE]
+> This article contains references to the term *blacklisted*, a term that Microsoft no longer uses. When the term is removed from the software, we'll remove it from this article.
+ Each campaign will pay to run ads. Add a table for ads too, by running the following code in psql after the code above: ```sql
private-link Private Endpoint Dns https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/private-link/private-endpoint-dns.md
For Azure services, use the recommended zone names as described in the following
| Azure Kubernetes Service - Kubernetes API (Microsoft.ContainerService/managedClusters) / management | privatelink.{region}.azmk8s.io | {region}.azmk8s.io | | Azure Search (Microsoft.Search/searchServices) / searchService | privatelink.search.windows.net | search.windows.net | | Azure Container Registry (Microsoft.ContainerRegistry/registries) / registry | privatelink.azurecr.io | azurecr.io |
-| Azure App Configuration (Microsoft.AppConfiguration/configurationStores) / configurationStore | privatelink.azconfig.io | azconfig.io |
+| Azure App Configuration (Microsoft.AppConfiguration/configurationStores) / configurationStores | privatelink.azconfig.io | azconfig.io |
| Azure Backup (Microsoft.RecoveryServices/vaults) / vault | privatelink.{region}.backup.windowsazure.com | {region}.backup.windowsazure.com | | Azure Site Recovery (Microsoft.RecoveryServices/vaults) / vault | {region}.privatelink.siterecovery.windowsazure.com | {region}.hypervrecoverymanager.windowsazure.com | | Azure Event Hubs (Microsoft.EventHub/namespaces) / namespace | privatelink.servicebus.windows.net | servicebus.windows.net |
purview How To Lineage Spark Atlas Connector https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/purview/how-to-lineage-spark-atlas-connector.md
+
+ Title: Metadata and Lineage from Apache Atlas Spark connector
+description: This article describes the data lineage extraction from Spark using Atlas Spark connector.
+++++ Last updated : 04/28/2021+
+# How to use Apache Atlas connector to collect Spark lineage
+
+The Apache Atlas Spark connector is a hook that tracks Spark SQL/DataFrame data movements and pushes metadata changes to the Purview Atlas endpoint.
+
+## Supported scenarios
+
+This connector supports the following tracking:
+1. SQL DDLs like "CREATE/ALTER DATABASE", "CREATE/ALTER TABLE".
+2. SQL DMLs like "CREATE TABLE HelloWorld AS SELECT", "INSERT INTO...", "LOAD DATA [LOCAL] INPATH", "INSERT OVERWRITE [LOCAL] DIRECTORY" and so on.
+3. DataFrame movements that have inputs and outputs.
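For instance, here's a minimal PySpark sketch of the third scenario, a DataFrame movement with an input and an output; the paths are hypothetical placeholders:

```python
# A minimal sketch of a DataFrame movement the connector would track.
# The input and output paths are hypothetical placeholders.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("lineage-sample").getOrCreate()

orders = spark.read.csv("/data/input/orders.csv",
                        header=True, inferSchema=True)        # input
orders.filter(orders["amount"] > 0) \
      .write.mode("overwrite").parquet("/data/output/orders")  # output
```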
+
+This connector relies on a query listener to retrieve the query and examine its impacts. It correlates with other systems like Hive and HDFS to track the life cycle of data in Atlas.
+Since Purview supports the Atlas API and the Atlas native hook, the connector can report lineage to Purview after it's configured with Spark. The connector can be configured per job or as the cluster default setting.
+
+## Configuration requirement
+
+The connector requires Spark version 2.4.0 or later; Spark version 3 is not supported. Spark supports three types of listeners that need to be set:
+
+| Listener | Since Spark Version|
+| - | - |
+| spark.extraListeners | 1.3.0 |
+| spark.sql.queryExecutionListeners | 2.3.0 |
+| spark.sql.streaming.streamingQueryListeners | 2.4.0 |
+
+If the Spark cluster version is below 2.4.0, streaming query lineage and most of the query lineage will not be captured.
+
+### Step 1. Prepare Spark Atlas connector package
+The following steps are documented using Databricks as an example:
+
+1. Generate package
+ 1. Pull code from GitHub: https://github.com/hortonworks-spark/spark-atlas-connector
+ 2. [For Windows] Comment out the **maven-enforcer-plugin** in spark-atlas-connector\pom.xml to remove the dependency on Unix.
+
+ ```web
+ <requireOS>
+ <family>unix</family>
+ </requireOS>
+ ```
+
+   3. Run the command **mvn package -DskipTests** in the project root to build.
+
+   4. Get the jar from *~\spark-atlas-connector-assembly\target\spark-atlas-connector-assembly-0.1.0-SNAPSHOT.jar*.
+
+   5. Put the package where the Spark cluster can access it. For a Databricks cluster, upload the package to a DBFS folder, such as /FileStore/jars.
+
+2. Prepare Connector config
+ 1. Get the Kafka endpoint and credential from the Azure portal page of the Purview account
+ 1. Provide your account with the *"Purview Data Curator"* permission
+
+ :::image type="content" source="./media/how-to-lineage-spark-atlas-connector/assign-purview-data-curator-role.png" alt-text="Screenshot showing data curator role assignment" lightbox="./media/how-to-lineage-spark-atlas-connector/assign-purview-data-curator-role.png":::
+
+ 1. Endpoint: the endpoint part of the *Atlas Kafka endpoint primary connection string*
+ 1. Credential: the entire *Atlas Kafka endpoint primary connection string*
+
+ :::image type="content" source="./media/how-to-lineage-spark-atlas-connector/atlas-kafka-endpoint.png" alt-text="Screenshot showing atlas kafka endpoint" lightbox="./media/how-to-lineage-spark-atlas-connector/atlas-kafka-endpoint.png":::
+
+ 1. Prepare the *atlas-application.properties* file, replacing the *atlas.kafka.bootstrap.servers* value and the password value in *atlas.kafka.sasl.jaas.config*:
+
+ ```script
+ atlas.client.type=kafka
+ atlas.kafka.sasl.mechanism=PLAIN
+ atlas.kafka.security.protocol=SASL_SSL
+ atlas.kafka.bootstrap.servers= atlas-46c097e6-899a-44aa-9a30-6ccd0b2a2a91.servicebus.windows.net:9093
+ atlas.kafka.sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username="$ConnectionString" password="<connection string got from your Purview account>";
+ ```
+
+ 1. Make sure the atlas configuration file is in the driver's classpath generated in the [step 1 Prepare Spark Atlas connector package section above](../purview/how-to-lineage-spark-atlas-connector.md#step-1-prepare-spark-atlas-connector-package). In cluster mode, ship this config file to the remote driver with *--files atlas-application.properties*.
++
+### Step 2. Prepare your Purview account
+Follow the steps below to create the Atlas Spark model definition in your Purview account:
+1. Get the Spark type definition from GitHub: https://github.com/apache/atlas/blob/release-2.1.0-rc3/addons/models/1000-Hadoop/1100-spark_model.json
+
+2. Assign the role:
+ 1. Open the Purview management center and choose **Assign roles**.
+ 1. Add users and grant your service principal the *Purview Data source administrator* role.
+3. Get the auth token:
+ 1. Open Postman or a similar tool.
+ 1. Use the service principal from the previous step to get the bearer token (a Python sketch of the same request follows the screenshot):
+ * Endpoint: https://login.windows.net/microsoft.com/oauth2/token
+ * grant_type: client_credentials
+ * client_id: {service principal ID}
+ * client_secret: {service principal key}
+ * resource: https://projectbabylon.azure.net
+
+ :::image type="content" source="./media/how-to-lineage-spark-atlas-connector/postman-examples.png" alt-text="Screenshot showing postman example" lightbox="./media/how-to-lineage-spark-atlas-connector/postman-examples.png":::
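As a convenience, here's a minimal Python sketch of the same token request shown in the Postman example; the service principal values are placeholders:

```python
# A minimal sketch of the bearer-token request, mirroring the Postman
# call above. The service principal values are placeholders.
import requests

response = requests.post(
    "https://login.windows.net/microsoft.com/oauth2/token",
    data={
        "grant_type": "client_credentials",
        "client_id": "<service principal ID>",
        "client_secret": "<service principal key>",
        "resource": "https://projectbabylon.azure.net",
    },
)
access_token = response.json()["access_token"]
```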
+
+4. Post Spark Atlas model definition to Purview Account:
+ 1. Get the Atlas endpoint of the Purview account from the properties section of the Azure portal.
+ 1. Post Spark type definition into the Purview account:
+ * Post: {{endpoint}}/api/atlas/v2/types/typedefs
+ * Use the generated access token
+ * Body: choose raw and copy all content from GitHub https://github.com/apache/atlas/blob/release-2.1.0-rc3/addons/models/1000-Hadoop/1100-spark_model.json
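For example, a minimal Python sketch of that POST, assuming `access_token` from the previous step and `spark_model_json` holding the JSON content copied from the GitHub link:

```python
# A minimal sketch of posting the Spark type definition to Purview.
# `endpoint` and `spark_model_json` are placeholders you supply.
import requests

endpoint = "<Atlas endpoint from the Purview account properties>"
headers = {
    "Authorization": f"Bearer {access_token}",
    "Content-Type": "application/json",
}

response = requests.post(f"{endpoint}/api/atlas/v2/types/typedefs",
                         headers=headers, data=spark_model_json)
print(response.status_code)
```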
++
+### Step 3. Prepare Spark job
+1. Write your Spark job as normal.
+2. Add the connector settings in your Spark job's source code.
+Set the *'atlas.conf'* system property value in code as shown below, to make sure the *atlas-application.properties* file can be found.
+
+ **System.setProperty("atlas.conf", "/dbfs/FileStore/jars/")**
+
+3. Build your Spark job source code to generate the jar file.
+4. Put the Spark application jar file in a location where your cluster can access it. For example, on Databricks, put the jar file in *"/dbfs/FileStore/jars/"*.
+
+### Step 4. Prepare to run job
+
+1. The instructions below are for per-job settings:
+To capture a specific job's lineage, use spark-submit to kick off the job with the listener parameters.
+
+ In the job parameters, set:
+* The path of the connector jar file.
+* The three listeners: extraListeners, queryExecutionListeners, and streamingQueryListeners, as shown in the following table.
+
+| Listener | Details |
+| - | - |
+| spark.extraListeners | com.hortonworks.spark.atlas.SparkAtlasEventTracker|
+| spark.sql.queryExecutionListeners | com.hortonworks.spark.atlas.SparkAtlasEventTracker
+| spark.sql.streaming.streamingQueryListeners | com.hortonworks.spark.atlas.SparkAtlasStreamingQueryEventTracker |
+
+* The path of your Spark job application Jar file.
+
+Set up the Databricks job: the key part is to use spark-submit to run the job with the listeners set up properly. Set the listener info in the task parameters.
+
+Below is an example parameter set for the Spark job.
+
+```script
+["--jars","dbfs:/FileStore/jars/spark-atlas-connector-assembly-0.1.0-SNAPSHOT.jar ","--conf","spark.extraListeners=com.hortonworks.spark.atlas.SparkAtlasEventTracker","--conf","spark.sql.queryExecutionListeners=com.hortonworks.spark.atlas.SparkAtlasEventTracker","--conf","spark.sql.streaming.streamingQueryListeners=com.hortonworks.spark.atlas.SparkAtlasStreamingQueryEventTracker","--class","com.microsoft.SparkAtlasTest","dbfs:/FileStore/jars/08cde51d_34d8_4913_a930_46f376606d7f-sparkatlas_1_6_SNAPSHOT-17452.jar"]
+```
+
+Below is an example of spark-submit from the command line:
+
+```script
+spark-submit --class com.microsoft.SparkAtlasTest --master yarn --deploy-mode cluster --files /data/atlas-application.properties --jars /data/spark-atlas-connector-assembly-0.1.0-SNAPSHOT.jar
+--conf spark.extraListeners=com.hortonworks.spark.atlas.SparkAtlasEventTracker
+--conf spark.sql.queryExecutionListeners=com.hortonworks.spark.atlas.SparkAtlasEventTracker
+--conf spark.sql.streaming.streamingQueryListeners=com.hortonworks.spark.atlas.SparkAtlasEventTracker
+/data/worked/sparkApplication.jar
+```
+
+2. The instructions below are for the cluster setting:
+Put the connector jar and the listener settings in the Spark cluster's *conf/spark-defaults.conf* file. Spark-submit will read the options in *conf/spark-defaults.conf* and pass them to your application. A sketch follows.
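For example, a minimal sketch of the *conf/spark-defaults.conf* entries, assuming the connector jar was copied to a hypothetical */data* folder on the cluster nodes:

```script
# A sketch of conf/spark-defaults.conf entries; the /data path is a
# hypothetical location for the connector jar.
spark.jars /data/spark-atlas-connector-assembly-0.1.0-SNAPSHOT.jar
spark.extraListeners com.hortonworks.spark.atlas.SparkAtlasEventTracker
spark.sql.queryExecutionListeners com.hortonworks.spark.atlas.SparkAtlasEventTracker
spark.sql.streaming.streamingQueryListeners com.hortonworks.spark.atlas.SparkAtlasStreamingQueryEventTracker
```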
+
+### Step 5. Run and check lineage in Purview account
+Kick off the Spark job and check the lineage info in your Purview account.
++
+## Known limitations with the connector for Spark lineage
+1. It supports the SQL/DataFrame API only (in other words, it does not support RDDs). This connector relies on a query listener to retrieve the query and examine its impacts.
+
+2. All "inputs" and "outputs" from multiple queries are combined into single "spark_process" entity.
+
+ "spark_process" maps to an "applicationId" in Spark. It allows admin to track all changes that occurred as part of an application. But also causes lineage/relationship graph in "spark_process" to be complicated and less meaningful.
+3. Only part of the inputs is tracked in a streaming query.
+
+* The Kafka source supports subscribing with a "pattern", and this connector does not enumerate all existing matching topics, or even all possible topics.
+
+* The "executed plan" provides actual topics with (micro) batch reads and processes. As a result, only inputs that participate in (micro) batch are included as "inputs" of "spark_process" entity.
+
+4. This connector doesn't support column-level lineage.
+
+5. It doesn't track tables that are dropped (Spark models).
+
+ The "drop table" event from Spark only provides db and table name, which is NOT sufficient to create the unique key to recognize the table.
+
+   The connector depends on reading the Spark catalog to get table information. Spark has already dropped the table by the time this connector notices, so tracking the drop will not work.
++
+## Next steps
+
+- [Learn about Data lineage in Azure Purview](catalog-lineage-user-guide.md)
+- [Link Azure Data Factory to push automated lineage](how-to-link-azure-data-factory.md)
role-based-access-control Built In Roles https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/role-based-access-control/built-in-roles.md
The following table provides a brief description of each built-in role. Click th
> | [Web Plan Contributor](#web-plan-contributor) | Lets you manage the web plans for websites, but not access to them. | 2cc479cb-7b4d-49a8-b449-8c00fd0f0a4b | > | [Website Contributor](#website-contributor) | Lets you manage websites (not web plans), but not access to them. | de139f84-1756-47ae-9be6-808fbbe84772 | > | **Containers** | | |
-> | [AcrDelete](#acrdelete) | acr delete | c2f4ef07-c644-48eb-af81-4b1b4947fb11 |
-> | [AcrImageSigner](#acrimagesigner) | acr image signer | 6cef56e8-d556-48e5-a04f-b8e64114680f |
-> | [AcrPull](#acrpull) | acr pull | 7f951dda-4ed3-4680-a7ca-43fe172d538d |
-> | [AcrPush](#acrpush) | acr push | 8311e382-0749-4cb8-b61a-304f252e45ec |
-> | [AcrQuarantineReader](#acrquarantinereader) | acr quarantine data reader | cdda3590-29a3-44f6-95f2-9f980659eb04 |
-> | [AcrQuarantineWriter](#acrquarantinewriter) | acr quarantine data writer | c8d4ff99-41c3-41a8-9f60-21dfdad59608 |
+> | [AcrDelete](#acrdelete) | Delete repositories, tags, or manifests from a container registry. | c2f4ef07-c644-48eb-af81-4b1b4947fb11 |
+> | [AcrImageSigner](#acrimagesigner) | Push trusted images to or pull trusted images from a container registry enabled for content trust. | 6cef56e8-d556-48e5-a04f-b8e64114680f |
+> | [AcrPull](#acrpull) | Pull artifacts from a container registry. | 7f951dda-4ed3-4680-a7ca-43fe172d538d |
+> | [AcrPush](#acrpush) | Push artifacts to or pull artifacts from a container registry. | 8311e382-0749-4cb8-b61a-304f252e45ec |
+> | [AcrQuarantineReader](#acrquarantinereader) | Pull quarantined images from a container registry. | cdda3590-29a3-44f6-95f2-9f980659eb04 |
+> | [AcrQuarantineWriter](#acrquarantinewriter) | Push quarantined images to or pull quarantined images from a container registry. | c8d4ff99-41c3-41a8-9f60-21dfdad59608 |
> | [Azure Kubernetes Service Cluster Admin Role](#azure-kubernetes-service-cluster-admin-role) | List cluster admin credential action. | 0ab0b1a8-8aac-4efd-b8c2-3ee1fb270be8 | > | [Azure Kubernetes Service Cluster User Role](#azure-kubernetes-service-cluster-user-role) | List cluster user credential action. | 4abbcc35-e782-43d8-92c5-2d3f1bd2253f | > | [Azure Kubernetes Service Contributor Role](#azure-kubernetes-service-contributor-role) | Grants access to read and write Azure Kubernetes Service clusters | ed7f3fbd-7b88-4dd4-9017-9adb7ce333f8 |
Lets you manage websites (not web plans), but not access to them.
### AcrDelete
-acr delete [Learn more](../container-registry/container-registry-roles.md)
+Delete repositories, tags, or manifests from a container registry. [Learn more](../container-registry/container-registry-roles.md)
> [!div class="mx-tableFixed"] > | Actions | Description |
acr delete [Learn more](../container-registry/container-registry-roles.md)
### AcrImageSigner
-acr image signer [Learn more](../container-registry/container-registry-roles.md)
+Push trusted images to or pull trusted images from a container registry enabled for content trust. [Learn more](../container-registry/container-registry-roles.md)
> [!div class="mx-tableFixed"] > | Actions | Description |
acr image signer [Learn more](../container-registry/container-registry-roles.md)
### AcrPull
-acr pull [Learn more](../container-registry/container-registry-roles.md)
+Pull artifacts from a container registry. [Learn more](../container-registry/container-registry-roles.md)
> [!div class="mx-tableFixed"] > | Actions | Description |
acr push [Learn more](../container-registry/container-registry-roles.md)
### AcrQuarantineReader
-acr quarantine data reader
+Pull quarantined images from a container registry. [Learn more](../container-registry/container-registry-roles.md)
> [!div class="mx-tableFixed"] > | Actions | Description |
acr quarantine data reader
### AcrQuarantineWriter
-acr quarantine data writer
+Push quarantined images to or pull quarantined images from a container registry. [Learn more](../container-registry/container-registry-roles.md)
> [!div class="mx-tableFixed"] > | Actions | Description |
search Search Howto Index Cosmosdb Gremlin https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/search/search-howto-index-cosmosdb-gremlin.md
Last updated 04/11/2021
> For this preview, we recommend using the [REST API version 2020-06-30-Preview](search-api-preview.md). There is currently limited portal support and no .NET SDK support.

> [!WARNING]
-> In order for Azure Cognitive Search to index data in Cosmos DB through the Gremlin API, [Cosmos DB's own indexing](https://docs.microsoft.com/azure/cosmos-db/index-overview) must also be enabled and set to [Consistent](https://docs.microsoft.com/azure/cosmos-db/index-policy#indexing-mode). This is the default configuration for Cosmos DB. Azure Cognitive Search indexing will not work without Cosmos DB indexing already enabled.
+> In order for Azure Cognitive Search to index data in Cosmos DB through the Gremlin API, [Cosmos DB's own indexing](../cosmos-db/index-overview.md) must also be enabled and set to [Consistent](../cosmos-db/index-policy.md#indexing-mode). This is the default configuration for Cosmos DB. Azure Cognitive Search indexing will not work without Cosmos DB indexing already enabled.
-[Azure Cosmos DB indexing](https://docs.microsoft.com/azure/cosmos-db/index-overview) and [Azure Cognitive Search indexing](search-what-is-an-index.md) are distinct operations, unique to each service. Before you start Azure Cognitive Search indexing, your Azure Cosmos DB database must already exist.
+[Azure Cosmos DB indexing](../cosmos-db/index-overview.md) and [Azure Cognitive Search indexing](search-what-is-an-index.md) are distinct operations, unique to each service. Before you start Azure Cognitive Search indexing, your Azure Cosmos DB database must already exist.
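For reference, here's a minimal sketch of the relevant fragment of a Cosmos DB indexing policy with the mode the warning above requires; a full policy also includes included and excluded paths:

```json
{
  "indexingMode": "consistent",
  "automatic": true
}
```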
This article shows you how to configure Azure Cognitive Search to index content from Azure Cosmos DB using the Gremlin API. This workflow creates an Azure Cognitive Search index and loads it with existing text extracted from Azure Cosmos DB using the Gremlin API.
Ensure that the schema of your target index is compatible with your graph.
For partitioned collections, the default document key is Azure Cosmos DB's `_rid` property, which Azure Cognitive Search automatically renames to `rid` because field names cannot start with an underscore character. Also, Azure Cosmos DB `_rid` values contain characters that are invalid in Azure Cognitive Search keys. For this reason, the `_rid` values should be Base64 encoded if you would like to make it your document key.

### Mapping between JSON Data Types and Azure Cognitive Search Data Types
+
| JSON data type | Compatible target index field types |
| --- | --- |
| Bool | Edm.Boolean, Edm.String |
Notice how the Output Field Mapping starts with `/document` and does not include
## Next steps
-* To learn more about Azure Cosmos DB Gremlin API, see the [Introduction to Azure Cosmos DB: Gremlin API](https://docs.microsoft.com/azure/cosmos-db/graph-introduction).
-* To learn more about Azure Cognitive Search, see the [Search service page](https://azure.microsoft.com/services/search/).
++ To learn more about Azure Cosmos DB Gremlin API, see the [Introduction to Azure Cosmos DB: Gremlin API](../cosmos-db/graph-introduction.md).
+
++ For more information about Azure Cognitive Search scenarios and pricing, see the [Search service page on azure.microsoft.com](https://azure.microsoft.com/services/search/).
search Whats New https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/search/whats-new.md
Previously updated : 04/07/2021 Last updated : 05/07/2021

# What's new in Azure Cognitive Search

Learn what's new in the service. Bookmark this page to keep up to date with the service. Check out the [Preview feature list](search-api-preview.md) to view a comprehensive list of features that are not yet generally available.
+## April 2021
+
+|Feature&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; | Description | Availability |
+||||
+| [Gremlin API support](search-howto-index-cosmosdb-gremlin.md) | For indexer-based indexing, you can now create a data source that retrieves content from Cosmos DB accessed through the Gremlin API. | Public preview ([by request](https://aka.ms/azure-cognitive-search/indexer-preview)), using api-version=2020-06-30-Preview. |
## March 2021

|Feature&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; | Description | Availability |
spring-cloud How To Deploy In Azure Virtual Network https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/spring-cloud/how-to-deploy-in-azure-virtual-network.md
Select the virtual network **azure-spring-cloud-vnet** you previously created.
![Screenshot that shows the Access control screen.](./media/spring-cloud-v-net-injection/access-control.png)
-1. Assign the [azure-spring-cloud-data-reader](../role-based-access-control/built-in-roles.md#azure-spring-cloud-data-reader) role to the [user | group | service-principal | managed-identity] at [management-group | subscription | resource-group | resource] scope.
+1. Assign the [Owner](../role-based-access-control/built-in-roles.md#owner) role to the [user | group | service-principal | managed-identity] at [management-group | subscription | resource-group | resource] scope.
For detailed steps, see [Assign Azure roles using the Azure portal](../role-based-access-control/role-assignments-portal.md).
spring-cloud How To Outbound Public Ip https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/spring-cloud/how-to-outbound-public-ip.md
To find the outbound public IP addresses currently used by your service instance
You can find the same information by running the following command in the Cloud Shell ```Azure CLI
-az spring-cloud show --resource-group <group_name> --name <service_name> --query properties.networkProfile.outboundIPs.publicIPs --output tsv
+az spring-cloud show --resource-group <group_name> --name <service_name> --query properties.networkProfile.outboundIps.publicIps --output tsv
```

## Next steps

> [!div class="nextstepaction"]
* [Learn more about managed identities for Azure resources](https://github.com/MicrosoftDocs/azure-docs/blob/master/articles/active-directory/managed-identities-azure-resources/overview.md)
-* [Learn more about key vault in Azure Spring Cloud](./tutorial-managed-identities-key-vault.md)
+* [Learn more about key vault in Azure Spring Cloud](./tutorial-managed-identities-key-vault.md)
spring-cloud Vnet Customer Responsibilities https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/spring-cloud/vnet-customer-responsibilities.md
The following is a list of resource requirements for Azure Spring Cloud services
| *.servicebus.windows.net:443 *Or* [ServiceTag](../virtual-network/service-tags-overview.md#available-service-tags) - EventHub:443 | TCP:443 | Azure Event Hub. | Can be replaced by enabling *Azure Event Hubs* [service endpoint in virtual network](../virtual-network/virtual-network-service-endpoints-overview.md). |
-## Azure Spring Cloud FQDN requirements/application rules
+## Azure Spring Cloud FQDN requirements / application rules
-Azure Firewall provides the FQDN tag **AzureKubernetesService** to simplify the following configurations:
+Azure Firewall provides a fully qualified domain name (FQDN) tag **AzureKubernetesService** to simplify the following configurations.
| Destination FQDN | Port | Use |
||||
Azure Firewall provides the FQDN tag **AzureKubernetesService** to simplify the
| <i>mcr.microsoft.com</i> | HTTPS:443 | Microsoft Container Registry (MCR). | | *.cdn.mscr.io | HTTPS:443 | MCR storage backed by the Azure CDN. | | *.data.mcr.microsoft.com | HTTPS:443 | MCR storage backed by the Azure CDN. |
- | <i>management.azure.com</i> | HTTPS:443 | Underlying Kubernetes Cluster management. |
- | <i>*login.microsoftonline.com</i> | HTTPS:443 | Azure Active Directory authentication. |
- | <i>*login.microsoft.com</i> | HTTPS:443 | Azure Active Directory authentication. |
+ | <i>management.azure.com</i> | HTTPS:443 | Underlying Kubernetes Cluster management. |
+ | <i>*login.microsoftonline.com</i> | HTTPS:443 | Azure Active Directory authentication. |
+ | <i>*login.microsoft.com</i> | HTTPS:443 | Azure Active Directory authentication. |
|<i>packages.microsoft.com</i> | HTTPS:443 | Microsoft packages repository. |
| <i>acs-mirror.azureedge.net</i> | HTTPS:443 | Repository required to install required binaries like kubenet and Azure CNI. |
| *mscrl.microsoft.com* | HTTPS:80 | Required Microsoft Certificate Chain Paths. |
| *crl.microsoft.com* | HTTPS:80 | Required Microsoft Certificate Chain Paths. |
| *crl3.digicert.com* | HTTPS:80 | 3rd Party SSL Certificate Chain Paths. |
-
-## Azure Spring Cloud optional FQDN for third-party application performance management
-
-Azure Firewall provides the FQDN tag **AzureKubernetesService** to simplify the following configurations:
-
- | Destination FQDN | Port | Use |
- | - | - | |
- | collector*.newrelic.com | TCP:443/80 | Required networks of New Relic APM agents from US region, also see [APM Agents Networks](https://docs.newrelic.com/docs/using-new-relic/cross-product-functions/install-configure/networks/#agents). |
- | collector*.eu01.nr-data.net | TCP:443/80 | Required networks of New Relic APM agents from EU region, also see [APM Agents Networks](https://docs.newrelic.com/docs/using-new-relic/cross-product-functions/install-configure/networks/#agents). |
## See also * [Access your application in a private network](access-app-virtual-network.md)
storage Data Lake Storage Access Control https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/blobs/data-lake-storage-access-control.md
To set file and directory level permissions, see any of the following articles:
| Environment | Article | |--|--|
-|Azure Storage Explorer |[Use Azure Storage Explorer to set ACLs in Azure Data Lake Storage Gen2](data-lake-storage-explorer-acl.md)|
-|.NET |[Use .NET to set ACLs in Azure Data Lake Storage Gen2](data-lake-storage-acl-dotnet.md)|
-|Java|[Use Java to set ACLs in Azure Data Lake Storage Gen2](data-lake-storage-acl-java.md)|
-|Python|[Use Python to set ACLs in Azure Data Lake Storage Gen2](data-lake-storage-acl-python.md)|
-|JavaScript (Node.js)|[Use the JavaScript SDK in Node.js to set ACLs in Azure Data Lake Storage Gen2](data-lake-storage-directory-file-acl-javascript.md)|
-|PowerShell|[Use PowerShell to set ACLs in Azure Data Lake Storage Gen2](data-lake-storage-acl-powershell.md)|
-|Azure CLI|[Use Azure CLI to set ACLs in Azure Data Lake Storage Gen2](data-lake-storage-acl-cli.md)|
+|Azure Storage Explorer |[Use Azure Storage Explorer to manage ACLs in Azure Data Lake Storage Gen2](data-lake-storage-explorer-acl.md)|
+|Azure portal |[Use the Azure portal to manage ACLs in Azure Data Lake Storage Gen2](data-lake-storage-acl-azure-portal.md)|
+|.NET |[Use .NET to manage ACLs in Azure Data Lake Storage Gen2](data-lake-storage-acl-dotnet.md)|
+|Java|[Use Java to manage ACLs in Azure Data Lake Storage Gen2](data-lake-storage-acl-java.md)|
+|Python|[Use Python to manage ACLs in Azure Data Lake Storage Gen2](data-lake-storage-acl-python.md)|
+|JavaScript (Node.js)|[Use the JavaScript SDK in Node.js to manage ACLs in Azure Data Lake Storage Gen2](data-lake-storage-directory-file-acl-javascript.md)|
+|PowerShell|[Use PowerShell to manage ACLs in Azure Data Lake Storage Gen2](data-lake-storage-acl-powershell.md)|
+|Azure CLI|[Use Azure CLI to manage ACLs in Azure Data Lake Storage Gen2](data-lake-storage-acl-cli.md)|
|REST API |[Path - Update](/rest/api/storageservices/datalakestoragegen2/path/update)| > [!IMPORTANT]
storage Data Lake Storage Acl Azure Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/blobs/data-lake-storage-acl-azure-portal.md
+
+ Title: Use the Azure portal to manage ACLs in Azure Data Lake Storage Gen2
+description: Use the Azure portal to manage access control lists (ACLs) in storage accounts that have a hierarchical namespace (HNS) enabled.
++++ Last updated : 04/15/2021++++
+# Use the Azure portal to manage ACLs in Azure Data Lake Storage Gen2
+
+This article shows you how to use the [Azure portal](https://ms.portal.azure.com/) to manage the access control list (ACL) of a directory or blob in storage accounts that have the hierarchical namespace feature enabled on them.
+
+For information about the structure of the ACL, see [Access control lists (ACLs) in Azure Data Lake Storage Gen2](data-lake-storage-access-control.md).
+
+To learn about how to use ACLs and Azure roles together, see [Access control model in Azure Data Lake Storage Gen2](data-lake-storage-access-control-model.md).
+
+## Prerequisites
+
+- An Azure subscription. See [Get Azure free trial](https://azure.microsoft.com/pricing/free-trial/).
+
+- A storage account that has the hierarchical namespace feature enabled on it. Follow [these](create-data-lake-storage-account.md) instructions to create one.
+
+- You must have one of the following security permissions:
+
+ - Your user identity has been assigned the [Storage Blob Data Owner](../../role-based-access-control/built-in-roles.md#storage-blob-data-owner) role in the scope of either the target container, storage account, parent resource group, or subscription.
+
+ - You are the owning user of the target container, directory, or blob to which you plan to apply ACL settings.
+
+## Manage an ACL
+
+1. Sign in to the [Azure portal](https://portal.azure.com/) to get started.
+
+2. Locate your storage account and display the account overview.
+
+3. Select **Containers** under **Data storage**.
+
+ The containers in the storage account appear.
+
+ > [!div class="mx-imgBorder"]
+ > ![location of storage account containers in the Azure portal](./media/data-lake-storage-acl-azure-portal/find-containers-in-azure-portal.png)
+
+4. Navigate to any container, directory, or blob. Right-click the object, and then select **Manage ACL**.
+
+ > [!div class="mx-imgBorder"]
+ > ![context menu for managing an acl](./media/data-lake-storage-acl-azure-portal/manage-acl-menu-item.png)
+
+ The **Access permissions** tab of the **Manage ACL** page appears. Use the controls in this tab to manage access to the object.
+
+ > [!div class="mx-imgBorder"]
+ > ![access ACL tab of the Manage ACL page](./media/data-lake-storage-acl-azure-portal/access-acl-page.png)
+
+5. To add a *security principal* to the ACL, select the **Add principal** button.
+
+ > [!TIP]
+ > A security principal is an object that represents a user, group, service principal, or managed identity that is defined in Azure Active Directory (AD).
+
+ Find the security principal by using the search box, and then click the **Select** button.
+
+ > [!div class="mx-imgBorder"]
+ > ![Add a security principal to the ACL](./media/data-lake-storage-acl-azure-portal/get-security-principal.png)
+
+ > [!NOTE]
+   > We recommend that you create a security group in Azure AD, and then maintain permissions on the group rather than for individual users. For details on this recommendation, as well as other best practices, see [Access control model in Azure Data Lake Storage Gen2](data-lake-storage-access-control-model.md).
+
+6. To manage the *default ACL*, select the **Default permissions** tab, and then select the **Configure default permissions** checkbox.
+
+ > [!TIP]
+ > A default ACL is a template of an ACL that determines the access ACLs for any child items that are created under a directory. A blob doesn't have a default ACL, so this tab appears only for directories.
+
+ > [!div class="mx-imgBorder"]
+ > ![default ACL tab of the Manage ACL page](./media/data-lake-storage-acl-azure-portal/default-acl-page.png)
+
+## Apply an ACL recursively
+
+You can apply ACL entries recursively on the existing child items of a parent directory without having to make these changes individually for each child item. However, you can't apply ACL entries recursively by using the Azure portal.
+
+To apply ACLs recursively, use Azure Storage Explorer, PowerShell, or the Azure CLI. If you prefer to write code, you can also use the .NET, Java, Python, or Node.js APIs.
+
+You can find the complete list of guides here: [How to set ACLs](data-lake-storage-access-control.md#how-to-set-acls).
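For instance, here's a minimal Python sketch of a recursive ACL update using the azure-storage-file-datalake package; the account, container, directory, and object ID values are placeholders:

```python
# A minimal sketch of a recursive ACL update with the Python SDK.
# Account, container, directory, and object ID values are placeholders.
from azure.identity import DefaultAzureCredential
from azure.storage.filedatalake import DataLakeServiceClient

service = DataLakeServiceClient(
    account_url="https://<storage-account>.dfs.core.windows.net",
    credential=DefaultAzureCredential(),
)

directory = service.get_file_system_client("<container>") \
                   .get_directory_client("<directory>")

# Recursively add an entry granting a security principal read and execute access.
directory.update_access_control_recursive(acl="user:<object-id>:r-x")
```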
+
+## Next steps
+
+Learn about the Data Lake Storage Gen2 permission model.
+
+> [!div class="nextstepaction"]
+> [Access control model in Azure Data Lake Storage Gen2](./data-lake-storage-access-control-model.md)
storage Data Lake Storage Acl Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/blobs/data-lake-storage-acl-cli.md
Title: Use Azure CLI to set ACLs in Azure Data Lake Storage Gen2
+ Title: Use Azure CLI to manage ACLs in Azure Data Lake Storage Gen2
description: Use the Azure CLI to manage access control lists (ACL) in storage accounts that have a hierarchical namespace.
storage Data Lake Storage Acl Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/blobs/data-lake-storage-acl-dotnet.md
Title: Use .NET to set ACLs in Azure Data Lake Storage Gen2
+ Title: Use .NET to manage ACLs in Azure Data Lake Storage Gen2
description: Use .NET to manage access control lists (ACL) in storage accounts that has hierarchical namespace (HNS) enabled.
storage Data Lake Storage Acl Java https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/blobs/data-lake-storage-acl-java.md
Title: Use Java to set ACLs in Azure Data Lake Storage Gen2
+ Title: Use Java to manage ACLs in Azure Data Lake Storage Gen2
description: Use Azure Storage libraries for Java to manage access control lists (ACL) in storage accounts that has hierarchical namespace (HNS) enabled.
storage Data Lake Storage Acl Javascript https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/blobs/data-lake-storage-acl-javascript.md
Title: Use JavaScript (Node.js) to set ACLs in Azure Data Lake Storage Gen2
+ Title: Use JavaScript (Node.js) to manage ACLs in Azure Data Lake Storage Gen2
description: Use Azure Storage Data Lake client library for JavaScript to manage access control lists (ACL) in storage accounts that has hierarchical namespace (HNS) enabled.
storage Network File System Protocol Support Performance https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/blobs/network-file-system-protocol-support-performance.md
Each bar in the following chart shows the difference in achieved bandwidth betwe
It takes longer time to complete an overwrite operation than a new write operation. That's because an NFS overwrite operation, especially a partial in-place file edit, is a combination of several underlying blob operations: a read, a modify, and a write operation. Therefore, an application that requires frequent in place edits is not suited for NFS enabled blob storage accounts.
+## Deploy Azure HPC Cache for latency-sensitive applications
+
+Some applications may require low latency in addition to high throughput. You can deploy [Azure HPC Cache](../../hpc-cache/nfs-blob-considerations.md) to improve latency significantly.
+Learn more about [Latency in Blob storage](storage-blobs-latency.md).
+ ## Other best practice recommendations - Use VMs with sufficient network bandwidth.
It takes longer time to complete an overwrite operation than a new write operati
- To learn more about NFS 3.0 support in Azure Blob Storage, see [Network File System (NFS) 3.0 protocol support in Azure Blob storage (preview)](network-file-system-protocol-support.md). -- To get started, see [Mount Blob storage by using the Network File System (NFS) 3.0 protocol (preview)](network-file-system-protocol-support-how-to.md).
+- To get started, see [Mount Blob storage by using the Network File System (NFS) 3.0 protocol (preview)](network-file-system-protocol-support-how-to.md).
storage Network File System Protocol Support https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/blobs/network-file-system-protocol-support.md
The status of items that appear in this tables will change over time as support
| [Append blobs](storage-blobs-introduction.md#blobs) | ✔️ | ⛔ | [Page blobs](storage-blobs-introduction.md#blobs) | ⛔ | ⛔ | | [Azure Active Directory (AD) security](../common/storage-auth-aad.md?toc=/azure/storage/blobs/toc.json) | ⛔ | ⛔ | [Encryption scopes](encryption-scope-overview.md) | ⛔ | ⛔ | | [Object replication for block blobs](object-replication-overview.md) | ⛔ | ⛔ | [Customer-managed account failover](../common/storage-disaster-recovery-guidance.md?toc=/azure/storage/blobs/toc.json) | ⛔ | ⛔ |
+| [Blob storage events](storage-blob-event-overview.md) | ⛔ | ⛔ |
## Known issues
storage Storage Custom Domain Name https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/blobs/storage-custom-domain-name.md
The host name is the storage endpoint URL without the protocol identifier and th
1. In the [Azure portal](https://portal.azure.com), go to your storage account.
-2. In the menu pane, under **Settings**, select **Properties**.
+2. In the menu pane, under **Settings**, select **Endpoints**.
3. Copy the value of the **Blob service** endpoint or the **Static website** endpoint to a text file.
Create a CNAME record to point to your host name. A CNAME record is a type of Do
1. In the [Azure portal](https://portal.azure.com), go to your storage account.
-2. In the menu pane, under **Settings**, select **Networking**.
+2. In the menu pane, under **Security + networking**, select **Networking**.
3. In the **Networking** page, choose the **Custom domain** tab.

   > [!NOTE]
   > This option does not appear in accounts that have the hierarchical namespace feature enabled. For those accounts, use either PowerShell or the Azure CLI to complete this step.
-3. In the **Domain name** text box, enter the name of your custom domain, including the subdomain
+3. In the **Domain name** text box, enter the name of your custom domain, including the subdomain.
For example, if your domain is *contoso.com* and your subdomain alias is *www*, enter `www.contoso.com`. If your subdomain is *photos*, enter `photos.contoso.com`.
The host name is the storage endpoint URL without the protocol identifier and th
1. In the [Azure portal](https://portal.azure.com), go to your storage account.
-2. In the menu pane, under **Settings**, select **Properties**.
+2. In the menu pane, under **Settings**, select **Endpoints**.
3. Copy the value of the **Blob service** endpoint or the **Static website** endpoint to a text file.
When you pre-register your custom domain with Azure, you permit Azure to recogni
1. In the [Azure portal](https://portal.azure.com), go to your storage account.
-2. In the menu pane, under **Settings**, select **Networking**.
+2. In the menu pane, under **Security + networking**, select **Networking**.
3. In the **Networking** page, choose the **Custom domain** tab.

   > [!NOTE]
   > This option does not appear in accounts that have the hierarchical namespace feature enabled. For those accounts, use either PowerShell or the Azure CLI to complete this step.
-3. In the **Domain name** text box, enter the name of your custom domain, including the subdomain
+3. In the **Domain name** text box, enter the name of your custom domain, including the subdomain.
For example, if your domain is *contoso.com* and your subdomain alias is *www*, enter `www.contoso.com`. If your subdomain is *photos*, enter `photos.contoso.com`.
To remove a custom domain mapping, deregister the custom domain. Use one of the
1. In the [Azure portal](https://portal.azure.com), go to your storage account.
-2. In the menu pane, under **Settings**, select **Networking**.
+2. In the menu pane, under **Security + networking**, select **Networking**.
3. In the **Networking** page, choose the **Custom domain** tab.
storage Storage Lifecycle Management Concepts https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/blobs/storage-lifecycle-management-concepts.md
There are two ways to add a policy through the Azure portal.
1. In the Azure portal, search for and select your storage account.
-1. Under **Blob service**, select **Lifecycle Management** to view or change your rules.
+1. Under **Data management**, select **Lifecycle Management** to view or change your rules.
1. Select the **List View** tab.
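For reference, here's a minimal sketch of the kind of lifecycle rule such a policy contains; the rule name and the 30-day threshold are hypothetical, and a full policy can add more actions and filters:

```json
{
  "rules": [
    {
      "enabled": true,
      "name": "sample-move-to-cool",
      "type": "Lifecycle",
      "definition": {
        "actions": {
          "baseBlob": {
            "tierToCool": { "daysAfterModificationGreaterThan": 30 }
          }
        },
        "filters": { "blobTypes": [ "blockBlob" ] }
      }
    }
  ]
}
```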
storage Storage Account Keys Manage https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/common/storage-account-keys-manage.md
You can view and copy your account access keys with the Azure portal, PowerShell
To view and copy your storage account access keys or connection string from the Azure portal: 1. Navigate to your storage account in the [Azure portal](https://portal.azure.com).
-1. Under **Settings**, select **Access keys**. Your account access keys appear, as well as the complete connection string for each key.
+1. Under **Security + networking**, select **Access keys**. Your account access keys appear, as well as the complete connection string for each key.
1. Locate the **Key** value under **key1**, and click the **Copy** button to copy the account key.
1. Alternately, you can copy the entire connection string. Find the **Connection string** value under **key1**, and click the **Copy** button to copy the connection string.
To rotate your storage account access keys in the Azure portal:
1. Update the connection strings in your application code to reference the secondary access key for the storage account. 1. Navigate to your storage account in the [Azure portal](https://portal.azure.com).
-1. Under **Settings**, select **Access keys**.
+1. Under **Security + networking**, select **Access keys**.
1. To regenerate the primary access key for your storage account, select the **Regenerate** button next to the primary access key.
1. Update the connection strings in your code to reference the new primary access key.
1. Regenerate the secondary access key in the same manner.
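If you prefer to script key rotation instead, here's a minimal Python sketch using the azure-mgmt-storage management SDK; the subscription, resource group, and account names are placeholders:

```python
# A minimal sketch of listing and regenerating account keys with the
# Azure management SDK. All names are placeholders.
from azure.identity import DefaultAzureCredential
from azure.mgmt.storage import StorageManagementClient

client = StorageManagementClient(DefaultAzureCredential(), "<subscription-id>")

keys = client.storage_accounts.list_keys("<resource-group>", "<account-name>")
for key in keys.keys:
    print(key.key_name)

# Regenerate the primary key after clients have moved to the secondary key.
client.storage_accounts.regenerate_key(
    "<resource-group>", "<account-name>", {"key_name": "key1"})
```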
storage Storage Auth Aad Rbac Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/common/storage-auth-aad-rbac-portal.md
Before you assign a role to a security principal, be sure to consider the scope
The procedure shown here assigns a role scoped to a container, but you can follow the same steps to assign a role scoped to a queue:
-1. In the [Azure portal](https://portal.azure.com), go to your storage account and display the **Overview** for the account.
-1. Under Services, select **Blobs**.
+1. In the [Azure portal](https://portal.azure.com), under **Data storage**, select **Blob containers**.
1. Locate the container for which you want to assign a role, and display the container's settings.
1. Select **Access control (IAM)** to display access control settings for the container. Select the **Role assignments** tab to see the list of role assignments.

   :::image type="content" source="media/storage-auth-aad-rbac-portal/portal-access-control-container.png" alt-text="Screenshot showing container access control settings":::
-1. Click the **Add role assignment** button to add a new role.
+1. Click **Add**, and then **Add role assignment** to add a new role.
1. In the **Add role assignment** window, select the Azure Storage role that you want to assign. Then search to locate the security principal to which you want to assign that role.

   :::image type="content" source="media/storage-auth-aad-rbac-portal/add-rbac-role.png" alt-text="Screenshot showing how to assign an Azure role":::
If your users need to be able to access blobs in the Azure portal, then assign t
Follow these steps to assign the **Reader** role so that a user can access blobs from the Azure portal. In this example, the assignment is scoped to the storage account: 1. In the [Azure portal](https://portal.azure.com), navigate to your storage account.
-1. Select **Access control (IAM)** to display the access control settings for the storage account. Select the **Role assignments** tab to see the list of role assignments.
-1. In the **Add role assignment** window, select the **Reader** role.
-1. From the **Assign access to** field, select **Azure AD user, group, or service principal**.
-1. Search to locate the security principal to which you want to assign the role.
-1. Save the role assignment.
+2. Select **Access control (IAM)** to display the access control settings for the storage account. Select the **Role assignments** tab to see the list of role assignments.
+3. Click **Add**, and then **Add role assignment** to add a new role.
+4. In the **Add role assignment** window, select the **Reader** role.
+5. From the **Assign access to** field, select **Azure AD user, group, or service principal**.
+6. Search to locate the security principal to which you want to assign the role.
+7. Save the role assignment.
Assigning the **Reader** role is necessary only for users who need to access blobs or queues using the Azure portal.
storage Storage Disaster Recovery Guidance https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/common/storage-disaster-recovery-guidance.md
Previously updated : 03/22/2021 Last updated : 05/07/2021 -
Write access is restored for geo-redundant accounts once the DNS entry has been
> [!IMPORTANT] > After the failover is complete, the storage account is configured to be locally redundant in the new primary endpoint. To resume replication to the new secondary, configure the account for geo-redundancy again. >
-> Keep in mind that converting an LRS account to use geo-redundancy incurs a cost. This cost applies to updating the storage account in the new primary region after a failover.
+> Keep in mind that converting a locally redundant storage account to use geo-redundancy incurs both cost and time. For more information, see [Important implications of account failover](storage-initiate-account-failover.md#important-implications-of-account-failover).
### Anticipate data loss
Keep in mind that any data stored in a temporary disk is lost when the VM is shu
The following features and services are not supported for account failover: - Azure File Sync does not support storage account failover. Storage accounts containing Azure file shares being used as cloud endpoints in Azure File Sync should not be failed over. Doing so will cause sync to stop working and may also cause unexpected data loss in the case of newly tiered files.-- ADLS Gen2 storage accounts (accounts that have hierarchical namespace enabled) are not supported at this time.
+- Storage accounts that have hierarchical namespace enabled (such as for Data Lake Storage Gen2) are not supported at this time.
- A storage account containing premium block blobs cannot be failed over. Storage accounts that support premium block blobs do not currently support geo-redundancy. - A storage account containing any [WORM immutability policy](../blobs/storage-blob-immutable-storage.md) enabled containers cannot be failed over. Unlocked/locked time-based retention or legal hold policies prevent failover in order to maintain compliance.
storage Storage Initiate Account Failover https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/common/storage-initiate-account-failover.md
Previously updated : 12/29/2020 Last updated : 05/07/2021 -
For more information about Azure Storage redundancy, see [Azure Storage redundan
Keep in mind that the following features and services are not supported for account failover: - Azure File Sync does not support storage account failover. Storage accounts containing Azure file shares being used as cloud endpoints in Azure File Sync should not be failed over. Doing so will cause sync to stop working and may also cause unexpected data loss in the case of newly tiered files.-- ADLS Gen2 storage accounts (accounts that have hierarchical namespace enabled) are not supported at this time.
+- Storage accounts that have hierarchical namespace enabled (such as for Data Lake Storage Gen2) are not supported at this time.
- A storage account containing premium block blobs cannot be failed over. Storage accounts that support premium block blobs do not currently support geo-redundancy. - A storage account containing any [WORM immutability policy](../blobs/storage-blob-immutable-storage.md) enabled containers cannot be failed over. Unlocked/locked time-based retention or legal hold policies prevent failover in order to maintain compliance.
To estimate the extent of likely data loss before you initiate a failover, check
The time it takes to failover after initiation can vary though typically less than one hour.
-After the failover, your storage account type is automatically converted to locally redundant storage (LRS) in the new primary region. You can re-enable geo-redundant storage (GRS) or read-access geo-redundant storage (RA-GRS) for the account. Note that converting from LRS to GRS or RA-GRS incurs an additional cost. For additional information, see [Bandwidth Pricing Details](https://azure.microsoft.com/pricing/details/bandwidth/).
+After the failover, your storage account type is automatically converted to locally redundant storage (LRS) in the new primary region. You can re-enable geo-redundant storage (GRS) or read-access geo-redundant storage (RA-GRS) for the account. Note that converting from LRS to GRS or RA-GRS incurs an additional cost. The cost is due to the network egress charges to re-replicate the data to the new secondary region. For additional information, see [Bandwidth Pricing Details](https://azure.microsoft.com/pricing/details/bandwidth/).
-After you re-enable GRS for your storage account, Microsoft begins replicating the data in your account to the new secondary region. Replication time is dependent on the amount of data being replicated.
+After you re-enable GRS for your storage account, Microsoft begins replicating the data in your account to the new secondary region. Replication time depends on many factors, which include:
+- The number and size of the objects in the storage account. Replicating many small objects can take longer than replicating fewer, larger objects.
+- The available resources for background replication, such as CPU, memory, disk, and WAN capacity. Live traffic takes priority over geo replication.
+- If using Blob storage, the number of snapshots per blob.
+- If using Table storage, the [data partitioning strategy](/rest/api/storageservices/designing-a-scalable-partitioning-strategy-for-azure-table-storage). The replication process can't scale beyond the number of partition keys that you use.
+
## Next steps - [Disaster recovery and storage account failover](storage-disaster-recovery-guidance.md)
storage Storage Files Migration Nas Hybrid Databox https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/files/storage-files-migration-nas-hybrid-databox.md
Title: On-premises NAS migration to Azure File Sync via Azure DataBox
-description: Learn how to migrate files from an on-premises Network Attached Storage (NAS) location to a hybrid cloud deployment with Azure File Sync via Azure DataBox.
+ Title: On-premises NAS migration to Azure File Sync via Data Box
+description: Learn how to migrate files from an on-premises Network Attached Storage (NAS) location to a hybrid cloud deployment by using Azure File Sync via Azure Data Box.
-# Use DataBox to migrate from Network Attached Storage (NAS) to a hybrid cloud deployment with Azure File Sync
+# Use Data Box to migrate from Network Attached Storage (NAS) to a hybrid cloud deployment by using Azure File Sync
-This migration article is one of several involving the keywords NAS, Azure File Sync, and Azure DataBox. Check if this article applies to your scenario:
+This migration article is one of several that apply to the keywords NAS, Azure File Sync, and Azure Data Box. Check if this article applies to your scenario:
> [!div class="checklist"] > * Data source: Network Attached Storage (NAS)
-> * Migration route: NAS &rArr; DataBox &rArr; Azure file share &rArr; sync with Windows Server
-> * Caching files on-premises: Yes, the final goal is an Azure File Sync deployment.
+> * Migration route: NAS &rArr; Data Box &rArr; Azure file share &rArr; sync with Windows Server
+> * Caching files on-premises: Yes, the final goal is an Azure File Sync deployment
If your scenario is different, look through the [table of migration guides](storage-files-migration-overview.md#migration-guides).
-Azure File Sync works on Direct Attached Storage (DAS) locations and does not support sync to Network Attached Storage (NAS) locations.
-This fact makes a migration of your files necessary and this article guides you through the planning and execution of such a migration.
+Azure File Sync works on Direct Attached Storage (DAS) locations. It doesn't support sync to Network Attached Storage (NAS) locations.
+So you need to migrate your files. This article guides you through the planning and implementation of that migration.
## Migration goals
-The goal is to move the shares that you have on your NAS appliance to a Windows Server. Then utilize Azure File Sync for a hybrid cloud deployment. This migration needs to be done in a way that guarantees the integrity of the production data and availability during the migration. The latter requires keeping downtime to a minimum, so that it can fit into or only slightly exceed regular maintenance windows.
+The goal is to move the shares that you have on your NAS appliance to Windows Server. You'll then use Azure File Sync for a hybrid cloud deployment. This migration needs to be done in a way that guarantees the integrity of the production data and availability during the migration. The latter requires keeping downtime to a minimum so that it meets or only slightly exceeds regular maintenance windows.
## Migration overview
-The migration process consists of several phases. You'll need to deploy Azure storage accounts and file shares, deploy an on-premises Windows Server, configure Azure File Sync, migrate using RoboCopy, and finally, do the cut-over. The following sections describe the phases of the migration process in detail.
+The migration process consists of several phases. You'll need to:
+- Deploy Azure storage accounts and file shares.
+- Deploy an on-premises computer running Windows Server.
+- Configure Azure File Sync.
+- Migrate files by using Robocopy.
+- Do the cutover.
+
+The following sections describe the phases of the migration process in detail.
> [!TIP]
-> If you are returning to this article, use the navigation on the right side to jump to the migration phase where you left off.
+> If you're returning to this article, use the navigation on the right side of the screen to jump to the migration phase where you left off.
-## Phase 1: Identify how many Azure file shares you need
+## Phase 1: Determine how many Azure file shares you need
[!INCLUDE [storage-files-migration-namespace-mapping](../../../includes/storage-files-migration-namespace-mapping.md)]
In this phase, consult the mapping table from Phase 1 and use it to provision the correct number of Azure storage accounts and file shares within them.
[!INCLUDE [storage-files-migration-provision-azfs](../../../includes/storage-files-migration-provision-azure-file-share.md)]
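As a rough sketch of this provisioning step with the Azure CLI (the account name, share name, location, and quota below are placeholders, not prescriptive values):

```bash
# Placeholder names and values.
az storage account create \
    --name mystorageaccount \
    --resource-group myresourcegroup \
    --location westus2 \
    --kind StorageV2 \
    --sku Standard_LRS

# Quota is in GiB.
az storage share-rm create \
    --storage-account mystorageaccount \
    --resource-group myresourcegroup \
    --name myshare \
    --quota 1024
```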
-## Phase 3: Determine how many Azure DataBox appliances you need
+## Phase 3: Determine how many Azure Data Box appliances you need
-Start this step only, when you have completed the previous phase. Your Azure storage resources (storage accounts and file shares) should be created at this time. During your DataBox order, you need to specify into which storage accounts the DataBox is moving data.
+Start this step only after you've finished the previous phase. Your Azure storage resources (storage accounts and file shares) should be created at this time. When you order your Data Box, you need to specify the storage accounts into which the Data Box is moving data.
-In this phase, you need to map the results of the migration plan from the previous phase to the limits of the available DataBox options. These considerations will help you make a plan for which DataBox options you should choose and how many of them you will need to move your NAS shares to Azure file shares.
+In this phase, you need to map the results of the migration plan from the previous phase to the limits of the available Data Box options. These considerations will help you make a plan for which Data Box options to choose and how many of them you'll need to move your NAS shares to Azure file shares.
-To determine how many devices of which type you need, consider these important limits:
+To determine how many devices you need and their types, consider these important limits:
-* Any Azure DataBox can move data into up to 10 storage accounts.
-* Each DataBox option comes at their own usable capacity. See [DataBox options](#databox-options).
+* Any Azure Data Box appliance can move data into up to 10 storage accounts.
+* Each Data Box option comes with its own usable capacity. See [Data Box options](#data-box-options).
-Consult your migration plan for the number of storage accounts you have decided to create and the shares in each one. Then look at the size of each of the shares on your NAS. Combining this information will allow you to optimize and decide which appliance should be sending data to which storage accounts. You can have two DataBox devices move files into the same storage account, but don't split content of a single file share across 2 DataBoxes.
+Consult your migration plan to find the number of storage accounts you've decided to create and the shares in each one. Then look at the size of each of the shares on your NAS. Combining this information will allow you to optimize and decide which appliance should be sending data to which storage accounts. Two Data Box devices can move files into the same storage account, but don't split content of a single file share across two Data Boxes.
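As a hypothetical example: if one storage account will receive a 50-TiB share and a 45-TiB share, a single 80-TiB Data Box can't hold both. You'd order two Data Box devices, copy one share to each, and have both devices target the same storage account.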
-### DataBox options
+### Data Box options
-For a standard migration, one or a combination of these three DataBox options should be chosen:
+For a standard migration, choose one or a combination of these Data Box options:
-* DataBox Disks
- Microsoft will send you one and up to five SSD disks with a capacity of 8 TiB each, for a maximum total of 40 TiB. The usable capacity is about 20% less, due to encryption and file system overhead. For more information, see [DataBox Disks documentation](../../databox/data-box-disk-overview.md).
-* DataBox
- This is the most common option. A ruggedized DataBox appliance, that works similar to a NAS, will be shipped to you. It has a usable capacity of 80 TiB. For more information, see [DataBox documentation](../../databox/data-box-overview.md).
-* DataBox Heavy
- This option features a ruggedized DataBox appliance on wheels, that works similar to a NAS, with a capacity of 1 PiB. The usable capacity is about 20% less, due to encryption and file system overhead. For more information, see [DataBox Heavy documentation](../../databox/data-box-heavy-overview.md).
+* **Data Box Disk**.
+ Microsoft will send you between one and five SSD disks that have a capacity of 8 TiB each, for a maximum total of 40 TiB. The usable capacity is about 20 percent less because of encryption and file-system overhead. For more information, see [Data Box Disk documentation](../../databox/data-box-disk-overview.md).
+* **Data Box**.
+ This option is the most common one. Microsoft will send you a ruggedized Data Box appliance that works similar to a NAS. It has a usable capacity of 80 TiB. For more information, see [Data Box documentation](../../databox/data-box-overview.md).
+* **Data Box Heavy**.
+ This option features a ruggedized Data Box appliance on wheels that works similar to a NAS. It has a capacity of 1 PiB. The usable capacity is about 20 percent less because of encryption and file-system overhead. For more information, see [Data Box Heavy documentation](../../databox/data-box-heavy-overview.md).
-## Phase 4: Provision a suitable Windows Server on-premises
+## Phase 4: Provision a suitable Windows Server instance on-premises
-While you wait for your Azure DataBox(es) to arrive, you can already start reviewing the needs of one or more Windows Servers you will be using with Azure File Sync.
+While you wait for your Azure Data Box devices to arrive, you can start reviewing the needs of one or more Windows Server instances you'll be using with Azure File Sync.
-* Create a Windows Server 2019 - at a minimum 2012R2 - as a virtual machine or physical server. A Windows Server fail-over cluster is also supported.
-* Provision or add Direct Attached Storage (DAS as compared to NAS, which is not supported).
+* Create a Windows Server 2019 instance (at a minimum, Windows Server 2012 R2) as a virtual machine or physical server. A Windows Server failover cluster is also supported.
+* Provision or add Direct Attached Storage. (DAS, as opposed to NAS, which isn't supported.)
-The resource configuration (compute and RAM) of the Windows Server you deploy depends mostly on the number of items (files and folders) you will be syncing. A higher performance configuration is recommended if you have any concerns.
+The resource configuration (compute and RAM) of the Windows Server instance you deploy depends mostly on the number of files and folders you'll be syncing. We recommend a higher performance configuration if you have any concerns.
-[Learn how to size a Windows Server based on the number of items (files and folders) you need to sync.](../file-sync/file-sync-planning.md#recommended-system-resources)
+[Learn how to size a Windows Server instance based on the number of items you need to sync.](../file-sync/file-sync-planning.md#recommended-system-resources)
> [!NOTE]
-> The previously linked article presents a table with a range for server memory (RAM). You can orient towards the smaller number for your server but expect that initial sync can take significantly more time.
+> The previously linked article includes a table with a range for server memory (RAM). You can use numbers at the lower end of the range for your server, but expect the initial sync to take significantly longer.
-## Phase 5: Copy files onto your DataBox
+## Phase 5: Copy files onto your Data Box
-When your DataBox arrives, you need to set up your DataBox in the line of sight to your NAS appliance. Follow the setup documentation for the DataBox type you ordered.
+When your Data Box arrives, you need to set it up in the line of sight to your NAS appliance. Follow the setup documentation for the type of Data Box you ordered:
-* [Set up Data Box](../../databox/data-box-quickstart-portal.md)
-* [Set up Data Box Disk](../../databox/data-box-disk-quickstart-portal.md)
-* [Set up Data Box Heavy](../../databox/data-box-heavy-quickstart-portal.md)
+* [Set up Data Box](../../databox/data-box-quickstart-portal.md).
+* [Set up Data Box Disk](../../databox/data-box-disk-quickstart-portal.md).
+* [Set up Data Box Heavy](../../databox/data-box-heavy-quickstart-portal.md).
-Depending on the DataBox type, there maybe DataBox copy tools available to you. At this point, they are not recommended for migrations to Azure file shares as they do not copy your files with full fidelity to the DataBox. Use RoboCopy instead.
+Depending on the type of Data Box, Data Box copy tools might be available. At this point, we don't recommend them for migrations to Azure file shares because they don't copy your files to the Data Box with full fidelity. Use Robocopy instead.
-When your DataBox arrives, it will have pre-provisioned SMB shares available for each storage account you specified at the time of ordering it.
+When your Data Box arrives, it will have pre-provisioned SMB shares available for each storage account you specified when you ordered it.
* If your files go into a premium Azure file share, there will be one SMB share per premium "File storage" storage account.
-* If your files go into a standard storage account, there will be three SMB shares per standard (GPv1 and GPv2) storage account. Only the file shares ending with `_AzFiles` are relevant for your migration. Ignore any block and page blob shares.
+* If your files go into a standard storage account, there will be three SMB shares per standard (GPv1 and GPv2) storage account. Only the file shares that end with `_AzFiles` are relevant for your migration. Ignore any block and page blob shares.
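As an illustration, mounting one of these shares from the server that runs your copy jobs looks roughly like the following. The device IP, storage account name, and user name are placeholders; you get the real values from the Data Box local web UI:

```console
net use Z: \\<device-ip>\<storage-account>_AzFiles /user:<share-user-name>
```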
-Follow the steps in the Azure DataBox documentation:
+Follow the steps in the Azure Data Box documentation:
-1. [Connect to Data Box](../../databox/data-box-deploy-copy-data.md)
-1. Copy data to Data Box
-1. [Prepare your DataBox for departure to Azure](../../databox/data-box-deploy-picked-up.md)
+1. [Connect to Data Box](../../databox/data-box-deploy-copy-data.md).
+1. Copy data to Data Box.
+1. [Prepare your Data Box for upload to Azure](../../databox/data-box-deploy-picked-up.md).
-The linked DataBox documentation specifies a RoboCopy command. However, the command is not suitable to preserve the full file and folder fidelity. Use this command instead:
+The linked Data Box documentation specifies a Robocopy command. That command isn't suitable for preserving the full file and folder fidelity. Use this command instead:
[!INCLUDE [storage-files-migration-robocopy](../../../includes/storage-files-migration-robocopy.md)] ## Phase 6: Deploy the Azure File Sync cloud resource
-Before continuing with this guide, wait until all of your files have arrived in the correct Azure file shares. The process of shipping and ingesting DataBox data will take time.
+Before you continue with this guide, wait until all of your files have arrived in the correct Azure file shares. The process of shipping and ingesting Data Box data will take time.
[!INCLUDE [storage-files-migration-deploy-afs-sss](../../../includes/storage-files-migration-deploy-azure-file-sync-storage-sync-service.md)]
Before continuing with this guide, wait until all of your files have arrived in
[!INCLUDE [storage-files-migration-deploy-afs-agent](../../../includes/storage-files-migration-deploy-azure-file-sync-agent.md)]
-## Phase 8: Configure Azure File Sync on the Windows Server
+## Phase 8: Configure Azure File Sync on the Windows Server instance
-Your registered on-premises Windows Server must be ready and connected to the internet for this process.
+Your registered on-premises Windows Server instance must be ready and connected to the internet for this process.
[!INCLUDE [storage-files-migration-configure-sync](../../../includes/storage-files-migration-configure-sync.md)]
-Turn on the cloud tiering feature and select "Namespace only" in the initial download section.
+Turn on the cloud tiering feature and select **Namespace only** in the initial download section.
> [!IMPORTANT]
-> Cloud tiering is the AFS feature that allows the local server to have less storage capacity than is stored in the cloud, yet have the full namespace available. Locally interesting data is also cached locally for fast access performance. Cloud tiering is an optional feature per Azure File Sync "server endpoint". You need to use this feature if you do not have enough local disk capacity on the Windows Server to hold all cloud data and if you want to avoid downloading all data from the cloud!
+> Cloud tiering is the Azure File Sync feature that allows the local server to have less storage capacity than is stored in the cloud but have the full namespace available. Locally interesting data is also cached locally for fast access performance. Cloud tiering is optional. You can set it individually for each Azure File Sync server endpoint. You need to use this feature if you don't have enough local disk capacity on the Windows Server instance to hold all cloud data and you want to avoid downloading all data from the cloud.
-Repeat the steps of sync group creation and addition of the matching server folder as a server endpoint for all Azure file shares / server locations, that need to be configured for sync. Wait until sync of the namespace is complete. The following section will detail how you can ensure that.
+For all Azure file shares / server locations that you need to configure for sync, repeat the steps to create sync groups and to add the matching server folders as server endpoints. Wait until the sync of the namespace is complete. The following section will explain how you can ensure the sync is complete.
> [!NOTE]
-> After the creation of a server endpoint, sync is working. However, sync needs to enumerate (discover) the files and folders you moved via DataBox into the Azure file share. Depending on the size of the namespace, this can take significant time before the namespace of the cloud starts to appear on the server.
+> After you create a server endpoint, sync is working. But sync needs to enumerate (discover) the files and folders you moved via Data Box into the Azure file share. Depending on the size of the namespace, it can take a long time before the namespace from the cloud appears on the server.
## Phase 9: Wait for the namespace to fully appear on the server
-It's imperative that you wait with any next steps of your migration that the server has fully downloaded the namespace from the cloud share. If you start moving files too early onto the server, you can risk unnecessary uploads and even file sync conflicts.
+Before you continue with the next steps of your migration, wait until the server has fully downloaded the namespace from the cloud share. If you start moving files onto the server too early, you risk unnecessary uploads and even file sync conflicts.
-To tell if your server has completed initial download sync, open Event Viewer on your syncing Windows Server and use the Azure File Sync telemetry event log.
-The telemetry event log is located in Event Viewer under Applications and Services\Microsoft\FileSync\Agent.
+To determine if your server has completed the initial download sync, open Event Viewer on your syncing Windows Server instance and use the Azure File Sync telemetry event log.
+The telemetry event log is in Event Viewer under Applications and Services\Microsoft\FileSync\Agent.
Search for the most recent 9102 event.
-Event ID 9102 is logged once a sync session completes. In the event text there, is a field for the download sync direction. (`HResult` needs to be zero and files downloaded as well).
+Event ID 9102 is logged when a sync session completes. In the event text, there's a field for the download sync direction. (`HResult` needs to be zero, and files need to be downloaded.)
-You want to see two consecutive events of this type and content to tell that the server has finished downloading the namespace. It's OK if there are different events firing between two 9102 events.
+You want to see two consecutive events of this type, with this content, to ensure that the server has finished downloading the namespace. It's OK if there are other events between the two 9102 events.
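If you'd rather check from the command line than in Event Viewer, a query like the following lists the two most recent 9102 events. Run it in an elevated command prompt on the server; the channel name is an assumption based on the Event Viewer path described above:

```console
wevtutil qe Microsoft-FileSync-Agent/Telemetry /q:"*[System[(EventID=9102)]]" /c:2 /rd:true /f:text
```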
-## Phase 10: Catch-up RoboCopy from your NAS
+## Phase 10: Run Robocopy from your NAS
-Once your server has completed initial sync of the entire namespace from the cloud share, you can proceed with this step. It's imperative that this step is complete, before you continue with this step. Check the previous section for details.
+After your server completes the initial sync of the entire namespace from the cloud share, you can continue with this step. The initial sync must be complete before you proceed; see the previous section for details.
-In this step, you will run RoboCopy jobs to catch up your cloud shares with the latest changes on your NAS since the time you forked your shares onto the DataBox.
-This catch-up RoboCopy may finish quickly or take a while, depending on the amount of churn that happened on your NAS shares.
+In this step, you'll run Robocopy jobs to sync your cloud shares with the latest changes on your NAS that occurred since you forked your shares onto the Data Box.
+This Robocopy run might finish quickly or take a while, depending on the amount of churn that happened on your NAS shares.
> [!WARNING]
-> Due to a regressed RoboCopy behavior in Windows Server 2019, /MIR switch of RoboCopy is not compatible with a tiered target directory. You must not use Windows Server 2019 or Windows 10 client for this Phase of the migration. Use RoboCopy on an intermediate Windows Server 2016.
+> Because of regressed Robocopy behavior in Windows Server 2019, the Robocopy `/MIR` switch isn't compatible with tiered target directories. You can't use Windows Server 2019 or Windows 10 client for this phase of the migration. Use Robocopy on an intermediate Windows Server 2016 instance.
-The basic migration approach is a RoboCopy from your NAS appliance to your Windows Server, and Azure File Sync to Azure file shares.
+Here's the basic migration approach:
+ - Run Robocopy from your NAS appliance to your Windows Server instance.
+ - Use Azure File Sync to sync the Azure file shares from Windows Server.
Run the first local copy to your Windows Server target folder: 1. Identify the first location on your NAS appliance.
-1. Identify the matching folder on the Windows Server, that already has Azure File Sync configured on it.
-1. Start the copy using RoboCopy
+1. Identify the matching folder on the Windows Server instance that already has Azure File Sync configured on it.
+1. Start the copy by using Robocopy.
-The following RoboCopy command will copy only the differences (updated files and folders) from your NAS storage to your Windows Server target folder. The Windows Server will then sync them to the Azure file share(s).
+The following Robocopy command will copy only the differences (updated files and folders) from your NAS storage to your Windows Server target folder. The Windows Server instance will then sync them to the Azure file shares.
[!INCLUDE [storage-files-migration-robocopy](../../../includes/storage-files-migration-robocopy.md)]
-If you provisioned less storage on your Windows Server than your files take up on the NAS appliance, then you have configured cloud tiering. As the local Windows Server volume gets full, [cloud tiering](../file-sync/file-sync-cloud-tiering-overview.md) will kick in and tier files that have successfully synced already. Cloud tiering will generate enough space to continue the copy from the NAS appliance. Cloud tiering checks once an hour to see what has synced and to free up disk space to reach the 99% volume free space.
-It is possible, that RoboCopy needs to move numerous files, more than you have local storage for on the Windows Server. On average, you can expect RoboCopy to move a lot faster than Azure File Sync can upload your files over and tier them off your local volume. RoboCopy will fail. It is recommended that you work through the shares in a sequence that prevents that. For example, not starting RoboCopy jobs for all shares at the same time, or only moving shares that fit on the current amount of free space on the Windows Server, to mention a few. The good news is that the /MIR switch will only move deltas and once a delta has been moved, a restarted job will not need to move this file again.
+If you provisioned less storage on your Windows Server instance than your files use on the NAS appliance, you've configured cloud tiering. As the local Windows Server volume becomes full, [cloud tiering](../file-sync/file-sync-cloud-tiering-overview.md) will kick in and tier files that have already successfully synced. Cloud tiering will generate enough space to continue the copy from the NAS appliance. Cloud tiering checks once an hour to determine what has synced and to free up disk space to reach 99 percent free space on the volume.
+
+Robocopy might need to move more files than you can store locally on the Windows Server instance. You can expect Robocopy to move faster than Azure File Sync can upload your files and tier them off your local volume. In this situation, Robocopy will fail. We recommend that you work through the shares in a sequence that prevents this scenario. For example, move only shares that fit in the free space available on the Windows Server instance. Or avoid starting Robocopy jobs for all shares at the same time. The good news is that the `/MIR` switch will ensure that only deltas are moved. After a delta has been moved, a restarted job won't need to move the file again.
-### User cut-over
+### Do the cutover
-When you run the RoboCopy command for the first time, your users and applications are still accessing files on the NAS and potentially change them. It is possible, that RoboCopy has processed a directory, moves on to the next and then a user on the source location (NAS) adds, changes, or deletes a file that will now not be processed in this current RoboCopy run. This behavior is expected.
+When you run the Robocopy command for the first time, your users and applications will still be accessing files on the NAS and potentially changing them. Robocopy will process a directory and then move on to the next one. A user on the NAS might then add, change, or delete a file in a directory that Robocopy has already processed; that change won't be picked up during the current Robocopy run. This behavior is expected.
-The first run is about moving the bulk of the churned data to your Windows Server and into the cloud via Azure File Sync. This first copy can take a long time, depending on:
+The first run is about moving the bulk of the churned data to your Windows Server instance and into the cloud via Azure File Sync. This first copy can take a long time, depending on:
-* the upload bandwidth
-* the local network speed and number of how optimally the number of RoboCopy threads matches it
-* the number of items (files and folders), that need to be processed by RoboCopy and Azure File Sync
+* The upload bandwidth.
+* The local network speed and how optimally the number of Robocopy threads matches it.
+* The number of items (files and folders) that need to be processed by Robocopy and Azure File Sync.
-Once the initial run is complete, run the command again.
+After the initial run is complete, run the command again.
-A second time you run RoboCopy for the same share, it will finish faster, because it only needs to transport changes that happened since the last run. You can run repeated jobs for the same share.
+Robocopy will finish faster the second time you run it for a share. It needs to transport only changes that happened since the last run. You can run repeated jobs for the same share.
-When you consider the downtime acceptable, then you need to remove user access to your NAS-based shares. You can do that by any steps that prevent users from changing the file and folder structure and content. An example is to point your DFS-Namespace to a non-existing location or change the root ACLs on the share.
+When you consider downtime acceptable, you need to remove user access to your NAS-based shares. You can do that in any way that prevents users from changing the file and folder structure and the content. For example, you can point your DFS namespace to a location that doesn't exist or change the root ACLs on the share.
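For example, if the share root is reachable as a Windows path, a deny entry like the following blocks further changes. The path is a placeholder, and this is only one of several ways to cut off access:

```console
icacls "\\nas01\finance" /deny "Everyone:(OI)(CI)F"
```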
-Run one last RoboCopy round. It will pick up any changes, that might have been missed.
-How long this final step takes, is dependent on the speed of the RoboCopy scan. You can estimate the time (which is equal to your downtime) by measuring how long the previous run took.
+Run Robocopy one last time. It will pick up any changes that have been missed.
+How long this final step takes depends on the speed of the Robocopy scan. You can estimate the time (which is equal to your downtime) by measuring the length of the previous run.
-Create a share on the Windows Server folder and possibly adjust your DFS-N deployment to point to it. Be sure to set the same share-level permissions as on your NAS SMB share. If you had an enterprise-class domain-joined NAS, then the user SIDs will automatically match as the users exist in Active Directory and RoboCopy copies files and metadata at full fidelity. If you have used local users on your NAS, you need to re-create these users as Windows Server local users and map the existing SIDs RoboCopy moved over to your Windows Server to the SIDs of your new, Windows Server local users.
+Create a share on the Windows Server folder and possibly adjust your DFS-N deployment to point to it. Be sure to set the same share-level permissions that are on your NAS SMB share. If you had an enterprise-class, domain-joined NAS, the user SIDs will automatically match because the users are in Active Directory and Robocopy copies files and metadata at full fidelity. If you have used local users on your NAS, you need to:
+- Re-create these users as Windows Server local users.
+- Map the existing SIDs that Robocopy moved over to your Windows Server instance to the SIDs of your new Windows Server local users.
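Earlier in this step you create the share on the Windows Server folder. A minimal sketch of that with `net share`; the folder path, share name, and group are hypothetical:

```console
net share Finance=D:\shares\finance /GRANT:"CONTOSO\Finance-Users",FULL
```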
-You have finished migrating a share / group of shares into a common root or volume. (Depending on your mapping from Phase 1)
+You've finished migrating a share or group of shares into a common root or volume (depending on your mapping from Phase 1).
-You can try to run a few of these copies in parallel. We recommend processing the scope of one Azure file share at a time.
+You can try to run a few of these copies in parallel. We recommend that you process the scope of one Azure file share at a time.
-## Troubleshoot
+## Troubleshooting
-The most likely issue you can run into, is that the RoboCopy command fails with *"Volume full"* on the Windows Server side. Cloud tiering acts once every hour to evacuate content from the local Windows Server disk, that has synced. Its goal is to reach your 99% free space on the volume.
+The most common problem is for the Robocopy command to fail with "Volume full" on the Windows Server side. Cloud tiering acts once every hour to evacuate content from the local Windows Server disk that has synced. Its goal is to reach 99 percent free space on the volume.
-Let sync progress and cloud tiering free up disk space. You can observe that in File Explorer on your Windows Server.
+Let sync progress and cloud tiering free up disk space. You can observe that in File Explorer on your Windows Server instance.
-When your Windows Server has sufficient available capacity, rerunning the command will resolve the problem. Nothing breaks when you get into this situation and you can move forward with confidence. Inconvenience of running the command again is the only consequence.
+When your Windows Server instance has enough available capacity, run the command again to resolve the problem. Nothing breaks in this situation. You can move forward with confidence. The inconvenience of running the command again is the only consequence.
-Check the link in the following section for troubleshooting Azure File Sync issues.
+To troubleshoot Azure File Sync problems, see the article listed in the next section.
## Next steps
-There is more to discover about Azure file shares and Azure File Sync. The following articles help understand advanced options, best practices, and also contain troubleshooting help. These articles link to [Azure file share documentation](storage-files-introduction.md) as appropriate.
+There's more to discover about Azure file shares and Azure File Sync. The following articles will help you understand advanced options and best practices. They also provide help with troubleshooting. These articles contain links to the [Azure file share documentation](storage-files-introduction.md) where appropriate.
* [Migration overview](storage-files-migration-overview.md) * [Planning for an Azure File Sync deployment](../file-sync/file-sync-planning.md)
storage Storage Java How To Use File Storage https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/files/storage-java-how-to-use-file-storage.md
Learn how to upload a file from local storage.
# [Azure Java SDK v12](#tab/java)
-The following code uploads a local file to Azure File storage by calling the [ShareFileClient.uploadFromFile](/java/api/com.azure.storage.file.share.sharefileclient.uploadfromfile) method. The following example method returns a `Boolean` value indicating if it successfully uploaded the specified file.
+The following code uploads a local file to Azure Files by calling the [ShareFileClient.uploadFromFile](/java/api/com.azure.storage.file.share.sharefileclient.uploadfromfile) method. The following example method returns a `Boolean` value indicating if it successfully uploaded the specified file.
:::code language="java" source="~/azure-storage-snippets/files/howto/java/java-v12/files-howto-v12/src/main/java/com/files/howto/App.java" id="Snippet_uploadFile":::
storage Table Storage Design Patterns https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/tables/table-storage-design-patterns.md
This article describes some patterns appropriate for use with Table service solutions.
The pattern map above highlights some relationships between patterns (blue) and anti-patterns (orange) that are documented in this guide. There are many other patterns that are worth considering. For example, one of the key scenarios for Table Service is to use the [Materialized View Pattern](/previous-versions/msp-n-p/dn589782(v=pandp.10)) from the [Command Query Responsibility Segregation (CQRS)](/previous-versions/msp-n-p/jj554200(v=pandp.10)) pattern. ## Intra-partition secondary index pattern
-Store multiple copies of each entity using different **RowKey** values (in the same partition) to enable fast and efficient lookups and alternate sort orders by using different **RowKey** values. Updates between copies can be kept consistent using EGTs.
+Store multiple copies of each entity using different **RowKey** values (in the same partition) to enable fast and efficient lookups and alternate sort orders by using different **RowKey** values. Updates between copies can be kept consistent using entity group transactions (EGTs).
### Context and problem The Table service automatically indexes entities using the **PartitionKey** and **RowKey** values. This enables a client application to retrieve an entity efficiently using these values. For example, using the table structure shown below, a client application can use a point query to retrieve an individual employee entity by using the department name and the employee ID (the **PartitionKey** and **RowKey** values). A client can also retrieve entities sorted by employee ID within each department.
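As a hedged sketch of the resulting data layout (the account, table, and field values are hypothetical, and the Azure CLI is used here only for illustration; keeping both copies consistent in one atomic operation requires an EGT through one of the Table SDKs):

```bash
# First copy: keyed by employee ID for lookups and sorting by ID.
az storage entity insert --account-name mystorageaccount --table-name Employees \
    --entity PartitionKey=Sales RowKey=empid_000123 \
    FirstName=Jon LastName=Jones Email=jonesj@contoso.com

# Second copy: same partition, keyed by email alias for lookups by email.
az storage entity insert --account-name mystorageaccount --table-name Employees \
    --entity PartitionKey=Sales RowKey=email_jonesj \
    FirstName=Jon LastName=Jones EmployeeId=000123
```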
The client application can call multiple asynchronous methods like this one, and
- [Modeling relationships](table-storage-design-modeling.md) - [Design for querying](table-storage-design-for-query.md) - [Encrypting table data](table-storage-design-encrypt-data.md)-- [Design for data modification](table-storage-design-for-modification.md)
+- [Design for data modification](table-storage-design-for-modification.md)
synapse-analytics Get Started Knowledge Center https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/get-started-knowledge-center.md
There are three items in this section:
1. Scroll to the first query (lines 28 to 32) and select the query text. 1. Click Run. It will run only the code you've selected.
-## Gallery: A collectiopn of sample data sets and sample code
+## Gallery: A collection of sample datasets and sample code
1. Go to the **Knowledge Center**, click **Browse gallery**. 1. Select the **SQL scripts** tab at the top.
synapse-analytics Get Started Visualize Power Bi https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/get-started-visualize-power-bi.md
Title: 'Tutorial: Get started with Azure Synapse Analytics - visualize workspace data with Power BI'
-description: In this tutorial, you'll learn how to create a Power BI workspace, link your Azure Synapse workspace, and create a Power BI data set that utilizes data in the Azure Synapse workspace.
+description: In this tutorial, you'll learn how to use Power BI to visualize data in Azure Synapse Analytics.
virtual-desktop Agent Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-desktop/agent-overview.md
The Windows Virtual Desktop agent is initially installed in one of two ways. If
The Windows Virtual Desktop service updates the agent whenever an update becomes available. Agent updates can include new functionality or fixes for previous issues. You must always have the latest stable version of the agent installed so your VMs don't lose connectivity or security. Once the initial version of the Windows Virtual Desktop agent is installed, the agent regularly queries the Windows Virtual Desktop service to determine if thereΓÇÖs a newer version of the agent, stack, or monitoring component available. If a newer version of any of the components has already been deployed, the updated component is automatically installed by the flighting system.
-New versions of the agent are deployed at regular intervals in weeklong periods to all Azure subscriptions. These update periods are called "flights." When a flight happens, you may see VMs in your host pool receive the agent update at different times. All VM agents in all subscriptions will be updated by the end of the deployment period. The Windows Virtual Desktop flighting system enhances the reliability of the service by ensuring the stability and quality of the agent update.
+New versions of the agent are deployed at regular intervals in five-day periods to all Azure subscriptions. These update periods are called "flights". It takes 24 hours for all VMs in a single broker region to receive the agent update in a flight. Because of this, when a flight happens, you may see VMs in your host pool receive the agent update at different times. Also, if the VMs are in different regions, they might update on different days in the five-day period. The flight will update all VM agents in all subscriptions by the end of the deployment period. The Windows Virtual Desktop flighting system enhances service reliability by ensuring the stability and quality of the agent update.
Other important things you should keep in mind:
+- The agent update isn't connected to Windows Virtual Desktop infrastructure build updates. When the Windows Virtual Desktop infrastructure updates, that doesn't mean that the agent has updated along with it.
- Because VMs in your host pool may receive agent updates at different times, you'll need to be able to tell the difference between flighting issues and failed agent updates. If you go to the event logs for your VM at **Event Viewer** > **Windows Logs** > **Application** and see an event labeled "ID 3277," the agent update didn't work. If you don't see that event, the VM is in a different flight and will be updated later.
- When the Geneva Monitoring agent updates to the latest version, the old GenevaTask task is located and disabled before a new task is created for the new monitoring agent. The earlier version of the monitoring agent isn't deleted in case the most recent version has a problem that requires reverting to the earlier version to fix. If the latest version has a problem, the old monitoring agent will be re-enabled to continue delivering monitoring data. All versions of the monitoring agent that are earlier than the last one you installed before the update will be deleted from your VM.
- Your VM keeps three versions of the side-by-side stack at a time. This allows for quick recovery if something goes wrong with the update. The earliest version of the stack is removed from the VM whenever the stack updates.
virtual-desktop App Attach File Share https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-desktop/app-attach-file-share.md
MSIX app attach doesn't have any dependencies on the type of storage fabric the file share uses.
## Performance requirements
-MSIX app attach image size limits for your system depend on the storage type you're using to store the VHD or VHDx files, as well as the size limitations of the VHD, VHSD or CIM files and the file system.
+MSIX app attach image size limits for your system depend on the storage type you're using to store the VHD or VHDx files, as well as the size limitations of the VHD, VHDX or CIM files and the file system.
The following table gives an example of how many resources a single 1 GB MSIX image with one MSIX app inside of it requires for each VM:
virtual-desktop Troubleshoot Agent https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-desktop/troubleshoot-agent.md
To resolve this issue:
8. Under **ClusterSettings**, find **SessionDirectoryListener** and make sure its data value is **rdp-sxs...**. 9. If **SessionDirectoryListener** isn't set to **rdp-sxs...**, you'll need to follow the steps in the [Uninstall the agent and boot loader](#step-1-uninstall-all-agent-boot-loader-and-stack-component-programs) section to first uninstall the agent, boot loader, and stack components, and then [Reinstall the agent and boot loader](#step-4-reinstall-the-agent-and-boot-loader). This will reinstall the side-by-side stack.
-## Error: Heartbeat issue where users keep getting disconnected from session hosts
-
-If your server isn't picking up a heartbeat from the Windows Virtual Desktop service, you'll need to change the heartbeat threshold. This will temporarily mitigate the issue symptoms, but won't fix the underlying network issue. Follow the instructions in this section if one or more of the following scenarios apply to you:
--- You're receiving a **CheckSessionHostDomainIsReachableAsync** error-- You're receiving a **ConnectionBrokenMissedHeartbeatThresholdExceeded** error-- You're receiving a **ConnectionEstablished:UnexpectedNetworkDisconnect** error-- User clients keep getting disconnected-- Users keep getting disconnected from their session hosts-
-To change the heartbeat threshold:
-1. Open your command prompt as an administrator.
-2. Enter the **qwinsta** command and run it.
-3. There should be two stack components displayed: **rdp-tcp** and **rdp-sxs**.
- - Depending on the version of the OS you're using, **rdp-sxs** may be followed by the build number. If it is, make sure to write down this number for later.
-4. Open the Registry Editor.
-5. Go to **HKEY_LOCAL_MACHINE** > **SYSTEM** > **CurrentControlSet** > **Control** > **Terminal Server** > **WinStations**.
-6. Under **WinStations**, you may see several folders for different stack versions. Select the folder that matches the version number from step 3.
-7. Create a new registry DWORD by right-clicking the registry editor, then selecting **New** > **DWORD (32-bit) Value**. When you create the DWORD, enter the following values:
- - HeartbeatInterval: 10000
- - HeartbeatWarnCount: 30
- - HeartbeatDropCount: 60
-8. Restart your VM.
-
->[!NOTE]
->If changing the heartbeat threshold doesn't resolve your issue, you may have an underlying network issue that you'll need need to contact the Azure Networking team about.
- ## Error: DownloadMsiException Go to **Event Viewer** > **Windows Logs** > **Application**. If you see an event with ID 3277 that says **DownloadMsiException** in the description, there isn't enough space on the disk for the RDAgent.
virtual-machines Maintenance And Updates https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/maintenance-and-updates.md
For greater control on all maintenance activities including zero-impact and rebo
Live migration is an operation that doesn't require a reboot and that preserves memory for the VM. It causes a pause or freeze, typically lasting no more than 5 seconds. Except for G, M, N, and H series, all infrastructure as a service (IaaS) VMs are eligible for live migration. Eligible VMs represent more than 90 percent of the IaaS VMs that are deployed to the Azure fleet.
+> [!NOTE]
+> You won't receive a notification in the Azure portal for live migration operations that don't require a reboot. To see a list of live migrations that don't require a reboot, [query for scheduled events](./windows/scheduled-events.md#query-for-events).
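For reference, a sketch of querying the Scheduled Events endpoint from inside a VM; this uses the documented instance metadata address, and the api-version shown is one known to work:

```bash
curl -H "Metadata:true" "http://169.254.169.254/metadata/scheduledevents?api-version=2020-07-01"
```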
+ The Azure platform starts live migration in the following scenarios: - Planned maintenance - Hardware failure
Some planned-maintenance scenarios use live migration, and you can use Scheduled
Live migration can also be used to move VMs when Azure Machine Learning algorithms predict an impending hardware failure or when you want to optimize VM allocations. For more information about predictive modeling that detects instances of degraded hardware, see [Improving Azure VM resiliency with predictive machine learning and live migration](https://azure.microsoft.com/blog/improving-azure-virtual-machine-resiliency-with-predictive-ml-and-live-migration/?WT.mc_id=thomasmaurer-blog-thmaure). Live-migration notifications appear in the Azure portal in the Monitor and Service Health logs as well as in Scheduled Events if you use these services. ++ ## Maintenance that requires a reboot In the rare case where VMs need to be rebooted for planned maintenance, you'll be notified in advance. Planned maintenance has two phases: the self-service phase and a scheduled maintenance phase.
Each infrastructure update rolls out zone by zone, within a single region. But,
## Next steps
-You can use the [Azure CLI](maintenance-notifications-cli.md), [Azure PowerShell](maintenance-notifications-powershell.md) or the [portal](maintenance-notifications-portal.md) to manage planned maintenance.
+You can use the [Azure CLI](maintenance-notifications-cli.md), [Azure PowerShell](maintenance-notifications-powershell.md), or the [portal](maintenance-notifications-portal.md) to manage planned maintenance.
virtual-machines Configure https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/workloads/hpc/configure.md
Also note that all the above VM sizes support "Gen 2" VMs, though some older one
#### SR-IOV enabled VMs For SR-IOV enabled [RDMA capable VMs](../../sizes-hpc.md#rdma-capable-instances), CentOS-HPC VM images version 7.6 and later are suitable. These VM images come optimized and pre-loaded with the Mellanox OFED drivers for RDMA and various commonly used MPI libraries and scientific computing packages.-- The available or latest versions of the VM images can be listed with the following information using [CLI](https://docs.microsoft.com/cli/azure/vm/image?view=azure-cli-latest#az_vm_image_list) or [Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/openlogic.centos-hpc?tab=Overview).
+- The available or latest versions of the VM images can be listed with the following information using [CLI](/cli/azure/vm/image#az_vm_image_list) or [Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/openlogic.centos-hpc?tab=Overview).
```bash "publisher": "OpenLogic", "offer": "CentOS-HPC",
For non-SR-IOV enabled [RDMA capable VMs](../../sizes-hpc.md#rdma-capable-instan
### Ubuntu-HPC VM images For SR-IOV enabled [RDMA capable VMs](../../sizes-hpc.md#rdma-capable-instances), Ubuntu-HPC VM images version 18.04 are suitable. These VM images come optimized and pre-loaded with the Mellanox OFED drivers for RDMA, Nvidia GPU drivers, GPU compute software stack (CUDA, NCCL), and various commonly used MPI libraries and scientific computing packages.-- The available or latest versions of the VM images can be listed with the following information using [CLI](https://docs.microsoft.com/cli/azure/vm/image?view=azure-cli-latest#az_vm_image_list) or [Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/microsoft-dsvm.ubuntu-hpc?tab=overview).
+- The available or latest versions of the VM images can be listed with the following information using [CLI](/cli/azure/vm/image#az_vm_image_list) or [Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/microsoft-dsvm.ubuntu-hpc?tab=overview).
```bash "publisher": "Microsoft-DSVM", "offer": "Ubuntu-HPC",
virtual-machines Configure https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/vs-azure-tools-storage-explorer-files.md
Title: Using Storage Explorer with Azure File storage | Microsoft Docs
+ Title: Using Storage Explorer with Azure Files | Microsoft Docs
description: Learn how to use Storage Explorer to work with file shares and files. documentationcenter: na
Last updated 03/09/2017
-# Using Storage Explorer with Azure File storage
+# Using Storage Explorer with Azure Files
-Azure File storage is a service that offers file shares in the cloud using the standard Server Message Block (SMB) Protocol. Both SMB 2.1 and SMB 3.0 are supported. With Azure File storage, you can migrate legacy applications that rely on file shares to Azure quickly and without costly rewrites. You can use File storage to expose data publicly to the world, or to store application data privately. In this article, you'll learn how to use Storage Explorer to work with file shares and files.
+Azure Files is a service that offers file shares in the cloud using the standard Server Message Block (SMB) Protocol. Both SMB 2.1 and SMB 3.0 are supported. With Azure Files, you can migrate legacy applications that rely on file shares to Azure quickly and without costly rewrites. You can use Azure Files to expose data publicly to the world, or to store application data privately. In this article, you'll learn how to use Storage Explorer to work with file shares and files.
## Prerequisites
To complete the steps in this article, you'll need the following:
- [Connect to an Azure storage account or service](./vs-azure-tools-storage-manage-with-storage-explorer.md#connect-to-a-storage-account-or-service)
-## Create a File Share
+## Create a file share
All files must reside in a file share, which is simply a logical grouping of files. An account can contain an unlimited number of file shares, and each share can store an unlimited number of files.
The following steps illustrate how to create a file share within Storage Explore
1. Open Storage Explorer.
-1. In the left pane, expand the storage account within which you wish to create the File Share
+1. In the left pane, expand the storage account within which you wish to create the file share
1. Right-click **File Shares**, and - from the context menu - select **Create File Share**.
- ![Create File Share](media/vs-azure-tools-storage-explorer-files/image1.png)
+ ![Create file share](media/vs-azure-tools-storage-explorer-files/image1.png)
1. A text box will appear below the **File Shares** folder. Enter the name for your file share. See the [Share naming rules](./storage/blobs/storage-quickstart-blobs-dotnet.md) section for a list of rules and restrictions on naming file shares.
The following steps illustrate how to create a SAS for a file share:+
## Manage Access Policies for a file share
-The following steps illustrate how to manage (add and remove) access policies for a file share:+ . The Access Policies is used for creating SAS URLs through which people can use to access the Storage File resource during a defined period of time.
+The following steps illustrate how to manage (add and remove) access policies for a file share. Access policies are used to create SAS URLs, which allow access to the Azure Files resource during a defined period of time.
1. Open Storage Explorer.
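As an alternative to the Storage Explorer UI, you can also create a stored access policy with the Azure CLI. A minimal sketch with placeholder names and an example expiry (`r` = read, `l` = list):

```bash
az storage share policy create \
    --account-name mystorageaccount \
    --share-name myshare \
    --name read-only-policy \
    --permissions rl \
    --expiry 2021-12-31T00:00:00Z
```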
The following steps illustrate how to manage the files (and folders) within a file share.
- View the [latest Storage Explorer release notes and videos](https://www.storageexplorer.com/). -- Learn how to [create applications using Azure blobs, tables, queues, and files](https://azure.microsoft.com/documentation/services/storage/).
+- Learn how to [create applications using Azure blobs, tables, queues, and files](https://azure.microsoft.com/documentation/services/storage/).
web-application-firewall Waf Front Door Configure Custom Response Code https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/web-application-firewall/afds/waf-front-door-configure-custom-response-code.md
Last updated 06/10/2020 -+ # Configure a custom response for Azure Web Application Firewall (WAF)
web-application-firewall Waf Front Door Configure Ip Restriction https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/web-application-firewall/afds/waf-front-door-configure-ip-restriction.md
Last updated 12/22/2020-+ # Configure an IP restriction rule with a Web Application Firewall for Azure Front Door
web-application-firewall Waf Front Door Geo Filtering https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/web-application-firewall/afds/waf-front-door-geo-filtering.md
Last updated 03/10/2020 -+ # What is geo-filtering on a domain for Azure Front Door Service?
web-application-firewall Waf Front Door Tuning https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/web-application-firewall/afds/waf-front-door-tuning.md
Last updated 12/11/2020 -+ # Tuning Web Application Firewall (WAF) for Azure Front Door
web-application-firewall Waf Front Door Tutorial Geo Filtering https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/web-application-firewall/afds/waf-front-door-tutorial-geo-filtering.md
Last updated 03/10/2020 -+ # Set up a geo-filtering WAF policy for your Front Door